Evaluation Design Toolkit

Biodiversity Communication, Education and Public Awareness (CEPA)
Evaluation Design Toolkit
Version 1 – May 2012
Commissioned by
ICLEI LAB and City of Cape Town
Authors
Eureta Rosenberg, Claire Janisch and Nirmala Nair
Local Government Contributors
Cape Town, Edmonton, Nagoya, São Paulo
We value feedback and contributions.
Lindie Buirski
Lindie.Buirski@capetown.gov.za
Contents of CEPA Evaluation Design Toolkit – Commissioned by ICLEI LAB and City of Cape Town
Table of Contents on CD
FOLDER 1
Orientation
FOLDER 2
Key Ideas Pages
Approaches to CEPA and Change
Approaches to Evaluation
Understanding Complex Systems
FOLDER 3
Case Studies
Cape Town 1 – Evaluation of the Green Audit and Retrofit Programme for Schools
Edmonton – Evaluation of the Master Naturalist Programme
Nagoya – Evaluation of Nagoya Open University of the Environment
São Paulo – Evaluation of the Reintroduction of Howler Monkeys
Cape Town 2 – Evaluation of the ERMD’s EE Projects and Programmes
FOLDER 4
Evaluation Design Steps
FOLDER 5
Appendices
Appendix 1 – Leverage Points in Systems
Appendix 2 – Indicators for Reviewing Environmental Education Content
Appendix 3 – Guidelines for Environmental Education
Appendix 4 – Guidelines for Biodiversity Communication
Appendix 5 – Stories of Most Significant Change Methodology
Appendix 6 – Alternative Logical Framework Planning Guide
Executive Summary (brochure)
SECTION 1: ORIENTATION
Why this Toolkit?
Evaluation is now a regular part of our professional lives. It is however not a simple process,
and practitioners and managers often look for evaluation guidelines. There are therefore
many resources on how to do M&E (monitoring and evaluation), including some excellent
online toolkits for evaluating environmental education and public awareness programmes.
Why then, another evaluation resource?
The City of Cape Town found that although they were starting to do regular evaluations of
their Communication, Education and Public Awareness (CEPA) activities for biodiversity,
these evaluations were too limited. They did not tell the City enough about their CEPA
programmes, and whether they were on-target and worthwhile. In particular, evaluation
teams seldom used indicators that seemed particularly suitable to CEPA.
“At the end of each project we work so hard to set up, I wonder why we haven’t got the results we wanted.” (CEPA Manager)
When Cape Town approached ICLEI (the International Council for Local Environmental Initiatives, see www.iclei.org), they found there was also wider interest in developing better ways to get the measure of CEPA in a local government context. ICLEI’s
Local Action for Biodiversity (LAB) programme therefore decided to partner the City of Cape
Town in funding the development of a toolkit that would focus on indicators and processes
that are particularly suitable for CEPA evaluations.
Four pioneering local governments from around the world signed up to contribute to the
development of this resource.
If you are a CEPA manager or practitioner who wants to use evaluation to account for your
activities and improve your programmes, this resource will be useful to you.
What can you expect from this toolkit?
• A guide to planning evaluations that extend the widely used linear logical framework, in order to better serve complex, non-linear social and ecological change processes.
• Contexts related to biodiversity action. Although the toolkit can also be used for CEPA programmes in other contexts, biodiversity and CEPA specific examples help to clarify evaluation guidelines for those familiar with this context.
• Case studies that illustrate local governments’ biodiversity related CEPA programmes and the ways in which they are being evaluated.
• Detailed guidelines on indicator development.
• A view of CEPA as inputs into complex ecological, institutional and social systems.
• An approach to evaluation as a tool for ongoing learning and adaptation.
• An approach to evaluation design that is participatory – we recommend that evaluation designers involve evaluation stakeholders in determining roles, approaches, evaluation questions and indicators, not only because this would make the evaluation outcomes more relevant to these stakeholders, but because the process of evaluation design can in itself generate and share useful insights.
• A framework for communicating insights about the values and processes of CEPA programmes to funders, management and the public in ways that are clear, credible, and powerful enough to contribute to broader change.
“Prepare for a practice-based
learning experience. Expect to
learn from the process of
designing the evaluation, not
just from the evaluation
results!” (Evaluator)
When to Use this Toolkit
• Use this resource whenever you start planning a CEPA programme, or a new phase in a current CEPA programme. Don’t wait until the CEPA programme is designed – there is much value in designing the CEPA programme and its evaluation together. Also consider designing the CEPA programme and its evaluation at the same time as a broader biodiversity programme is being designed. Otherwise, the CEPA activities may fail to be integrated, adequately valued and resourced, or appropriately focused.
• Many of us reach for evaluation guidelines when our CEPA programme is half-way or completed, when we need to report to sponsors, or to decide on a next phase. We have found that the tools in this resource and the steps outlined are still valuable at such a stage, particularly when one needs suitable indicators.
• Surprisingly, this toolkit can even be useful when a CEPA evaluation has already been concluded, but somehow failed to satisfy the information needs. As some of the case examples demonstrate, this toolkit can help identify what was lacking or limiting in a previous evaluation design. The tools included here help us to add probing questions that can make a big difference to the levels of insight gained from an evaluation.
Thanks to the generous contributions of four cities around the world, the Toolkit can draw
on several real-life case studies. These case examples illustrate some of the diverse contexts
in which biodiversity CEPA programmes take place.
The role of the case studies is not to present ‘best practice’. Their role is to provide an actual
situation, with some contextual detail, in which to explain the design steps and ideas in this
toolkit. When explaining an idea or step, we may give an actual, or more often than not, a
possible example from a case study. In the brief analysis of the case studies, we refer to
some of the steps the evaluators could have added, or approached differently. This simply
helps one to consider how the ideas and tools might work in one’s own context.
This toolkit is not an academic resource. It will not help anyone who needs an overview of
the intellectual paradigms and political contexts which have influenced the different
generations of evaluation in the fields of education, environment and development over the
years.
On the other hand, the toolkit is also not an introduction to evaluation. It is probably most useful to those who already have some experience in evaluation, and in running or managing CEPA programmes. For a basic starter-pack that introduces evaluation, see the
City of Cape Town’s Into Evaluation: A Start-Up Toolkit, which is the predecessor of this
resource (www.capetown.gov.za).
This toolkit does not provide extensive detail about the methods for collecting or generating
evaluation data. This is critical knowledge which is often neglected when evaluations are
designed. For many evaluators a questionnaire is the only ‘proper’ research tool and it is
applied in all situations, even when not really appropriate. Qualitative methods such as
focus groups, observations or interviews can be equally or more valid, if they are well
designed.
Although the case studies and examples are from the context of CEPA programmes related
to biodiversity in a local government, the suggestions here are valid for CEPA related to
other environmental, developmental and social change contexts, too. They are also
pertinent to biodiversity-related CEPA and evaluation outside of local government.
How to Find Your Way in this Toolkit
The toolkit components are stored in different folders on the CD. If you are most interested
in the practical tools, start with folder 4 and work your way towards the others later.
EXECUTIVE SUMMARY
A brochure which can be printed out (small Z-fold) to share with others; it gives an overview of the toolkit in a nutshell. It is particularly useful for showing the nine design steps which make up the heart of the resource – the steps you would follow (or dip into) if you were to design your own evaluation using the resource. Start here for a good sense of what this is all about.
FOLDER 1
ORIENTATION
Refer back to the current file from time to time if you want pointers on how to approach
toolkit components.
FOLDER 2
KEY IDEAS PAGES
Three sets of notes that provide background to the key ideas behind the tools. Read these if
you are confused and/or when you would like more in-depth understanding.
FOLDER 3
CASE STUDIES
You could read them first, to get a sense of the diverse contexts in which CEPA programmes take place, and of the many challenges evaluation teams face and the strategies they use. While
you work through the tools in Folder 4, you may want to refer back to individual case
studies, as examples are often drawn from these contexts.
FOLDER 4
EVALUATION DESIGN STEPS
This file is the biggest and the heart of the toolkit. It consists of nine giant steps through
which to design a customized evaluation.
FOLDER 5
APPENDICES
Several useful source documents. They are mentioned in the other files, so look them up as
and when you encounter these references, to understand their context. You could add other documents that you find useful to this folder, keeping all your resources in one place.
SECTION 2: KEY IDEAS PAGES
APPROACHES TO CEPA AND CHANGE
CEPA refers to Communication, Education and Public Awareness programmes. These are somewhat different kinds of processes, used in a variety of ways towards more or less open-ended results: learning and change among individuals and communities. How we approach and do CEPA activities depends to some extent on our context. But our approach is also influenced by:
• our understanding of the causes of the problem we are trying to tackle
• our beliefs about whom or what needs to change, and
• our views on how such change will come about, and the role of CEPA activities in the process.
CEPA managers and other stakeholders have a range of assumptions or mental models
about these fundamental things. To design a CEPA evaluation that fully meets the
stakeholders’ needs, CEPA managers need to recognize the mental models of CEPA and
change that they hold. Biodiversity managers, for example, need to reflect on the exact role
that they want CEPA programmes to play in biodiversity programmes. A comprehensive
evaluation should encourage managers and CEPA practitioners to reflect, from time to time,
on their CEPA models, to consider if those should change.
“It is not enough to tell people about biodiversity and the threats it faces in
order to bring about positive change. The changes required will not come about
by rational individual choice but require those in the field of biodiversity to
start thinking differently about using communication, education and public
awareness [programmes].” (ICLEI LAB)
Below is a sketch of three common mental models or approaches for CEPA processes. Most
practitioners would regard one or more of these as valid, and use one or more of them,
depending on the context. Most practitioners tend to favour one, while some do not even
consider that there are alternatives!
A. Advocacy for Change within Powerful Structures
In this approach to CEPA processes, the problem of biodiversity loss is understood to be the result of powerful agencies and structures (such as industry and governments) that have a negative impact on biodiversity, or fail to act in the interest of nature and of the people who are often negatively affected. Education and communication are used to conscientise citizens to put pressure on these powerful agencies to change their ways. A proponent of this approach was the renowned Brazilian educator Paulo Freire (1921-1997).
The model for change is conscientisation, i.e. giving people tools to understand oppressive
systems and information about problematic actions of powerful agencies. In the 1960s,
following in the wake of unregulated development, Japanese teachers and other citizens
campaigned in the media and marched against mercury in the sea and other forms of
pollution in what was called ‘education against the disruption of the public space’. There is
strong emphasis on taking oppositional action for system-wide change. Today this is still a popular approach among some NGOs and activists, but most local governments, which can be regarded as powerful agencies themselves, find this approach to CEPA too disruptive for normal operations.
B. Behaviour Change in Individuals
In this approach to CEPA processes, the problem is seen to be the behaviors of individuals,
e.g. local residents. CEPA programmes aim to provide these individuals with powerful
experiences and information to cause a change in their behaviour, or (in the case of
children) to shape their future behaviour. The need for change, and the nature of the
change, is pre-defined by the CEPA manager. For example, a local authority may have
introduced a recycling programme, but residents do not yet have the habit of sorting their
waste, so a radio campaign is used to get residents to change their waste sorting behavior
(develop a new habit or behaviour) to support the new waste management system. Or,
CEPA programmes in the forest might be aimed at influencing citizens to support the local
authorities’ decision to set aside land for conservation purposes, rather than to lobby for a
new housing development.
The model for change is, in its basic form, linear: experience + information → attitude change → behaviour change.
While this is perhaps the most widely held understanding of the kind of change that is
needed and the kinds of CEPA that will achieve the change, there are also many questions
about the assumed relationships between the elements. For this model to work well, the
expected behavior needs to be pre-defined and the intended outcomes must be easy to
control. For example, if a river is polluted, children can be educated not to swim in it. It may
be more complex to educate citizens around behavior in forests, which hold different values
for diverse citizen populations who have had different experiences of forests in the past;
forests are beautiful and can be enjoyed and treasured, but they can also be dangerous as
sites of crime, so ‘messaging’ becomes more complex or even impossible.
C. Co-Constructing Collective Change
In this approach to CEPA processes, the view is that problems and solutions are complex and
context dependent; that CEPA practitioners and stakeholders must learn together what the
different dimensions of a problem like biodiversity loss are, and that the responses to
environmental problems may need to be open ended, because they are complex, and may
differ from context to context. It is therefore seldom possible to simply give a target group a
message or information, and assume that the situation will change in a predetermined way.
The aim of CEPA in this approach is to strengthen collective capacity to act on issues. The
responsibility for identifying problems and for finding the solutions is shared by individual
citizens and by agencies like government, industry and education systems. This model for
change may be termed collective or social learning and it is reflexive. It may involve various
parties coming together to form communities of practice that work together to better
understand and formulate solutions to problems of common concern, in the process,
questioning the values that may underpin the problem. CEPA practitioners in Canada, South
Africa and Holland are among those exploring this approach.
Action may drive change, in a new, non-linear model of change: action + collective reflection → more action + reflection + collective change → …
Another way in which to differentiate between different CEPA models is illustrated below:
Linear Behaviour Change Model of CEPA:
Citizen lacks knowledge and fails to act for the environment → Citizen receives scientists’ information through CEPA → Citizen is aware and takes action for the environment
Non-Linear Collective Learning from Doing Model of CEPA:
Acting on the environment → Collectively learning by reflecting on this action → Collectively learning more through further reflection → Acting better for the environment → …
Similarly, we can view change generally as linear or non-linear processes:
A simple, linear model of change:
Status quo (a fixed situation that can be separated into discrete variables) → Intervention brings about required change → Undesired outcomes change to desired outcomes
A non-linear ‘complex systems’ model of change:
A fluid, dynamic status quo, featuring multiple variables and interaction with its environment (context); contextual factors influence the intervention and its impacts; the intervention may have multiple impacts on the status quo; and the intervention itself changes during the life of the programme.
How are our assumptions about CEPA relevant to evaluation?
• Our model of CEPA and change will determine ‘what matters most’, and what we wish to evaluate.
• Evaluating our model of CEPA and change may help us to understand why our CEPA programme is successful, or not (see also Reid et al 1).
If one city’s CEPA model is about getting citizens to change their behaviour in a pre-defined way, its CEPA evaluation will use indicators for that specific behaviour: Are citizens now sorting their waste? Have children stopped swimming in the river?
If another city’s CEPA model is about building the entire city’s capacity to respond appropriately, but in a variety of ways, to a variety of issues and situations, it would rather test whether it is building that capacity, and how such capacity is evident among both staff and citizens, in a variety of ways. These are two quite different approaches to evaluation: the first is more prescriptive, the second more open-ended.
Understanding that the city’s approach to CEPA is one of a number of alternatives allows the city to evaluate its underlying approach to CEPA, and not just the activities. Therefore, if evaluation results show that CEPA activities are not achieving the desired results, the city would be able to examine whether it is perhaps the approach to CEPA and model of change which are inappropriate to the situation.

1 Reid, Alan, Nikel, Jutta, & Scott, William, Indicators for Education for Sustainable Development: A report on perspectives, challenges and progress, Centre for Research in Education and the Environment, University of Bath, 2006

The following quote from Donella Meadows 2
highlights this other consideration:
“Models will always be incomplete. We will be
making decisions under uncertainty. One of the
tasks of evaluation is to reduce the uncertainty. …
There is no shame in having a wrong model or
misleading indicator, only in clinging to it in the
face of contradictory evidence”. Donella Meadows
How to Use This
We find it useful to articulate our understanding of what we believe CEPA processes are
about, and how they are likely to bring about change, and then to map these theories or
assumptions as part of our programme plan. This then becomes part of the basis of a
comprehensive evaluation that examines both activities and starting assumptions. The
process is outlined in detail in Folder 4: Evaluation Design Steps.
The big challenge is how to evaluate what matters most, the really worthwhile outcomes of
CEPA processes. Often we settle for indicators that reflect what we can practically measure,
rather than that which is actually worth focusing on. In longer term CEPA programmes,
indicators determine where CEPA managers end up putting most of their attention 3.
There is no doubt that the ‘really worthwhile’ impact and outcomes of CEPA programmes
are hard to evaluate.
Discuss the following questions with CEPA colleagues:
How do we measure learning?
Because we have all been tested for our knowledge at school, we may think this is easy. But
in many CEPA situations, we want to achieve more than just increase the participants’
knowledge. Why?
2 Meadows, Donella, Indicators and Information Systems for Sustainable Development: A report to the Balaton Group, published by The Sustainability Institute, Vermont, 1998
3 See Step 6 in Folder 4: Evaluation Design Steps
What is learning? Is it only about gaining more knowledge?
In many CEPA situations there is a desire for participants to develop a deeper
understanding, different ways of thinking about biodiversity, different attitudes and values,
even to un-learn some of their deep-seated understandings. For example, citizens may have
learnt that an area of un-developed land is unsightly, a ‘waste’ (of development
opportunity), or a security threat. For them to start valuing it differently, requires more than
just factual information about species loss. And often, we don’t have the scientific
information to ‘prove’ that a particular area of land plays a critical role in citizens’ well-being. Hence, information is not always enough!
How do we observe values and attitude change? How do we interpret behaviour?
If we are interested in values and attitude change, our evaluation may ask respondents
directly: Did you change your mind about this forest? But this is a crude measure, as many
of us answer politely to please the other person, or we might not even know for sure
whether there has been a shift in our attitude. A change in behaviour can be an indication of
a value shift. For example, if villagers are starting to bring injured howler monkeys to the
rehabilitation centre for treatment, we could judge that they have changed their attitude
and now value biodiversity. But, for some of them it could simply mean that they are hoping
for a small reward, or a chance to talk to the staff.
Can we link attitude and behaviour change to eventual outcomes? What about the other
variables that could be involved?
When we do observe positive changes in values, attitudes and behaviour, can we
confidently attribute them to our CEPA programme? When residents seem more positive
about the forest, how do we know this has been influenced by our CEPA activities and not
(also) by a television series on climate change, completely unrelated to our programme, or
by a new school curriculum? Or could it be that residents who are more affluent and mobile, able to pay for security, and also value nature, are choosing to move into areas closer to the forest – regardless of our CEPA programmes? Then again, perhaps these residents value nature now because of CEPA programmes they participated in when they were children?
The outcomes of CEPA programmes may only be observed in the medium to longer term – which also makes it more complex to evaluate if we want to know right now whether these efforts are worthwhile, or not.
How do we link CEPA outcomes to biodiversity benefits?
For some, the most perplexing question about the changes resulting from CEPA is whether
we can link them to biodiversity benefits. If the participants in the City of Edmonton’s
Master Naturalist Programme 4 volunteer 35 hours of their time after training, does that
mean that Edmonton’s natural environments are being better protected? To ascertain this,
one would probably have to explore the nature and the quality of their volunteer actions. If
they clear out invasive alien plants, without destroying any native vegetation, we can
probably be confident that they are having a positive effect – provided this was a conservation-worthy site to start with, and that some quirk of nature (such as a big storm or an insect plague) does not unexpectedly destroy their work. But what if the Master Naturalists’
volunteer work is an awareness raising activity? Then it becomes doubly difficult to evaluate
the eventual impact of their work, on the status of Edmonton’s biodiversity.
Yet some would probably say that newly aware citizens who support biodiversity
conservation and who share their enthusiasm and knowledge with other citizens, are surely
a good outcome. Do we still need to prove that it is worthwhile?
An alternative approach: using what we know about good education and communication
Many CEPA practitioners choose to assume that CEPA is inherently worthwhile, provided it
follows best practice guidelines. Following this approach, an evaluation would test for
evidence that best practice guidelines are being followed. The best practice guidelines –
being only theories - can in turn be tested and refined over time as they are informed by our
observations of their outcomes, over the shorter and longer term.
How to Use This
What is good CEPA educational or communications practice? Is this being followed? What
are some of the results we observe? What changes to practice can we make? What are some
of the new results? Asking these questions from time to time leads to ongoing learning, in a process that one could call adaptive management of CEPA activities. 5
This approach acknowledges that some of the benefits and outcomes of CEPA activities are
either truly intangible, or simply so difficult to measure that they are impractical. An
approach may then be to simply describe what good environmental education is, and to
assess our CEPA processes against these criteria or principles.
What constitutes an excellent CEPA programme? In Appendix 3 we suggest 14 principles
which you can examine to see if they match your understanding of what constitutes good
CEPA processes.
4 See Folder 3: Case Studies.
5 For guidelines on education and communication, refer to the Appendices Folder on the CD.
Summary
To summarise the key ideas in this section: understanding our mental models of CEPA and change is useful in the design of evaluations. What we regard as important milestones and destinations is determined by what we view as the route to change. These, in turn, determine what we choose to evaluate. Our important milestones and destinations are turned into indicators, which often form the backbone of evaluations.
If we conclude that CEPA situations are quite complex and that we can seldom measure
anything important or worthwhile directly, a developmental approach to evaluation
becomes useful. In this approach, evaluation is used more for learning and ongoing
adaptation of activities, as opposed to proving that they are worthwhile. Evaluation
becomes integrated with CEPA activities. CEPA is approached as ongoing, collective learning
through doing and evaluation. The next set of Key Ideas explains this further.
“The concept of ‘environmental education’ is not just a simple ‘add-on’ of
sustainability concepts to the curriculum, but a cultural shift in the way we see
education and learning, based on a more ecological or relational view of the
world. Rather than a piecemeal, bolt-on response which leaves the mainstream
otherwise untouched, it implies systemic change in thinking and practice –
essentially a new paradigm emerging. Envisioning this change – and realisable,
practicable steps in our own working contexts – is key. In essence, what we all are
engaged in here is a critically important ‘learning about learning’ process, and one
which will directly affect the chances of a more sustainable future for all.” – Stephen Sterling, 2001, Sustainable Education: Re-Visioning Learning and Change, Green Books for the Schumacher Society.
APPROACHES TO EVALUATION
This set of Key Ideas explores factors that influence the design of an evaluation. We argue
that there are different approaches to evaluation, based on:
• The research framework underpinning the evaluators’ approach
• The particular role the evaluation is to play.
The role of the evaluation is in turn influenced by:
• The stakeholders in the evaluation, their needs and interests, and
• The stage in the CEPA programme at which the evaluation takes place.
This section shares ideas and tools for planning different evaluation strategies, and using different indicators, to meet the needs of different audiences – needs that are more or less prominent at different times in the life of a programme. In particular, we highlight a more
developmental approach to evaluation, which is non-traditional and demanding, but
particularly suitable for complex contexts such as those in which we undertake CEPA
programmes.
Research Frameworks for Evaluation
Evaluation is essentially a research process, and as such our approach to evaluation is influenced by our understanding of how one gathers evidence and comes to valid conclusions. After all, evaluation reports have to differ from mere opinion on how good a CEPA programme is. How do we gather evidence, what is regarded as suitable evidence, and what counts as a rigorous research process that allows us to state with some confidence:
“This evaluation found that …”?
Depending on how we answer these questions, we will adopt one of a number of recognized research frameworks for an evaluation. One such typology, summarised in Table 1, is described in the forerunner to this toolkit, called Into Evaluation (www.capetown.gov.za).
Readers interested in this aspect are referred to this document, or one of the other
excellent resources detailing different approaches to research (paradigms and
methodological frameworks).
Table 1: Research Frameworks for Evaluation 6

Experimental and Empiricist
Empiricism is the assumption that objectivity can be obtained by using only measurable indicators and quantifiable data. Based on the traditional scientific method used in the natural sciences, with allowances where necessary for the idiosyncrasies of the social world (e.g. difficult-to-control variables), often resulting in quasi-experimental designs. Pre-test/post-test designs and the use of control groups are popular in this framework. The opinions of research ‘subjects’ are not valued.

Naturalistic and Constructivist (e.g. Illuminative Evaluation, an approach developed by Parlett and Hamilton in the early 1970s)
Intentionally departs from the above approach by using more ‘natural’ methods (like conversations rather than questionnaires); this approach assumes that the ‘objectivity’ of scientific measures is, like much of our reality, socially constructed. Detailed case studies are popular, and stakeholders’ opinions and insights are valued and quoted.

Participatory and Critical (see e.g. the work of Patti Lather on Feminist Educational Research in the critical tradition)
Promotes the learning role of evaluation, and the value of all stakeholders actively participating in setting the evaluation questions, generating data and coming to conclusions through dialogue and reflection. Often uses action research cycles (iterative processes). Where a critical element is present, this assumes that power structures must be interrogated in case they play an oppressive role (e.g. some powerful participants may prevent CEPA programmes from challenging the status quo).

Realist and Pragmatic (see e.g. Ray Pawson and Nick Tilley’s Realistic Evaluation, published by Sage in 1997)
Claims that while much of our reality is socially constructed, there is also a material reality, and not all understandings of this reality are equally valid or valuable. Uses a combination of qualitative and quantitative data, detailed case studies as well as programmatic overviews, to interrogate the validity of our theories about our CEPA programmes.

6 Into Evaluation: A Start-Up Resource For Evaluating Environmental Education and Training Projects, Programmes, Resources, 2004, City of Cape Town, www.capetown.gov.za
Roles of Evaluation
CEPA practitioners and managers use evaluations for a variety of reasons, which will in turn
influence the design of these evaluations. These purposes include:
1. Accountability and feedback to funders and other implementation partners
2. Accountability and feedback to intended beneficiaries
3. Keeping track of progress
4. Identifying problems as they arise
5. Improving the programme being evaluated, during its life span
6. Communicating about our CEPA programmes in a credible manner
7. Motivating and inspiring CEPA participants and others with our efforts & results
8. Providing information for decision making about the future of a programme
9. Learning how better to conduct CEPA programmes
10. Learning how to work in an evaluative manner.
Evaluations have different audiences, who often require and expect different things from an
evaluation: perhaps different indicators of success, or a different reporting format. For
example …
• “As a politician I want the evaluation to give me a one-line statement so I can tell the public whether this programme was worth it.”
• “As the CEPA manager I want to know how to improve our programme, but also, whether we are making a difference for the better.”
• “I need to decide whether to continue funding this project so I need hard evidence that money has been well spent.”
• “Everyone wants to know whether CEPA will have a positive impact on people and the environment!”
Over the lifespan of a CEPA programme, evaluation will have different purposes, and
different evaluation strategies are appropriate at these different stages of the programme.
Evaluation Strategies
Traditional Evaluation Strategies - Summative & Formative Evaluations
Traditional evaluations are often described as either formative or summative:
• Formative evaluations are conducted during programme development and implementation, to provide direction on how best to improve the programme and achieve its goals.
• Summative evaluations are completed once a programme is well established and will indicate to what extent the programme has been achieving its goals.
Formative evaluations help you to improve your programme and summative evaluations
help you prove whether your programme worked the way you planned. Summative
evaluations build on data from the earlier stages.
Within the categories of formative and summative, one can also distinguish different types
of evaluation linked to purpose. Table 2 describes these.
How to Use This
Use Table 2 to reflect on what type of evaluation you need at this stage of your CEPA
programme’s life span, and what questions you could usefully ask.
TABLE 2: Types of Evaluation (adapted from MEERA 7)

Formative (Improve)

Needs Assessment (before the programme begins)
Purpose: Determines if there is a need for the programme, how great the need is, and how to meet it. A needs assessment can help determine what groups are not currently served by CEPA programmes in a city and provide insight into what new programme would meet the needs of these groups.
Examples of questions to ask: Is there a need for education, communications? What would best meet the need?

Process / Implementation Evaluation (new programme)
Purpose: Examines the implementation process and whether the programme is operating as planned. Focuses mostly on activities, outputs, and short-term outcomes for the purpose of monitoring progress and making mid-course corrections if needed. Can be done continuously or as a once-off assessment. Results are used to improve the programme. The Edmonton and Nagoya case studies in Folder 3 on the CD are examples of process evaluations.
Examples of questions to ask: Is the programme operating as planned? How many participants are being reached with CEPA programmes? Which groups attend the courses? How satisfied are participants with the courses?

Summative (Prove)

Outcome Evaluation (established programme)
Purpose: Investigates the extent to which the programme is achieving its outcomes. These outcomes are the short-term and medium-term changes that result directly from the programme. Although data may be collected throughout the programme, the purpose here is to determine the value and worth of a programme based on results. For example, the Cape Town Green Schools Audit (see Folder 3 on the CD) looked for improvements in students’ knowledge, attitudes and actions.
Examples of questions to ask: Has the programme been achieving its objectives?

Impact Evaluation (mature or historic programme)
Purpose: Determines any broader, longer-term changes that have occurred as a result of a CEPA programme. These impacts are the net effects, typically on an entire school, community, organisation, city or environment. Impact evaluations may focus on the educational or environmental quality, biodiversity or human well-being benefits of CEPA programmes.
Examples of questions to ask: What predicted and unpredicted impacts has the programme had?

7 My Environmental Education Resource Assistant, MEERA website, http://meera.snre.umich.edu/, by Jason Duvall, Amy Higgs & Kim Wolske, last modified 2007-12-18 16:33, contributors: Brian Barch, Nick Montgomery, Michaela Zint.
An evaluation is likely to be more useful (and easier to conduct) if the evaluation process is
provided for from the start, and built into other programme activities. Making evaluation an
integral part of a CEPA programme means designing the CEPA programme with evaluation
in mind, collecting data on an on-going basis, and using this data at regular intervals to
reflect on and improve your programme.

Developmental Evaluation
When evaluation is integrated into your programme for continuous improvement, the
approach is called developmental. Developmental evaluation is in a way a combination of
formative and summative evaluation.
“The great unexplored frontier is evaluation
under conditions of complexity. Developmental
evaluation explores that frontier.”
Michael Quinn Patton, 2008,
Utilization-Focused Evaluation, Sage
In developmental evaluation, programme planning, implementation and evaluation are
integrated processes. Evaluative questions are asked and evaluation logic is applied to
support programme (or staff or organizational) development in a long-term, on-going
process of continuous improvement, adaptation and intentional change. 8 Programmes are
seen as evolving and adaptive, and evaluation processes are used to regularly examine the
programme, and alert programme staff to possible unintended results and side effects. Even
assumptions behind the programme and its design are from time to time questioned, all
with the intent of improving the likelihood of success.
Developmental evaluation allows CEPA practitioners to adapt their programmes to
emergent and dynamic realities in the particular, complex contexts in which they operate. It
encourages innovations which may take the form of re-designing aspects of the programme,
developing new teaching or communication methods, adapting old resources, and making
organisational changes or other systems interventions.
Figure 1 illustrates the continual loop for learning in developmental evaluation. This cycle can be started up at any stage of a CEPA programme, and it is also described in Folder 4: Evaluation Design Steps, where it is used as the basis for designing an evaluation process.
8 Patton, Michael Quinn, Utilization-Focused Evaluation, Sage, 2008
Figure 1: Developmental Evaluation is a Continual Loop for Learning
[Cycle diagram linking programme planning, programme implementation, ongoing informal evaluation, formal evaluation stages, programme refinement or change, and continued implementation in a repeating loop.]
Developing and implementing such an integrated evaluation process has several benefits. It
helps CEPA managers to:
• better understand target audiences' needs and how to meet these needs
• design objectives that are more achievable and measurable
• monitor progress toward objectives more effectively and efficiently
• increase a CEPA programme's productivity and effectiveness
• learn more from evaluation.
Table 3 compares developmental evaluation to traditional evaluation. A key difference is
one of continuous learning (developmental) compared to definitive judgement based on a
single evaluation of the process or result. The evaluator or evaluators also take on a
different role; in developmental evaluations, the evaluator plays an active role in supporting
the learning of participants through the evaluation process. CEPA programme staff are
expected to be centrally involved in these evaluation processes.
17
Section 2 of CEPA Evaluation Design Toolkit – Commissioned by ICLEI LAB and City of Cape Town
Table 3: Developmental evaluation compared to traditional approaches 9

Traditional Evaluation (formative or summative – for testing results) compared with Developmental Evaluation (formative and summative combined, for continuous improvement):

• Testing models: renders definitive judgments of success or failure. / Complexity-based, supports innovation and adaptation; provides feedback, generates learning, supports direction or affirms changes in direction in real time.
• Uses mostly an external evaluator who is deemed to be independent and objective. / Evaluator is part of a team – a facilitator and learning coach bringing evaluative thinking to the table, supportive of the organisation’s goals.
• Measures success against predetermined goals using predetermined indicators. / Develops new measures, monitoring mechanisms and indicators as goals emerge and evolve.
• Evaluators determine the design based on their perspective about what is important; evaluators control the evaluation process. / Evaluators collaborate with those engaged in the change effort to design an evaluation process that matches philosophically and organizationally.
• Designs the evaluation based on linear cause-effect logic models. / Designs the evaluation to capture the assumptions, models of change, system dynamics, interdependencies, and emergent interconnections in complex environments.
• Aims to produce generalised findings across time and space. / Aims to produce context-specific understandings that inform ongoing innovation.
• Accountability focused on and directed to external authorities and funders; accountability to control and locate blame for failures. / Accountability centered on the innovators’ deep sense of fundamental values and commitments and desire for continuous learning, adapting the CEPA programme to a continually changing complex environment; learning to respond to lack of control and staying in touch with what’s unfolding, thereby responding strategically.
• Evaluation often a compliance function delegated down in the organisation. / Evaluation a leadership function for reality-testing, results-focused, learning-oriented leadership.
• Evaluation engenders fear of failure. / Evaluation supports hunger for learning.

9 Patton, Michael Quinn, Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, 2011
Although there are clear benefits to a developmental evaluation as an overarching
approach, each evaluation strategy - formative, summative or developmental - fulfills a
specific purpose and adds a particular kind of value, when it is appropriate for the situation.
We choose our evaluation strategies according to the circumstances, resources, time lines,
data demand, intended users, political features and purposes of a particular situation. A
developmental approach may not always be feasible or appropriate.
However, a developmental approach to evaluation is perhaps the only way in which we can adapt our enquiries to the non-linear dynamics that characterise CEPA programmes, once we start looking at them as complex systems. Traditional approaches to programme planning tend to impose order on this complexity, passing over many dimensions in the process. They also assume a certainty which is perhaps impossible to achieve. When situations present themselves as disorderly and highly uncertain, yet we need to evaluate and improve them, it is useful to explore them as complex systems. This is what evaluations based on complex systems theory seek to do.
Model of Simple Linear Programme Evaluation:
Design → Implement → Evaluate

Model of Developmental Evaluation:
[Cycle diagram linking programme planning, programme implementation, ongoing informal evaluation, formal evaluation stages, programme refinement or change, and continued implementation in a repeating loop – see Figure 1.]
UNDERSTANDING COMPLEX SYSTEMS
This final section of the Key Ideas Pages provides us with tools for understanding CEPA
programmes and the contexts in which we introduce them, as complex systems.
Do we really need to look for complexity? Is it not better to try and simplify things?
Evaluations that focus only on pre-determined and measurable outcomes tend to ignore the
complex nature of CEPA processes and contexts. The danger is that they then fail to capture
the rich nuances and full impacts of a programme. Evaluations based on simplicity may also
fail to observe features and factors that can undermine the success of a programme. And,
such evaluations are seldom able to provide useful explanations for their findings, i.e. the
reasons why a programme is succeeding, or not succeeding.
An exploration of the complexity of a CEPA programme and its context provides us with an
opportunity to improve our learning and insights and thereby, to more adequately support
and steer CEPA programmes.
“Trying to run a complex society on a single indicator
like the Gross National product is like trying to fly a
[Boeing] 747 with only one gauge on the instrument
panel ... Imagine if your doctor, when giving you a
checkup, did no more than check your blood
pressure." Hazel Henderson, 1995, Paradigms of
Progress, McGraw Hill.
CEPA indicators designed to measure behaviour change and awareness do not pick up the
finer dynamics of multiple variables which interact locally at multiple levels – and which
affect the desired progress of the programme in a variety of ways. There is a growing
realization that a complex systems approach can be a useful tool in facilitating and
strengthening the evaluation of educational and social change processes. For CEPA
managers to use this approach, a few key ideas should be explored.
What is a Systems Approach?
Wikipedia 10 describes a complex systems approach as “a new science that studies how
relationships between parts give rise to the collective behaviors of a system and how the
system interacts and forms relationships with its environment”.
The interconnectedness of multiple variables and the significant contribution that even the
smallest variable can make to a larger system, are among the major breakthroughs that a
complex systems approach has contributed to our understanding of educational and social
change processes.
This means that we no longer need to ‘write-off’ or ignore observations we cannot easily
explain or events that do not make sense within the linear logic of cause and effect in which
most of us have been trained. In a systems approach we can attempt to account for the
previously hidden, ‘missing’ and ‘invisible’ variables in order to paint an overall ‘big picture’,
even if the conclusions we draw will always remain ‘subject to further changes’. In a systems
approach there are no final conclusions. All decisions, including evaluation outcomes, are
contingent, conditional and provisional, and relevant to a particular phase of our enquiry.
Thus flexibility becomes the norm, along with multiple reasons to explain observations.
'In our analysis of complex systems ... we must avoid the trap of trying to
find master keys. Because of the mechanisms by which complex systems
structure themselves, single principles provide inadequate descriptions.
We should rather be sensitive to complex and self-organizing interactions
and appreciate the play of patterns that perpetually transforms the
system itself as well as the environment in which it operates.' (Paul Cilliers, 1998, Complexity and Postmodernism, Routledge)
Every new science develops its own ‘jargon’ and the complex systems approach is no
exception. Concepts such as emergent, critical junctures, flexibility, resilience, adaptive and
self-organising are among those commonly used. While it is not necessary to use these
terms, it is well worth understanding the concepts they refer to, in the context of LAB CEPA.
References and reading materials listed in this toolkit will assist those who want to explore
them further. Here we aim where possible to use simple language that will be self-explanatory to non-specialists.

10 http://en.wikipedia.org/wiki/Complex_system
What is meant by Complex?
Is complex the same as complicated? Not at all. Think of a piece of machinery with many
parts, say an aircraft engine. It looks and it is complicated, but it is not complex. A technician
can take the engine apart to see how each part is connected to another to make it work,
and put it back together again in exactly the same way. If something goes wrong, a fixed
procedure will help us to find the faulty part, replace it, and have the engine working again.
In short, there is replicability - the patterns can be reproduced. Predictability is a key factor
in a system – such as an aircraft engine - that is complicated but not complex.
For an example of a complex system, think of a family. There may be only two parents and
one child, but the dynamics between them, and their interactions with their environment,
are complex. There may be some replicability and predictability, but there are invariably
also unexpected variations and surprises. One morning the child wakes up and is not eager
to go to school. This may be due to a developmental phase, a flu virus, or something else
that is hard to determine. Replace one factor – say the father – with a different man, and a
whole new dynamic arises. A family of three may appear simpler than an aircraft engine, but
understanding it, and working with it, is a more complex process. And so it is with many
social as well as institutional structures and situations. They are complex and they cannot be
taken apart to study in the same way as a machine or other complicated system.
Another key idea about complex systems is the presence of multiple variables that interact
in multiple and sometimes unpredictable ways with each other in a particular environment,
while also influencing that environment in the process. Take for example a carrot seed. At
first glance it appears to be a simple system, in fact not even a system at all. We can say if
we plant carrot seeds they will, predictably, give carrots. Yes, the seed will give us a carrot.
But there is also a complex system at work here. Variations in the water quality (high pH,
low pH, brackish, sweet, hard, soft, fresh, chlorinated) are among the variables that will
affect germination. Then look at the soil. Soil texture, quality, microorganisms and organic
materials present or not present in the soil, are also variables that will impact the
germination of the carrot seed. Now let us look at temperature differentials: the seed will be affected by direct sun, heat and shade. The amount of rainfall is another factor – whether there is too much rain, whether there is adequate drainage, and so on. There is a whole climatic effect on the seed – cycling through autumn, winter, spring and summer – depending on when you plant the seed and whether you plant it inside a glasshouse, outside, in a pot or directly in the soil.
These, then, are some of the features of what is actually a complex system, and they will determine the nature of the carrot. In a laboratory the conditions can be fine-tuned and reproduced in a replicable manner. But out in the ‘real world’ all the above variables will impact on the carrot, producing sometimes nice, juicy, sweet-tasting carrots, and sometimes stringy, woody and crookedly shaped carrots. The combinations and permutations of these variables can be endless. In a complex system the interactions between the variables not only impact the carrot itself; they also impact the nature of the soil structure. Scientists now know that fine hairs on the roots of plants are able to change the micro-environment around them, to facilitate the uptake of water and nutrients from the soil. The
environment in which the carrot is growing can itself be affected by the system. And so it is
with CEPA programmes!
A CEPA Programme to Illustrate a Complex System
Cape Town’s first evaluation case study (see Folder 3: Case Studies on the CD) is an example
of a systems approach to CEPA programme evaluation. But to illustrate how CEPA contexts
can be seen as complex systems, refer to Cape Town’s Case Study 2: The Green Audit and
Retrofit programme for schools , in the same folder.
This CEPA programme introduced a process of auditing biodiversity, water and energy
consumption and waste production, at a range of schools in the city. What are some of the
variables that affected this introduction, and the outcomes of the programme?
Experience has shown that the enthusiasm of the teacher is an important factor in the
adoption of these initiatives, even when the focus is on the students, and the city provides
external facilitators to introduce programmes to them. Where teachers changed midway
through the life of the Green Audit programme, the course of the project in these schools
was significantly affected.
Some students mentioned that their interactions with students from other schools were
significant learning experiences – even though this was not necessarily planned as a key
feature of the programme.
History proved to play a role, too. Historically, water and electricity meters or gauges were placed – or came to be obstructed – in inaccessible parts of grounds and buildings, because in the past, children and staff were not expected to monitor the school’s water and energy consumption. This factor actually reduced the success of the programme in some schools, as
it was just too difficult for the students to obtain the readings – a fundamental part of the
auditing. Given the difficulty of gaining access to meters and records, the janitors and
finance managers became unexpectedly important variables in the programme, and one
finance officer pointed out that if she had been informed about the programme at the start,
she could have provided the figures, but since she had not been informed, she couldn’t!
Could the Green Audit team have foreseen the role of this particular variable? Systems are
full of surprises!
It also proved significant that schools are systems with their own rhythm and pattern over
the course of the academic year, which differed somewhat from school to school, even
though all schools were linked to the provincial and national education systems. The CEPA
team needed to understand this rhythm, to know when best to introduce their programme,
how much time schools would need to complete the programme, and when to schedule an
evaluation that could get the views of the teachers as well as students in different grades,
some of whom had a shorter school year than others.
To add to the complexity, variables seemed to have differing impacts in different schools.
The nature of the schools’ management is an example of such a variable. Some
administrators manage their schools with a firm hand, with strict systems and high expectations of their staff and students. The students in these schools are generally
better prepared academically, and able to participate well in CEPA projects, but time is
often a great constraint for them, given that they have much else to do. In other schools
management is rather less attentive with fewer rules and regulations, and fewer activities in
the school calendar. Here CEPA teams have more space and freedom to introduce their
initiative … but because academic standards are generally lower, the students at these
schools struggle more with some environmental concepts. The interplay between school management, teaching and CEPA initiatives is an example of interactions between variables in what is undoubtedly a complex system.
More Features of Complex Systems
Complexity does not arise as a result of a chaotic free-play with infinite possibilities.
Complex systems have structure. It is structure which enables the system to behave in
complex ways. If there is too little structure, the system can behave more freely, but this
freedom leads to activities which are meaningless, random or chaotic. The mere capacity of
the system (i.e. the total amount of freedom available if the system was not restricted in
any way) is not a useful indication of its complexity. Complex behavior is only possible when
the behavior of the system is constrained. On the other hand, a fully constrained system has
no capacity for complex behavior either.
Complex systems do not operate under conditions of equilibrium, that is, they do not
necessarily strive to reach some balance. Complex systems are open systems, meaning that
the environment in which they operate influences them to the point that they expand
beyond their operational boundaries.
Complex systems also consist of many components. At least some functions display
behaviour that results from the interaction between these components and not from
characteristics inherent to the components themselves. This is sometimes called
emergence, or internal dynamic processes.
“We have emergence when a system as a whole exhibits novel properties that we can’t understand – and maybe can’t even predict – simply by reference to the properties of the system’s individual components. It’s as if, when we finish putting all the pieces of a mechanical clock together, it sprouts a couple of legs, looks at us, says ‘Hi, I’m out of here,’ and walks out of the room. We’d say ‘Wow, where did that come from?’” (Thomas Homer-Dixon)
Another example of emergence is the appearance of unintended results. We zone a city into
industrial, commercial, residential and recreational areas, and then we observe the
unintended result of a huge amount of energy being spent to move people between these
areas, because they live far from the areas where they want to work, shop and play.
How is a Complex Systems Approach useful in CEPA Evaluation?
Many CEPA managers believe that in order to develop, implement and evaluate a
programme effectively, we need to understand the programme’s environment or context.
As the example above of the Green Audit and Retrofit programme for schools illustrates, the
context in which CEPA programmes function is complex. When we evaluate a programme,
we tend to reduce the complexity of the system and its environment, in that we choose only
a few aspects on which to focus.
In the case of the Green Audit programme, the evaluation (briefly described in Folder 3:
Case Studies) focused on students’ and teachers’ experience of the programme, and
evidence of learning among the students who completed the programme. Broader contextual factors were not included; for example, the reasons why some schools dropped out of the programme were not explored. An evaluation does indeed have to focus; not everything can be included. What the systems approach does, however, is to make us more
aware of the contextual factors that we are leaving out, of how incomplete our selection
process is, and that our findings will therefore also be incomplete, as we may be ignoring
some crucial variables.
In the process, we become more mindful of the basis or assumptions on which we make our
selection, of the fact that there is more than one way to approach an evaluation, and that
some selection criteria may be more appropriate than others.
A systems approach can make evaluation design choices more conscious and refined.
In the complex systems approach, there is no search for a meta-framework which explains everything, or supersedes all previous ways of doing things. We realize that we choose rather than receive our frameworks for collecting and analyzing evaluation data, but also that this choice need not be arbitrary, or based on unexamined traditions. As a result, we realize that we need to review the status of our chosen evaluation framework (and the framework itself) from time to time.
Our efforts to find evidence of change in CEPA participants and situations, through
indicators, are approached as ‘snapshots’ through which we map out the territory as best we can, in the full knowledge that this is not the territory itself. The
process is understood as a matter of reducing the complexity of the programme so it can be
communicated and discussed.
A systems approach makes us aware of the bigger picture and what we may be missing.
The indicator-based picture, while a reduction of complexity, can be filled out through the use of metaphors and imagery (see e.g. the case example of the evaluation of the City of Cape Town’s education and training programme, which compared the programme with a tree), and the use of qualitative data, case studies and stories.
This fuller picture offers a more comprehensive understanding of the current status and effects of a programme, serving CEPA managers and stakeholders as a guide and a learning tool, rather than as final ‘proof’.
The realization that evaluation findings are only provisional does not relegate evaluation to
being a useless exercise. Rather, it means that we are more motivated to build ongoing and
longer term evaluation processes into CEPA programmes, so that we can continue to build a
fuller picture of them, by changing our scope and focus from time to time. Unintended outcomes, unexpected results or even negative scenarios are also more likely to find a valid
place in the big picture.
A new approach to indicators as guidelines in continuous improvement.
An understanding of complex systems brings with it a new understanding of the role and use of indicators in evaluation. It encourages us to see indicators as guides rather than end goals in themselves. As the systems theorist Paul Cilliers 11 put it: Our indicators serve only as feedback loops for us to reflect on the territory and the direction we’re heading. Just like a compass and map, they guide us through unknown places.
The choice and use of indicators is critical, if we consider how they can actually determine
changes in systems. Another systems thinking pioneer, Donella Meadows 12, explained that
indicators arise from values (we measure what we care about) but they also create values
(we care about what we measure). When indicators are poorly chosen, they can cause
problems, as the pursuit of indicators may then steer CEPA processes in the wrong direction.
Say the City of Cape Town’s indicator for success of the Green Audit programme was the number of schools that participated in it. If this became the driving force for the implementers, they would be tempted to change the programme so that it no longer required students to measure their schools’ water and energy consumption and take action to reduce it. They could simply produce and distribute a book on water and energy consumption, and
11 Cilliers, Paul, 1998, Complexity and Postmodernism, Routledge, London.
12 Meadows, Donella, 1998, Indicators and Information Systems for Sustainable Development, Report to the Balaton Group, Sustainability Institute, Vermont.
teach a once-off lesson at each school. In the process they could reach more schools, and
their indicator would look good. However, they would alter the quality of the learning
process, as educational guidelines (see Appendix 3) suggest that meaningful actions offer a better opportunity for deeper learning and capacity building than simply receiving messages.
In a systems approach indicators are not treated as end results, but rather as reflexive
points to guide programme implementers. In this instance, if the indicator showed that only a few schools participated successfully in the Green Audit programme, the reasons could be explored: Is the programme introduced at the right time of the year? Should it be of longer duration? Does it clash with what schools are already doing? The reason for such a clash might be a problem in the existing school system that needs to change – for example, the school community’s access to information about their actual resource consumption.
Thus the Green Audit programme may evolve over time, in response to reflection on indicators, to focus on access to water and energy consumption figures, and on systems changes which can give large numbers of residents this access, as the basis for learning and action. The indicator could then be: the number of residents (or school
communities) who have access to their consumption figures, and their resources and
capacity to utilize these figures for learning and action.
It should be clear from this example that it would be difficult if not impossible to develop generic indicators that can adequately account for all CEPA situations, given their nature as open complex systems. The attention therefore shifts to system-specific indicator development processes (see Step 6 in Folder 4), which keep responding to evaluation findings and are refined to better attune the CEPA programme to its goals and its context. The more in-depth an evaluation enquiry, the more nuanced these indicators will
become.
An iterative approach to programme development, implementation and evaluation
becomes necessary. Evaluation should be a way of working, the way in which all CEPA
initiatives are approached, as a matter of course. Individual initiatives need to be evaluated
and refined at regular intervals. Across all the initiatives in a programme, evaluation results
should be combined and compared, and their lessons used to refine programmes on an
ongoing basis.
A systems approach encourages and allows for continuous improvement.
Complex systems theory encourages the practice of micro-reflection. CEPA managers and evaluators can design-in embedded reflective processes to provide evaluation insights within a given timeframe. These insights can be applied to
the CEPA programme straight away, for example, re-assessing the conceptual design that
was used at the start of the project, checking whether it still holds or whether it needs
tweaking. The advantage of such an approach is that it provides an opportunity for self-correction in a self-organised ‘emergent’ process, before a programme strays too far off
course and resources are wasted. These in-built reflection processes can thus save
resources. They also create spaces for ‘safe-fail’, small-scale experimentation and innovation during the project cycle, without costing too much in terms of resources upfront.
In the adaptive view promoted by complex systems theory, the complexity of context means
that social and educational change is typically a journey across shifting ground during which
goals become redefined. In the realm of practice, processes of change are often conceived as linear at first, and subsequently reconceived as non-linear and adaptive as
events unfold. The evaluation design process outlined in Folder 4 is based on this idea.
A systems approach helps to identify effective intervention points.
Indicators are leverage points in a system (see Appendix 1). Their presence or absence, accuracy or inaccuracy, use or non-use, can change the behaviour of a system, for better or worse. In fact, changing indicators can be one of the most powerful and at the same time one of the easiest ways of making system changes. It only requires delivering new information to new places 13.
Systems theory teaches that short, simple feedback loops can affect behaviour change far more than longer, complicated feedback loops. A long, complicated feedback
loop is involved when a student has to make a special effort to take an electricity reading
from an inaccessible meter in the basement of the school building, and get a monthly
account from the finance officer. An example of a short, simple feedback loop is an
electricity usage panel on a cell phone with usage per appliance measured in physical
impact or monetary value. Systems theory suggests that the shorter feedback loop of the
latter is more likely to influence the system, change behavior and reduce electricity use.
13 Ibid.
SECTION 3: CASE STUDIES
Cape Town Case Study 1 – Evaluation of the Environmental
Education Programmes and Projects of the City’s Environmental
Resource Management Department, 2009
Goal and Description of the CEPA Initiative
When this evaluation was undertaken (2009), the City of Cape Town’s Environmental
Resource Management Department (ERMD) was initiating, hosting and supporting a
considerable number of environmental awareness, education and training campaigns,
projects and resource development initiatives with schools and other groups, as well as a
smaller but growing number of staff training initiatives. Most of these activities and
programmes were run in partnership with a large number of internal and external partners,
in an extensive network of agencies involved in environmental education and training across
Cape Town. The rationale and principles informing these City-driven activities, and the
broad vision towards which they needed to work, were spelled out in an Environmental Education and Training Strategy.
At the time the City had a small but growing team of environmental education and capacity
building officers and managers, as well as a communications team and manager, and an
overall coordinator. There was a growing amount of activity supported by growing budgets,
and the City decided to commission a comprehensive evaluation, to see what all this work
was achieving.
Goal of the Evaluation
The City of Cape Town was successful at mobilising funds for environmental education and
training programmes and resources. The significant investment of resources and staff time
was generally regarded as very positive, but the question also arose as to whether these
activities were beneficial, in relation to their cost. Was it worth spending this money and
time? What were these activities achieving? Did they support a local government’s service delivery role?
The goal was therefore to evaluate the impact, outcomes, efficiency and effectiveness of
the City’s environmental education and training programmes in relation to the City’s related
targets. Specifically, the evaluators were tasked to:
• Review and revise the current targets and outcomes
• Justify if possible the funds spent on environmental education and training
• Be formative (empowering) and summative
• Consider the context and its complexities.
Evaluation Methods Used
A team of two external evaluators was appointed. They undertook a participatory
evaluation, using multiple methods for data generation. These included a desk study, over
40 structured interviews and discussions with key participants, site visits and observations,
and a selection of case studies. Quantitative data, where available, was synthesized into
findings, but the main focus was on qualitative methods such as the case studies, which
were used for probing beneath the surface and highlighting key findings.
A first draft of the evaluation report was drawn up in the form of a ‘report-and-respond’
instrument. This included the findings, interspersed with questions for response. This hybrid
evaluation tool is a mix between a draft report and a questionnaire for generating further
data. The final report included the findings, the responses received and the inputs from a
concluding workshop with the department’s staff.
The evaluators believed that it would most likely be too simplistic and not very helpful to
ask whether or not the programme is ‘effective’. Instead they probed with programme
participants what works, when and where, and what doesn’t work, when and where, in
order to engage in deeper learning about the programme, in ways which would inform
future planning and activities. In this way the process contributed to both a summative and
a formative evaluation.
Indicators Used
The City’s environmental education and training strategy did not have clearly stated outcomes, other than an assumption that raising awareness would result in behaviour change. No other, more tangible, indicators for success were provided.
The evaluators decided to use a systems approach 1 to the evaluation. They looked for
evidence of programme impact where available, and reflected on the environmental
education and training programme’s relevance in relation to the broader context and
economy. They introduced systems-level indicators, such as whether activities were aligned
with policy, whether the programme had strong partnerships, and whether it was providing
enabling conditions for the seeds of awareness raising to take root and grow into strong
actions and resilience in the recipient communities.
Value of the Evaluation
This was a first comprehensive evaluation of the City of Cape Town’s environmental
education and training programme. It resulted in 16 recommendations, many of which were
taken up by the City. Based on its systems approach, the evaluation presented the City’s
1 See the Folder on the CD with Key Ideas Pages for an Overview of Complex Systems.
environmental education and training programme as multi-layered and complex. The
evaluators used an ecosystems metaphor to explore the complexity of the structural and
operational dynamics of the programme in its broader social and institutional context.
To communicate the complexity in a useful way, the evaluators used the metaphor of a tree,
to represent the environmental education and training programme. It thus also provided
the ERMD with a new lens with which to look at its educational programme. If the ERMD’s
education programme was a tree, it would need strong roots and trunk (policies, alignment,
partnerships) in order to be resilient and healthy. A tree does not need to grow very big and
fast in order to thrive and be resilient. The many projects, resources and other activities the
programme was sprouting were equated with branches and leaves. While these were a sign
of growth, there might also be a need at times to prune back activities that were crowding
each other out. The evaluation pointed out the need for stronger roots and trunk (clearer
policy, alignment of activities with policy, and greater connection between activities). The
evaluation further found that the ERMD’s education and training programmes linked
strongly with external partners and these links were in many cases proving effective –
similar to a tree providing habitats for many creatures.
The evaluators recommended that the City recognise the benefits of ‘what makes the strongest tree’ and use facilitative indicators that reflect the mutually beneficial
partnerships and the strong root and trunk system (a clear vision and effective policies).
Challenges
The key challenge faced by the evaluation team was the absence of indicators against which
to assess the effectiveness of the programme. The Environmental Education and Training
Strategy also did not have clearly stated outcomes. The evaluators pointed out that, in the
absence of indicators, it is impossible to assess the effectiveness of a programme. They
recommended a number of different types of indicators 2 that can be used in the
programme, in key areas: status, facilitative and outcomes indicators, with a focus on
facilitative indicators for earlier phases and outcomes indicators for later phases. An
example of a facilitative indicator would be the level of integration of environmental
education into the school curriculum; the associated longer-term goal might be how many
schools are involved in programmes that protect biodiversity.
2 See Tool 6 in the Evaluation Design Steps Folder on the CD.
Analysis & Key Learnings
This evaluation provided a new way of looking at the programme and at evaluation, and an
explanation, perhaps, of why many evaluations of CEPA projects were difficult to undertake
and of limited value. As a result, the City commissioned the production of a Toolkit which
would include a wider range of indicators for evaluating CEPA projects.
The evaluation showed how a metaphor can be useful to communicate a range of concepts
to users. Staff were able to see why too much growth, too fast, on weak foundations, was
not ideal; that the connections they had formed with other partners were beneficial, but that
they also needed stronger alignment with their policies, vision and intent, and greater
attention to the most effective leverage points for change. Following the evaluation the City
revised its Environmental Education & Training Strategy and produced two distinct
strategies, focusing on City staff and a range of public groups, respectively.
Edmonton Case Study – Master Naturalists Programme
The CEPA Initiative – Goal and Description
The CEPA programme evaluated in this case study is about building capacity for the
restoration, protection and monitoring of natural areas in the city of Edmonton. The
programme was created in 2009 to give community members an opportunity to get
involved ‘hands on’ in the stewardship of Edmonton’s natural areas and biodiversity. The
City of Edmonton’s inventory of protected natural areas was growing, along with the need
for active management. At the same time, resources available for management were
diminishing as the City was forced to cut spending. In response to the interest of Edmontonians in opportunities to be involved in stewardship and to learn more about local natural areas, and to the growing demand for the energy and skills of community members, the Edmonton Master Naturalist programme was created.
Specifically, the programme goals are to:
• Increase the City’s capacity for the management of natural areas by training interested Edmontonians and connecting them with a range of volunteer opportunities that support the City’s work in the protection, restoration and management of natural areas.
• Build a well-connected network of conservation partners including conservation and other community groups, landowners, the development and academic communities, and other orders of government, to foster the sharing of information and expand community capacity in support of local conservation and stewardship.
• Support a system of shared conservation education to increase citizens’ recognition and appreciation of the value of Edmonton’s natural areas systems and the ecological processes they support, and establish internal processes to capture and integrate the local ecological knowledge of community members.
The City of Edmonton has strived to attract programme cohorts that are diverse in their
cultural backgrounds, abilities and experience, with a goal of encouraging shared learning –
between instructors and participants, and amongst the participants themselves. The city is
striving to build a ‘community of practice’ around natural area stewardship in Edmonton –
that is, a group of people who engage in a process of collective learning through the
collaborative project of natural area stewardship 3. Their hope is that this community will be
created, inspired and supported by the Master Naturalist Programme for years to come. In
exchange for 35 hours of training, participants volunteer for 35 hours in activities that
support natural areas management, restoration, protection, and education.
3 See Approaches to CEPA and Change in the Folder Key Ideas Pages, on the CD.
In the first three years of the programme, 82 graduates have initiated or supported
volunteer stewardship projects at 36 project sites across the city. They have reported 817
hours of stewardship, naturalisation and invasive plant management, and 798 hours of
community engagement and education. The programme has between 15 and 20 instructors each year.
Goal of the Evaluation
The evaluation, completed after the first three years of the Master Naturalists Programme, aimed to identify, quantitatively and qualitatively, the successes of the programme over this period, as well as outstanding challenges and possible solutions.
Evaluation Methods Used
The first phase of the evaluation involved a review and analysis of volunteer hours logged by email or fax (where and how those hours were spent), a review of participant feedback about the programme (verbal and written feedback during course evaluation sessions), and a review of the programme financials.
A further, more in‐depth, evaluation of the programme was undertaken by the University of
Alberta. In this phase there were three rounds of data collection:
• Pre-programme email-based survey questionnaire, including questions such as current level of nature-environmental training, motivation for participation in the programme/hoped-for personal outcomes, importance of place/settings, and definition or perception of nature (n=18).
• Focus group in the middle (Day 5) of the formal in-class training sessions, which explored the meaning of stewardship held by participants, using drawing exercises and their dialogue about stewardship with each other (n=22).
• Post-training and volunteer service in-depth in-person interviews. These interviews occurred 10-20 months after initiation of training. They were 1-2 hours in length (n=10). Topics explored included: suggestions for improving the programme, identification of key structural constraints existing in the city that inhibit stewardship and suggestions for addressing these, engagement with other Master Naturalists in achieving duties/stewardship, changes in perceptions of nature and/or stewardship, changes in motivations for remaining in the programme vs. when they first enrolled, and the effect of formal training vs. learning from others informally (social learning).
Indicators Used
The indicators used in the first phase of the evaluation were quantitative, i.e. numbers of:
• programme graduates
• project sites
• volunteer hours given:
o to stewardship, naturalization and invasive plant management
o to community education and engagement
o to support the work of community organizations
• community stewardship groups created and/or supported through volunteer work
• new partnerships generated through the programme.
These indicators were intended to demonstrate what has been achieved through the
programme in support of the three high‐level goals.
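To make this concrete, the short sketch below illustrates one way in which logged volunteer records could be rolled up into indicator counts of this kind. It is a minimal illustration only, not the City of Edmonton’s actual reporting system; the record fields, site names and activity categories are assumed for the example.

```python
# Minimal, illustrative sketch only: field names, sites and categories are
# hypothetical, not the City of Edmonton's actual reporting format.
from collections import defaultdict

# Each record: (graduate id, project site, activity category, hours reported)
volunteer_log = [
    ("grad-001", "Site A ravine", "stewardship, naturalization and invasive plant management", 6.0),
    ("grad-002", "Site A ravine", "community education and engagement", 3.5),
    ("grad-001", "Site B wetland", "support to community organizations", 4.0),
]

graduates, sites = set(), set()
hours_by_category = defaultdict(float)

for graduate, site, category, hours in volunteer_log:
    graduates.add(graduate)
    sites.add(site)
    hours_by_category[category] += hours

print("Graduates reporting volunteer hours:", len(graduates))
print("Project sites:", len(sites))
for category, total in sorted(hours_by_category.items()):
    print(f"Hours - {category}: {total}")
```

Compiling the counts in one consistent way like this also makes gaps visible, for example graduates who have not reported any hours at all.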
The second phase of the evaluation, carried out by the University of Alberta, provided a
valuable qualitative assessment of the programme, using the following research questions:
• Who were the 2009 Master Naturalist participants? What characterised their education backgrounds, occupations, life stage, childhood and current interactions with nature, motivations for involvement, neighbourhood of residence, etc.?
• How did the 2009 cohort of Master Naturalists trained by the City understand and engage in stewardship?
• How was nature perceived and defined by the 2009 Master Naturalists, and what role did this play in their engagement in the programme?
• How successful was the programme in fostering citizen-based environmental stewardship? What outputs and outcomes were generated?
• What factors and processes affected the successful engagement of the 2009 Master Naturalists in immediate and ongoing stewardship of Edmonton’s natural areas?
Value of the Evaluation
The evaluation was valuable for:
• Helping to communicate the successes of the programme
• Identifying issues and putting in place measures to address them, thus improving the programme.
Analysis and Key Learnings
The first phase of the evaluation used only quantitative indicators. These were valuable in
pinpointing issues with reporting and with the range of volunteer opportunities that were
available. The process of compiling and analysing the volunteer hours helped staff to
understand a) where and how those hours had been spent, and b) that there was an issue
with both low completion of hours and under‐reporting of hours. They were able to make
adjustments to the volunteer opportunities available, to how they direct volunteers to those
opportunities, and to improve the reporting system.
But questions remained. There was a need to also look more deeply at the programme
contexts and dynamics, such as volunteers’ capacity, and the contexts in which they conduct
their activities. What more might they need, besides the information shared with them
during their training? What is involved in setting up and maintaining a vibrant and effective
community of practice? To explore these dimensions, other indicators and methods of
evaluation were required, and these were addressed in phase 2.
The City of Edmonton is mindful of the need to include a diversity of citizens in these
communities of practice. In order to understand what kind of audience they have attracted
to date, they needed to conduct a demographic analysis of programme participants,
including information on: age of applicants/participants, number of male vs. female applicants/participants, their cultural backgrounds, and what neighbourhoods they live in,
versus where they volunteer. This put the City in a position to consider potential changes in
the promotion of the programme, its structure or content, and the geographic distribution
and type of volunteer opportunities, in order to better achieve the stated goals and ensure
that the programme generates an engaged, appropriately‐distributed and sustainable
volunteer base in years to come. The University of Alberta evaluation was a valuable step in
starting to answer some of these questions, as applied to the 2009 cohort. The intention is
to expand this assessment to include other participants, as well as programme applicants.
In addition, in order to explore the impact of the programme on biodiversity, a further
phase of evaluation could include the ‘area of natural area stewarded by program
participants’, both per year and as an overall area. This could be calculated through a simple
mapping exercise.
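As a rough illustration of how such a mapping exercise might be tallied (the sites and areas below are hypothetical, not programme data), per-year totals can simply sum the sites stewarded in that year, while the overall figure should count each site only once:

```python
# Hypothetical mapped areas (hectares) for stewarded sites; illustrative only.
site_area_ha = {"site A": 2.4, "site B": 0.8, "site C": 5.1}

# Sites reported as actively stewarded by programme participants in each year.
sites_active_by_year = {
    2009: {"site A", "site B"},
    2010: {"site A", "site C"},
}

for year, sites in sorted(sites_active_by_year.items()):
    print(year, round(sum(site_area_ha[s] for s in sites), 1), "ha stewarded")

# The overall area counts each site once, even if it was stewarded in several years.
all_sites = set().union(*sites_active_by_year.values())
print("Overall:", round(sum(site_area_ha[s] for s in all_sites), 1), "ha stewarded")
```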
Thus far the evaluation has proven valuable in informing the continued roll-out of the programme, and in making adjustments to it. It was the first comprehensive evaluation of the
programme, and the process offered excellent learning. Having understood the true
accomplishments of the programme ‘on the ground’, and what the successes and challenges
are, the staff were able to better assess what work is needed to improve the programme.
Nagoya Case Study – Nagoya Open University of the Environment
Goal of the CEPA Initiative
Nagoya Open University of the Environment is one of the first project activities conceived as part of the Biodiversity Local Action projects, aimed at inspiring the citizens of Nagoya to contribute to local biodiversity actions. The aim was to involve the entire city of Nagoya and its neighbouring municipalities in the project, with a view to establishing Nagoya as an environmental capital where the entire city acts as a campus – an Open University of the Environment. A total of 173 courses take place annually, with 20,901 participants attending (Fiscal Year 2010 data).
Nagoya Open University of the Environment aims to develop human resources and interaction which will support the realisation of the vision of Environmental Capital Nagoya and a sustainable global society, by ‘growing up together’ through a variety of courses on the environment. In this context, ‘Nagoya’ includes any areas, individuals and companies that are involved in administration by the City of Nagoya and/or this Open University framework, not just the city’s geographical territory or the citizens of Nagoya.
The specific goals, from The 2050 Nagoya Strategy for Biodiversity document, are as follows:
• Networking and leveraging the support of citizens for environmental activities
• Developing human resource capacity around biodiversity actions
• Accelerating citizens’ direct involvement in biodiversity parks
• Greater public awareness of the value of sustaining Nagoya’s biodiversity
• Increased interaction with public and institutional bodies
• Training and capacity building around a variety of environmental topics such as energy problems and climate change, disaster prevention and safety in daily life, reducing carbon emissions, local food security, clothing and shelter, living in harmony with nature and biodiversity, waste management and recycling, international cooperation, environmental study and human resource development.
Courses take the form of lectures, field trips, workshops and more. In addition there are
networking events for course developers and planners, as well as a number of ‘social
experiments’ and networking and communication platforms, to complement the courses.
The Open University is governed by an executive committee chaired by the mayor of
Nagoya. Committee members include representatives of citizens, companies, non-profit
organisations (NPOs) and non-governmental organisations (NGOs), universities and local
governments. Since its launch in 2005, over 750 courses have been held with nearly 100,000
participants.
Goal of Evaluation
Evaluations take place annually, on a large scale. They focus on evaluating the quality and success of the courses offered by different presenters, and on gathering
information about the participants in these courses, and their reasons for doing them. The
aim is to improve the programme in order to realise its goals.
Method Used
Questionnaires are the main evaluation method used. Two kinds of questionnaires are used,
for (a) course participants and (b) course developers and planners. The questionnaires for
course participants gather information on their age and gender, their reasons for doing a
course, how they got to know about the course, their satisfaction with courses, and the
degree of recognition of related content. Questionnaires for course planners and developers
require them to self-assess the success of the course and to measure participants’
satisfaction with the course.
Indicators Used
The indicators used included the numbers of participants, the degree of recognition of
content and the degree of satisfaction with a course.
Challenges
One significant challenge experienced by evaluators is distributing the questionnaires and obtaining the relevant statistics from the course managers and presenters. There is
occasionally a lack of co-operation from the course coordinators and managers in assisting
with distributing and returning the questionnaires, and in ensuring that they are correctly
completed.
Value of the Evaluation
It is hoped that the evaluation will help to improve courses, to improve the City’s understanding of the profile of course participants, and thus to improve public relations and attract greater numbers of citizens to courses, with the ultimate aim of meeting the programme goals.
In such a large-scale and long-term programme, there is considerable merit in a simple,
questionnaire-based evaluation which can be semi-automated, gather large amounts of
comparable data, and then compare the data across courses, participant groups, and years.
There would also be merit in supplementing such survey data with more in-depth case
studies of particular courses, in order to understand their dynamics, including why the
questionnaire-based evaluations are not always satisfactory. Such lessons could then be
used to improve the survey methodology or processes.
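By way of illustration, the sketch below shows the kind of semi-automated comparison referred to above, computing mean satisfaction per course and year from questionnaire responses. The fields, course names and ratings are hypothetical, not actual Nagoya Open University of the Environment data.

```python
# Illustrative sketch only: hypothetical questionnaire records, not actual
# Nagoya Open University of the Environment data.
from collections import defaultdict

responses = [
    {"course": "Course A", "year": 2009, "satisfaction": 4},
    {"course": "Course A", "year": 2010, "satisfaction": 5},
    {"course": "Course B", "year": 2010, "satisfaction": 3},
    {"course": "Course B", "year": 2010, "satisfaction": 4},
]

ratings = defaultdict(list)
for r in responses:
    ratings[(r["course"], r["year"])].append(r["satisfaction"])

# Mean satisfaction per course and year, comparable across courses and years.
for (course, year), scores in sorted(ratings.items()):
    print(f"{course} ({year}): mean satisfaction {sum(scores) / len(scores):.1f} "
          f"from {len(scores)} responses")
```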
In-depth case studies of particular participant groups, courses or other interventions in the
Open University of the Environment could also be used to explore the relationship between
attendance of courses, learning on courses and satisfaction with courses on the one hand,
and the likelihood of course participants to take follow-up action, on the other hand.
São Paulo Case Study – Reintroduction of Howler Monkeys in São
Paulo City
Goal and Description of the Initiative
The Atlantic rain forest remnants in and around São Paulo are under threat from
urbanisation. This has negatively impacted the habitat of the bugio (Alouatta clamitans, the
howler monkey). Howler monkeys live in the forest canopy, where they eat leaves, flowers,
fruits and seeds. The species is endemic to the Atlantic rain forest and is endangered in the
State of São Paulo. It is considered an umbrella species that maintains the ecological
balance of the forest. It is also a flagship species, that is, a charismatic species which
facilitates the dissemination of conservation messages to the public.
Since 1992 there has been an increase in the number of howler monkeys injured by
electrocution, road accidents, dog attacks and other causes related to urbanisation. Many
injured animals from the Metropolitan Region of São Paulo are rescued, receive biological
and veterinary medical care, and are then released in the wild by the Veterinary and Wild
Fauna Management Technical Division of São Paulo City Hall. Laboratory tests are also
carried out to diagnose diseases, because in the urban environment howler monkeys live
very close to domestic animals and the human population.
In order to prepare the monkeys for release, the Alouatta clamitans Reintroduction
Experimental Programme was created in 1996. From 1996 to 2005, 21 howler monkeys
were released in six forested areas in the city of São Paulo. At that time it was not easy to
observe and follow these monkeys to know if they were alive, eating and reproducing.
In 2006 a programme was approved to improve the howler monkey reintroduction in São
Paulo city, with the aim of conserving both the species and its rain forest habitat. The
programme was jointly initiated by the Veterinary and Wild Fauna Management
Technical Division and the Municipal Herbarium of the Municipal Secretariat for the
Environment.
The programme also has CEPA components, namely information and educational activities
with the goal of using the charismatic image of the howler monkey to facilitate the
assimilation of knowledge about local conservation actions, to build awareness of the rehabilitation programme, and to sensitize residents to the importance of local biodiversity.
Particular target groups were rural and indigenous communities (including children,
teenagers and youth), landlords of the areas in which releases were taking place, and
teachers in municipal schools located near howler monkey habitats.
The CEPA methodology includes educational activities and publications for adults and
children, lectures, public presentations, visits and teacher training. A photo novel for
children, called Howler: Nature Thanks You, and a DVD film, The Howler Monkey
Reintroduction Project, were produced.
Through these activities, more than 330 people received information about the programme, as did 71 education professionals from 25 schools located around the reintroduction areas.
The teachers’ course had three objectives:
• Provide guidelines for educational and environmental activities with a sustainability focus in the city of São Paulo
• Articulate knowledge of the environment and the biodiversity of the municipality with the Municipal Secretary of the Environment: Curriculum Directions, across different knowledge areas of the elementary school curriculum
• Plan didactic activities taking as a reference the materials produced by the Division of Wildlife and the Curriculum Guidelines and Proposal of Learning Expectations.
The educators’ courses consisted of a general introduction to the project, fieldwork, and the planning of workshop projects to reach students, school teams, families and the broader community.
Goal of the Evaluation and Methods Used
A monitoring programme was created using radio tracking to follow the newly released
monkeys to see if they were adapting well to their natural habitat. From 2008 to 2009, 34
howler monkeys were released and monitored by radio tracking. They were divided into five groups, and in each group a female received the radio collar. Males cannot receive the collar because of the size of the hyoid bone in their necks.
The eating behaviour is observed (direct observation) and the plant fragments found in the faeces are analysed (indirect observation). The plant material is identified by the Municipal Herbarium.
The evaluation showed that 64% of the howler monkeys remained alive in the release area, 21% died, 7% went back to captivity and 7% disappeared.
How were the CEPA components evaluated? At the end of the educators’ course, the
teachers were invited to answer an evaluation sheet with five questions about the course:
(1) Did the course correspond to my needs for continuing education?
(2) Did it contribute to the construction of new knowledge?
(3) Does it have practical application in my professional practice?
(4) Does it favour implementation of the Curriculum Directions?
(5) Does it reorient the construction of my work plan?
The majority (85%) of teachers strongly agreed with the themes and content developed in
the course.
Participants also answered open questions and indicated the most important themes and
contents for their practice. The evaluation therefore had a formative role, in that it could
inform future versions of the course.
Participants committed themselves to applying the knowledge acquired and the teaching materials received in the course, by planning and executing the project to be developed in each school.
Thus the evaluation also gave CEPA staff and managers some indication of the success of the
courses.
The positive evaluation by the teachers led to the continuation of the course in 2012.
Indicators Used and Value of the Evaluation
The indicators used for the success of the adaptation of the re-introduced monkeys included
the composition of their diet, as an indication of whether they were eating properly.
An indication of whether the CEPA programme was successful was teachers’ commitment to
use their new knowledge and the materials they received, to educate others about the
programme and its objectives.
Analysis
Both indicators used are process or facilitative indicators 4, which are helpful in shaping the
further continuation of the project. Any indication here of problems would alert practitioners and assist them in making some adjustments for a greater likelihood of
success. For example, if teachers answered that the courses did not have practical
application value for them in their professional practice, the course presenters would have
to make changes to the programme, in order to improve the chances that teachers would actually use the course outcomes, thereby achieving the CEPA objectives and, eventually, broader biodiversity goals as well.
To assess the overall impact of the programme on São Paulo’s residents and its biodiversity,
more indicators would have to be introduced. But what indicators?
Whereas it is difficult to monitor the physical impact of our Local Action for Biodiversity – in
this case the re-introduction of primates in rain forest areas in close proximity to human
settlements – it may be even more difficult to evaluate the associated CEPA activities.
While whole manuals on biodiversity indicators exist, there is much less information on how
to determine whether our work with teachers, children and landlords is successful, or not. This toolkit has been designed to provide CEPA practitioners with guidelines to custom-design evaluations of CEPA activities such as the programmes undertaken in São Paulo.
4 See Tool 6 in Folder 4: Evaluation Design Steps.
[Photo caption] Monitoring howler monkeys re-introduced in the rainforest with radio tracker, direct observation and indirect observation of plant material in faeces.
[Photo caption] How do we monitor and evaluate CEPA activities such as this lesson with children? What tools do we have for this? What direct and indirect observations can we do here, where our subjects are human and our processes social and educational?
Cape Town Case Study 2 – Green Audit and Retrofit Project for
Schools
Goal and Description of the CEPA Initiative
As noted in the first case study, the City of Cape Town initiates a wide range of
environmental education and training programmes with schools. This particular programme
is one of a growing number of opportunities for high schools and learners in the final years
of schooling. The programme has a broad aim to raise awareness around sustainability and
climate change. Specifically, it introduces school students to a toolkit through which they
are meant to measure their consumption of water and electricity and production of waste
and biodiversity management at the school, after which they need to identify and
implement a retrofit plan to reduce the school’s consumption and waste, or to improve
biodiversity, thereby addressing climate change and sustainability. The programme is
implemented by small groups of students (usually associated with environmental clubs) a
teacher, and mentors who are appointed to support them. This was the first roll-out of the
newly produced Green Audits Toolkit for schools and it took place in six schools, over a two
year period. The overall programme implementation was conducted by a consulting
company that developed and produced the Green Audit Toolkit in partnership with the City
of Cape Town.
Goal of the Evaluation
The evaluation was conducted by a different group of consultants within the same
company. Its goal was to evaluate the pilot in order to consider and inform a possible
further, wider roll-out of the Green Audits Schools Programme. Specifically, it aimed to
determine:
• The value in raising awareness around sustainability and climate change in the participating schools
• The value in twinning well-resourced with under-resourced schools
• The measurable success of schools in reducing environmental impacts in their chosen audit focus area (be this Water, Waste, Energy or Biodiversity)
• The effectiveness of the methodology used by the programme team to implement the project.
Evaluation Methods Used
The 3-person evaluation team used the following methods:
• A questionnaire for students and teachers
• One-on-one interviews with teachers, school principals and other staff
• Focus group discussions with students at the participating schools
• A mini Green Audit assessment of the schools’ chosen focus areas (Water, Waste, Energy or Biodiversity)
• Interviews with service providers and suppliers of products (such as worm farms or shade cloth for gardens).
Indicators Used
Technical indicators: The audit indicators for reduction in electricity and water use are not
mentioned, but were presumably straightforward – except for the fact that data could not
readily be collected to assess them. The indicator for food gardens established at the
schools was the amount of income the school derived from them. Indicators for indigenous
gardens and for reduction in waste were not reported.
CEPA indicators: There is reference to the increase in learners’ awareness and knowledge,
and mention is made of qualitative indicators, such as students’ own assessments of what
they have learnt.
Challenges
The availability of data, the absence of indicators and planning appropriately for the
evaluation were all challenges in this evaluation. At the time the evaluation was conducted
(towards the end of the second year), Grade 12 students who participated in the project had
already finished school, and Grade 10-11 students were busy preparing for exams. Many
teachers were also unavailable to participate in the evaluation.
There were further gaps because data which the consultants assumed schools would collect as part of their audits were not in fact obtained. The evaluators also
did not conduct the mini audits they intended to conduct, to see what benefits the students’
projects brought the schools. Time and budget constraints were mentioned, although the
inaccessibility of the data also seemed to play a role. For example, financial managers had to be asked to prepare the information, and it seemed to be ‘too much trouble’ for them to do so at that stage in the programme’s lifespan.
The evaluators also note that it was “difficult” to assess changes in students’ awareness and
knowledge of sustainability and climate change, presumably because no baseline studies
had been conducted. They suggest that some test or other form of monitoring be done in
future for this purpose.
Value of the Evaluation and Analysis
The evaluation is strong on identifying the strengths of the project, related to its potential
for inspiring people and promoting environmental actions.
It also confirmed findings from evaluations previously conducted on the City of Cape Town’s
projects with high schools, for example: that well-organised schools in particular prefer projects to be introduced well in advance, at least a year ahead; that projects should ideally run over longer periods; that schools need considerable outside support with environmental content and inspiration; and that students need a mix of fun, facts and meaningful ‘make-a-difference’ projects.
The evaluation identified challenges with implementing an ambitious programme of this
nature, but did not provide strong recommendations on how they should be addressed. It
did not build on similar evaluations that had been conducted before. Had it done so, it could
have been used in a systematic evaluation of high school interventions or school projects in
general. Such a systematic review of previous evaluation findings would give developers of
new programmes and evaluation teams a better understanding, in advance, of schools’
requirements, and how to plan for an evaluation in school contexts.
More might have been learned if it had been possible to include in the evaluation the three schools (one third of those that started) that dropped out of the programme. Learning about ‘failures’, or why certain participants were not willing or able to benefit from the CEPA initiative offered, could provide valuable insight to inform future versions of the programme.
The challenge of obtaining information on electricity and water consumption and waste, as
well as biodiversity gardens, was surprising, given that the CEPA project was focused on a
green audit. Questions about auditing could therefore have been useful in the evaluation. If
one asked ‘double loop’ ‘why’ questions, for example, one would start considering why
electricity and water meters are not readily accessible. Why have schools or society not
previously considered it important to measure their consumption? The fact that we are now
starting to consider it important is an indication of a slowly emerging shift – for which we
need to cater with practical arrangements, such as more accessible water and electricity
meters, and also good auditing tools. If the current ‘Green Audit toolkit’ did not prepare
schools adequately to obtain the necessary data, why not? What assumptions should be
revisited, before the next stage of the programme is designed?
Another useful reflection may be on why the City of Cape Town chose to focus primarily on
the students to reduce consumption, given how difficult it is for them to access the
necessary data. While there were significant benefits from this, another evaluation question
could be what the benefits might have been if other parties at the schools (such as the financial managers and estate managers) had been more centrally involved in the project.
Key Learnings
The analysis of this evaluation suggests that asking deeper ‘double loop’ questions 5 could
open up greater understanding and perhaps inform the design of future phases of a
programme.
This study also demonstrates the need for careful planning and timing of the evaluation. The evaluators would have had an easier task had the indicators and the data that would be needed been identified at the time of programme design, and collected in an integrated process throughout the life of the programme. This would, for example,
have alerted the programme managers and evaluators to the prohibitive difficulties of
collecting auditing data.
One is reminded how important it is for CEPA practitioners to understand the context 6 for
which they design programmes, projects or resources – and, it would seem, evaluations. This is not always possible, but then particular care needs to be taken to get quite an in-depth understanding of the context in which the project and evaluation would play out,
preferably before and certainly during the evaluation.
5 See Step 3 in FOLDER 4 EVALUATION DESIGN STEPS, on the CD.
6 See Step 4 in FOLDER 4 EVALUATION DESIGN STEPS, on the CD.
SECTION 4: THE EVALUATION PROCESS
This section of the toolkit is a guide through the steps of designing an evaluation. It contains
practical design tools and examples of evaluation questions and indicators.
At times it refers to the context in the Case Studies, to points in the Key Ideas Pages, and to
various reference documents on the CD.
OVERVIEW OF THE EVALUATION DESIGN STEPS
We have found the following steps useful for designing evaluations for CEPA programmes.
We have followed these steps in different orders, depending on the stage of the CEPA
programme or, if there is an existing evaluation, how far it has proceeded. The steps can be
iterative, and we find it useful to move back and forth between them, particularly steps 3-5.
1. Choosing the evaluation approach
• The classic approach is to design a CEPA programme, implement it, then evaluate. In
developmental evaluations, CEPA programmes are implemented in a continuous
spiral of learning with evaluation integrated throughout.
2. Plotting the CEPA programme logic
• It is useful to start with an outline of the programme logic of change: What impact
do we want? What outcomes will lead to this impact? What outputs and activities
can help us achieve these outcomes? What resources do we need for these
activities? By deciding on these elements and lining them up in a linear fashion, we
create a logical framework (log frame) of how we think the programme will work.
3. Identifying assumptions
• Here we recognise that the connections between the elements of the logical
framework are assumptions. We similarly have many other assumptions about our
interventions and the context in which they play out. By recognising this, we can ask
double-loop evaluation questions, and learn more about why our programmes work
well, or not.
4. Unpacking the context
• Each CEPA programme is part of a particular ecological context and an institutional
and political system. It can be affected by economic contexts at various levels - local,
national and global. Cultural and educational factors can influence how it is
approached, and received. Evaluation questions about the role of the context
therefore provide insights for improving CEPA programmes.
5. Mapping causal links in the system
• The linear change model is useful but has limitations. A systems map reminds us of
the connections between multiple variables that influence CEPA programmes and
their outcomes. In this step we draw a simple systems map with causal loops and
consider associated evaluation questions.
6. Adding indicators
• Here we suggest a process for developing indicators, to answer the range of the
evaluation questions developed in previous steps. The Toolkit provides examples of
different types of indicators that could be suitable for adaptation in a particular CEPA
programme.
7. Choosing data collection methods
• Social processes like CEPA programmes require social science tools to gather data
and develop case studies. Methods like observations, interviews, focus group
discussions and questionnaires are compared for their strengths and limitations.
8. Populating an evaluation planning table
• This step involves inserting the results of all previous steps into a simple table that
becomes the basis for resourcing and managing the evaluation process.
9. Doing, using and communicating the evaluation
• Embark on the evaluation, and learn more about the CEPA programme. At the same
time, learn more about evaluation processes, and take the learning into the next
evaluation. This step involves special attention to how indicators and other
evaluation findings are communicated to intended users.
STEP 1: CHOOSING AN APPROACH TO THE EVALUATION
We find it important to decide early on what broad role evaluation will play in the CEPA
programme. Will it be formative, summative, or developmental? 1
Typically, evaluation plays a summative role in a process that is linear and limited to the
three discrete steps in Figure 1.
Design the CEPA programme → Implement the CEPA programme → Evaluate the CEPA programme
Figure 1: Linear Model of Programme Evaluation
A second phase of the programme could continue after the evaluation, in which case the
evaluation could be said to be formative, if it shapes the second phase. If summative and
formative evaluations are approached in this linear fashion, CEPA practitioners often find
the provisioning for evaluation inadequate; for example, we might not have collected the
necessary data for the evaluation from the start of the programme, or we might not have
adequate opportunities to stop and reflect as the programme rolls out. This is a common
situation, and the Cape Town Green Schools Audit Programme (in the Case Studies Folder on
the CD) provides one example of where this may have been the case.
1 Refer to the Key Ideas Page: Approaches to Evaluation. There we explain these broad roles that evaluation can play in a CEPA programme.
Another approach to evaluation is developmental. In this case, evaluation is built into all
phases of a programme and is planned for at the same time as the CEPA programme is being
designed. Evaluation data is collected throughout and there are regular programme pauses
to review and reflect. This approach could be reflected as follows:
Programme planning → Programme implementation (with ongoing informal evaluation) → Formal evaluation stage → Programme refinement or change → Continued implementation (with ongoing informal evaluation) → Formal evaluation stage → …
Figure 2: Developmental Evaluation
We find that complex systems 2 such as those in which we conduct CEPA
programmes are not easy to map out and influence in a predetermined way.
Developmental evaluations are therefore useful because they allow for practice-based learning, as we evaluate both our models of change and our programmes
regularly in a process of continual action and reflection. They provide us with short
feedback loops of information, which allow us to adapt and evolve our programmes
and respond intelligently to the complexity of the situations in which our CEPA
programmes play out.
However, developmental evaluations do require a different way of thinking about
how we work, and adequate planning. All CEPA programme participants as well as
managers must understand the role of evaluation in order to regularly contribute
evaluation data; time and resources must be set aside for internal staff and, from
time to time, for external evaluators; and CEPA staff must build evaluation into their
daily routines. It is therefore not always possible to take a developmental approach.
The chosen approach to evaluation will depend on:
• the phase of the CEPA programme
• the available resources
• the interests of the various stakeholders in the evaluation,
• the models of CEPA processes and change and what they should achieve, and
• the research paradigm informing the process.
2 Refer to Key Ideas Pages – Understanding Complex Systems.
In addition to choosing a formative, summative or developmental approach to the
evaluation, evaluation teams should decide on their research approach. Will the
evaluation follow an experimental design, an interpretivist case study approach, a
participatory action research design, or a combination? These (research
methodology) choices have a great influence on how we go about the evaluation,
and the kinds of questions that we ask. For example, a pre-test post-test
experimental design requires one to set up baseline pre-tests and control groups
beforehand. For a participatory action research-based evaluation, one needs to
involve a broader than usual range of evaluation participants right at the start, when
the research evaluation questions are formulated.
Pre-test: Assess new MNP volunteers' ability to manage a stewardship site (knowledge & skill levels) → Intervention: Volunteers exposed to 35 hours of MNP-related training by specialists → Post-test: Assess MNP volunteers' new ability to manage a stewardship site (knowledge & skill levels)
Figure 3: Illustration of a potential pre-/post-intervention evaluation component in the Edmonton Master Naturalists (MNP) Programme 3
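To make the pre-test/post-test logic concrete, here is a minimal sketch of how an evaluation team might summarise such scores. It is an editor-style illustration only: the scores, the group size and the 60-point competence threshold are invented, not drawn from the Edmonton case study.

```python
from statistics import mean

# Hypothetical knowledge/skill test scores (0-100) for the same ten
# MNP volunteers before and after the 35 hours of training.
pre_scores  = [42, 55, 38, 61, 47, 50, 33, 58, 44, 52]
post_scores = [68, 80, 59, 77, 70, 66, 52, 83, 65, 74]

gains = [post - pre for pre, post in zip(pre_scores, post_scores)]

print(f"Mean pre-test score:  {mean(pre_scores):.1f}")
print(f"Mean post-test score: {mean(post_scores):.1f}")
print(f"Mean gain:            {mean(gains):.1f}")

# A simple indicator: the share of volunteers who moved above an
# assumed competence threshold of 60 after the intervention.
threshold = 60
newly_competent = sum(1 for pre, post in zip(pre_scores, post_scores)
                      if pre < threshold <= post)
print(f"Volunteers newly above the threshold: {newly_competent} of {len(pre_scores)}")
```

A real experimental design would of course also need a comparable control group and an appropriate test of statistical significance before attributing any gain to the training.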
Table 1: Choosing the Evaluation Approach
What role should evaluation play in your CEPA programme? For each option below, note: “The benefits of this would be …” and “The practical design implications are …”
• A summative role
• A formative role
• A developmental role
What research design is most appropriate for this evaluation?
• Experimental Design
• Case Study Based
• Participatory
• Other
• Combination
3 Refer to the Case Study Folder on the CD, for a description of the Master Naturalists Programme.
Complete this table, then map the evaluation process as you see it unfolding, perhaps using
one of the diagrams in this section (e.g., a linear evaluation process, a pre-test post-test
design, or a developmental evaluation process). Also return to this table later, however, as
you may want to change your approach once you have worked through the next steps.
STEP 2: PLOT THE LOGICAL FRAMEWORK FOR THE CEPA PROGRAMME
In this step we outline how one can plot the theory or logic of how a CEPA programme is
meant to work in the form of a logical framework (usually abbreviated to ‘log-frame’). It is
very useful to do this at the start of planning a new CEPA programme with built- in
evaluation. We have also used it as starting point for designing an evaluation at any stage of
a CEPA programme that is already underway. Once the logical framework has been plotted,
we can identify single-loop evaluation questions related to the various elements of the log-frame. The process of drawing up the log-frame is very valuable in itself if done with the
CEPA programme staff.
How do we know and demonstrate that a CEPA programme contributed to the change we
intended? A well-crafted programme logic offers a basis for evaluating progress against
intended outcomes and impacts. One of the reasons for this is that CEPA programmes
operate in complex environments where the scientific certainty of proof is seldom
attainable. Unlike in a laboratory, influences and forces in real-world contexts and
communities are mostly beyond CEPA practitioners’ control. Therefore, evaluation is
generally more about documenting a programme’s contribution to change, than about
proving causal links or attribution.
This is where the programme’s logical framework is helpful. Using the basic ‘inventory’
template for a log-frame, and working backwards (from impact to resources moving left
across the columns), identify and list the key components of the CEPA programme’s logic,
as a basis not only for planning and implementing the programme, but also for
designing evaluation questions.
The best time to build evaluation into a CEPA programme plan is in the initial programme
planning stages. One of the many advantages is that one then knows what sort of data to
collect and one can plan for it accordingly.
However, if a programme is already up and running when the need for an evaluation plan is
identified, the logical framework for the programme can be plotted at any stage.
Programme stakeholders often want to modify their log-frame after the results of an
evaluation phase become evident.
There are various versions of a log-frame template. One example is used here (Table 2), and
an alternative is included on the CD (Appendix 6). Most tabular templates use rows to order
and show the relationships among components. Some number the lists within a column to
aid discussion. Others have a box and arrow format to illustrate ‘causal linkages’, i.e.
demonstrating how resources, activities, outputs, outcomes, and impact connect to form
chains.
The first important task is to get the component parts categorised and described in a simple
inventory (such as Table 2). Then, once the basic inventory table has been filled in,
experiment with identifying the relationships among the items across columns. For
example:
Activities & Resources → Outputs → Outcomes → Impacts
We find it very useful to complete these tasks in a group of stakeholders involved in the
CEPA programme. The process often results in enlightening discussions if stakeholders or
team members have differing understandings of the programme elements and what they are
meant to achieve.
Fill in a Basic Programme Logic Template
Fill in Table 2, or another logical framework format of your choice. We work backwards,
starting by identifying the intended results (impacts, outcomes and outputs) before listing
activities. For ideas, see the notes following the table.
Table 2: A Basic Logical Framework Development Template 4

Resources – In order to accomplish our CEPA activities we have and/or will need the following: … (Materials and other resources required for the CEPA activities.)
Activities – In order to achieve this we will conduct the following CEPA activities: … (What is being done or will be done to create the desired change.)
Outputs – We expect that once completed or under way the CEPA activities will produce the following evidence of learning and action for local biodiversity: … (The most immediate intended results of the CEPA programme. Each relates directly to an activity.)
Short- & Long-term Outcomes – We expect that if completed or on-going, this programme will lead to the following changes in 1-3 then 4-6 years: … (Actual benefits or changes.)
Impact – We expect that if completed this programme of CEPA activities will lead to the following changes in 7-10 years: … (The longer-term change that stakeholders hope the CEPA programme will help to bring about.)

Evaluation Questions … Table 3
Indicators … Table 5

4 Adapted from W.K. Kellogg Foundation, Logic Model Development Guide, January 2004, www.wkkf.org.
Below are some commonly used guidelines for completing the logical framework:
• Impact refers to the results expected 7-10 years after a CEPA programme is under way – the future environmental change we hope our CEPA programme will bring about. Impacts are the kinds of organisational, community, or system level changes expected to result from programme activities; they might include improved biodiversity conditions, increased human well-being, ecological resilience or social capacity.
• Long-term outcomes are results one would expect to achieve in 4-6 years. Like short-term outcomes (see below), long-term outcomes are also specific changes in attitudes, behaviours, knowledge, skills, biodiversity status or level of functioning, expected to result from programme activities. The difference is that they usually build on the progress expected by the short-term outcomes.
• Short-term outcomes are results one would expect to achieve 1-3 years after a CEPA programme is under way. Short-term outcomes are specific changes in attitudes, behaviours, knowledge, skills, biodiversity status, or level of functioning expected to result from programme activities.
• Outputs are the direct results of programme activities. They are usually described in terms of size and scope of the products and services delivered or produced by the CEPA programme. They indicate whether or not a programme was delivered to the intended audiences at the intended ‘dose’, scope or intensity. A programme output, for example, might include the number of classes taught, meetings held, materials distributed, or programme participation rates.
• Activities and Resources - The planning meetings, brochures, booklets, training workshops, and so on, that the CEPA programme needs, in order to achieve the intended results. To connect actions to results, this exercise links one’s knowledge of what works, with specific descriptions of what the programme will do. In the planning stages, CEPA staff can consult CEPA specialists or refer to published guidelines for CEPA 5, for expert-derived suggestions for CEPA activities. When listing the resources that are needed to support what the CEPA programme proposed, it may also be helpful to describe the influential factors in the context that CEPA staff would be counting on to support their efforts.
Create Evaluation Questions
We find that once we have created a logic model of the CEPA programme, it is not that
difficult to develop evaluation questions. A logic model illustrates the purpose and content
of the programme and therefore suggests meaningful evaluation questions. Table 3 gives
some examples. As you work through it you may realise that a myriad of questions can be
generated. Deciding which questions to ask is a very important component of evaluation
design, and is ideally an iterative process of consultation with stakeholders.
In the evaluation design framework outlined in this toolkit, there are two broad sets of questions that can be derived from a programme log-frame:
• Evaluation questions about the single elements in the log-frame (e.g., has an activity been completed, what was the quality of the output?). These types of questions are illustrated in Table 3.
• Questions about relationships between the elements of a programme’s logic model (e.g., to what extent does a particular output result in a desired outcome?). Such ‘double loop’ questions serve to question the assumptions within the logic model itself, which means that one’s evaluation (and programme) does not become entirely constrained by the logic model with which one started. This is discussed in Step 3. (A minimal sketch of deriving both kinds of questions from a log-frame follows below.)
5 Guidelines for Environmental Education and Guidelines for Biodiversity Communication are included on the CD (Appendices 3 and 4 respectively).
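Purely as an illustration (not part of the Toolkit's CD materials), the sketch below shows how a log-frame could be captured in a few lines of Python and used to generate first-draft single-loop and double-loop question stubs; the element names echo the worked examples in Table 3 and are otherwise arbitrary.

```python
# Minimal sketch: a log-frame as an ordered list of (element type, items),
# with draft single-loop and double-loop evaluation questions derived from it.
log_frame = [
    ("Inputs",     ["staff", "money", "training materials"]),
    ("Activities", ["development of CEPA course", "x interactive training sessions"]),
    ("Outputs",    ["targeted participants attended", "targeted content covered"]),
    ("Outcomes",   ["participants increased knowledge of biodiversity stewardship"]),
    ("Impact",     ["biodiversity loss reduced; ecosystem services increased"]),
]

# Single-loop questions: about each element on its own.
for element, items in log_frame:
    for item in items:
        print(f"Single-loop ({element}): Was '{item}' achieved, and to what quality?")

# Double-loop questions: about the assumed link between adjacent elements.
for (elem_a, _), (elem_b, _) in zip(log_frame, log_frame[1:]):
    print(f"Double-loop ({elem_a} -> {elem_b}): "
          f"Were these the appropriate {elem_a.lower()} to achieve the intended {elem_b.lower()}? "
          f"Which assumptions link them, and are they correct?")
```

The generated stubs are only starting points; the team would still select, refine and prioritise them with stakeholders, as described above.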
Add evaluation questions related to the elements in the programme logical framework that you have drawn up earlier. See Table 3 for an illustration.
Table 3: Creating Evaluation Questions Using Logical Framework Components 6

INPUTS – Staff; money; training materials.
Key evaluation questions: Was the provisioning of funding and staff sufficient and timely? Were the training materials of suitable quality and content?

OUTPUTS (Activities) – Development of CEPA course; provide x interactive training sessions.
Key evaluation questions: Was the required CEPA course developed? Were all x sessions delivered?

OUTPUTS (Process) – Targeted participants attended; targeted content covered to a standard.
Key evaluation questions: Did all intended participants attend? All sessions? Why, or why not? Do the CEPA programmes communicate the issues comprehensively and effectively? Were participants satisfied with the course delivery?

OUTCOMES (Short-term) – Participants increased knowledge of biodiversity stewardship.
Key evaluation questions: To what extent did knowledge increase? What are participants able to understand and do as a result of an input/activity? How many participants signed up for volunteer stewardship/conservation action?

OUTCOMES (Long-term) – Participants undertake stewardship activities; participants join or form communities of practice.
Key evaluation questions: After 12 months, how many participants are still doing stewardship? How many groups have been formed? What is the scope and quality of their stewardship activities? How many hectares covered?

IMPACT – Biodiversity loss reduced; ecosystem services increased; biodiversity is effectively co-managed by City and citizens.
Key evaluation questions: What is the status of biodiversity and ecosystem services in the city compared to before the programme started? Have goals been reached? What unintended impacts have there been?

Indicators (Table 5)

6 W.K. Kellogg Foundation, Logic Model Development Guide, January 2004, www.wkkf.org
The Benefits of a Logical Framework in Designing an Evaluation
• Can provide the framework for an evaluation plan, as it helps us to select and communicate evaluation questions and associated indicators.
• Provides a basis for discussion about the programme and the evaluation, and what we want to achieve, among stakeholders, CEPA practitioners, managers and experts.
• Helps determine and explain the relationship between an indicator and its purpose, in assessing the suitability of potential indicators to answer the key question(s) and their validity, and how effectively they represent the intended change.
• Increases the evaluation’s effectiveness by focusing on questions that have real value for stakeholders.
• Helps to clarify the subject being addressed for all involved and aids in the selection and communication of appropriate indicators.
• Can guide on how to structure the explanation of an issue and the meaning of the indicators; it can be included in a report, where it may help to develop the narrative.
Finally, to the extent that the logical framework communicates the CEPA programme’s logic
of change, or the programme theory, it opens up the programme logic to questioning and
revision, in those instances where the logic may be faulty and therefore hampering progress
in achieving the intended outcomes and impacts.
STEP 3: PROBE THE ASSUMPTIONS UNDERPINNING THE PROGRAMME
It should be clear from Step 2 that plotting a logical framework for a CEPA programme is a
very useful process. Among other things it reflects our assumptions about how change is
likely to happen in a programme 7. Log-frames are commonly used in development planning
and may also shape our understanding of how change happens.
But what if our assumptions about what we need to do and what outcomes we will achieve,
are wrong? Should we not also probe these very assumptions?
Like a pane of glass framing and subtly distorting our vision, cognitive or mental models
influence what we see. These maps consist of personal and collective beliefs based on
conclusions we have drawn from what we observe, our past experience, and our
education. We need these mental ‘maps’ to help us navigate through the complex
environments of our world. However, all of our mental maps are flawed in some way, to a
greater or lesser extent. This is only a problem if our self-generating beliefs remain untested.
Using the Ladder of Inference 8
The ladder of inference (Figure 4) can help us to gain greater clarity on a CEPA programme
we aim to evaluate, by:
• Becoming aware of our own thinking about CEPA process and change through reflection
• Making our thinking and reasoning more visible to others
• Learning more about others’ thinking, through reasoning.
7 See Approaches to CEPA and Change, in the Key Ideas Pages Folder on the CD.
8 Senge, Peter, 1990. The Fifth Discipline: The Art and Practice of the Learning Organisation. Doubleday.
Figure 4: The Ladder of Inference 9
To explore the possibilities of this, start at the bottom of the ladder, in the empirical world
of reality and facts. From there (moving up the ladder), consider that we:
• Experience reality and facts selectively, based on our beliefs and prior experience.
• Interpret what this reality and these facts mean.
• Apply our existing assumptions, often without questioning or even noticing them.
• Draw conclusions based on the interpreted facts and our assumptions.
• Develop beliefs based on these conclusions.
• Take actions that seem ‘right’ because they are based on what we believe.
Without examination, this process can create a vicious circle. Our beliefs have a big effect on
how we select from reality, and can lead us to ignore evidence, facts and possibilities. We
could be ‘jumping’ to conclusions – by missing facts and skipping steps in reasoning.
Use the Ladder of Inference to encourage all evaluation participants to start with the facts
and use their beliefs and experiences to positive effect, rather than allowing them to narrow
or cloud their field of judgment.
We find it useful to consider the Ladder of Inference once we have developed the model of
change for the CEPA programme we are evaluating, but also right throughout the
evaluation. It encourages us to ask probing questions such as:
• Is this the ‘right’ conclusion? Why did we draw that conclusion? Is it sound?
• Are there alternative conclusions that are better supported by the facts?
• Why do we think this is the right thing to do?
9 Ibid.
• What data have we chosen to use and why? Have we selected data rigorously?
• Are there any facts or best practice research that we have left out? How would including them change the conclusions?
• What are we assuming, and why? Are our assumptions valid?
Drawing a Picture of the CEPA Programme’s Theory of Change
Now that we have considered the nature of our beliefs and assumptions, and where they
come from, we are in a better position to draw another model or picture of our
understanding of why a particular CEPA programme should lead to the desired change.
Is it clear why the selected activities, outputs and outcomes will create the desired impact
among these participants? The answer to this question constitutes the CEPA programme’s
model of change, which supports and builds upon the logical framework developed in Step
2. Successful programmes create a desired change and are built on a solid understanding of
what works – Pawson and Tilley 10 call this understanding the programme theory.
Systematically work through the following programme and evaluation planning processes, in
order to describe the basic theory that underpins the CEPA programme you wish to
evaluate, and its change strategy (Figure 5 provides a possible template):
Figure 5: A Theory of Change Template – a. Problem or Issue; b. Needs/assets; c. Desired Results; d. Influencing Factors; e. Strategies; f. Assumptions
a. Define the problem the CEPA programme is attempting to address (e.g. which
biodiversity issue in this City, which educational issue, the target group(s) and why they
are important). Explain concisely the issue you will address. The model of change will be
built upon this statement, which should illustrate how the CEPA programme functions or
will function, and what it expects to achieve in the city. We try to refer
wherever possible to research about the problem or issue, e.g. a State of the
Environment report; consultative workshops with CEPA specialists can provide other
successful programme or “best practice” information.
b. Quantify the scope of the needs or assets that led to the selection of this particular problem. Documenting the needs and assets helps the evaluation plan later on. It can become a baseline providing indicators that measure progress made by the CEPA programme over time.
10 Pawson, Ray and Tilley, Nick, 1997. Realistic Evaluation. Sage.
c. Describe the desired results. These are the outputs, outcomes and impacts you have
listed in your logical framework.
d. Identify contextual factors that could influence the outcomes, either by helping or by
hindering (barriers). Are there perhaps policies that could affect your CEPA programme?
Look at previous evaluations of similar programmes, as they might identify some of
these barriers and enabling factors.
e. Why do you believe this programme will work? Look for a rationale in research into
effective CEPA programme strategies and evaluations of what worked, or didn’t work, in
other cities or situations like this. Connect what you plan to do, with why your approach
will succeed. Funders would like to see evidence that supports the proposed solutions.
Apply best practice guidelines that support plausible solution strategies for the
identified problem area (for example that active ‘hands-on’ involvement with the issue
will bring about the desired learning and behaviour change among residents and staff.)
f. Why will your approach be effective? After you make the case for selecting a specific
strategy from among the alternatives you researched, state why your CEPA programme
strategy is needed and why it will work in your city. It should for example be apparent
how the programme intends to function as an intervention in terms of biodiversity
benefits. List these assumptions last because in this format, you have the benefit of all
the information that supports your assumptions. They are then easier to spot and
articulate with all the facts in front of you.
Here is a fictional example of the first few of these processes, based on the city of Edmonton’s Master
Naturalist Programme (see the Case Study Folder on the CD):
Table 4: Towards a Theory of Change Underpinning Edmonton’s Master Naturalist Programme (editor’s own examples)

Describing the CEPA programme’s theory of change → Possible Responses (editor’s own examples)

Define the problem the CEPA programme is attempting to address:
Edmonton has many special natural areas that contribute to quality of life in the city, but skilled manpower to effectively manage and protect all these sites is limited; as a result natural areas are invaded by alien vegetation and wetlands are threatened by inappropriate development, which may cause a reduction in ecosystem services and quality of life.

Quantify the scope of the needs or assets that made the case for the selection of this particular problem:
X (number) natural areas comprising Y hectares are currently unmanaged, and the City of Edmonton has only Z site managers and no volunteer stewards at this time.

Desired results:
X natural areas are effectively protected and co-managed by City staff and knowledgeable volunteers.

Identify factors in the context that are likely to influence the outcomes, either by helping or by hindering (barriers):
Willingness of many Edmonton residents to participate in the programme, but the distribution of the volunteers may not match the distribution of sites that need co-management.

Apply best practice research that supports plausible solution strategies for the identified problem area:
Learning through doing and working collectively in communities of practice strengthens commitment and skills.
Complete a table like the above for the CEPA programme you wish to evaluate, then map
out a theory of change template such as the one in Figure 5.
That takes us to the next part of the evaluation design, which involves preparing evaluation
questions that test the assumptions underpinning the model of change.
Testing Assumptions
Assumptions are explored by adding probing ‘double loop’ questions to your logic model. By
being explicit about our assumptions that underpin our models of change, we allow
ourselves to also reflect back on or review these assumptions during evaluation. This adds a
basis for evaluation that can be particularly helpful in explaining why a particular
intervention or programme works, or fails to work. An important tool to help us identify the
assumptions behind our models is the ladder of inference.
Also see Appendix 5: Stories of Most Significant Change Methodology, on the CD. This valuable evaluation
methodology surfaces and works with participants’ assumptions about CEPA success.
When evaluating a CEPA programme, it is important to evaluate not only whether it is
producing the intended outputs and leading to the desired outcomes and impacts, but also,
if it is not, why not. Double Loop Learning Questions to add to the evaluation could include:
• Are all the underlying assumptions correct?
• In drawing up the model of change, did CEPA practitioners allow for discussion and debate of a range of theories?
• Does the model of change take into account that change is not necessarily a simple linear process?
• What unintended outcomes and impacts are evident, and what might their effects be?
Key evaluation questions such as these can be related back to the model of change and the
underlying assumptions, and can help CEPA practitioners to refine and, if necessary,
re-define their programme (adaptive management).
Figure 6 further illustrates questions for what is called double loop learning. Where single
loop questions are about inputs, activities, outputs, outcomes and impacts, double loop
questions are about the underlying assumptions – in this case, about the relationship
between these single elements. The developmental evaluation process involves asking these
kinds of questions on a regular basis with a number of feedback loops to facilitate
continuous learning and double-loop learning.
Using the logical framework and theory of change maps you created, and after revisiting the
ladder of inference, create ‘double loop learning’ questions to test the assumptions about
the relationships between the elements of the logical framework, and the assumptions
underpinning the theory of change of the CEPA programme you want to evaluate.
Figure 6: Examples of Double Loop Learning Questions about Assumptions
The figure repeats the logic model chain from Table 3 – INPUTS (staff; money; training materials) → OUTPUTS: Activities (development of CEPA course; provide x interactive training sessions) and Process (targeted participants attended; targeted content covered to a standard) → OUTCOMES: Short-term (participants increased knowledge of biodiversity stewardship) and Long-term (participants undertake stewardship activities; participants join or form communities of practice) → IMPACT (biodiversity loss reduced; ecosystem services increased; biodiversity is effectively co-managed by City and citizens) – with double loop questions attached to the links between the elements, for example:
• Were these the appropriate inputs to achieve these outputs? Were the assumptions about what was needed, correct?
• Was a course a sufficient intervention to prepare the volunteers for stewardship?
• What were the unintended outcomes? (E.g. more co-management sites mean that biodiversity managers now need people management skills.)
Figure 7: Overarching ‘double loop learning’ questions in different kinds of evaluation
• Formative Evaluation: Which aspects of the context most shaped our ability to do this work?
• Summative Evaluation: What did the CEPA programme accomplish?
• Developmental Evaluation: What have we learned about doing work in this context?
STEP 4: UNPACK THE CONTEXT
Exploring context is about understanding how the CEPA programme functions within the
economic, social, institutional and political environments in which it is set. We need to
consider whether a particular model of change is appropriate within the context of the
particular CEPA programme. What factors in the context might influence our ability to
implement the planned programme? Did the CEPA practitioners perhaps assume a very
different kind of context to the one that actually exists?
Such evaluation questions can help us explain some of the strengths and weaknesses of a
programme as well as the effect of unanticipated and external influences on it. This in turn
can help us explain why, or why not, a particular programme works.
Demonstration of assumptions about context
Cape Town’s Smart Living Campaign designed an environmental resource use audit that was
suitable for the home. It made assumptions about the ease of measuring energy and water
consumption, and waste production, in the context of the typical family home. Here it is
relatively easy, as residents typically receive monthly utility bills from the local council,
which indicate their water and electricity usage from the municipal supply. They can also
measure their electricity supply from a meter in the home; and they can measure the
volume of waste produced by direct observation of the waste bins they leave outside the
home for collection on a particular day of the week.
When the Green Audits Programme 11 applied the same assumptions to schools, however,
these assumptions did not hold that well in the new context. Students could not
readily measure the amount of energy and water used or waste produced at the school.
Schools consist of multiple buildings; utility bills are usually combined for different buildings;
are sometimes issued quarterly rather than monthly; and could only be accessed after prior
arrangement with management staff. Water and electricity meters are often in inaccessible
places or out of bounds for the students. Waste is produced in multiple sites (offices,
residences, kitchens, tuck shops) and disposed of in a variety of ways, on different days of
the week.
Failing to take these differences in context into account, and to plan adequately for them,
could spell trouble for a CEPA programme requiring consumption measurement in schools.
Figure 8 below illustrates that a developmental evaluation process (and the indicators for it)
would ask questions about the CEPA programme, but also about its context, and about the
mental model of or assumptions about the programme and its context.
Using the second half of Figure 8 as a possible template, list all the critical features of the
context of the CEPA programme you want to evaluate.
We find it useful to identify economic, political, cultural, organisational and bio-physical
factors, at multiple levels. For example, economic factors at national, regional,
organisational and international levels may all be significant features of a CEPA programme’s
context. For more guidelines on this step, see below.
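As an optional illustration of the ‘factors at multiple levels’ idea above (the factor types and levels are taken from the preceding paragraph; the matrix entries themselves are hypothetical placeholders to be filled in by the evaluation team), a context checklist could be sketched like this:

```python
# Minimal sketch: a context matrix (factor type x level) used to prompt
# context-related evaluation questions for a CEPA programme.
factor_types = ["economic", "political", "cultural", "organisational", "bio-physical"]
levels = ["local", "organisational", "national", "international"]

context_matrix = {
    (factor, level): None  # e.g. ("economic", "local"): "municipal budget cycle"
    for factor in factor_types
    for level in levels
}

for (factor, level), feature in context_matrix.items():
    label = feature or f"<{factor} factor at {level} level>"
    print(f"How might {label} help or hinder the CEPA programme, "
          f"and is this reflected in our change model?")
```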
11 See the Case Studies Folder on the CD.
Figure 8: Aspects of Context and the Role of Context in a CEPA Programme Evaluation – the evaluation and its indicators consider the CEPA programme, its context, and the model of (assumptions about) the programme and its context; the context ranges across individuals, systems & processes, organisations & institutions, and ecosystems, society & economy.
We find that how we define the CEPA programme’s context and what we choose to include
in an evaluation of context, depends to some extent on the scope, size and duration of the
programme, and to a large extent on its actual focus.
In a large scale, long term programme like Nagoya Open University of the Environment 12, for
example, ‘context’ would certainly include the broader context of the society of Nagoya, the
role of Japan’s economic, business and other social systems, which may influence citizens’
values and lifestyle decisions, as well as a variety of institutional role players, including
various tiers of government and the education system, from schools to universities.
Contextual factors such as the large scale natural disasters that have been affecting Japan
would be particularly significant, for example in determining what content is on offer, and
how citizens relate to this content. In other societies, the contextual factors would differ.
In a smaller scale initiative, such as the City of Cape Town Green Audits for Schools, the
national economy may not be that significant, but the local economy might be, if one were
to consider refurbishing schools to reduce resource consumption. The international context
of donor funding for ‘green energy’ technology could be considered an important factor in
this context, too. Local institutional contexts are also important, for example the different
kinds of management evident at different schools 13. The national school curriculum, which
determines what teachers and learners should emphasise at school, can also influence the
extent to which they prioritise biodiversity related CEPA activities.
Decide which contextual aspects are relevant to the evaluation you are planning, and add
key questions in relation to these contextual factors.
Below are some examples of questions to ask about the context. At the start of the programme:
• Which features of the context are critical for programme success?
• Which features of the context may prevent programme success?
• Do the assumptions of the change model apply in this context?
And during the course of the programme:
• In what ways is the programme being influenced by its context?
• Which contextual factors seem to be particularly influential in shaping the outcomes of the programme?
• In what ways is the programme influencing its context? Or: Which aspects of the context are being influenced by the programme?
• Which features of the context seem to be at odds with the CEPA programme’s logic and change model?
• To which features of the context does the CEPA programme seem to respond particularly well?
12 See the Case Study Folder on the CD.
13 Ibid.
STEP 5: MAPPING CAUSAL LINKS IN THE SYSTEM
The logical framework drawn up for most CEPA programmes implies that change will happen
in a linear manner, with clear one-way influences between a discrete set of factors or
variables. In the Key Ideas Pages 14, we provide an argument that change seldom happens in
a linear and entirely predictable manner. In Steps 3 and 4 you will have formulated some
questions about the assumptions about how change happens in the CEPA programme you wish
to evaluate. You will also have mapped out a number of contextual factors, with associated evaluation
questions, which might have started to suggest a variety of non-linear linkages between a
multitude of factors involved in all CEPA programmes, including the simplest.
Evaluations can ask useful questions and generate useful insights if they allow for a systems
perspective on a CEPA programme, to complement and extend the more conventional linear
model of change. But complex systems theory is an entire field of theory and practice that is
beyond the experience of most CEPA practitioners and evaluators. To derive the benefit of
the systems perspective, without having to immerse oneself in a new discipline, we
recommend the process of drawing a simple ‘mind map’ of trends (increases and decreases)
in the CEPA programme, with arrows to indicate the possible causal links between them.
Although we have not done so in Figure 9, one can add a plus or minus sign to indicate
whether the trend is being exacerbated (+) or diminished (-) by the trend linked to it.
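Some teams may also find it useful to keep the postulated links in a simple machine-readable list before or after drawing the map. The sketch below is an editor-style illustration using a few of the fictional São Paulo links shown in Figure 9; it stores each link with a ‘+’ or ‘-’ polarity and turns it into a draft evaluation question.

```python
# Minimal sketch: causal links as (cause, effect, polarity) tuples, where
# "+" means the cause tends to increase the effect and "-" to decrease it.
# The links are based on the fictional Sao Paulo example in Figure 9.
causal_links = [
    ("CEPA with drivers", "drivers aware of value of re-introduction", "+"),
    ("drivers aware of value of re-introduction", "monkeys injured by cars", "-"),
    ("monkeys injured by cars", "monkeys brought in for treatment", "+"),
    ("funds for rehabilitation", "monkeys treated and released", "+"),
    ("monkeys treated and released", "monkeys survive in forest", "+"),
]

for cause, effect, polarity in causal_links:
    verb = "increase" if polarity == "+" else "decrease"
    print(f"Is there evidence that more {cause} tends to {verb} {effect}?")
```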
Figure 9: Map of Causal Links in the System of re-introducing Howler Monkeys in São Paulo – a causal loop map linking trends such as: MORE funds for CEPA; MORE CEPA with forest neighbours; MORE CEPA with drivers; MORE forest neighbours aware of value of re-introduction; MORE drivers aware of value of re-introduction; MORE forest neighbours control dogs; MORE funds for speed control measures; MORE speed controls; FEWER monkeys injured by dogs; FEWER monkeys injured by cars; MORE monkeys brought in for treatment; MORE funds for rehabilitation; MORE monkeys treated and released; MORE monkeys survive in forest.
14 See Understanding Complex Systems, Key Ideas Folder on the CD.
As with the linear logical framework, the process of producing the map is what matters most.
Whether the map accurately reflects the system is less important than mapping one's thinking
about the system, as this creates opportunities to evaluate and refine that thinking where
necessary. Hence the mapping process is again most useful if done with the CEPA programme
staff, as this helps to surface all assumptions and understandings of the programme, its
change theory and its context.
Figure 9 is an example based on the case study of a CEPA programme accompanying the reintroduction of howler monkeys (Alouatta clamitans) in Atlantic rain forest remnants in the
city of São Paulo 15. The content has been generated by the editor, drawing on the
background to the case study as well as some assumed factors, which may or may not apply
in the actual context. The systems map is provided simply for the purpose of demonstrating
how one could represent a particular system, and the causal loops within it.
Draw one or more causal loop system maps for the CEPA Programme you are about to
evaluate. Then add key evaluation questions that will allow you to test the CEPA
programme as well as the underlying assumptions on which programme activities are based.
These questions will then require you to look for evaluation data, best practice guidelines or
expert opinion to support or refute the postulated trends, and the links between them.
For example, in the above example, an evaluation team could ask questions about whether
there has been an increase in CEPA programmes with forest neighbours as well as passing
drivers, whether these programmes have resulted in greater awareness among the
neighbours and the drivers; and whether this awareness has in turn resulted in behaviour
changes, for example, whether drivers are reducing speed and injuring fewer monkeys, or
returning more injured monkeys to the rehabilitation centre.
‘Double loop learning’ questions could also be asked to test the assumptions that inform the
programme activities. For example, based on the fictional systems map in Figure 9, evaluation
questions could be asked to determine whether drivers reduce speed in rain forest areas
where monkeys occur because of speed control measures (such as speed humps, signage, or
prosecution by traffic police), or because of a greater awareness of the importance of the
rain forest and its inhabitants.
Finally, add evaluation questions about any unintended consequences that might be
occurring in the system. For example, a programme raising awareness about the
reintroduction of endangered species in the rain forest might stimulate or increase the
efforts of collectors or hunters to track down the reintroduced animals. This would be a consequence to avoid.
15 See the Case Study Folder on the CD.
STEP 6: DEVELOPING INDICATORS
“That which is good and helpful ought to be growing and that which is bad and hindering
ought to be diminishing .... We therefore need, above all else ... concepts that enable us to
choose the right direction of our movement and not merely to measure its speed.” 16
“The search for indicators is evolutionary. The necessary process is one of learning.” 17
One of the biggest challenges in developing an evaluation plan is deciding what kind of
information would best answer the evaluation questions. Indicators are the measures you
select to answer the questions you have posed. They act as markers of progress and success.
They are central to the design of evaluation processes and for data collection and reporting.
Indicators are often likened to the icons on a car’s dashboard that indicate, for example, at
what speed we are driving, whether our headlights are on, how full the fuel tank is, and so
on. A red light often signals that the car is about to cross a dangerous threshold, while an
absence of red lights could mean that all is well! In a CEPA evaluation, typical indicators
might be the number of participants from different groups attending a CEPA course, the
degree of satisfaction expressed by participants on the course, and the level of relevant
knowledge gained by them.
Indicators are not ends in themselves. The red fuel tank icon on the dashboard is not the
fuel tank itself. A CEPA course, the participation in the course, and the satisfaction of the
course participant are probably not end goals in themselves, either. There is something else
we want to achieve through people’s participation in our courses – for example, growing
their capacity to act for biodiversity.
At the same time, the nature of the indicators we choose and work towards can have a very
real impact on CEPA programmes. For example, if we set a target of reaching 10,000 citizens
to attend our courses, this is likely to push CEPA practitioners’ efforts towards attracting
more and more citizens to courses, at least until the target is met. If this is the only or main
indicator in the evaluation, it can have the effect of diverting the CEPA practitioners’
attention away from other considerations such as the quality and relevance of the courses.
In this toolkit we promote an approach to indicators that encourages reflection on practice
rather than simply hitting targets.
Indicators are a central part of effective CEPA programme decision-making and adaptive
management. They can provide measures of the progress and success of policies and
programmes, and they can form part of an ‘early warning system’ to detect and fix problems
as they arise. Indicators can be used to raise awareness about an issue. An example would
be a drop in the number of observed howler monkeys in São Paulo’s rain forest. The same
indicator can then be used to put responses to this issue (a reintroduction programme and
related CEPA activities) into context, as is done in the case included in this Toolkit 18.
16 Schumacher, E.F., 1989. Small Is Beautiful: Economics as if People Mattered. Harper Perennial.
17 Meadows, D., 1998. Indicators and Information Systems for Sustainable Development. The Sustainability Institute, Vermont.
18 See the Case Study Folder on the CD.
Indicators by themselves, however, provide little understanding of an issue. They always
need some analysis and interpretation of what they are indicating. Just knowing that there
has been a drop in the number of howler monkeys in São Paulo would not mean much,
unless we knew that there was a concomitant increase in illegal capturing or hunting of the
monkeys, or a disease that struck the local population, or a decrease in the area of natural
habitat (Atlantic rain forest) due to urban expansion.
Indicators don’t guarantee results. But well-chosen indicators, in themselves, can produce
desired results. Donella Meadows gave the example of industries in the United States that
started to reduce emissions in the absence of stricter laws, in response to the indicator (air
pollution level per company) being made known to the public.
On the other hand, if the indicators of success are wrong, then no amount of measuring,
reporting, funding, action, political will, or evaluation will lead toward the desired outcome.
Compare the following two indicators – which one is likely to lead to a more effective
reintroduction programme?
Release rate vs. Success rate
Figure 10: Comparison of Two Different Indicators for the same Programme
It was precisely because they needed a better indicator of success than the number released
that the São Paulo biodiversity managers in our case study introduced a system of
monitoring groups of introduced howler monkeys.
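As a toy illustration of why the two indicators in Figure 10 can tell very different stories, the following sketch uses invented numbers (not data from the São Paulo case) to compute both:

```python
# Hypothetical monitoring data for one year of a re-introduction programme.
rehabilitated = 40               # monkeys brought through rehabilitation
released = 25                    # monkeys actually released into forest remnants
surviving_after_12_months = 15   # released monkeys confirmed alive at follow-up

release_rate = released / rehabilitated
success_rate = surviving_after_12_months / released

print(f"Release rate: {release_rate:.0%} of rehabilitated monkeys were released")
print(f"Success rate: {success_rate:.0%} of released monkeys survived 12 months")
```

A programme could score well on the first indicator while performing poorly on the second, which is why the choice of indicator matters.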
The Challenge of Indicator Development
We contend that there can be no universal set of indicators for CEPA programmes that can
be used in all contexts. Indicators are purpose-dependent, and the indicators we choose will
vary with our purpose. To the extent that we share purposes in CEPA activities, there will be
some commonalities in our indicators, and the examples in Table 5 and Table 6 will no doubt
be useful to many CEPA practitioners.
Also consider Donella Meadows’ advice: “What is needed to inform sustainable development
is not just indicators, but a coherent information system from which indicators can be
derived”. 19
In addition to providing some examples of common indicators relevant to CEPA programmes, this toolkit promotes a process for developing CEPA indicators based on:
• mapping the logical framework of the CEPA programme
• identifying the underlying assumptions and models (programme theory)
• developing indicators for different stages of the programme
• testing the results against these indicators, and then
• re-thinking or re-designing the programme and the indicators, if necessary.
Drawing on our case studies, we provide examples of types of indicators that could be useful
in each case. Note that these are not necessarily indicators that the case study practitioners
had actually used, but they are indicators that could be used in similar situations.
Illustration of an Inappropriate Indicator – Fictional Case
Imagine for one moment what could happen if a government were to decide that each child
in the city should receive a book about the forest. Let us say that behind this is the goal of
educating the city’s children from a young age to understand and appreciate the forest. But
say the evaluators inadvertently choose an inappropriate indicator, namely: Every child in
Year 1 should receive a book on the forest.
To try to ‘achieve’ this indicator, the CEPA staff may put a large budget and all their effort
into effectively obtaining and distributing the books. They are likely to have much less
budget and time left to ensure that the books have good quality content, are suitable for
this age and language ability (including diverse languages across the city), and that teachers
are willing and able to introduce the books to the children with enthusiasm. In other words,
in our imaginary example there are no indicators for quality, relevance, or use of the books.
Around the world there are examples where such a choice of inappropriate indicator has
resulted in children receiving books that did not contain correct information or messages,
were not attractive, were not in their home language, or failed to be promoted by teachers –
and yet, the indicator – Each child should receive a book – would have been achieved and
the programme could have been regarded as a success!
“As you know, what usually happens is that we can only measure simple things, and then
because that is what we can measure, we say that those simple things are the only real
things. So we count numbers, do simple pre-post treatment surveys, look for short-term
changes, measure things, and then write our report. The real things, the ways in which
environmental education can change someone’s life, are much more subtle and difficult to
measure. You can ask questions about meaning, about influence, about impacts, and look at
things that aren’t visible necessarily over a short time, but become apparent over the long
term. This is what we have to consider as we look at effectiveness of environmental
education.” 20
19 Ibid.
20 Meadows, Donella, 1998. Indicators and Information Systems for Sustainable Development. The Sustainability Institute, Vermont.
What are Good Indicators?
Indicators are most useful 21 when they are:
• Representative of what one wants to find out about the programme
• Relevant and useful to decision-making (stakeholders care about this measure)
• Easy to interpret
• Sensitive to change
• Feasible and cost-effective to obtain
• Easily communicated to a target audience.
However, just because an indicator is easy to measure, easy to interpret and cost-effective
to obtain does not mean that it is a good indicator. These considerations should not limit
the choice of indicators.
It is quite easy to list the characteristics of ideal indicators, and much harder to find
indicators that actually meet these ideal characteristics. It is fair to say that the development
of indicators is one of the most difficult parts of the evaluation planning process.
Bear in mind that indicators can take many forms. They don’t have to be quantitative
(numbers). They can be qualities, signs, symbols, pictures, colours.
Involve stakeholders in developing indicators
The process of developing indicators requires careful attention. It is strongly recommended
that all evaluation stakeholders (however you define them) are consulted as early in the
process as possible in order to determine the purpose of the indicators. Who would these
stakeholders be? The indicator selection process works best with a careful combination of
expert and grassroots or non-expert participation.
In the case of the Nagoya Open University of the Environment, for example, the
stakeholders who could help determine indicators may be experts and direct users of the
indicator (the CEPA programme managers and the programme steering committee), those
with a broader interest in the issues surrounding the programme (e.g. environmental
managers, funders and other institutional partners), and those holding relevant data (e.g.
the course designers and trainers).
Consulting with these groups and identifying their needs will help to clarify how simple or
complicated the indicator needs to be, and the most appropriate ways of communicating
and interpreting it.
Most of us already have indicators in the back of our minds, based on issues of particular
concern to us. It is important to get them out on the table at the start of the indicator
development process. As indicators are selected and defined, stakeholders will express their
values, purposes will be agreed upon, change models will be at play, and programme
theories will be developed and shared (implicitly and explicitly). The indicator selection
process is the place where the legitimacy and comprehension of an evaluation are built, as
people see their values incorporated into the indicators.

21 Adapted from Evaluation Sourcebook: Measures of Progress for Ecosystem- and Community-based Projects, 2006, Schueller, S.K., S.L. Yaffee, S.J. Higgs, K. Mogelgaard and E.A. DeMattia. Ecosystem Management Initiative, University of Michigan, Ann Arbor.
The most significant change story methodology 22 mentioned earlier is a useful strategy for
surfacing values and developing agreed-upon indicators for a further evaluation phase.
Questions to ask during this step:
• Who are the relevant stakeholders in this programme, and do they all need to be consulted in the development or choice of indicators?
• How much ownership and decision-making power are different stakeholders going to have over the choice of indicators?
• Have the inputs, expectations and outputs of the indicator development process been clearly defined for the stakeholders?
• Do the stakeholders want to use the indicator(s) for decision-making, for reporting purposes, and/or for continuous learning? Any other purposes?
Relating Indicators to Evaluation Questions
In the preceding steps we have worked towards posing a range of evaluation questions.
Once one has chosen which of these questions are most important to ask at this particular
juncture in the CEPA programme's implementation, indicators should be developed for these
key questions. The evaluation question defines the purpose of the indicator and what its
user wants to know.
One of the benefits of defining a key question is that it encourages the selection and
communication of the indicators in a form that aids their interpretation. The logic of
addressing a key question also encourages further analysis to explain complex issues. The
more precise and specific to a situation a key question is, the more guidance it gives for the
selection and development of suitable indicators.
It may be necessary to use several indicators and data sets to answer a single key question.
Relying on just one indicator can distort one's interpretation of how well a programme is
working. On the other hand, the total number of indicators needs to remain manageable.
Identifying a core set of indicators is a good way to proceed.
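If it helps to picture what a 'core set' looks like in practice, here is a minimal Python sketch; the questions and indicators named in it are hypothetical examples, not ones prescribed by this toolkit. It simply keeps each key question explicitly linked to its small set of indicators, which makes the plan easy to review with stakeholders.

# Minimal sketch: keep each key evaluation question linked to a small,
# manageable set of indicators. All names below are hypothetical examples.
core_indicator_set = {
    "Did participants' knowledge of biodiversity increase?": [
        "Pre- and post-course knowledge test scores",
        "Trainer observations of participant understanding",
    ],
    "Are participants undertaking stewardship activities?": [
        "Number of volunteers and hours worked",
        "Scope of stewardship activities reported in focus groups",
    ],
}

for question, indicators in core_indicator_set.items():
    print(question)
    for indicator in indicators:
        print("  -", indicator)
    if len(indicators) < 2:
        print("  (consider triangulating with a second indicator)")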
Table 5 below shows types of indicators that can be used to answer evaluation questions
related to the different components of the CEPA programme’s logic model.
22 See Appendix 5 on the CD.
Table 5: Establishing Indicators to Answer Evaluation Questions

INPUTS (Staff; Money; Training materials)
Key evaluation questions: Was the provisioning of funding and staff sufficient, timely? Were the training materials of suitable quality, content?
Quantitative indicators: Number of staff. Amount spent. Number of booklets produced. Quantitative content analysis of booklets.
Qualitative indicators & associated methods: Comments on staff skills, capacity. Educational experts' analysis of quality and relevance of materials.

OUTPUTS – Activities (Development of CEPA course; Provide x interactive training sessions)
Key evaluation questions: Was the required CEPA course developed? Were all x sessions delivered?
Quantitative indicators: Course developed. Number of training sessions delivered.

OUTPUTS – Process (Targeted participants attended; Targeted content covered to a standard)
Key evaluation questions: Did all intended participants attend? All sessions? Why? Why not? Do the CEPA programmes communicate the issues comprehensively and effectively? Were participants satisfied with the course delivery?
Quantitative indicators: Numbers per group attended per session. Satisfaction expressed as a number.
Qualitative indicators & associated methods: Participant reflection on reasons for attendance, non-attendance; scope and relevance analysis based on expert and participant input during focus group discussion.

OUTCOMES – Short-term (Participants increased knowledge of biodiversity stewardship; Participants undertake stewardship activities)
Key evaluation questions: To what extent did knowledge increase? What are participants able to understand and do as a result of an input/activity? How many participants signed up for volunteer stewardship/conservation action?
Quantitative indicators: Pre- and post-course knowledge test scores. Number of volunteers, hours worked.
Qualitative indicators & associated methods: Individuals' capacity based on self- and peer assessment, and expert observation of conduct in the field.

OUTCOMES – Long-term (Participants join or form communities of practice; Biodiversity is effectively co-managed by City and citizens)
Key evaluation questions: In 12 months, how many participants are still doing stewardship? How many groups have been formed? What is the scope and quality of their stewardship activities? How many hectares covered?
Quantitative indicators: Number of volunteers after 12 months. Number of groups. Range of activities. Hectares covered.
Qualitative indicators & associated methods: Groups' capacity based on self- and peer assessment, and expert observation of conduct in the field. Map of areas managed vs. areas not managed, with colour coding reflecting levels of management.

IMPACT (Biodiversity loss reduced; ecosystem services increased)
Key evaluation questions: What is the status of biodiversity and ecosystem services in the city compared to before the programme started? Have goals been reached? What unintended impacts have there been?
Quantitative indicators: Conservation status of land, e.g. change in species counts, change in numbers of individuals in rare, threatened and vulnerable categories. Change in volume of water from wetlands, change in pollution levels.
Qualitative indicators & associated methods: Most significant change stories. Identification of unintended outcomes, impacts through stakeholder review process(es).
Types of Indicators Required According to Evaluation Phase
Five broad indicator types are used at different stages of implementation of a CEPA
programme. They seek different types of data and are distinguishable by their focus on
different variables relating to progress.
The five broad indicator types are:

• Status Indicators – e.g. baseline indicators; context indicators
• Facilitation Indicators – e.g. process indicators; learning indicators
• Effect or Result Indicators – e.g. performance indicators; output indicators; outcome indicators; impact indicators
• Communication Indicators – e.g. headline or aggregate indicators
• System Indicators – e.g. linkage indicators; leading indicators; leverage points
Status Indicators
These assess variables that determine the position or standing of the CEPA programme.
Baseline indicators belong to this category. Baseline indicators help to identify the starting
points for change and provide reference points in identifying realistic impact indicators.
In the case of Edmonton’s Master Naturalists Programme, a status (baseline) indicator could
be the number of knowledgeable volunteers who are involved in the stewardship of the
city’s natural areas, at the start of the programme. In the case of Nagoya Open University of
the Environment, a baseline indicator could be citizens’ knowledge and commitment to
biodiversity before they attend the Open University. In the case of the reintroduction of
Howler Monkeys in São Paulo City, a baseline for the CEPA component could be the number
of forest neighbours with a positive attitude to preserving the monkeys and their habitat.
CEPA practitioners interested in influencing the content of the school or university
curriculum may start with a review of the status of the current curriculum, looking for
indicators of biodiversity-related content currently covered in the various subjects. Appendix
2 on the CD provides an example from an Australian government review of environmental
content in educational resource materials.
Facilitative Indicators
These assess variables that assist, support or encourage engagement with CEPA
programmes. Process questions are concerned with the quality of programme delivery and
how well programmes have been implemented. Facilitative indicators show whether
planned activities are actually carried out, and whether they are carried out effectively
and/or according to available guidelines. They may measure the number of outputs generated,
participant and partner satisfaction with these outputs, and other aspects of programme
implementation. Context, process, and learning indicators belong to this category.
In the case of the Master Naturalists Programme, facilitative indicators would include the
number of courses offered to volunteers and the quality and relevance of this training. In
São Paulo, facilitative indicators would show whether residents have been reached by
planned CEPA programmes, how many activities were offered and how satisfied various
partners were with their quality, but also what participants actually learned and whether
the CEPA programmes were regarded as facilitating the reintroduction programme.
Effect or Result Indicators
These indicators assess variables related to initial, medium and long term achievements
during the CEPA programme. Output, outcome and impact indicators belong to this
category. The outcome questions and indicators often look for evidence of change in
participants’ awareness and behaviours over time. Impacts are the broader, long-term
changes that a programme has on society and environment. The questions and indicators
may look for evidence that the state of biodiversity has improved over time, or that more
citizens enjoy the well-being associated with functioning ecosystems and intact biodiversity.
Impact indicators assess progress towards these objectives:
• Short-term impacts on individuals and organisations (e.g. changes in Cape Town school students' understanding of resource use at school, and reduced resource use in Cape Town schools)
• Longer-term impacts on practice at different levels, such as:
  o changes in practices (such as curriculum changes and the institutionalization of resource use reduction measures at schools, e.g. regular recycling, installation of energy-saving appliances and water-wise landscaping);
  o organisational change in terms of policy (e.g. curriculum policy on environmental education; local government policy on urban planning); and
  o growing partnerships (e.g. stewardship sharing between government and residents).
The long-term influence of CEPA programmes is difficult to assess, not only because it
requires long term commitments to collecting data, but also because many other factors,
beyond the programme, can influence such changes. For example, in Cape Town the cost of
water and electricity may increase significantly; if residents then reduce their consumption
of these resources, it would be difficult to distinguish the impact of rising prices from the
(perhaps additional) impact of the Green Audits CEPA programme. Or, in
the case of the reintroduction of Howler Monkeys in São Paulo, even if CEPA programmes
result in high levels of awareness of and care about the rain forest remnants among local
residents, if policies and population pressure lead to rapid expansion of urban areas, the rain
forest may reduce to such an extent that Howler Monkey populations cannot be sustained.
Communication Indicators
These indicators are for disseminating information relating to a range of evaluation
questions in an accessible way that facilitates communication to stakeholders. Examples of
communication indicators are headline or aggregate indicators, which are the sort of
statements that could make it into a regional or community newspaper. Examples of
headline indicators could be the number of Cape Town schools actively recycling their
waste; or the number of hectares that are now under volunteer stewardship in Edmonton.
The mixed nature of some stewardship groups (consisting of old and new Edmontonians)
could also provide a headline indicator demonstrating widespread support for biodiversity
management.
System Indicators
These indicators provide an overall picture of the state of a CEPA programme. They can
provide an indication of the status of the programme, programme processes or impacts, or
all of these combined. They can be very useful for communication purposes and for further
programme visioning exercises. Systems change over time, and we find it helpful to look for
indicators that tell us about this dynamic behaviour. System dynamics is a field of expertise
that specialises in how whole systems behave and unfold over time. It can be useful for
finding linkage indicators, leading indicators, and leverage points where systems
are especially likely to signal change or respond to action 23. Metaphors are also valuable for
providing a ‘picture’ of the overall status of the system (e.g. comparing a CEPA department
to a healthy diverse ecosystem).
In the City of Cape Town’s overarching Environmental Education and Training evaluation 24,
the CEPA activities of its Environmental Resources Management Department were
compared to a tree that has grown very large, with many branches and leaves (activities),
but a weakness in the connections in its trunk and to its roots (alignment between
departments and alignment with vision and policy intentions).
23 See Appendix: Leverage Points in a System, on the CD.
24 See Case Study Folder on the CD.
Table 6: Indicator Types Using LAB CEPA Programme Examples 25

Examples of Status Indicators for LAB CEPA Programmes

Status – Baseline
Function: To describe the status of the overall CEPA picture, and of the overall Local Action for Biodiversity picture.
Quantitative indicator examples: % of local government departments currently providing CEPA programmes with a biodiversity component; % of citizens who actively participate in biodiversity protection measures; % of conservation-worthy land in the city that is protected and/or well managed.
Qualitative indicator examples: Metaphors or 'one-liners' describing the attitude of various citizen and city staff groups towards biodiversity, before a CEPA programme for these groups starts; Photographic record of conserved and degraded sites around the city.

Status – Context
Function: To identify the existence of CEPA support systems.
Quantitative indicator examples: Policy exists that requires a CEPA programme for biodiversity in local government; Coordinator and staff appointed to assist local government with integrating LAB CEPA programmes into service delivery.
Qualitative indicator examples: Concepts and principles in national curriculum policy on biodiversity content in schools; Strength and quality of volunteer CEPA activities support to local government.
Examples of Facilitative Indicators for LAB CEPA Programmes

Process
Function: To identify the existence of CEPA processes and activities, and to what extent they have been implemented.
Quantitative indicator examples: Number of citizens and range of citizen groups reached in CEPA activities; Number of press releases with an environment or biodiversity focus; Attendance at biodiversity-related events; Number of teachers who use biodiversity-related materials in their teaching; % of activities completed within timeframe; Number of hours spent on activities relative to priorities.
Qualitative indicator examples: Feedback from stakeholders about how the programme is being implemented; Quality of responses to a biodiversity debate during a radio phone-in programme; Evidence of good CEPA practices according to theories of change and best practice guidelines; Expert analysis on classroom teaching on biodiversity-related topics; Staff opinions on whether time is used well.

Learning
Function: To promote learning and reflection in and on CEPA programmes.
Qualitative indicator examples: Identify markers for change – reflective analysis of case studies of changed practice, e.g. schools that reduce water consumption during CEPA projects; Identify conditions for change – review of the process of adopting a new urban planning policy; Programmatic review of a number of smaller scale evaluations, to look for similarities, differences, patterns and trends across projects; Lessons learned in the evaluation of LAB CEPA activities are captured and shared.

25 Adapted from Education for Sustainability indicator guidelines produced by Daniella Tilbury and Sonia Janousek in 2006, published by the Australian Research Institute in Education for Sustainability, with additional examples by present authors.
Examples of Effect Indicators for LAB CEPA Programmes

Output
Function: To assess outputs such as training resources/course materials, and the immediate results of an activity.
Quantitative indicators: Number of resources developed for LAB CEPA courses and media campaigns; Number of topics covered, e.g. biodiversity, threats, ecosystem services, climate change, risk, adaptation, mitigation, resilience, etc. (content analysis).
Qualitative indicators: Adherence to quality criteria in resources developed for LAB CEPA courses and media campaigns; their relevance and policy alignment.

Outcome
Function: To assess outcomes related to changes or improvements that result from CEPA efforts; To what extent has the community become more aware of biodiversity issues?
Quantitative indicators: % of new teachers using CEPA-related content in the classroom; Change in attendance at relevant events; Number of volunteer hours worked on Local Action for Biodiversity; Number of people who can name threats to biodiversity in a survey; Increases in nursery sales of indigenous and water-wise plants and decreases in sales of invasive plants.
Qualitative indicators: Level and scope of biodiversity management activities undertaken by volunteers; Evidence among citizens of pride in local forests and mountain; Case examples of new networks/communities of practice.

Impact
Function: To assess impacts that result from CEPA efforts: Is there an improvement in the status of biodiversity and Local Action for Biodiversity in the city?
Quantitative indicators: E.g. improvement in water quality and numbers of endangered species in urban wetlands (for biodiversity indicators refer to the Biodiversity Indicator Partnership 26); Over a quarter of participants agree that their behaviour has changed in a specific way, e.g. that they keep their dogs out of protected forests; Number of individuals, action groups and volunteer days worked in actions to restore, remediate or improve a natural area; number of hectares of invasive species cleared; number of wetlands or rivers restored; number of new species discovered by citizen groups; number of hectares newly placed under conservation management.
Qualitative indicators: Most significant change stories, which can include: development decisions in favour of biodiversity; participants making written reference in journals to their new sustainability practices; action projects/changes they have made/special events, in the form of postcards, photographs, videos, journals and web page entries; examples of citizens motivating others to join them in taking action.

Performance
Function: To assess the change in the status of the overall CEPA picture in the city and region.
Quantitative indicators: Increase in the number of local governments providing CEPA programmes with a biodiversity component; Numerical comparison in LAB CEPA activities across cities in one region, and across regions; The presence or absence of a number of criteria (i.e. greater budgets for biodiversity conservation, more active participation among a wider range of citizens in volunteer programmes); Can be the same as status indicators, for comparison to the baseline.
Qualitative indicators: Qualitative comparison in LAB CEPA activities across cities in one region, and across regions.

26 Biodiversity Indicator Partnership, www.bipindicators.net/indicators.
Examples of Communication Indicators for LAB CEPA Programmes

Headline
Function: To provide a 'dashboard' summary for communicating at a high level.
Quantitative indicators: Headlines of priority indicators for respective stakeholders; for example "More than 20,000 citizens participate in Nagoya Open University of the Environment".
Qualitative indicators: Headlines of priority indicators for respective stakeholders; for example "City of Cape Town plays a significant role in city-wide environmental awareness"; Use Wordle for an overall 'picture' of the key qualitative words describing outcomes/impacts from case studies or surveys (see the cover of this toolkit).

Aggregate
Function: To provide an aggregated summary of key indicators for communicating at managerial level.
Quantitative indicators: For example, "Overall growth in CEPA activities follows climate change summit".
Qualitative indicators: For example, "Biodiversity CEPA activities significant in service delivery"; this can be 'clickable' to unpack further; can also include qualitative case study references.
Examples of Systems Indicators for LAB CEPA Programmes

Combinations, linkages and leverage points
Function: Provide an overall picture of the state of CEPA programmes as systems. Indicate status, processes, impacts or all of these combined. Good for communication. Indicate linkages and leverage points where the system is likely to signal change or respond to action.
Quantitative indicators: Systems dynamics – change in biodiversity; change in citizen involvement over time. Systems linkages – change in biodiversity related to specific citizen values, actions or contributions. Leverage points in the system (change in city development policy, re-zoning, new legislation – see the relevant Appendix on the CD and add your examples!).
Qualitative indicators: Typically images, stories or metaphors are valuable here. For example, the use of a tree as a metaphor to describe the overall status of a programme system – is the programme weak but with many branches, is it strong within a thriving ecosystem due to all its partnerships, or is it at risk from a drought as resources dry up?
Questions to ask during indicator development:
• How well does the potential indicator help to answer our key question(s)?
• How does the indicator act as a proxy measure, as opposed to a direct measure, in relation to what is not measurable?
• How does this set of indicators address change or the dynamics in the system?
• Are these indicators likely to foster compliance with laws, or foster learning and innovation? Which is required now?
• What are the alternatives to the indicator set?
• What are the best indicators that will influence appropriate change in this system?
• What are the resources available now and in the future for producing the possible indicators?
• Are there existing indicators that can help to answer the key question(s)?
• When will we review and if necessary revise this indicator set to keep it relevant and helpful?
• What guidance can we offer partners and participants (e.g. CEPA course planners and trainers) on how the indicators are to be interpreted?
• Are the pitfalls in the selection and use of the indicator set transparent and explicit?
• What are the incentives for developing, using and acting upon the indicator set and its findings?
Establishing Targets for Indicators
Once an indicator has been selected, it is sometimes possible to agree upon specific targets
to be reached as a measure of success. For example, if we want to assess whether a CEPA
programme increased student knowledge of biodiversity, our indicator could specify “at
least 80% of students will correctly identify three common sources of biodiversity impacts.”
This type of targeted indicator provides a more unequivocal standard of success than one
without such a target.
Indicators with their interpretative text can then be part of the definition of targets or
objectives. Caution is required, though, if targets are set on the basis of a desired value of an
existing indicator, especially if the indicator has been chosen principally because it is
something for which there is existing data. It is important to determine the desired state of
which the indicator is just an indicator. 27
27 Meadows, Donella, 1998. Indicators and Information Systems for Sustainable Development. The Sustainability Institute, Vermont.
If a programme already has well-specified objectives, we may be able to extract targeted
indicators from these objectives. Consider the following two alternative objectives for a
biodiversity education programme:
"As a result of the CEPA outreach campaign ...
Option 1: ... “The public will be more committed to protecting biodiversity.” This objective
is not ideal from a measurement perspective: i.e., the indicator is not explicit. Which public?
What does it mean to be "committed to protecting biodiversity?" How much "more"
commitment will there be?
Option 2: ... “Adult participation in voluntary biodiversity management activities will
increase by 50%."
Note how this option offers an indicator for measuring "commitment to protecting
biodiversity", namely participation in volunteer programme activities. In addition,
it includes a 'target', i.e. the expected increase. You can show that you have met this
objective if there is at least a 50% increase in participation compared with past, baseline
levels of participation.
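As an illustration only, the following minimal Python sketch (with made-up baseline and follow-up figures) shows how such a targeted indicator could be checked against baseline participation data; a real evaluation would of course use the programme's own records.

# Minimal sketch: check a targeted indicator against a baseline.
# The figures below are hypothetical, for illustration only.
baseline_participants = 120   # adult volunteers before the CEPA campaign
current_participants = 195    # adult volunteers after the campaign
target_increase = 0.50        # objective: at least a 50% increase

percent_increase = (current_participants - baseline_participants) / baseline_participants

print(f"Observed increase: {percent_increase:.0%}")
if percent_increase >= target_increase:
    print("Target met: participation increased by at least 50%.")
else:
    print("Target not met: investigate reasons and review the programme logic.")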
Relating CEPA outcome/impact indicators to local and national biodiversity goals and targets
can also be important, depending on one’s approach to CEPA programmes.
All cities and/or countries have management objectives and policies with direct or indirect
impacts on biodiversity, and reporting on progress towards these is a major role for related
impact indicators. The Biodiversity Indicators Partnership has developed a set of indicators
for assessing biodiversity status 28. These can be used as related targets for assessing
progress towards desired impacts of LAB CEPA programmes.
However, a common problem is that local policies often lack clearly stated objectives,
explicit targets or specified mechanisms for measuring progress. As a result, quantifying the
indicator is not always straightforward. Different indicators may well be needed for decision-making on objectives and actions. For example, changes in the Living Planet Index (LPI) are
an indicator of overall biodiversity loss or gain and this information is important for raising
public and policy makers’ awareness of the issue, but the index value alone does not explain
why there is biodiversity loss or gain, or what responses are required.
Probing Questions to Ask during this Step:
• What are the existing biodiversity-relevant management objectives and targets in our city and country? Are these realistic? Adequate?
• What is the size / scope of the problem we are trying to address?
• What is the size / scope of the benefits we are trying to preserve or optimise (e.g. ecosystem services like quantity and quality of water, beauty of natural areas, tourism and recreational value, sustainability of marine and sea fisheries resources)?
• Who wants to know about progress in reaching these objectives and targets?
• What outcomes and impacts of our CEPA programmes are we hoping to achieve related to these objectives and targets?
• Over what time period?
• What resources do we have to achieve this?
• What contextual factors will help or hinder us?
• What are therefore realistic targets for our CEPA programmes?

28 http://www.bipindicators.net/indicators
STEP 7: DATA COLLECTION
At its heart, evaluation is about obtaining information and making sense of it against our
chosen framework. Once we have chosen our indicators, or more likely, while we are
choosing our indicators, we identify what information we will need in order to assess each of
the chosen indicators. Data collection methods could include, but are not limited to:
questionnaire-based surveys, focus group discussions or one-on-one interviews, and
observations of CEPA programmes in action.
When weighing up potential data collection methods, consider the following:
• practicality
• potential sources
• when to collect data, and
• the tools/instruments which you will need to develop, or find.
Consider how the necessary information can be efficiently and realistically gathered. When
it seems impossible to gather the necessary evidence, we may need to go back to the
indicator development step and find another indicator that will be easier to evaluate.
Illustration of the Need to Plan for Data Collection
In the City of Cape Town’s Green Audits for Schools, the evaluation team ran into trouble
when the evaluation was due, which happened to be at the end of the school year. They had
great difficulty reaching their intended data sources, namely students and teachers, at this
time of the year. Teachers were too busy marking exam papers to grant interviews. Some
students were studying too hard to complete questionnaires. Other students had already
finished exams and were on holiday! The evaluation team also had trouble completing
resource use audits, because many of the schools' meters for water and electricity usage
were in inaccessible places. To make matters worse, the team wanted to measure attitude
change among the programme participants (their indicator) but found they did not have a
good measure for this.
Data collection needs to begin as soon as possible, to identify and iron out difficulties early
on, to establish a habit of monitoring, and to make sure one does not run out of time or data
sources later. Evaluations should utilise existing opportunities for collecting data, but efforts
should also be made to collect new data in innovative ways. The indicators with the greatest
impact are often produced by using and presenting data in novel ways, including combining
different kinds of data in ways that may not seem immediately obvious.
Building CEPA practitioners’ capacity in data collection for a variety of CEPA indicators
should be encouraged.
We often make use of triangulation, that is, the use of two or more data sources and/or
data collection methods, to measure the same outcomes. Two independent measures that
'triangulate', or point to the same result, are mutually complementary and strengthen the
case that change occurred.
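The following minimal Python sketch (with hypothetical survey figures loosely based on the São Paulo example) illustrates the basic logic of triangulation: two independent measures are examined to see whether they point in the same direction before any claim of change is made.

# Minimal sketch of triangulation: do two independent measures agree?
# All figures are hypothetical, for illustration only.
injured_monkeys = {"baseline": 34, "follow_up": 19}     # from field surveys
positive_accounts = {"baseline": 41, "follow_up": 78}   # from resident interviews

fewer_injuries = injured_monkeys["follow_up"] < injured_monkeys["baseline"]
more_support = positive_accounts["follow_up"] > positive_accounts["baseline"]

if fewer_injuries and more_support:
    print("Both measures point to improvement; the case for change is strengthened.")
else:
    print("The measures disagree; collect further data before claiming change.")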
Figure 11: Two Examples of Triangulation. In the São Paulo example, a reduction in injured monkeys observed in surveys, an increase in healthy monkeys observed in surveys, and an increase in positive accounts of rainforest protection made by the public together point to the success of the primate reintroduction. In the Nagoya example, the level of satisfaction with courses offered, the level of commitment to and action for biodiversity reflected in reports by past course participants, and an increase in the numbers and range of participants attending courses together point to the success of the Nagoya Open University of the Environment.
If one uses an experimental design for an evaluation, the standard way to account for
change is to measure levels of the indicator(s) in which one is interested, both before and
after a CEPA intervention. This is referred to as pre/post intervention testing (see Figure 3).
Any techniques used to make claims about change that do not rely on pre/post testing must
instead rely on reconstruction, in which subjects make claims about ‘the way things used to
be’. Often, these claims tend to remain unsubstantiated.
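A minimal Python sketch of a simple pre/post comparison is given below, using hypothetical knowledge test scores for the same participants before and after a CEPA course; a real evaluation would also apply an appropriate statistical test and consider sample size, as noted under the analysis step later in this section.

# Minimal sketch: pre/post intervention comparison of knowledge test scores.
# Scores are hypothetical; each position refers to the same participant.
pre_scores = [45, 52, 38, 60, 55, 41, 49, 58]
post_scores = [62, 66, 50, 71, 63, 55, 60, 70]

changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_change = sum(changes) / len(changes)
improved = sum(1 for c in changes if c > 0)

print(f"Mean change in test score: {mean_change:.1f} points")
print(f"{improved} of {len(changes)} participants improved their score")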
Questions to Ask during this Step:
• Are there suitable data sources for each of the possible indicators?
• Can existing data be transformed into appropriate indicators?
• How well does the available data relate to the key questions and possible indicators? (If it doesn't relate particularly well, consider triangulation with additional data sources.)
• Are the necessary agreements in place to allow data to be collected and used?
• Is there clear institutional responsibility for the continued production and reporting of the data?
• Who would be responsible for obtaining this data?
• Who will be responsible for collating and analysing this data?
• Is the data accessible and likely to continue to be produced in the future?
• Is there sufficient institutional technical capacity and resources to produce the data now and in the future?
• Is the data collected in a consistent and comparable manner over time?
• If an indicator is required to detect change, is the data collected with sufficient frequency?
• Is the data collection method appropriate to give the desired sensitivity to change?
• Do data collection and monitoring systems or agreements need to be strengthened?
Decide on the most Feasible Methods for Collecting Data
Table 7 lists the more common methods used for obtaining data to answer evaluation
questions with both qualitative and quantitative indicators. One’s choice of method will be
determined by:
• What you need to find out
• The evaluation team's research paradigm or methodological framework – in an empiricist framework, qualitative data sources are often not highly valued or wisely used
• The kinds of data sources that are available (for example, documents or people)
• Available budget, staffing and time and associated constraints
• Possible barriers such as language, distances to travel, etc.
Also consider a suite of methods which complement each other. Each method has strengths
and limitations, and often a variety of methods strengthens an evaluation.
Table 7: Methods for Generating Evaluation Data

Workshops & focus groups
Examples: Workshops with teachers to find out how a teaching resource for schools can be improved; focus group discussions with volunteers, on their wetland rehabilitation strategy.
Limitations: It can be difficult to focus these meetings as they generate a lot of information, which must be accurately and adequately recorded before analysing or interpreting it.
Strengths: Participants know what you're after and can assist you in finding answers to the evaluation questions; a joint exploration. Particularly useful in participatory evaluations where members seek answers together.

Questionnaires
Examples: Questionnaires to trainers and participants in the Nagoya Open Environmental University Programme, to find out their views on the courses offered.
Limitations: People are often reluctant to complete questionnaires. They may fear to offend other parties. Different respondents may interpret questions in different ways, and the information obtained can be limited and hard to interpret.
Strengths: Questionnaires can reach a large number of people quickly and, if questions are well designed, they can produce a fair amount of information. Closed questions are easier to collate and can be analysed quantitatively.

Interviews
Examples: Interviews with individual stewardship volunteers, to find out their views and theories about their stewardship practice.
Limitations: More time-consuming than questionnaires and harder to collate and analyse across interviewees. The one-on-one situation can encourage interviewees to simply say what they think you want to hear.
Strengths: The interviewer has a chance to build a relationship, explain questions, and check their interpretation of the answers.

Tests
Examples: To check what trainees have learnt during training; a multiple choice test could be combined with a demonstration, for trainees to show what they have learnt, e.g. about wetland rehabilitation.
Limitations: Tests are often intimidating. It takes time to design them well. They usually test only factual recall.
Strengths: One can check for specific existing knowledge on specific topics, so tests are useful for planning new activities which address areas of limited knowledge or misunderstandings.

Observations
Examples: Observing a trainer conducting a CEPA course, or observing volunteers rehabilitating a degraded wetland.
Limitations: It can be difficult to interpret what you see. For example, are the learners learning through fun, or are they distracted? Are the volunteers taking a break or unsure of how to proceed?
Strengths: One can see what actually happens, rather than rely on reports of what happens.

Activities
Examples: An activity on tending a biodiversity garden with learners in the Green Schools Audit Programme, to teach them something while finding out what they have already learnt.
Limitations: Activities take careful planning and can be time-consuming. They should be designed so as to ascertain more than mere recall.
Strengths: Activities are usually not as intimidating as tests and can be part of the learning, while evaluating the learning.

Document Analysis
Examples: Analysis of visitor numbers recorded in staff reports; review of Strategy documents to find evaluation criteria.
Limitations: The information is only as good as those who compiled the document; the original purpose and contexts of the document may limit its value if your purposes are different.
Strengths: Often a quick way to access a lot of information, including historical facts which people may have forgotten. Useful for establishing trends and contextual profiles/overviews.

Participatory Appraisals
Examples: Transect walks with villagers, stopping every 100 metres to appraise the surroundings, factors affecting forest species and possible solutions.
Limitations: Participatory appraisals may set up 'artificial' situations, or create unrealistic expectations of changes in local conditions. Strong individuals speaking on behalf of others in the 'community' may misrepresent others' views.
Strengths: A wide range of people is given a chance to have their say, in a non-threatening setting. More formal consultations are often experienced as intimidating.

This table has been adapted for this toolkit from its original source, Into Evaluation: A Start-Up Toolkit, www.capetown.gov.za, where it is listed as Tool 6. Tool 6 also has introductory information on sampling procedures, case studies and surveys, and different types of data.
Figure 12: Multiple Data Sources and Methods can strengthen an Evaluation. For example, an interview with the trainer, observation of the course, and a focus group with trainees can all contribute data on a CEPA course.
Analysing Quantitative and Qualitative data
Data is collected in either a quantitative (i.e. numerical) or qualitative form. For analysing
quantitative data there are standard statistical procedures. We do not discuss them here,
but take note of important considerations about sample size, and use a good primer on
quantitative analysis and the use of statistics in the social sciences.
If you are not experienced in analysing qualitative data (data that is not numerical in nature,
such as comments and general observations), obtain a good text book or guide on the topic.
One general process is to: read through all the data, organise it into similar categories, e.g.
concerns, suggestions, strengths, etc.; label the categories or themes; then identify patterns,
or associations and causal relationships in the themes. Consider developing in-depth case
studies and narratives (such as most significant change stories) with qualitative data.
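If it helps to see that general process in concrete terms, the minimal Python sketch below (with hypothetical comments and keyword lists) groups free-text responses into simple categories such as concerns, suggestions and strengths. In practice, coding qualitative data requires careful reading and judgement rather than keyword matching alone; a sketch like this can only provide a rough first pass.

# Minimal sketch: a first-pass grouping of qualitative comments into themes.
# Comments and keyword lists are hypothetical; real coding needs careful reading.
comments = [
    "I am worried the wetland will be polluted again after the project ends.",
    "Please offer the course in more languages.",
    "The field visits were the strongest part of the programme.",
]

themes = {
    "concerns": ["worried", "concern", "problem"],
    "suggestions": ["please", "should", "could"],
    "strengths": ["strongest", "good", "enjoyed"],
}

categorised = {theme: [] for theme in themes}
for comment in comments:
    for theme, keywords in themes.items():
        if any(word in comment.lower() for word in keywords):
            categorised[theme].append(comment)

for theme, items in categorised.items():
    print(theme, "->", len(items), "comment(s)")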
Most comprehensive evaluations combine the two types of data well. Qualitative data can
help you interpret the patterns and trends you observe in your quantitative analysis;
quantitative analyses in turn bring perspective to the details of qualitative studies.
The level and scope of information in the evaluation report depends on its purpose and
intended users and readers. A vital part of the use of the data, beyond reports to funders
and senior management, is thinking through how you will apply what you learn from this
evaluation phase into the next round of programme development - the learning that comes
through looking at what worked and what didn’t.
It is very important to question evaluation data for double-loop learning, for example:
• Is the data reflecting changes at source or only in symptoms?
• Does the data indicate some deeper change that needs to be made?
For each of the key evaluation questions and indicators you have chosen, indicate the
methods and sources of data to provide the answers to these questions. Add a column with
the names of responsible parties, a time frame for when the data should be collected and
analysed, and any resources or special arrangements that would be required.
STEP 8: COMPLETE AN EVALUATION PLAN
At this stage of the process of evaluation design, you will have generated a number of
evaluation questions and associated indicators, and you would have identified data sources
and data collection methods with which to answer these questions. All that remains now is
to put everything together in a format that shows the relationships between the various
elements, and allows you to make the necessary arrangements about time and resources, in
order to execute the evaluation. Such a tool also serves to communicate the evaluation plan
to various stakeholders and role players, e.g. funders, managers, CEPA practitioners and
evaluation team members. It is particularly useful for keeping track of the evaluation
process, not only to ensure that everything happens when it should, but also to remind
everyone what the purpose of the various evaluation activities and data sets are. This is
easily forgotten in the hurly-burly of on-going, developmental evaluations!
Evaluation teams often use Excel spreadsheets or other software to capture their evaluation
plans, in which case data can be added straight into the spreadsheet. For illustrative
purposes we provide a simple evaluation planning table below, which may work just as well.
Table 8 is adapted from Into Evaluation, the first evaluation toolkit for environmental
education produced by the City of Cape Town (www.capetown.gov.za). That resource
focused on working out the answers to, and relationships between, the first four of these
questions in particular. This follow-up toolkit focuses in particular on Questions 5-7.
Note that at this stage, if the number of evaluation questions and indicators seems unrealistic
for the available time and resources, a discussion can be held to trim the list down and focus
on the most important aspects.
Complete an evaluation plan, using the template in Table 8, or another of your choice. Make
the necessary adjustments to ensure that (a) your plan is realistic, (b) you are asking the
most important questions and (c), you have the indicators, the data sources and the means
to answer each of these questions. Then, go ahead and do the evaluation!
Table 8: An Evaluation Planning Tool for Capturing all Basic Information

WHAT role should the evaluation play?

WHAT are the key evaluation questions for this role?
• Question 1:
• Question 2:
• Question 3:

What are the indicators for each question?
• Indicator 1:
• Indicator 2:
• Indicator 3:

How will we collect data for ...
• Indicator 1?
• Indicator 2?
• Indicator 3?

Who/what are the best data sources for ...
• Indicator 1?
• Indicator 2?
• Indicator 3?

When will the data be collected for ...
• Indicator 1?
• Indicator 2?
• Indicator 3?

Resources required and Responsible parties
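For teams that prefer to capture the plan electronically, the minimal Python sketch below (all entries are hypothetical examples) writes the same planning fields to a CSV file that can then be opened and extended in Excel or similar software.

# Minimal sketch: capture an evaluation plan in a CSV file for use in Excel.
# All entries are hypothetical examples.
import csv

plan = [
    {
        "Key question": "Did participants' knowledge increase?",
        "Indicator": "Pre- and post-course test scores",
        "Data source": "Course participants",
        "Method": "Written test",
        "When": "First and last session",
        "Responsible": "CEPA course coordinator",
        "Resources": "Test forms, marking time",
    },
]

with open("evaluation_plan.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(plan[0].keys()))
    writer.writeheader()
    writer.writerows(plan)

print("Wrote evaluation_plan.csv with", len(plan), "row(s)")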
STEP 9: REPORT AND COMMUNICATE EVALUATION RESULTS
Once the evaluation is underway, collect and communicate findings to help inform and
shape the CEPA programme being evaluated. In the process, also gather insights to help you
improve and adjust the evaluation. Give special attention to communicating evaluation
findings.
Indicators are evaluation tools and steering tools but also important communication tools.
To optimise their communication value, one needs to invest time and effort in presenting
and explaining indicators appropriately for their intended audience(s). Hence the skills
needed for indicator development lie not solely in technical areas, but also in
communication and writing. Being clear about the key questions for the evaluations is one
way of ensuring that indicators are selected and communicated in a form that aids their
interpretation.
Present Indicators in a Hierarchy
Indicators can be aggregated and presented in a hierarchical information system of
increasing scale and decreasing specificity. One interesting way we have been exploring for
presenting indicator data is in a form similar to a hypertext page. The main ‘cockpit’ shows
the most critical and aggregated indicators relating to the questions of highest priority (as
defined by stakeholders). A ‘click’ on that indicator opens a more detailed set of information
that has contributed to the aggregate indicator. Another ‘click’ could open boxes of further
information including illustrative case studies. Further ‘clicks’ could give even more specific
details, such as the data sources or explanations about how the indicators have been
derived. Evaluations are useful to multiple stakeholders if the entire information system is
accessible to users. 29
Figure 13: A Hierarchy of Possible Indicators for the São Paulo Case Study. At the top level sits the success rate of howler monkeys now surviving in São Paulo City rain forest areas. Contributing indicators include the number of monkeys released in São Paulo City rain forest areas; the reduction in the number of monkeys injured; case examples of villagers bringing injured monkeys to the rehabilitation centre; survey data on the number of monkeys rehabilitated at centres; awareness among drivers of the need to reduce speed in forest areas; and awareness among villagers of the need to control their dogs.
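One way to picture such a hierarchy in data terms is sketched below in minimal Python; the structure and entries are hypothetical, loosely based on the São Paulo example in Figure 13. An aggregate indicator holds its contributing indicators, which can then be 'unpacked' level by level, much like the clickable pages described above.

# Minimal sketch: an aggregate indicator that can be unpacked into the more
# detailed indicators beneath it. Structure and entries are hypothetical.
hierarchy = {
    "name": "Success rate of howler monkeys surviving in rain forest areas",
    "children": [
        {"name": "Number of monkeys released", "children": []},
        {
            "name": "Reduction in number of monkeys injured",
            "children": [
                {"name": "Survey data on monkeys rehabilitated at centres", "children": []},
                {"name": "Awareness among drivers of need to reduce speed", "children": []},
            ],
        },
    ],
}

def unpack(indicator, level=0):
    """Print the indicator hierarchy, one level of detail per indent."""
    print("  " * level + "- " + indicator["name"])
    for child in indicator["children"]:
        unpack(child, level + 1)

unpack(hierarchy)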
Communicate indicators in terms of a story
It is often necessary to simplify information in order to convey useful messages to a wide
audience. The art in communicating indicators is to simplify without losing credibility.
To achieve this, the overall communication of indicators can be in the form of a ‘story’ or
narrative about the subject, in response to the key question(s). The narrative surrounding an
indicator (set) is essential, as indicators by themselves provide only a partial understanding
(hence ‘indication’) of an issue. They always need some analysis and interpretation of why
they are changing and how those changes relate to the system or issue as a whole.
Additional information allows the reader to put the indicator in context and see how it
relates to other issues and areas. Information to support and explain the indicator should
therefore be collected as the indicator is developed. The selection and creation of indicators
should consider how they can detail and communicate the 'story'. It is also important to
remember that a single indicator cannot tell us all we want to know.

29 Meadows, Donella, 1998. Indicators and Information Systems for Sustainable Development. The Sustainability Institute, Vermont.
Questions to ask during this step:
• How will the indicator be used?
• Who are the target audience(s) that will be using the indicator?
• Why are they being targeted? What do we want to achieve through communicating with them?
• What are the key questions that these users (may) have about the issue?
• What medium will be used to communicate? Will there be a printed report, a document on a website, a static or interactive web-page, video footage on national TV, a workshop or site visit with stakeholders, a Power Point presentation, a newspaper article, a radio interview, or a combination of some of these?
APPENDIX 1
Leverage Points – Where to Intervene in a System1
Use the following to develop indicators for your CEPA programme. They are in increasing order of
effectiveness as leverage points to intervene in a system.
WEAK INDICATORS
12. Constants, parameters, numbers (such as subsidies, taxes, standards): Parameters are points of
lowest leverage effects. Though they are the most clearly perceived among all leverages, they rarely
change behaviours and therefore have little long-term effect. Say for example a contractor is
managing a local forest or wetland on behalf of the local authority. They are doing a poor job, and
the local authority suggests they develop a new set of management standards. Just checking
whether standards have been developed and to what level (as opposed to being implemented), is a
weak indicator for change.
11. The size of buffers and other stabilizing stocks, relative to their flows: A buffer's ability to
stabilize a system is important when the stock amount is much higher than the potential amount of
inflows or outflows. Buffers can improve a system, but they are often physical entities whose size is
critical and can't be changed easily. In a forest ecosystem, the presence of a wetland may buffer the
system from climate change towards hotter, drier conditions.
10. Structure of material stocks and flows (such as transport networks, population age structures):
A system's structure may have enormous effect on operations, but may be difficult or prohibitively
expensive to change. Fluctuations, limitations, and bottlenecks may be easier to address. For
example, a conservation agency wants to rehabilitate a degraded wetland, but a major road runs
through it. Rather than moving the road entirely, culverts or stream redirection can be considered.
In a CEPA system, it is useful to identify where there are bottlenecks in systems, for example, busy
teachers are often bottlenecks when local authorities want to work with school children, in which
case they may rather work with existing after-school youth clubs.
9. Length of delays, relative to the rate of system changes: Information received too quickly or too
late can cause over- or under-reaction, even oscillations. Consider breaking news about criminal
activities in a local forest under council’s protection. Communication specialists cannot release
information too early, before pertinent facts and suitable responses have been established, as it may
cause panic and calls for the forest to be re-zoned for housing; if the information is released too late,
on the other hand, the chance to act to stop the criminal activity or the risks associated with it, may
be lost.
8. Strength of negative feedback loops, relative to the effect they are trying to correct against: A
negative feedback loop slows down a process, tending to promote stability. The loop will keep the
stock near the goal, thanks to parameters, accuracy and speed of information feedback, and size of
correcting flows. An example is the "polluter pays principle". Say a local business has an outflow into
a wetland which causes high levels of pollution, particularly during spillage events. Asking the
company to pay for every clean up after a spillage event is likely to motivate the company to look for
ways to reduce the number of spillage events.

1 Meadows, Donella, 1999. Places to Intervene in a System. The Sustainability Institute, Vermont. Local government and CEPA-specific examples have been added by the current authors.
7. Gain around driving positive feedback loops: A positive feedback loop speeds up a process. In
most cases, it is preferable to slow down a positive loop, rather than speeding up a negative one.
MORE POWERFUL INDICATORS
6. Structure of information flow (who does and does not have access to what kinds of
information): Information flow is neither a parameter, nor a reinforcing or slowing loop, but a loop
that delivers new information. It is cheaper and easier to change information flows than it is to
change structure. For example, a monthly public report of water pollution levels, especially near to
the pollution source, could have a lot of effect on people's opinions regarding the industry, which
may in turn provoke a response. Information about special species and their needs, placed along a
forest path, may encourage more conservation friendly behaviour along that path.
5. Rules of the system (such as incentives, punishment, constraints): Pay attention to rules, and to
who makes them. An increase in the tax amount for any water containing a given pollutant will have
an effect on water quality in a wetland (if enforced). Issuing fines to dog walkers who don’t clean up
behind their pets along the forest path, or who don’t constrain them with a leash, should directly
improve behaviour in the forest, even more so than information.
4. Power to add, change, evolve, or self-organize system structure: Self-organization describes a
system's ability to change itself by creating new structures, adding new negative and positive
feedback loops, promoting new information flows, or making new rules. For example,
microorganisms have the ability to not only change to fit their new polluted environment, but also to
undergo an evolution that makes them able to biodegrade or bio-accumulate chemical pollutants.
This capacity of part of the system to participate in its own eco-evolution is a major leverage for
change. In a CEPA context, once forest users or youth clubs start taking joint ownership with the
local council for managing a wetland or forest, their desire and capacity for change may increase
significantly.
INFLUENTIAL INDICATORS
3. Goal of the system: Changing goals changes every item listed above: parameters, feedback loops,
information and self-organization. A city council decision to change the zoning of an urban forest
area from commercial development, to a protected conservation area with recreational access, will
have an enormous impact on the area. That goal change will affect several of the above leverage
points: information on plant and animal species numbers and status will become mandatory, and
legal punishments will be set for any destructive use of the area.
2. Mindset or paradigm from which the system — its goals, structure, rules, delays, parameters —
arises: A societal paradigm is an idea, a shared unstated assumption, or a system of thought that is
the foundation of complex social structures. Paradigms are very hard to change, but there are no
limits to changing paradigms. Paradigms might be changed by repeatedly and consistently pointing
out anomalies and failures in the current paradigm to those with open minds. A current paradigm is
"Nature is a stock of resources to be converted to human purpose". What might happen if we were
to challenge this collective idea?
1. Power to transcend paradigms: Transcending paradigms happens when we go beyond
challenging fundamental assumptions, into the realm of changing the values and priorities that lead
to those assumptions, and being able to choose among value sets at will. Many today see Nature as
a stock of resources to be converted to human purpose. Many indigenous peoples on the other
hand, like Native Americans, see Nature as a living god, to be loved and worshipped. These views are
incompatible, but perhaps another viewpoint could incorporate them both, along with others.
APPENDIX 2
Indicators for Environmental Education Content in Publications
Source: Environment Australia, http://www.environment.gov.au/education/publications/ee-reviewschools/indicators.html
Environmental education indicators
The Curriculum Corporation for Environment Australia in 2003 developed a set of indicators to map
references to Environmental Education in curriculum documents. A total of 147 indicators were
identified through this process. For the mapping exercise, the indicators are grouped under five
categories and ten sub-categories, illustrated below.
Each category and sub-category is listed below, followed by the elements and factors that can be used as the basis for indicators.

Category: Information about the environment
• Ecosystems: Local; Regional; National; Global; Natural systems
• Ecological principles: Adaptations; Biodiversity; Carrying capacity; Cycles of matter; Ecological balance; Energy flow; Fauna; Photosynthesis/Flora; Food webs, interactions, biotic/abiotic, communities; Habitats; Interdependence; Population changes; Survival (factors); Species diversity; Sustainable environment/life; Change over time
• Energy and resources: Renewable resources; Finite resources; Production and consumption; Resource use; Sustainable development; Use/Efficiency of energy; Nuclear energy; Energy conservation

Category: Studies of humans and the environment
• Humans and the environment: Agricultural sustainability, food security; Built environment, building for survival, energy efficient housing, costs; Health and health care, urban health hazards; Indigenous lifestyle sustainability, farming; Lifestyles, how people function within an environment, quality of life; Mass transit technology; New technologies and efficiencies; Population (growth, distribution, dynamics); Poverty; Recreation, tourism, eco-tourism; Sustainable human settlements, development; Urban sprawl, urbanisation; General human activities
• Political and economic issues: Citizenship; Eco-efficiency; Ecological footprint; Ecospace; Environmental assessment; Environmental law; Government environmental policies; Interconnectedness (political, economic, environmental, social); Intergenerational equity; Land-use planning; Life-cycle analysis; Lobby groups; Management; Media; Natural resource accounting; Precautionary principle; Sustainable consumption; Cost benefit analysis
• Pollution: Air pollution, air quality; Hazardous wastes, toxic chemicals; Noise pollution; Radioactive wastes, radiation; Solid wastes; Storm water, sewage; Vehicle emissions; Water pollution, water quality
• Issues: Acid rain; Conservation; Deforestation, land clearing, habitat destruction; Desertification; Endangered species; Greenhouse, climate change; Introduced species; Land degradation; National parks/remnant vegetation; Environmental disasters, e.g. nuclear accidents; Ozone; Re-vegetation; Salinity; Sustainable biotechnology, bio-engineering; Water depletion (rivers, ground water); Wilderness; Recycling

Category: Skills, problem solving and competencies
• Experimental design; Observing; Measuring; Questioning; Mapping; Interpreting; Investigating; Collecting, analysing and organising information; Communicating ideas and information; Planning and organising activities; Working with others and in teams; Decision making; Brainstorming; Creative thinking; Designing; Future tools/forecasting; Solving problems; Environmental leadership; Environmental auditing; Evaluating/assessing; Critical thinking; Comparing evidence of change/short- and long-term impacts; Writing; Listening; Reading

Category: Attitudes, values and viewpoints
• Aesthetics; Appreciation of the benefits of community; Appreciation of the dependence of human life on finite resources; Appreciation of the importance of individual action; Appreciation of the interdependence of all living forms; Care for the environment/stewardship; Ethics; Appreciation of the interrelationships between science, technology, society and environment; Personal acceptance of a sustainable lifestyle; Respect for other cultural perspectives; Respect for other living things; Social justice/equality/respect for human rights; Spirituality; Value clarification; Changing perceptions towards environments

Category: Action
• Energy conservation at school/home; Environmental citizenship; Government initiatives; Litter reduction at school/local area; Local community projects; Purchasing policies at school/home/canteen; School environment improvements/projects; Waste minimisation at school/home; Water conservation at school/home; Reducing harmful chemicals at home/school; Turning knowledge into action
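A CEPA team wishing to repeat this kind of mapping exercise on its own publications can keep the tally very simple. The sketch below is a minimal illustration in Python, assuming a handful of abridged keyword lists per category (these are examples only, not the 147 indicators themselves); it counts how often the chosen terms appear in a piece of publication text.

# Illustrative sketch: tally occurrences of selected indicator terms in a text,
# grouped by category. The keyword lists are abridged examples only.
from collections import Counter
import re

INDICATOR_TERMS = {
    "Information about the environment": ["biodiversity", "habitat", "food web", "energy flow"],
    "Studies of humans and the environment": ["urban sprawl", "eco-tourism", "water pollution"],
    "Skills, problem solving and competencies": ["investigate", "mapping", "critical thinking"],
    "Attitudes, values and viewpoints": ["stewardship", "respect", "social justice"],
    "Action": ["recycling", "water conservation", "litter reduction"],
}

def map_indicator_references(text: str) -> Counter:
    """Count indicator-term matches per category in the given publication text."""
    counts = Counter()
    lowered = text.lower()
    for category, terms in INDICATOR_TERMS.items():
        for term in terms:
            counts[category] += len(re.findall(re.escape(term), lowered))
    return counts

if __name__ == "__main__":
    sample = ("Learners investigate water pollution in the local habitat "
              "and plan a recycling and water conservation project.")
    for category, n in map_indicator_references(sample).items():
        print(f"{category}: {n}")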
Appendix 3 of CEPA Evaluation Design Toolkit – Commissioned by ICLEI LAB and City of Cape Town
APPENDIX 3
Guidelines for Environmental Education
Compiled by Dr Jim Taylor, Wildlife and Environment Society of South Africa (jt@wessa.co.za)
The terms associated with CEPA are:
C for communicating, connecting, capacity building, change in behaviour
E for educating, empowerment (learning and professional updating)
P for public, public awareness, public participation, policy instrument
A for awareness, action, action research.
“To create deep change, we must find ways of managing communication and learning across cultures and disciplines, and collectively creating and managing new knowledge for sustainable solutions.” (Keith Wheeler, Chair: IUCN Commission on Education and Communication)
The following guidelines have been developed through years of experience and applied research in
environmental education processes. They are shared here as a reference for developing meaningful
participation processes, training courses and other related programmes.
Environmental education processes should:
1. Be relevant and appropriate to the situation and the context of the participants.
2. Seek to connect with the context in which the learning is situated and the topics under
consideration.
3. Build on existing strengths and opportunities.
4. Mobilise and engage with the prior knowledge or understanding of participants and, where
appropriate, challenge prior knowing in a supportive, enabling ‘learning for change’ environment.
5. Engage participants in learning tasks that are related to their context (task-based learning).
6. Support dialogue, practical field-work experiences, reporting on experiences and sharing ideas as
well as ‘action taking’ related to the learning. The appropriate interlinking of such processes will
strengthen meaningful learning.
7. Share the ‘tools of science’ so that participants become confident in using such tools to empirically
find out about their environment, explore and tackle problems. For example, learning with basic
water quality monitoring kits can enable participants to investigate and address water pollution.
8. In the case of work-based learning, relate to the work environment of the participant rather than be
removed and hypothetical.
9. Build an understanding of the social and physical world as interconnected and inter-dependent
systems.
10. Enable participants to learn, ‘un-learn’ and ‘re-learn’ from discontinuities such as pollution
incidents or loss of water supplies.
11. Encourage and enable informed action-taking and on-going practice-based learning.
12. Engage participants in collective processes of finding out, thinking through and acting on
environmental issues, challenges and opportunities.
Another set of guidelines has been developed by Gareth Thomson, Canadian Parks and Wilderness
Society, and Jenn Hoffman, Sierra Club of Canada, BC Chapter. Drawing on their publication, Measuring
the Success of Environmental Education Programmes, we highlight and adapt the following three
guidelines:
1. Examine environmental problems and issues in an all-inclusive manner that includes social, moral,
and ethical dimensions, and is mindful of the diversity of values that exist in society.
2. Motivate and empower participants through the development of specific action skills, allowing them
to develop communities of practice and strategies for responsible citizenship through the
application of knowledge and skills as they work toward the resolution of an environmental
problem or issue.
3. Promote an understanding of the past, a sense of the present, and a positive vision for the future.
Principles for Curriculum Design
Prof Heila Lotz-Sisitka (1999) from the Southern African Development Community (SADC), drawing on
international literature and regional research, developed the following principles to inform curriculum
or course design for environmental education:
• Responsiveness to the complex and changing social, environmental and economic contexts within which participants live and work.
• Meaningful opportunities for participants to contribute to and shape the content and purposes of the course.
• Open teaching and learning programmes that are flexible enough to respond to individual participants’ needs and allow for participants to contribute to course processes.
• A recognition that within any practice there is a substantial amount of ‘embedded’ theory; provide opportunities to critically engage with this theory, its strengths and its weaknesses in different contexts.
• Tutors and course coordinators work with participants in ways that enhance the contribution that participants make in their work and community contexts.
• All of the above implies and supports reflexivity in terms of evaluating what we do, understanding why we do it in that way, considering alternatives and having the capacity to support meaningful social transformation when appropriate.
An Open Framework for Learning
The following framework developed by Prof Rob O’Donoghue of Rhodes University, South Africa,
provides guidance for planning learning opportunities. The surrounding questions can be used to plan
opportunities for learners to seek information, encounter reality, take action, and report on what they
have come to know.
Appendix 4 of CEPA Evaluation Design Toolkit – Commissioned by ICLEI LAB and City of Cape Town
APPENDIX 4
Guidelines for Local Government Biodiversity Communication
Compiled with reference to the Local Biodiversity Communication, Education & Public
Awareness Strategy and Action Plan Guidelines, prepared by the ICLEI Cities Biodiversity
Center, May 2012.
The terms associated with CEPA are:
C for communicating, connecting, capacity building, change in behaviour
E for educating, empowerment (learning and professional updating)
P for public, public awareness, public participation, policy instrument
A for awareness, action, action research.
“To create deep change, we must find ways of managing communication and learning across cultures and disciplines, and collectively creating and managing new knowledge for sustainable solutions.” (Keith Wheeler, Chair: IUCN Commission on Education and Communication)
Communication processes must be customized for particular contexts, needs and audiences. The
following guidelines are aimed at helping CEPA staff decide on the best approach for each unique
situation.
Local Government Communication Processes on Biodiversity should be:
1. Relevant and appropriate to the situation and the context.
2. Connected to the context in which they are situated and to the topics under consideration.
3. Targeted and focused - addressing the main drivers of change in biodiversity, and the people
who can really make a difference to solve the issue, rather than those who are easiest to reach.
4. Aimed at enhancing capacity to deal with CEPA priorities, encouraging local stewardship of
biodiversity and helping to build a network of partnerships and alliances for the long-term
conservation, sustainable use and effective management of biodiversity.
5. Helping to build trust, understanding and shared agreements with organisations, companies and
communities which can assist local government in conserving and sustainably using biodiversity.
6. Helping to highlight the contribution of biodiversity and ecosystem services to human well-being, poverty eradication and sustainable development, as well as the economic, social and cultural values of biodiversity.
7. Where possible, positive and inspiring; where it is not possible to be positive, helpful, providing options for action to address or respond appropriately to dire news.
8. Flexible, mindful of and responsive to the particular context and intended recipients, their interests and needs, cultural practices and traditions.
“Traditional messages on biodiversity from governments and
NGOs urging the public and other stakeholders to change
their daily practices need to be reviewed. Often these
messages use too much jargon, are negative, too didactic,
and abstract or filled with doom. Instead of turning people
on, they risk switching them off.” (Dr. Ahmed Djoghlaf,
former Executive Secretary to the Convention on Biological
Diversity)
At the 2008 IUCN World Congress in Barcelona, a workshop on ‘communicating biodiversity: what
works and what doesn’t’ provided the following research-based rules for effective communication:
Making Communication Work – 10 Rules 1:
1. People are not rational
2. Challenge people’s habits
3. Use easy and accessible words
4. Make the message relevant, make people understand that they are targeted
5. There is a lack of trust in messages; communicate through credible channels
6. Cognitive discernment: avoid negative messages; fear has the opposite effect to the one intended
7. Create a personal link between the person and nature (emotions) – fundamentals of biodiversity
8. Make sustainable development so desirable that people will find it ‘normal’; the need is not so
much to understand biodiversity as to understand which behaviours have the power to bring about change.
9. Achieve broad consensus
10. The message should be sustainable and last in the long term.
1 Ed Gillespie, Futerra Sustainability Communications, www.futerra.co.uk/downloads/10-Rules.pdf
Messages should be: inspiring and positive; in the correct tone; accurate and verifiable; personal and relevant; credible and to the point; clear and consistent; including the big picture.
Local governments’ environmental communication designers should:
1. Ensure strategic internal and external communications by, for example, conducting a prior CEPA
assessment of communication needs within the local government and among its stakeholders.
Consider planning internal and external communication separately, as both are important but
may involve separate target audiences, objectives and messages and different communications
channels.
2. Know the issue/s and be clear on the intended role of communication:
“Sound science is fundamental to our understanding of the consequences of biodiversity loss. It
also has the potential to be a powerful incentive for conservation action. But only if you
understand what it says. And only if you care about what it means. The challenge for biodiversity
communicators across the world is to ‘translate’ complex science into compelling messages that
will inspire the action required to conserve biodiversity. Success lies in understanding the
communications formula that turns science into action” (IUCN, p.1 2).
3. Understand the relevant stakeholders and target audiences, including their existing knowledge,
attitude, level of education, cultural and socio-economic context, language, lifestyle, interests
and their involvement in the problem and solutions, how they perceive the issue/s and what will
likely motivate them to action. To identify and learn about the target audience, a variety and
combination of research methods may be used including: desk-top based research of existing
information, interviews, questionnaires, web-based surveys, focus groups and expert interviews.
2 IUCN. Communicating biodiversity: Bringing science to life through Communication, Education and Public Awareness. http://cmsdata.iucn.org/downloads/cepa_brochure_web.pdf
4. Understand that communication is not simply a matter of presenting an audience with facts and
information, and be mindful of the axiom: “what we say is not necessarily heard, what is heard is
not necessarily understood, what is understood is not necessarily acted upon, what is done is not
necessarily repeated”.
Frequently made mistakes in communication planning include 3:
• Trying to convince stakeholders rather than listening to them and taking on board their points of view, understanding their motivations and how they relate to the issue.
• Seeing stakeholders in biodiversity issues as ‘enemies’, rather than as agents of change and interest groups that are as legitimate as the sustainable development experts.
5. Comply and be consistent with relevant laws, policies and regulations, including internal
communications and branding guidelines.
6. Be realistic about the capacity and budget required to implement CEPA activities.
7. Determine which communication channels or tools (means) are most suitable. They can have a
significant impact on the effectiveness of the communication, and decisions should take into
account the communication target, target audience, credibility of the communications means,
budget, capacity and your experience with the channels. Examples of communications tools and
channels are: brochures; videos; events; campaigns; workshops; mass media like newspapers,
radio, television; face-to-face meetings; websites and social media sites. Online social networks
(like Facebook and Twitter) are increasingly being used to gather support for campaigns, share
news and information and capture public reactions and attitudes to those activities. Consider
using a mix of tools / channels to most effectively achieve communication targets.
Checklist for selecting communication tools and channels 4:
• Does the tool or channel help reach the communication targets?
• Does it appeal to the target group?
• Is it credible?
• Is the message reinforced by the tool or channel?
• Can it be easily accessed by this particular target group?
• What is the most effective reach and impact of the tool or channel that suits the budget?
• What is past experience with this tool/channel and its impact?
• Always pre-test the message and the tool/channel and check that the message is not being interpreted in an unexpected or unintended way.
3 Hesselink F.J. et al., 2007. Communication, Education and Public Awareness: A Toolkit for the Convention on Biological Diversity. Montreal, Canada.
4 Adapted from Hesselink et al., ibid, p.269.
4
Appendix 4 of CEPA Evaluation Design Toolkit – Commissioned by ICLEI LAB and City of Cape Town
8. Set communication targets and timelines; benchmark against best practice. Communication
targets are different from biodiversity conservation targets, although related. To formulate
realistic communication targets, it is important to have already identified the issue, the target
audience and the target audience’s knowledge, attitude and behavior towards the issue, as the
communication targets would depend on this information. Potential communication targets
include: providing knowledge, changing perceptions, creating new lifestyle choices or practices.
An example is: 60% of CEOs in extractive industries within the city should integrate biodiversity
conservation and sustainable use issues into their business plans and objectives.
9. Build monitoring and evaluation (M&E) into communication strategies and, once evaluations
have been conducted, set targets for improvement. Use the LAB CEPA Evaluation Toolkit to
decide the aims and objectives of the planned M&E, what will be monitored and evaluated and in
what order of priority, which methods will be used and what the indicators of success will be,
the timeframes and intervals for M&E, the responsible parties (both internal and external),
and the capacity and budget needed for implementation. (A simple way of keeping communication
targets and their M&E details together is sketched after this list.)
10. Consider the city’s communication, education and public awareness raising activities in relation
to each other; they can be complementary and often overlap. Collaborative planning between
City staff will help to avoid duplication that would waste scarce resources, and can increase
synergy and impact.
11. Form partnerships and networks with other agencies in order to reach broader audiences and
overcome lack of capacity. Communications can also play the role of securing more networks
and partnerships for other biodiversity and CEPA programmes, by helping to enhance the city’s
credibility among existing and potential partners and providing channels to reach them.
Networking can take many forms – including online, face-to-face and electronic networking;
giving presentations at events; attending staff meetings about the status of biodiversity
activities; sending out biodiversity newsletters; and participating in online seminars.
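Referring back to points 8 and 9, one convenient way to keep a communication target and its M&E arrangements together is to record each target as a small structured entry. The sketch below is a minimal illustration in Python; the field names, the baseline figure and the progress check are assumptions added for the example, not a format prescribed by the toolkit.

# Illustrative sketch: recording a communication target with its M&E details.
# Field names and values are examples only, not a prescribed format.
from dataclasses import dataclass, field

@dataclass
class CommunicationTarget:
    description: str              # what should change, for whom, by when
    indicator: str                # how achievement will be measured
    baseline: float               # measured starting value (e.g. % of audience)
    target_value: float           # value to be reached
    deadline: str                 # target date
    methods: list = field(default_factory=list)   # e.g. survey, document review
    responsible: str = ""         # internal or external responsible party

    def progress(self, current_value: float) -> float:
        """Share of the planned change achieved so far (0.0 to 1.0)."""
        planned_change = self.target_value - self.baseline
        if planned_change == 0:
            return 1.0
        return (current_value - self.baseline) / planned_change

# Example echoing the CEO target in point 8 (baseline figure is invented):
ceo_target = CommunicationTarget(
    description="CEOs in extractive industries integrate biodiversity into business plans",
    indicator="% of CEOs whose business plans include biodiversity objectives",
    baseline=10.0,
    target_value=60.0,
    deadline="2014-12",
    methods=["document review", "interviews"],
    responsible="CEPA unit",
)
print(f"Progress: {ceo_target.progress(35.0):.0%}")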
The following principles guided the development and implementation of City of Edmonton’s Local
Biodiversity Strategic Action Plan (LBSAP, p.18) and are helpful in planning communications
(emphasis added):
• Build capacity for ecological protection in Edmonton.
• Engage the community in conservation and management of natural areas to harness existing local knowledge and raise awareness.
• Think continentally and regionally, and plan locally.
• Align with existing conservation plans, aiming to be additive rather than redundant.
• Use best available science.
• Balance public interest with property rights.
• Promote Edmonton’s ecological network as a context to which urban development must be tailored, not the opposite.
• Embrace innovative approaches to conservation.
Appendix 5 of CEPA Evaluation Design Toolkit – Commissioned by ICLEI LAB and City of Cape Town
APPENDIX 5
Stories of Most Significant Change (SMSC) Methodology for Evaluation 1
In this formative evaluation process, participants are asked to share their stories of the most
significant changes that they have experienced as a result of being part of a CEPA programme.
The process typically starts by asking people at the base of a pyramid of participants, what they
regard as the most significant change that has taken place as a result of the programme.
For example, in the Cape Town Green Audits for Schools Programme 2, this question could first be
asked of the students in the participating schools. It is the students who conduct the schools’
environmental audit, decide which of their schools’ audit results they want to improve, formulate
what action to take, undertake the action, and report back on it. In response to the question,
Student A may respond that the most significant change for her is that she has learnt things about
environmental resources that she can use for the rest of her life. Student B may respond that the
interaction between richer and poorer schools was the most significant change experience for him.
The selection pyramid, from base to apex: students’ MSC stories; teachers’ choice of MSC stories; CEPA staff’s choice of MSC stories; the sponsor’s chosen MSC story.
The MSC methodology is often misunderstood to end at this point, i.e. the collection of
stories (which could be small case studies) is presented in the evaluation as the outcomes of the
programme. However, the methodology described by Davies and Dart involves the next layer of
stakeholders in the programme, in this case the students’ teachers, who review these collected change
stories and decide which smaller selection of the stories represents the most significant change.
To make this choice the teachers have to discuss among themselves what they mean by ‘significant
change’, and assumptions surface at this point. Teacher A may value a story that reflects academic
outcomes more, Teacher B may value a story depicting life lessons most, Teacher C may value a
change story with clear environmental benefits most, and so on.
1 Davies, Rick and Dart, Jess, The ‘Most Significant Change’ (MSC) Technique: A Guide to Its Use, 2005, http://www.mande.co.uk/docs/MSCGuide.pdf
2 See the Case Study Folder on the CD.
The teachers’ smaller selection of change stories is then sent on to the programme developers – in
this instance, the CEPA providers who introduced the Green Audits Programme to the schools and
supported them in implementing it. These partners, which may include CEPA practitioners from the
City, in turn choose one or two change stories, which reflect, in their minds, the most significant
change. Again, assumptions about the nature of the change that is most valued will surface and be
discussed. Importantly, these discussions, and the choices made, all need to be documented.
The final selection of 2-3 stories is given to the CEPA programme’s sponsors, e.g. the environmental
managers in the City of Cape Town. They in turn choose one of these stories, based on their
assumptions of what the most significant change would be in the light of their intentions and the
vision they had for the initiative when they first initiated and/or approved it. All along the way there
will be differences of opinion, hard decisions to make and discussions to record.
It is precisely these differences, and how they are resolved, that are important in this methodology,
for they reflect programme participants’ assumptions of what is meaningful and what kind of change
we can and should strive for. The documented discussions provide CEPA practitioners, planners and
managers with valuable insights and reflections that can inform future programmes and evaluations.
The individual stories collected on the way can of course also be used for communication and
educational processes – but that is a secondary benefit of the methodology.
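The layered selection described above can be pictured as a simple pipeline: each stakeholder group reviews the stories passed up from the level below, chooses a smaller set, and has its reasons documented. The sketch below is a minimal illustration in Python; the group names, story titles and selection sizes are invented for the example, and the ‘reasons’ would in practice come from the facilitated discussions the methodology requires.

# Illustrative sketch of the layered MSC selection process described above.
# Group names, stories and selection sizes are invented examples.

def select_stories(stories, reviewers, keep, record):
    """Reviewers choose `keep` stories; their choice and reasons are documented."""
    # In practice the choice and reasons come from a facilitated discussion;
    # here we simply take the first `keep` stories as a stand-in.
    chosen = stories[:keep]
    record.append({
        "reviewers": reviewers,
        "chosen": [s["title"] for s in chosen],
        "reasons": "to be captured from the reviewers' discussion",
    })
    return chosen

student_stories = [
    {"title": "Learning I can use for life", "author": "Student A"},
    {"title": "Meeting learners from other schools", "author": "Student B"},
    {"title": "Our school saves water now", "author": "Student C"},
]

documentation = []  # the record of discussions and choices at every level
teacher_pick = select_stories(student_stories, "Teachers", keep=2, record=documentation)
provider_pick = select_stories(teacher_pick, "CEPA providers", keep=2, record=documentation)
sponsor_pick = select_stories(provider_pick, "Sponsor", keep=1, record=documentation)

for entry in documentation:
    print(entry["reviewers"], "chose:", ", ".join(entry["chosen"]))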
Appendix 6 of CEPA Evaluation Design Toolkit – Commissioned by ICLEI LAB and City of Cape Town
APPENDIX 6
Logical framework planning guide
Source: Marlene Laros, ICLEI
The matrix has five columns: OBJECTIVES; TARGETS; OBJECTIVELY VERIFIABLE INDICATORS; MEANS OF VERIFICATION; IMPORTANT ASSUMPTIONS. The numbered entries below correspond to the cells of the matrix.

(1) GOAL: The changed situation in society you aim to contribute to achieving (gives meaning to what you do but you cannot achieve it alone); needs to be aligned to broader policy objective of government regarding environment.
TARGETS: Specific targets need to be set for each indicator. They should specify ‘how much’, ‘how many’ or ‘how well’ and be linked to a date. You will generally need a baseline measure to set a useful target and measure change.
(8) IMPACT INDICATORS: Indicators are what you will use to measure and assess change and effective achievement – signs of success. E.g. # (number) of red data species.
(11) MEANS OF VERIFICATION: Tells you where you will get the information required by the indicators.
(5) IMPORTANT ASSUMPTIONS: External conditions you assume will exist, are outside your control, but will affect what you achieve. Risks you will need to influence or manage.

(2) PURPOSE: The result your organisation exists to achieve.
(9) OUTCOME INDICATORS: E.g. # and % of EIA decisions overturned on appeal for reasons related to the adequacy of the EIA done.
(12) MEANS OF VERIFICATION
(6) IMPORTANT ASSUMPTIONS

(3) OUTPUTS: The specific results that must be achieved to achieve the purpose.
(10) OUTPUT INDICATORS: E.g. # and % of EAPs who meet the continuing professional development criteria for re-registration annually.
(13) MEANS OF VERIFICATION
(7) IMPORTANT ASSUMPTIONS

(4) ACTIVITIES: The actions that must be taken to achieve each result.
(14) RESOURCES/INPUTS: The resources that will be needed to achieve the activities, including people, finance, information, specific skills and equipment, etc.
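Teams that capture the matrix electronically before transferring it to a planning document may find it useful to mirror the rows and columns in a small data structure. The sketch below is a minimal illustration in Python; the field names and the example entries (which echo the red data species and EAP re-registration indicators above) are assumptions, not a prescribed schema.

# Illustrative sketch mirroring the logical framework matrix above.
# Field names and example entries are assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogframeRow:
    level: str                      # "Goal", "Purpose", "Output" or "Activity"
    statement: str                  # the objective at this level
    indicators: List[str] = field(default_factory=list)
    targets: List[str] = field(default_factory=list)        # how much / how many / how well, by when
    means_of_verification: List[str] = field(default_factory=list)
    assumptions: List[str] = field(default_factory=list)    # external conditions and risks

logframe = [
    LogframeRow(
        level="Goal",
        statement="Improved conservation status of red data species in the city",
        indicators=["Number of red data species"],
        targets=["Baseline established and no net loss by a set date"],
        means_of_verification=["City biodiversity monitoring reports"],
        assumptions=["Broader conservation policy remains in place"],
    ),
    LogframeRow(
        level="Output",
        statement="EAPs meet continuing professional development criteria",
        indicators=["Number and % of EAPs re-registered annually"],
        targets=["A chosen % re-registered each year"],
        means_of_verification=["Professional registration records"],
    ),
]

for row in logframe:
    print(f"{row.level}: {row.statement} ({len(row.indicators)} indicator(s))")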