Learning from Complexity - Outcome Mapping Learning community

Learning from Complexity: The International Development Research Centre's Experience with Outcome Mapping

By Sarah Earl and Fred Carden

Abstract

This short article introduces the major concepts of Outcome Mapping and discusses the International Development Research Centre's experience in developing and implementing Outcome Mapping with northern and southern research organizations. It explores how the fundamental principles of Outcome Mapping relate to organizational learning principles and the challenges associated with applying theory to practice. It presents cases where planning, monitoring, and evaluation processes have been used to encourage learning and improvement. It discusses the potential of Outcome Mapping as a tool for evaluative thinking to build learning into development programs.
Introduction

In its conceptual and practical work over the past few years, the Evaluation Unit of the International Development Research Centre (IDRC) has encountered four fundamental challenges in assessing and reporting on development impact which inhibit learning by development research organizations.[1]

First, while development research organizations are under pressure to demonstrate that their programs result in significant and lasting changes in the well-being of large numbers of intended beneficiaries, such "impacts" are often the product of a confluence of events for which no single organization can realistically claim full credit. Therefore, when an organization sets out to demonstrate that its programs are the "cause" of development improvements, it runs into serious measurement difficulties.

Second, in order for change to truly take root, ownership and control must have shifted from the external organization to local actors and organizations. In other words, ideas and approaches have become integrated with a range of events, activities, customs, laws, and policies within the local context and fall outside the purview of the external organization. As noted by Terry Smutylo:

[A] paradox exists for external agencies under pressure to take credit for results at the "outcome" and "impact" stages; for it is at these stages where their influence, if they have been successful, is low and decreasing relative to that of other actors. Attribution for results, which naturally goes to the dominant influences associated with those results, may empirically overlook antecedent project components. (Smutylo 2001: 5)

Third, assessing long-term development impacts does not usually provide the kind of information and feedback that programs require to improve their performance. It provides "clueless feedback", which neither tells the organization about its specific contributions to development change, nor provides data on how to improve its efforts (Smutylo 2001: 6).

Fourth, the heavy emphasis on demonstrating the impact of programs has meant that the development of learning capacities within organizations themselves has been ignored.

Consequently, assessing development impacts, especially from the perspective of an external agency, is problematic both methodologically and in terms of the value of the findings for learning organizations. Nonetheless, many organizations continue to struggle to measure results far beyond the reach of their programs.[2] To address this problem, IDRC has been working with a number of organizations in Asia, Africa, and Latin America to develop and field-test a methodology called Outcome Mapping, which takes account of the complexity of development processes and focuses explicitly on learning.[3] It establishes a vision of the human, social, and environmental betterment to which a program hopes to contribute, and then focuses monitoring and evaluation on factors and actors within its sphere of influence. The richness of a program's performance story is told using systematically collected qualitative data.
Although a complete case study of a program's use of Outcome Mapping is beyond the scope of this short article, it presents how the fundamental principles of Outcome Mapping relate to organizational learning theory and discusses some of the challenges associated with applying theory to practice. Our experience with a number of applied development research programs has demonstrated that, despite the best intentions, learning does not happen naturally, but that it can be built into work practices through data collection and reflection processes. Outcome Mapping has proven to be a robust methodology for helping program teams think holistically and strategically about the results they want to help bring about and learn from their experiences.
This article presents Outcome Mapping as it pertains to development programs,[4] but it can also be used, with adaptation, at the project or organizational levels. Regardless of the level, the fundamental "learning agenda" of Outcome Mapping remains the same -- to encourage evaluative thinking, participatory decision making, open sharing of successes and failures, and a willingness to engage in regular processes of thoughtful reflection and learning. Outcome Mapping is based on three principles which we view as essential to encouraging learning: 1) planning for and assessing both external results and internal performance; 2) the cyclical nature of planning, monitoring, and evaluation; and 3) systematized self-assessment as a consciousness-raising, consensus-building, and empowerment tool for those working directly in a program.
Each of these principles encourages a program to think holistically about its work in order to improve. These principles offer more generalizable lessons about encouraging learning and reflection that may be of value to others, whether or not they are using Outcome Mapping.

What is Outcome Mapping?

Maps are cognitive guides. They locate us, helping us figure out where we are now in relation to where we have been and where we are going. -- Michael Quinn Patton (Earl et al. 2001: Foreword)

Outcome Mapping is an integrated planning, monitoring, and evaluation
methodology. It takes a learning-based and use-driven view of evaluation guided by principles of participation and iterative learning. As a process, it is embedded with organizational learning principles and offers strategies for increasing a program's ability to improve its performance. It fosters program learning by incorporating self-assessment and reflection processes throughout the planning, monitoring, and evaluation stages. It begins with a facilitated workshop to design a program and monitoring system, followed by a series of self-assessment workshops to monitor change and refine strategies, with periodic evaluation studies implemented as required.

In terms of results measurement, the originality of the methodology lies in its shift away from assessing the development impact of a program (e.g., poverty alleviation, reduced conflict) to a focus on behavioural change. Outcome Mapping is built on the premise that behavioural change is fundamental to sustainable development. Outcomes are defined as changes in the behaviour, relationships, activities, and/or actions of the people, groups, and organizations with whom a program works directly. By using Outcome Mapping, a program will not be claiming the achievement of development impacts but rather contributions to the achievement of outcomes. These outcomes, in turn, enhance the possibility of development impacts, but the relationship is not necessarily one of direct cause and effect. This shift significantly alters the way a program understands its goals and assesses its performance and results. A program's contributions to development are planned and assessed based on its influence on the partners with whom it is working to effect change. Focusing monitoring and evaluation on changes in partners illustrates that, although a program can influence the achievement of outcomes, it cannot control them, because ultimate responsibility for change rests with its partners. At its essence, development is accomplished through changes in the behaviour of people. This is the central concept of Outcome Mapping.

Planning For and Assessing External Results and Internal Performance
Outcome Mapping recognizes that development is a complex process made up of three parallel processes: first, the changes in the behaviours, actions, activities, and/or relationships of the people, groups, and organizations with whom a program works directly; second, the strategies that a program employs to encourage change in its partners; and third, the functioning of a program as an organizational unit. It builds an understanding not only of changes in the development setting in which a program is working, but also monitors and assesses the program's strategies and activities and whether the program is learning and adapting to new conditions. As such, Outcome Mapping assesses a program holistically and is based on the premise that a program needs to know not only about development results, but also about the processes by which they were attained, and about its internal effectiveness. It is through the combination of information and knowledge in these three areas that a program can build a better understanding of what it is achieving and how it can improve its levels of success.

By assessing these three elements of a program, Outcome Mapping unites process and outcome evaluation. It is therefore well suited to the complex functioning and long-term aspects of international development programs, where outcomes are intermeshed and cannot be easily or usefully segregated from each other. By considering the myriad actors and factors that contribute to development processes, it focuses on how a program facilitates change rather than on how it causes change, and looks to assess contribution rather than attribution. Outcome Mapping encourages a program to link itself explicitly to processes of transformation and provides a program with the information it requires in order to change along with its partners.

Looking at how the three elements interrelate, and the context in which change occurs, is essential to program learning. A program does not operate in isolation from other factors and actors; therefore, it cannot plan and assess as though it does. Systems thinking is not simple and straightforward, however, and requires a commitment to ongoing reflection and analysis. As Peter Senge points out, it means:

Seeing interrelationships, not things, and processes, not snapshots. Most of us have been conditioned throughout our lives to focus on things and to see the world in static images. This leads us to linear explanations of systemic phenomenon. (Senge 1990: 15)

International development programs are particularly prone to excluding themselves from the system in which development change occurs. By separating themselves from development processes (i.e., something "we" help "them" accomplish) and explaining change using linear reasoning, programs lose the opportunity to explore their full potential as change agents. Outcome Mapping encourages a program to think of itself as part of the change process and to embrace complex reasoning and multiple logic systems. Raj Verma, of the Nagaland Empowerment of People Through Economic Development program, described the change in the program team's understanding of their role after using Outcome Mapping for a self-assessment of their first phase and the planning of their second phase: "The often repeated and echoing question in Outcome Mapping -- 'what or who needs the change?' -- raised us from being providers of development, achieving outputs, to actually believing we were agents of change."

The Cyclical Nature of Planning, Monitoring, and Evaluation

[T]he key differentiating factor in the success of an organization is not just the products and services, not just its technology or market share, but the organization's ability to elicit, harness, and focus the vast intellectual capital and goodwill resident in their members, employees and stakeholders. When that intellectual capital and goodwill get energized and focused, the organization becomes a powerful force for positive change in today's business and societal environments. (Kaner 1996: viii)
Outcome Mapping is a process in which program staff engage; it is not a product that is provided to them. In it, planning, monitoring, and evaluation are not discrete events but are designed to be cyclical, with each feeding the others. Monitoring and evaluation considerations are introduced at the planning stage of a program, and all program staff are encouraged to think evaluatively throughout the program cycle. That is, they are encouraged to think critically and to ask questions about what they want to achieve and how they will know whether they are successful. The cyclical nature of planning, monitoring, and evaluation is important because development programs are part of an open system. The context in which a program operates is continuously changing; therefore, it needs to be engaged in ongoing reflection and learning in order to remain relevant and appropriate. It is impossible to plan for all eventualities; therefore, a successful program is one that assesses and adapts to changing situations in an intelligent way, based on thoughtful reflection. This idea resonates well with those engaged in international development programs because they have often experienced a well-thought-out plan being thwarted by an unexpected "external factor" -- war, natural disaster, or a change in government -- and therefore are adept at thinking about how to work in complex environments.

A key challenge is that, despite the enthusiasm for iterative learning and active engagement in the planning processes, many programs have difficulty putting in place an effective and consistent monitoring system. The problem does not appear to be a lack of commitment, sense of utility, or ownership of the process. Rather, it is time constraints, as other work demands take over and there is no time for group reflection. This poses a fundamental challenge for those supporting the incorporation of reflective practices in programs, because it is unavoidable that learning takes time. Furthermore, it cannot be outsourced! Outcome Mapping attempts to address this problem by encouraging program teams to be realistic about what they can manage in terms of monitoring and evaluation and to prioritize their information needs based on intended use. Prioritizing information needs is a difficult exercise because program staff genuinely want to know about many aspects of their work and tend to be overly ambitious about what is feasible given available resources. For example, a program supporting tobacco control research, which used Outcome Mapping to plan its second three-year phase, chose to focus data collection on only one of its partners -- researchers in developing countries. The team felt that understanding changes in the researchers' behaviours (e.g., engaging marginalized groups in the research process, publishing articles in peer-reviewed journals, influencing tobacco control policies and programs in their countries), and the factors and actors that contributed to those changes, would best reflect the development outcomes this program was helping to bring about in this nascent field. Furthermore, the program team could use the data collected both to fulfill reporting requirements and to provide themselves with credible information and knowledge on which to improve their own activities and interventions.

Systematized Self-Assessment and Group Learning

Outcome Mapping is based on principles of
participation and purposefully includes the program implementers in the design and data collection so as to encourage ownership and use of findings. It is premised on the belief that those engaged in the program can benefit from working as a group to systematically collect, analyze, and interpret data. It is intended as a consciousness-raising, consensus-building, and empowerment tool for those working directly in a development program. By actively engaging the team in the monitoring and evaluation process, Outcome Mapping empowers them to articulate, with accurate and reliable data, what they do to support outcomes and to harness group wisdom to improve their performance. In essence, it tries to implant the passion and enthusiasm of programming into the assessment process. Recognizing this emotional element of learning is crucial to encouraging programs to engage in learning and reflection processes. As noted by Peter Senge, "People's natural impulse to learn is unleashed when they are engaged in an endeavor they consider worthy of their fullest commitment." (Senge 1990: 13)

Outcome Mapping moves away from the notion that monitoring and evaluation are done to a program and instead actively engages the program team in the design of a monitoring framework and evaluation plan and promotes self-assessment. For example, a women's health and empowerment program in India is using Outcome Mapping to document and assess its own capacity development in the areas of gender, monitoring and evaluation, and applied research. The team members have identified behavioural markers that indicate progressive change and are using these to negotiate expectations amongst themselves, assess progress, and determine future strategies. Their self-assessment findings are not intended to be shared with others but will serve their own purposes as the program implementers.

Outcome Mapping is usually initiated through a participatory workshop led by an internal or external facilitator; regular monitoring workshops are then held in which the program team collects and analyses data in order to plan new, or refine existing, activities. Group learning is an essential component of the methodology because of its power to harness organizational knowledge. Michael Doyle states that the key to engendering learning in an organization is:

By creating psychologically safe and involving group environments where people can identify and solve problems, plan together, make collaborative decisions, resolve their own conflicts, trouble-shoot and self-manage as responsible adults. Facilitation enables the organization's teams, groups, and meetings to be much more productive. And the side benefits of facilitated or self-facilitated groups are terrific: empowerment, a deepening of personal commitment to decisions and plans, increased organizational loyalty, and the building of esprit de corps. (Kaner 1996: viii)

Outcome Mapping workshops are intended to be participatory and, wherever feasible, can involve the full range of stakeholders, including the partners in whom behavioural change is sought. Nonetheless, genuine participation is not simple (especially in the context of an externally funded development program), and hierarchy and politics can affect learning. A program using Outcome Mapping needs to consider carefully who should participate and to ensure that participants feel comfortable sharing their experiences (positive and negative), engaging in self-assessment, and brainstorming about how to move forward. In their desire to use participatory approaches, donor agencies sometimes ignore the power imbalances that necessarily exist between the organization funding a program and its beneficiaries. In order to create the optimum space for critical assessment and learning by a program team, participation needs to be considered in each instance and should be requested in a spirit of equitable collaboration, acknowledging the complexity of existing relationships.

Outcome Mapping has been developed in organizations where monitoring and evaluation are primarily intended to help program learning and improvement. An organizational priority on reflection has proven to be a prerequisite for the successful integration of the learning processes associated with Outcome Mapping. When incentives and rewards have instead been directed towards accountability reporting, Outcome Mapping has proven to be an inappropriate approach to monitoring and evaluation. Outcome Mapping can only be as empowering, participatory, and learning-oriented as the organizational context in which it is implemented.

Conclusion

Outcome
Mapping helps a program be specific about the actors it targets, the changes it expects to see, the strategies it employs, and its effectiveness as an organizational unit. It is particularly valuable for monitoring and evaluating development programs whose results and achievements cannot be understood with quantitative indicators alone but also require the deeper insights of a qualitative, contextualized story of the development process. Outcome Mapping will not help a program create generic lists of "lessons learned" or "best practices". Instead, it will help it weave together the plots of the three elements related to its work: first, the changes in the behaviours, actions, activities, and/or relationships of the people, groups, and organizations with whom a program works directly; second, the strategies that a program employs to encourage change in its partners; and third, the internal effectiveness of a program. Outcome Mapping provides a program with processes to collect data and to reflect on the change processes in order to guide its actions knowledgeably.

Footnotes

1. IDRC is a public corporation created in 1970 by the Parliament of
Canada. Its mandate is to initiate, encourage, support, and conduct research into the problems of the developing regions of the world and into the means for applying and adapting scientific, technical, and other knowledge to the economic and social advancement of those regions.

2. For a full discussion of the problems associated with measuring attribution and impact, see Terry Smutylo's "Crouching Impact, Hidden Attribution: Overcoming Threats to Learning in Development Organizations", Block Island Workshop on Cross Portfolio Learning, 22-24 May 2001, at http://www.idrc.ca/evaluation/crouching_impact.pdf.

3. Dr. Barry Kibel, Pacific Institute for Research and Evaluation, was instrumental in introducing his Outcome Engineering approach and working closely with us to adapt some of these ideas to the development research context. Methodological collaboration with the West Africa Rural Foundation (FRAO) and testing with the Nagaland Empowerment of People Through Economic Development Project (NEPED) and the International Model Forest Network Secretariat (IMFNS) have greatly informed this adaptation process. Ongoing testing with a number of other initiatives continues to enrich the process.

4. For the purposes of this article, a program is defined as a group of related projects and activities with a specified set of resources (human, capital, and financial) directed to the achievement of a set of common goals within a specified period of time.

References

Earl, Sarah,
Fred Carden, and Terry Smutylo (2001) Outcome Mapping: A Guide to Building Learning and Reflection into Development Programs, foreword by Michael Quinn Patton. Ottawa: IDRC.

Kaner, Sam (1996) Facilitator's Guide to Participatory Decision Making, foreword by Michael Doyle. British Columbia: New Society Publishers.

Senge, Peter M. (1990) "The Leader's New Work: Building Learning Organizations", Sloan Management Review 32 (1): 7-23.

Smutylo, Terry (2001) "Crouching Impact, Hidden Attribution: Overcoming Threats to Learning in Development Organizations", paper presented at the Block Island Workshop on Cross Portfolio Learning, 22-24 May.

Authors

Fred Carden, PhD, is Senior Specialist in Evaluation. He has
written in the areas of evaluation, international cooperation, and environmental management. He has taught and carried out research at York University, the Cooperative College of Tanzania, the Bandung Institute of Technology (Indonesia), and the University of Indonesia. He holds a PhD from the Université de Montréal and a Master's degree in environmental studies from York University. His current work includes the development of evaluation tools and methods in the areas of organizational assessment, participatory monitoring and evaluation, program evaluation, and outcome mapping.

Fred Carden, Evaluation Unit, International Development Research Centre, PO Box 8500, Ottawa, Ontario, Canada K1G 3H9; fcarden@idrc.ca; phone: 613-236-6163 x 2107; fax: 613-563-0815

Sarah Earl is a Program Evaluation Officer with the Evaluation Unit and has been with IDRC since 1998. She has carried out research and worked in Eastern Europe and the former Soviet Union. She holds a Master's degree in Russian politics and development from Carleton University and an MA in Russian history from the University of Toronto. Her current work includes the conceptual development of outcome mapping and its application for planning, monitoring, and evaluating at the program and organizational levels.

Sarah Earl, Evaluation Unit, International Development Research Centre, PO Box 8500, Ottawa, Ontario, Canada K1G 3H9; searl@idrc.ca; phone: 613-236-6163 x 2347; fax: 613-563-0815

Article published in Development and the Learning Organization, edited by Laura Roper, Jethro Pettit, and Deborah Eade, a Development in Practice Reader. Oxfam GB, 2003.