
LEARNERS Project Information Sheet No.2
Evaluation and impact assessment
Why conduct an evaluation?
Evaluation is usually defined as assessing the value, worth or merit of something. It is
something that we all do every day and can therefore be readily built into community
projects and initiatives. Some of the purposes of evaluating a community initiative include:
• to find out how well community or participants' needs were met
• to improve the initiative (to better meet community needs, better manage the initiative or make it more sustainable)
• to assess its outcomes or impacts
• to understand why it does or does not work
• to find out how it is operating
• to assess whether its objectives were met
• to assess its efficiency or cost-effectiveness.
Evaluation can help various groups with an interest in an initiative (including participants,
project implementers and funding bodies) understand such things as what difference the
initiative made (or could make), whether the difference was what was intended, and what
changes could make the initiative more effective and sustainable in the future. An example
of an evaluation of a project that aimed to enhance Queensland rural women's access to
interactive communication technologies is provided in the box at the end of this sheet.
Approaches to evaluation
There are many different approaches to evaluation and many different methods for carrying
them out. The type of approach and method used is often influenced by the perspectives
and values of those involved and their skills, knowledge and experience in doing research.
Wadsworth (1991) identifies two main approaches: ‘audit review’ and ‘open inquiry’.
The audit review approach to evaluation is measurement-oriented. It involves asking
questions such as 'what did we set out to achieve?', 'what are the signs that we have done
this?' and 'what are we not doing that we intended to?' Methods such as structured surveys
that provide statistical or numerical information are often preferred. This approach aims to
check whether a project's original objectives were met, and it takes community needs as
given. It requires an analytical and highly organised mind.
Like action research, the open inquiry approach to evaluation is improvement- and
change-oriented. It involves asking questions such as 'what's working, what's not working?',
‘how could we improve things?’, and ‘what are community needs?’ Methods such as focus
groups and interviews, and ‘opening up’ questions are preferred. This approach enables
people to ask previously unasked questions and observe things they have not noticed
before. It looks at actual practices to uncover the assumptions and intentions that underpin
an initiative or project. This approach requires a questioning, interpretative and creative
mind.
It can be valuable to use both of these approaches when evaluating a community
initiative.
Participatory evaluations
Most participatory evaluations are team projects conducted by representatives of
participants and stakeholders involved in an initiative. Professional evaluation staff often
provide training and advice about planning and conducting the evaluation and analysing the
data collected. Those involved participate in the evaluation in different ways.
Three main reasons have been put forward for increasing the involvement of participants
and stakeholders in evaluations: (1) to increase use of evaluation results, (2) to represent
the values and concerns of the multiple groups involved in decision-making, and (3) to
promote the empowerment of groups previously left out of the process (Papineau and Kiely,
1996, p.81).
Several benefits of participatory evaluations have been identified. They can:
• increase the long-term sustainability and success of initiatives through building community capacity in planning and conducting evaluations and including diverse community groups in decision making;
• build trust, and enhance interaction and collaboration between community members and between various community groups and organisations;
• include the perspectives of all stakeholder groups;
• provide ongoing learning about the initiative;
• enable individual and collective reflection and assessment from new angles;
• foster a sense of 'ownership' of both the evaluation and the community initiative; and
• produce community and individual empowerment of various kinds.
However, like all community participation processes, participatory evaluations have certain
limitations. These include the time and resources needed to train those involved and to take
part in the evaluation, the difficulty of securing representation from all the groups involved,
and participants' varying levels of skill and commitment to the process.
Impact assessments
Impact assessments involve finding out whether a project or initiative produced the effect
that was intended. For example, an impact assessment of a project that provided Internet
training and access to a community and developed a community website might aim to find
out if the project significantly increased access to information across all sectors of the
community. This impact assessment could also identify barriers to community access, such
as the time and cost involved or a lack of ongoing technical support. Using methods such as
in-depth interviews, impact assessments can also identify the unintended effects and 'ripple'
effects of initiatives on participants, organisations and communities.
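As a purely illustrative sketch of the kind of before-and-after comparison an impact assessment might draw on, the following Python snippet compares hypothetical baseline and follow-up survey counts of Internet access across community sectors. The sector names and figures are invented for illustration, not drawn from any actual assessment.

```python
# A minimal sketch of a before/after comparison; all counts below are
# hypothetical. A real impact assessment would use properly sampled
# baseline and follow-up survey data.

# (people with Internet access, people surveyed) at baseline, by sector
baseline = {
    "farm businesses": (12, 80),
    "community groups": (5, 40),
    "households": (30, 200),
}
# the same measure after the project
follow_up = {
    "farm businesses": (34, 80),
    "community groups": (22, 40),
    "households": (95, 200),
}

for sector, (before_n, before_total) in baseline.items():
    after_n, after_total = follow_up[sector]
    before_pct = 100 * before_n / before_total
    after_pct = 100 * after_n / after_total
    # report the change in access for each sector of the community
    print(f"{sector}: {before_pct:.0f}% -> {after_pct:.0f}% "
          f"(change: {after_pct - before_pct:+.0f} percentage points)")
```

A comparison like this only shows whether access changed; interviews and focus groups are still needed to explain why, and to surface unintended effects.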
The focus can be on short-term or long-term impacts, and on impacts at different levels
within a community (i.e. the individual, the community group or the whole community).
Some impact assessments have a narrow focus on economic outcomes, while others take
the whole range of social, cultural, economic, environmental and technological factors into
account. To be most effective, an impact assessment needs to consider this whole range of
factors.
Who should be involved in an evaluation?
An evaluation of a community initiative is mainly conducted for the participants or users of
the initiative or service, in order to improve the initiative or better meet users' needs. It is
therefore important that a broad representation of participants or users is involved in the
evaluation. This could include attempting to involve disadvantaged groups such as rural
women with low incomes or educational levels. Other groups that would be involved in, or
interested in the outcomes of, the evaluation of a community initiative include:
• Staff and volunteers - the people involved in facilitating the activity, delivering the service or providing support.
• Other stakeholders - groups, organisations or people with an interest in the initiative, such as community development organisations, service agencies and government departments.
• Managers - the people managing or administering the initiative.
• Policy makers and decision makers - those who decide whether the initiative will be started, continued, restructured or discontinued.
• Funding bodies and sponsors.
• Competitors - groups or organisations that compete with the initiative for funding and resources (see Rossi, Freeman and Lipsey, 1999, p.55).
How do we conduct an evaluation?
There are several steps involved in conducting an effective evaluation. One useful and
practical method involves answering the following questions:
1. What is the program or initiative to be evaluated?
2. Why is the initiative being evaluated?
3. How will people be prepared for the evaluation? - this involves thinking about those who
might feel threatened by the evaluation and those whose acceptance is essential.
4. What are the main issues/questions that the evaluation will focus on?
5. Who will do what? - responsibilities of participants should be agreed on before the
evaluation begins.
6. What resources are available for the evaluation?
7. What data need to be collected? - this needs to be specific in terms of who the data will
be collected from, how they will be collected and what information is needed.
8. How will the data be analysed? - this will influence decisions about the information
collected and the form in which it will be collected.
9. What process will be used to report the evaluation?
10. How will the results be implemented? - those responsible for making recommendations
need to be identified (see Rossi et al., 1999, p.75).
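As a purely illustrative aid, the answers to the ten questions above can be recorded as a simple structured plan that everyone agrees on before the evaluation begins. The following Python sketch shows one possible format; the field names and example answers are invented, not prescribed by Rossi et al.

```python
# A sketch of an evaluation plan recording answers to the ten planning
# questions; every field name and answer here is an invented example.
evaluation_plan = {
    "initiative": "Community Internet training program",
    "purpose": "Improve the program and report outcomes to the funding body",
    "preparation": "Brief staff and volunteers; address worries about being judged",
    "key_questions": [
        "How well are participants' needs being met?",
        "What barriers to access remain?",
    ],
    "responsibilities": {"survey design": "evaluation team", "interviews": "volunteers"},
    "resources": "One part-time coordinator and a small printing budget",
    "data_to_collect": "Workshop feedback forms; interviews with a sample of participants",
    "analysis": "Tally the ratings; group interview comments by theme",
    "reporting": "Short written report plus a community meeting",
    "implementation": "Management committee to act on the recommendations",
}

# print the plan as a simple checklist for circulation
for step, answer in evaluation_plan.items():
    print(f"{step}: {answer}")
```

Keeping the plan in one shared document, however it is formatted, makes it easier to check that each question has actually been answered before data collection starts.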
Signs of a good evaluation
Wadsworth (1991, pp. 22-23) suggests five factors that are present in a good evaluation:
1. It did not become overly large and complex.
2. It did justice to everyone's views and ideas.
3. We learned things from it - it broke new ground.
4. What it came up with was useful.
5. It took time - all the time over which the initiative was developed and existed.
An example: The evaluation of the Rural Women and ICTs project
The evaluation of the QUT research project ‘Enhancing Rural Women’s Access to Interactive
Communication Technologies’ used several different methods. They included distributing
feedback questionnaires at workshops, interviewing participants, researchers and project
partners, and holding formal and informal critical reflection sessions. This information helped the
researchers to understand what was working well and what was not working so well with the
project as a whole and with activities such as workshops and online conversation groups. The
ongoing evaluation process helped the researchers to redesign some project activities to better
meet participants’ needs and expectations.
Questionnaires distributed at workshops and when women joined the online group ‘welink’ also
provided background information about the participants. Information collected included their
age, occupation, where they lived, what experience they had with email and the Internet, and
what community or industry groups they belonged to. The researchers used this information to
develop a statistical profile of the participants. Follow up interviews and focus groups were also
conducted with a representative selection of participants to gather more in-depth feedback on
the project. Combined with the other feedback and information, the interview data enabled an
assessment of how well the project met its aims of empowering women and including a broad
diversity of women.
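To make this concrete, the following Python sketch shows one simple way questionnaire responses of this kind could be tallied into a statistical profile. The respondent records are invented for illustration; they are not the project's actual data.

```python
# A minimal sketch of turning questionnaire responses into a statistical
# profile; the four respondent records below are entirely invented.
from collections import Counter

respondents = [
    {"age_group": "40-49", "location": "Dalby", "email_experience": "none"},
    {"age_group": "30-39", "location": "Roma", "email_experience": "some"},
    {"age_group": "40-49", "location": "Dalby", "email_experience": "regular"},
    {"age_group": "50-59", "location": "Emerald", "email_experience": "none"},
]

# tally each background question and report counts with percentages
for field in ("age_group", "location", "email_experience"):
    counts = Counter(r[field] for r in respondents)
    total = len(respondents)
    summary = ", ".join(f"{value}: {n} ({100 * n / total:.0f}%)"
                        for value, n in counts.most_common())
    print(f"{field}: {summary}")
```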
The evaluation found that the project’s methods and activities mostly met the needs of the
women very well. The welink group was assessed as the most empowering project activity. The
research identified four possible forms of empowerment that participants may have experienced:
social, technological, political and psychological. Four corresponding forms of disempowerment
were also identified. The analysis suggested that while some women experienced the four forms
of empowerment, they also sometimes experienced various forms of disempowerment as a
result of taking part in the project. Case studies of four diverse participants were written to
illustrate these contradictory effects, and the four participants confirmed the accuracy of
their case studies.
(see Lennie, 2001, 2002a, 2002b; and The Rural Women and ICTs Research Team, 1999)
Further reading
Lennie, J. (2001). Troubling empowerment: An evaluation and critique of a feminist action research
project involving rural women and interactive communication technologies. PhD thesis, Brisbane:
The Communication Centre, Queensland University of Technology.
Lennie, J. (2002a). Rural women's empowerment in a communication technology project: Some
contradictory effects. Rural Society, 12(3), 224-245.
Lennie, J. (2002b). Including a diversity of rural women in a communication technology access
project: Taking the macro and micro contexts into account. Proceedings, Electronic Networks
Building Community: 5th Community Networking Conference, Monash University, Melbourne.
Papineau, D. and Kiely, M. (1996). Participatory evaluation in a community organization: Fostering
stakeholder empowerment and utilization. Evaluation and Program Planning, 19(1), 79-93.
Rossi, P., Freeman, H. and Lipsey, M. (1999). Evaluation: A systematic approach. Thousand Oaks:
Sage.
The Rural Women and ICTs Research Team. (1999). The new pioneers: Women in rural Queensland
collaboratively exploring the potential of communication and information technologies for personal,
business and community development. Brisbane: The Communication Centre, Queensland
University of Technology.
Wadsworth, Y. (1991). Everyday evaluation on the run. St Leonards, NSW: Allen and Unwin.