Why Evaluate? A practitioner perspective
Robin Barrs
A broad question inevitably leads to a wide variety of answers, often overlapping but specific to the
perspective of the respondent. It would be remiss, and a failure of the evaluative process, not to
consider at least a representative cross-section of them in trying to answer this question.
From a widening participation practitioner perspective the answers range from the very pragmatic
‘because I’m told to’ or ‘to justify my job’, to the more fundamental ‘to check things are working
properly’ or ‘to ensure our work is helping people effectively’. As a starting point I put the question
out to members of my own outreach team to see what they thought, and indeed to benchmark what
work might need to be done to ensure good practice close to home. Here is a sample of the
answers I received:
‘… a very useful way of summarising activities and passing on knowledge to new people’
‘To measure the extent to which the programme has the intended effects on participants and to see
if there are any unintended effects’
‘To measure impact, reassure ourselves that what we are doing is worth doing and to improve
processes for the future.’
‘To try and prove that our jobs are worth the money we’re paid to do them, and that our project
budgets need to not be cut.’
At the heart of the matter is the desire to ensure that the activity that we work so hard (and spend
so much) on is actually making a difference, to ascertain how much of a difference and exactly what
that difference is. However, it is also extremely important to ensure that the objectives we set out
to achieve are being met in order to plan future activity and build our knowledge of how to design
activity to meet the (often as yet unknown) targets and areas of focus that will arise in the future. As
practitioners we have to acknowledge that one factor that focuses our minds on evaluation is the
demand to justify our funding and our jobs.
Fortunately, what we really want to find out and what we are required to show often
coincide – but this is by no means always the case. There can be pressure to put the cart before
the horse and design programmes purely to drive headline figures, or to design evaluation so as to give
a positive, if uninsightful and ultimately counterproductive, result. The key is to ensure that
regardless of the spin that may be put on the outputs of the evaluative process, we have the
underlying data with which to make good decisions about future work.
A note of caution: it is always worth bearing in mind that we often only see what we look for. A
narrow evaluation of whether we are meeting a key aim may show that we are not, but may miss
an outcome we had not previously considered. It is important to be aware of oblique
outcomes, both positive and negative. It is not uncommon to achieve things you did not set out to
achieve.
Why evaluate? Without evaluation we are working in the dark – but a specific evaluation may not
shine light into every corner.