Formative evaluation

Outline – Part I
1. Definitions
2. Pseudo-evaluation vs. legitimate evaluation
3. Formative vs. summative evaluation
4. Necessary skills
5. Planning an evaluation
   a. Planning a Formative evaluation
   b. Planning a Summative evaluation
6. Cost-benefit analysis
7. An experimenting society
8. Words of warning
Definition (1)
"Evaluation is the systematic acquisition and assessment of information to provide useful feedback about some object."

-- William Trochim (Cornell University)

Pseudo-evaluation vs. legitimate evaluation

Evaluation usually occurs in a political context. This gives rise to two broad types of evaluation: pseudo-evaluations and legitimate evaluations.

Be careful not to engage in pseudo-evaluations:
• Doing so may facilitate inappropriate decisions.
• It will also damage your professional reputation.
Pseudo-evaluations – a taxonomy

Here are some procedures to watch out for (E.A. Suchman, 1967):

• Eyewash – emphasis on surface appearances
• Whitewash – attempts to cover up known failures
Pseudo-evaluations – a taxonomy

• Submarine – political use of evaluation to destroy a programme
• Posture – ritualistic evaluation to satisfy a funding requirement, without real interest in, or intention to use, its findings
• Postponement – using the need for evaluation to delay action
Legitimate Evaluations – Four Criteria

Here are four criteria to help you recognize a legitimate evaluation:
1. Utility
2. Feasibility
3. Propriety
4. Technical adequacy
Four criteria for legitimate evaluations

1. Utility – will someone be able to use it?

As Robson says, "the purpose of an evaluation is not to prove, but to improve." (2002, p. 209)
Four criteria for legitimate evaluations

2. Feasibility – will you have the resources, time, and co-operation you need? If not, don't do the evaluation:
• It won't achieve anything useful.
• It may damage your professional reputation.
• Feasibility is especially an issue in formative evaluation, where results may be needed for program planning.
• Remember the engineer's maxim: "Good, fast, cheap. Pick any two."
Four criteria for legitimate evaluations

3. Propriety – only do an evaluation if you can do it fairly and ethically:
• No 'submarines.'
• Use acceptable outcome measures.
• Say 'no' if you believe the course of action has already been decided on, and a decision maker just wants 'cover.'
Four criteria for legitimate evaluations

4. Technical adequacy – if you are satisfied on the first three issues, carry out the evaluation with technical skill and sensitivity.
• How can you tell whether you have the technical skill?
• What do you have to think about in planning?
• What are the relevant skills?

We'll consider these issues below…
What to think about in planning

Reasons for evaluating
• Why is the evaluation being done?
• Who should have access to the information obtained?
• What value will the results have?
• Will action be taken?
• Will someone not want the results published?
What to think about in planning

Interpretation
• Is the nature of the evaluation agreed upon by those involved?

Outcome measures
• What type of change is good, or bad?
What to think about in planning

Subject
• What kind of information do you need?

Evaluators
• Who will gather the information?
• Who will analyze the data and write the report?
What to think about in planning

Methods
• What method is appropriate given the questions?
• Can you develop your method in the time allowed?
• Is your method acceptable to those involved (service providers and consumers)?
What to think about in planning

Time
• What time is available? Is this sufficient?

Permissions and control
• Have the necessary permissions been obtained?
• Is participation voluntary?
• Who decides what goes into the report?
What to think about in planning

Use
• Who decides how the evaluation will be used?
• Will those involved (providers, consumers) see the report in a modifiable draft version?
• Is the form of the report appropriate for the intended audience (style, length, stats)?
An evaluation culture

• These ideas are based on Donald Campbell's (1969) concept of an experimenting society, and Trochim's related concept of an evaluation culture.
• To learn more about Trochim's ideas, see:
  http://www.socialresearchmethods.net/kb/evalcult.php
An evaluation culture

An evaluation culture works and improves because it is:
• Action-oriented
• Teaching-oriented
• Diverse, inclusive, participatory, responsive and fundamentally non-hierarchical
• Humble, self-critical
An evaluation culture

An evaluation culture works and improves because it is:
• Interdisciplinary
• Truth-seeking, forward-looking
• Ethical, and democratic
Words of warning

Keep it simple
• Avoid complex designs and data analysis.

Think defensively
• Anything that can go wrong, will go wrong.
• Try to anticipate potential problems and plan how you will deal with them.
Words of warning

• Change will always have sponsors and critics.
• People's lives may be radically changed on the basis of your findings:
  – jobs may be on the line
  – careers may be advanced or slowed
  – a program may be expanded or cut back
Words of warning

• There will be many stakeholders – politicians, administrators, deliverers, targets, unions, taxpayers.
• It is unlikely that the interests of all these groups will coincide.

Outline – Part II
1. Formative & Summative evaluation defined
2. Elements of a Formative evaluation
3. Elements of a Summative evaluation
4. Evaluation strategies (paradigms)
   A. Scientific-Experimental models
   B. Management-oriented systems models
   C. Qualitative-Anthropological models
   D. Participant-oriented models
5. Necessary Skills
Two Types of Evaluation

Formative evaluation
• Helps in the development of a program or service.

Summative evaluation
• Assesses the effects and effectiveness of the program.
• Covers all effects, not just those intended.
Formative Evaluation – elements

Questions about the process being evaluated:
1. Structured conceptualization
2. Logic model
3. Process evaluation
4. Implementation evaluation
Formative Evaluation – elements

1. Structured conceptualization – helps stakeholders define the program, its targets, and the desired outcomes.
• Stakeholders – who are they?
• Outcomes – how do you plan to measure them?
Formative Evaluation – elements

2. A logic model makes explicit the steps that are expected to produce the desired change. It is often shown as a flow chart or map; a minimal sketch follows below.
• A good logic model may reveal hidden assumptions about how the intervention will work.
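To make the idea concrete, here is a minimal sketch of a logic model written as a data structure. The after-school tutoring program and all of its stages are hypothetical; real logic models are usually drawn as diagrams, but listing the stages in code makes the chain of assumptions explicit:

```python
# A logic model for a hypothetical after-school tutoring program.
# Each stage is assumed to lead to the next; writing the chain out
# makes the hidden assumptions between stages easy to spot.
logic_model = {
    "inputs":              ["funding", "volunteer tutors", "classroom space"],
    "activities":          ["weekly one-on-one tutoring sessions"],
    "outputs":             ["sessions delivered", "students reached"],
    "short-term outcomes": ["improved homework completion"],
    "long-term outcomes":  ["higher reading scores"],
}

# Print the assumption implied by each link in the chain, e.g. that
# tutoring sessions (activities) actually produce improved homework
# completion (short-term outcomes).
stages = list(logic_model)
for earlier, later in zip(stages, stages[1:]):
    print(f"Assumed link: {earlier} -> {later}")
```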
Formative Evaluation – elements

3. Process evaluation – what alternative procedures are available for delivery of the program?

4. Implementation evaluation – is the program being delivered the way it is supposed to be? Are there unexpected consequences?
Summative Evaluation

Outcome evaluation
• Did the program cause demonstrable effects on predefined outcome measures?

Impact evaluation
• Broader – assesses the overall effects, intended and unintended, of a program.
Summative Evaluation

Cost-benefit analysis
• Asks questions about efficiency.
• Standardizes outcomes in terms of dollar costs and dollar benefits.
• Important when you have to choose how to spend limited amounts of money (see the sketch below).
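As a concrete illustration of standardizing on dollars, here is a minimal sketch comparing two hypothetical programs; all of the figures are invented for the example:

```python
# Two hypothetical programs with invented dollar figures, to show how
# expressing costs and benefits in dollars makes alternatives comparable.
programs = {
    "Program A": {"cost": 250_000, "benefit": 400_000},
    "Program B": {"cost": 100_000, "benefit": 190_000},
}

for name, p in programs.items():
    net_benefit = p["benefit"] - p["cost"]   # dollars gained overall
    bc_ratio = p["benefit"] / p["cost"]      # benefit per dollar spent
    print(f"{name}: net benefit = ${net_benefit:,}, B/C ratio = {bc_ratio:.2f}")

# With limited money, the benefit-cost ratio matters: Program B returns
# $1.90 per dollar spent versus $1.60 for Program A, even though
# Program A's total net benefit is larger.
```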
Cost-Benefit Analysis

To do a cost-benefit analysis you need to know, in addition to the program's cost:
(a) the magnitude of the benefits the program produces, and
(b) that the program actually produced these benefits.

These things can only be learned through an experimental design.
Cost-Benefit Analysis

Some issues to consider before you do CBA:
1. Opportunity cost
2. Present value of money
3. Fairness
4. Complexity
CBA and Opportunity Cost

• CBA expresses values in dollars.
• This reveals opportunity cost – if you do X with your money, you cannot do Y with the same money.
• Some values are difficult to express in dollars. E.g., what is the value of having mail delivery in rural areas?
• How do you express non-market values in dollars?
CBA & Present Value

CBA works with the Present Value (PV) of
money.
 Future outcomes are uncertain.
 Inflation alters value of money – e.g., PV of
$1m in 50 years at 5% inflation = $87,000 .
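The slide's figure follows from the standard discounting formula PV = FV / (1 + r)^n. A quick check using the slide's numbers (the helper function is just for illustration):

```python
def present_value(future_value: float, rate: float, years: int) -> float:
    """Discount a future amount to today's dollars: PV = FV / (1 + r)**years."""
    return future_value / (1 + rate) ** years

# The slide's example: $1,000,000 received 50 years from now,
# discounted at 5% per year, is worth roughly $87,000 today.
pv = present_value(1_000_000, rate=0.05, years=50)
print(f"PV = ${pv:,.0f}")  # PV = $87,204
```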
CBA & Present Value
 $100
of benefit today is worth more in Present
Value than $100 of benefit 5 years from now.
 This makes sense, but biases program
evaluation away from long-term outcomes
CBA & Fairness

CBA compares benefits and costs without
regard to who benefits and who pays costs. Is
that fair? Is it unavoidable?

For example, people who live in the city
subsidize mail delivery to people who live in the
country. Is that fair? CBA doesn’t answer that
question.
CBA & Complexity

Most social problems, and many problems in
the private sector are complex.


They have many interacting causes, so
establishing cause may be difficult.
Any program is likely to make only a small
difference.

But it still makes sense to quantify the value of a
program, to see if we could spend our money to
better effect.
The relevant skills (Robson, 2002)

• Writing a proposal
• Clarifying the purposes of an evaluation
• Identifying, organizing and working with an evaluation team
• Choosing design & data-collection techniques
• Interviewing
• Questionnaire construction and use
What are the relevant skills?

• Observation
• Management of complex information systems
• Data analysis
• Report-writing
• Encouraging people to use the findings
• Sensitivity to political concerns