An Introduction to Programme Evaluation for Decision Makers

Mike Brewer and Thomas Crossley
This course is designed for those who design policy experiments or demonstration projects, those who commission or manage projects which undertake evaluations (or impact assessments), and those who make decisions on the basis of the estimated impact of policies. It will introduce participants to the various empirical methods that can be used to estimate the impact of a specific policy intervention or programme. The intention is not to teach participants to carry out such estimates themselves, but to give them an understanding of how suitable each method is, given the nature of the policy under consideration and the available data.
By the end of the course, participants will be able to:
- Assess whether an actual or proposed design for a programme evaluation is likely to give reliable results, given the nature of the policy under consideration and the available data.
- Understand what factors to consider when the results from a programme evaluation are used in policy-making.
Detailed statistical knowledge is not a prerequisite.
The course will include the following sessions:
1. The impact evaluation problem
2. How randomized experiments “solve” it (a notational sketch of sessions 1 and 2 follows below)
3. Methods that mimic an experiment: natural experiments, instrumental variables and the regression discontinuity design
4. Methods for selection on observables: multiple regression, matching and taking advantage of longitudinal data (sessions 3 and 4 are sketched briefly after the timetable)
There will also be a group session in which participants will apply their knowledge to comment on the suitability of specific evaluation designs.
The course will run from approximately 10am to 4:45pm.
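
For orientation, the central ideas of sessions 1 and 2 can be written in the potential outcomes notation that the course introduces. The following is a minimal sketch in standard notation (the symbols below are illustrative, not taken from the course materials):

\[
\tau_i = Y_i(1) - Y_i(0), \qquad \text{ATE} = \mathbb{E}\left[Y_i(1) - Y_i(0)\right],
\]

where $Y_i(1)$ and $Y_i(0)$ are person $i$'s outcomes with and without the intervention. Only one of the two is ever observed, which is the “missing data” problem of session 1. If treatment status $D_i$ is randomized, it is independent of the potential outcomes, so

\[
\mathbb{E}[Y_i \mid D_i = 1] - \mathbb{E}[Y_i \mid D_i = 0] = \text{ATE},
\]

and a simple difference in mean outcomes between treated and control groups estimates the average impact without bias.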
An Introduction to Programme Evaluation for Decision Makers

9:45    Registration
10:00   Welcome
10:05   The impact evaluation problem
        - Impact evaluations versus other kinds of evaluations
        - The key role of the counterfactual in causal analysis
        - The potential outcomes framework
        - The impact evaluation problem as a “missing data” problem or a “selection” problem
        - Impact on whom?
10:55   Break
11:15   Randomized control trials
        - How randomized experiments solve the impact evaluation problem
        - Issues in the design of experiments: power and inference
        - The limitations of experiments
        - Concepts of “internal validity” and “external validity”
12:05   Lunch break
12:50   Methods that approximate an experiment
        - Natural experiments
        - Instrumental variables
        - The regression discontinuity design
13:50   Break
14:10   Methods for selection on observables
        - Multiple regression
        - Matching
        - Taking advantage of longitudinal data
15:10   Group work
16:00   Discussion of group work
16:30   Recap and final remarks
16:40   Close
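
As a brief preview of the methods in sessions 3 and 4, two of the estimators mentioned above have simple closed forms; what follows is a sketch under textbook assumptions rather than a summary of the course's treatment. With a binary instrument $Z_i$ (for example, a randomly assigned offer of the programme), the instrumental variables (Wald) estimand is

\[
\beta_{IV} = \frac{\mathbb{E}[Y_i \mid Z_i = 1] - \mathbb{E}[Y_i \mid Z_i = 0]}{\mathbb{E}[D_i \mid Z_i = 1] - \mathbb{E}[D_i \mid Z_i = 0]},
\]

and with longitudinal data on a treated group (T) and a comparison group (C) observed before and after the policy, the difference-in-differences estimate is

\[
\hat{\tau}_{\text{DiD}} = \left(\bar{Y}^{T}_{\text{after}} - \bar{Y}^{T}_{\text{before}}\right) - \left(\bar{Y}^{C}_{\text{after}} - \bar{Y}^{C}_{\text{before}}\right),
\]

which nets out fixed differences between the groups, provided the groups would otherwise have followed parallel trends.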