MPP 500: Public Policy Evaluation
Spring 2014
Tuesdays, 4:15-6:45
Professor: Meghan Condon, Ph.D.
Email: mcondon1@luc.edu
Office Hours: Room 427, Granada Center, Mondays 3:30 – 5:30 p.m.
Course Overview
This course is designed to give graduate students in public policy and urban affairs a foundation
in the methods and applications of program evaluation. Program evaluation is the systematic use
of data to guide decision-making. You will build on the analytic skills acquired in quantitative
courses and bring them together with your substantive interests. You will also learn how to
navigate the increasingly data-driven policy and nonprofit world. The demand for evaluation has
increased dramatically: now funders require it; policy makers rely on it; and legislation even
mandates it. Our goal is to prepare you to interpret, discuss, and assess evaluations conducted by
others and to conduct basic evaluations yourself. Topics include program theory, working with
stakeholders, quantitative and qualitative data collection techniques, advanced statistical methods
for making causal inferences, and the communication of evaluation findings.
This seminar is designed to build your expertise in program evaluation through academic reading
and practice, but also through practical evaluation experience. Throughout the semester, we will
work with a client, Jesuit Commons: Higher Education at the Margins (JC:HEM), to conduct an
evaluation in the Dzaleka refugee camp in Malawi. JC:HEM provides bachelor's and certificate
degrees for people in refugee camps around the world. They want to gauge their impact on the
wider refugee camp communities, asking whether their graduates provide community leadership,
service, and conflict resolution in the camps. Answering that question will be our central
task. Your work in class will have a real impact on an exciting, innovative program, and you will
have real evaluation experience for your resumes. The client evaluation will also give us great
fodder for concrete discussions about the concepts and tools addressed in class.
In sum, by the end of the course, you will be able to:
• Evaluate research that makes causal claims, deciding which evidence is strongest.
• Use policy evaluations conducted by others to inform policy and programmatic decisions.
• Design and conduct basic, high quality evaluations for clients and employers.
• Communicate evaluation findings to stakeholders and others without training in statistics.
To have mastered “method” and “theory” is to have become a self-conscious thinker, a man at
work and aware of the assumptions and the implications of whatever he is about. To be mastered
by “method” or “theory” is simply to be kept from working, from trying, that is, to find out about
something that is going on in the world. Without insight into the way the craft is carried on, the
results of study are infirm: without a determination that study shall come to significant results, all
method is meaningless pretense.
– C. Wright Mills (1959:120-1)
Texts
Required Books:
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental
designs for generalized causal inference. Boston: Houghton Mifflin.
(Referred to on the syllabus as SCC)
Fitzpatrick, Jody, Christina Christie, Melvin M. Mark. 2009. Evaluation In Action: Interviews
with Expert Evaluators. Thousand Oaks, CA: Sage Publications.
Other Required Reading
Beyond the textbook, you will read selections from books, academic articles, evaluation reports,
and various online evaluation resources. Most of these readings are freely available online and are
linked to the syllabus. All other readings are available on Sakai. A goal of this course is to
provide you with a toolkit of resources that you can use after graduation if you choose a career
that involves professional evaluation. To that end, where possible, I have attempted to select
readings from excellent, freely available online evaluation resources (including funders and
foundations, professional organizations, private and public evaluation groups, and research
centers). When reading online selections, I strongly encourage you to go beyond the assigned
selection and familiarize yourself with the other resources available from the site.
Supplemental:
Most weeks on the syllabus include a list of supplemental readings. You may choose to read some of these
selections if it is your week to provide a critique and annotated bibliography. In general, though,
I do not expect you to read the supplemental readings this semester. My hope is that those of you
who choose careers in evaluation can return to the syllabus when you want to learn more about a
particular topic. In addition to the selections listed under individual weeks, the following books
are excellent resources:
Morgan, S. and Winship, C. 2007. Counterfactuals and Causal Inference: Methods and
Principles for Social Research, Cambridge University Press.
Gelman, Andrew and Jennifer Hill. 2006. Data Analysis Using Regression and
Multilevel/Hierarchical Models, Cambridge University Press.
Wholey, Hatry, and Newcomer, Eds. 2010. Handbook of Practical Program Evaluation. New
York: John Wiley and Sons.
Angrist and Pischke. 2008. Mostly Harmless Econometrics: An Empiricist’s Companion,
Princeton University Press.
I would be happy to provide further suggestions for students interested in additional reading.
Assignments and Grading
1. Grading Scheme
• Annotated Bibliography and Critique: 20%
• Client Evaluation Assignments: 40%
• Final Project: 30%
• Participation: 10%
2. Assignments
Annotated Bibliography and Critique (Independent)
Each student will select one week during the course for this assignment. Prior to the seminar, you
will prepare two documents for your classmates and come prepared to lead a short discussion in
class. Your first document will be an annotated bibliography. This document takes the form of an
executive summary of a body of literature followed by a paragraph summarizing each reading.
You will include all assigned course readings as well as 3 additional relevant articles or reports
that you find beyond the course readings. These additional readings should be applications of the
method, topic, or technique discussed in the assigned readings. Your summary of each reading
should not exceed one paragraph and should give the author’s main question or goal, main
theoretical argument (if appropriate), explanation of the method(s), description of findings (if
appropriate), and a sentence or two about any weaknesses or criticisms you have. The paragraph
should follow that format and should not be a reiteration of the article’s abstract. Your overall
summary of the literature should be 1-2 paragraphs; in it you can briefly explain the method or
topic for the week. At the end of your annotated bibliography, you should include a glossary of
unfamiliar or technical terms. You may also include a list of additional recommended readings at
the end, though this is not required. All of the course weeks will be covered, so students should
leave the course with a comprehensive set of notes including literature beyond what is assigned
each week. I am happy to provide suggestions for additional readings. More information about
crafting an annotated bibliography can be found here.
The second document you will provide is a 400-500 word essay raising questions, criticisms, or
over-arching conclusions about the week’s readings. You should have a single, clear thesis
supported by evidence from the readings. You should end the essay with 3 discussion questions to
begin our seminar. The entire document should not exceed one side of a sheet of paper. All
students will read your essay and questions and come prepared to discuss both in class.
Both documents must be uploaded to the course Sakai site by 9 a.m. on Monday to give all
of us ample time to read them. Please note that others’ annotated bibliographies are intended to
aid and supplement your own reading, not replace it. I will ask questions in class that go well
beyond basic summary, and I will definitely cold-call on people to answer.
Client Evaluation Assignments
There are four Client Evaluation Assignments on the syllabus. Each will allow you to apply the
skills you learn in service to our JC:HEM client. Assignments will be described in greater detail
in class. You will each work on all components of the evaluation. You will develop a data
collection instrument (focus group protocol and script and potentially a survey), an IRB
application, a logic model for the program, a training program for people collecting data in
Malawi, recruitment materials, transcriptions of interviews, and analysis reports. I will act as the
Principal Investigator (PI) for the evaluation. I’ll take the best of your work and use it in the
actual evaluation. We will also discuss, critique, and work on these assignments in class during
two JC:HEM Workshop Sessions.
Final Project
For your final project, you will respond to an (in-class) Request for Proposals (RFP) for a follow-up evaluation for JC:HEM. You will develop a plan for a second evaluation from which we could
draw causal inferences about the effects of the JC:HEM program on the refugee population.
Excellent proposals may be shared with the client. The RFP will tentatively be distributed on
February 25th, and proposals will be due in class on April 29th. You will formally present your
proposals in class on the final exam date. Details will be discussed in class.
3. A Note on Choosing Group Size
There are pros and cons to working collaboratively or independently in this class, and often the
choice will be yours. Professional evaluations are nearly always conducted by teams, so it is a
good idea to gain experience working collaboratively on these tasks. More importantly, working
together can produce superior results; the best evaluations are often the product of brainstorming,
divided responsibility, (many) meetings, and repeated peer-review and criticism. However,
students who want to pursue careers in the field may want to leave class with a sample or two of
independent work. Therefore, for all assignments (except creating the training materials and the
Annotated Bibliographies) it is your choice whether you work independently, with one partner, or
in a group of three. You may choose to do some assignments alone and others with groups.
However, please keep in mind that even though collaborative work has its own challenges, I will
expect assignments done in pairs or groups to reflect the added time put in by all members, and I
will grade them accordingly. I recommend completing the final project independently if you want
an individual evaluation project for your professional portfolio.
4. A Note on Managing Your Workflow
Please keep in mind that JC:HEM is our client, and we will follow their lead in many ways. Our
evaluation will be conducted under challenging circumstances in Malawi, and our in-country
partners will often have much more pressing responsibilities and concerns. As a result, this class
will require flexibility. All deadlines, assignments, and reading schedules will be subject to
change. In this way, the class will be much more like a job, where you have a basic sense for how
much time it requires each week, but the tasks aren't all planned out in advance, and things
change. That said, I will aim to give you as much lead-time as possible for all changes, keep the
amount of reading and work relatively steady, and work to keep changes to a minimum. My
advice is that you work to get ahead when you have extra time.
Additional Information
1. Late work will not be accepted except in the case of an emergency. Any work handed in late
will be recorded with a grade of zero.
2. Regrades: students requesting regrades or contesting grades must make these requests within
one week of receiving the grade. I may regrade the entire assignment or exam or recalculate all
grades when such a request is made.
3. Accommodations: every effort will be made to accommodate students with exceptional
learning needs. Please come talk to me right away if you require accommodations for
assignments or exams.
4. Academic Integrity: all students are expected to know and abide by Loyola’s statement on
academic integrity.
Reading Schedule
PART I: Evaluation Basics
Week 1: Introduction (1/14)
Who is our client? What are their needs? What is program evaluation?
(1) JC:HEM website
(2) JC:HEM internal briefing memo (distributed via email)
(3) Moon, Bob, Jenny Leach, and Mary-Priscilla Stevens. 2005. Designing Open and Distance
Education in Sub-Saharan Africa: A Toolkit for Educators and Planners, World Bank. (Read
section 5 on evaluation and skim the rest.)
(4) Learn more about Sub-Saharan Africa here.
(5) University of Wisconsin Extension. “Planning a Program Evaluation: Booklet and
Worksheet.” These resources and others can be found here.
(6) Harris, Erin. 2011. “Afterschool Evaluation 101: How to Evaluate an Expanded Learning
Program.” Harvard Family Research Project. Related evaluation resources can be found here.
(7) Begin the Loyola CITI Human Subjects Research Certification. Proof of certification must be
obtained by Jan. 28th to participate in the JC:HEM evaluation.
Week 2: Logic Models, Needs Assessments, Ethics, and Stakeholders (1/21)
How can we align evaluation and performance measurement with an organization’s mission
and goals? What strategies can help us work effectively with stakeholders and subjects?
(1) Logic Model Development Guide. 2004. W.K. Kellogg Foundation.
(2) Milstein B. and T. Chapel. 2013. Community Toolbox, Developing a Logic Model or
Theory of Change.
(3) American Evaluation Association Guiding Principles for Evaluators.
(4) Preskill, Hallie and Nathalie Jones. 2009. “Engaging Stakeholders in Developing Evaluation
Questions.” Robert Wood Johnson Foundation.
(5) Brest, P. 2012. “A Decade of Outcome-Oriented Philanthropy.” Stanford Social Innovation
Review. Spring, 42-47.
(6) National Science Foundation “Strategies that Address Culturally Sensitive Evaluation.”
(7) Bernstein et al. 2009. “Impact Evaluation of the U.S. Department of Education’s Student
Mentoring Program.” Skim Executive summary, read Ch. 1
*LUC CITI IRB certification should be complete prior to this class.
Supplemental:
• Rossi, Peter H., Mark W. Lipsey, and Howard E. Freeman. 2004. Evaluation: A Systematic Approach, 7th Edition. Thousand Oaks, CA: Sage Publications.
• Additional sections of the NSF 2002 User-Friendly Handbook for Project Evaluation.
• Related James Bell Associates Research Briefs.
• Privacy Technical Assistance Center Toolkit, U.S. Department of Education.
• Do a quick Google search of “engaging stakeholders evaluation.” Note all of the foundations, federal agencies, and research entities in your results. Scan a few pages to get a sense of the importance funders and evaluators place on this step.
• Browse through the website of Abt Associates, the group contracted to conduct the mentoring evaluation.
Week 3: Causal Inference and Validity (1/28)
I don’t want to be wrong! How can I systematically think through the chances that I am?
How can I increase the chances that I’m not?
(1) SCC Ch. 1-3
(2) Barbara Schneider et al., Estimating Causal Effects: Using Experimental and Observational
Designs. Washington, D.C.: American Educational Research Association.
(3) Heckman, J.J., Moon, S.H., Pinto, R., Savelyev, P.A. and A. Yavitz. 2010. “The Rate of
Return to the HighScope Perry Preschool Program.” Journal of Public Economics, 94(1-2): 114-128.
Supplemental:
• King, Gary, Robert O. Keohane, and Sidney Verba. 1994. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton, NJ: Princeton University Press. Entire, especially Ch. 3.
• Ravallion, M. 2001. “The Mystery of the Vanishing Benefit: An Introduction to Impact Evaluation.” World Bank Economic Review 15(1): 115-140.
• Holland, Paul W. 1986. “Statistics and Causal Inference.” Journal of the American Statistical Association 81(396): 945-960.
Week 4: Collecting Data (2/4)
What are some best practices for data collection? Where can I go to find more information?
N.B. Since our client evaluation will center on focus groups, please pay special attention to this
week’s readings in that area. You might consider purchasing or checking out one of the
recommended books on focus groups for reference.
(1) National Science Foundation. 2002. “An Overview of Quantitative and Qualitative Data
Collection Methods.”
(2) New York State Teacher Centers “Focus Groups”
(3) Sewell, Meg “The Use of Qualitative Interviews in Evaluation.”
(4) Richard A. Krueger. 1994. Focus Groups: A Practical Guide for Applied Research. 2nd ed.
Thousand Oaks, CA: Sage Publications. pp.53-73, 100-25. (Entire is strongly recommended, any
edition.) (Sakai)
(5) David Morgan. 1997. Focus Groups as Qualitative Research. 2nd ed. Newbury Park: Sage
Publications. pp.7-30. (Entire is recommended, any edition.) (Sakai)
(6) University of Texas at Austin, Analyzing Focus Group Data.
(7) University of Idaho, Focus Group Moderator Training.
(8) Miller, Candace, Maxton Tsoka, and Kathryn Reichert. 2008. Impact Evaluation Report:
External Evaluation of the Mchinji Social Cash Transfer Pilot. Skim all and read sections
pertaining to focus group data collection and analysis.
(9) Fanning, E. 2005. “Formatting a Paper-based Survey Questionnaire: Best Practices” Practical
Assessment, Research, and Evaluation 10(12).
Supplemental:
• Patton, Michael Quinn. 1987. How to Use Qualitative Methods in Evaluation. Thousand Oaks, CA: Sage Publications.
• Rubin, Herbert J. and Irene S. Rubin. 2005. Qualitative Interviewing: The Art of Hearing Data. Thousand Oaks, CA: Sage Publications.
• Gerring, John. 2001. Social Science Methodology: A Critical Framework. Cambridge: Cambridge University Press.
• Ambert, Anne-Marie, Patricia A. Adler, Peter Adler, and Daniel F. Detzner. 1995. “Understanding and Evaluating Qualitative Research.” Journal of Marriage and Family 57(4).
• Reports from the Wilder Research Center. For focus groups see: “Ginew/Golden Eagle Program Evaluation.”
• Solomon, D.J. 2001. “Conducting Web-Based Surveys.” Practical Assessment, Research, and Evaluation 7(19).
• RTI. Question Appraisal System.
Week 5: JC:HEM Workshop Session (2/11)
Goals: generate the data collection instruments, recruiting materials, and plan for training
JC:HEM partners
*Assignment 1 Due: Logic Model, Instrument, Recruiting Materials, and Ideas for Training
Module
PART II: Drawing Valid Conclusions
Week 6: RCTs: Basics (2/18)
What is the “gold standard” for causal inference? What are the pros and cons of
experimental research? What are some best practices for implementing it?
(1) SCC Ch. 8 and pp. 488-497
(2) Bernstein et al. 2009. “Impact Evaluation of the U.S. Department of Education’s Student
Mentoring Program.” (You have already read the summary and Ch. 1)
(3) Miller, Candace, Maxton Tsoka, and Kathryn Reichert. 2008. Impact Evaluation Report:
External Evaluation of the Mchinji Social Cash Transfer Pilot. (You have skimmed this; return to
the sections on the RCT and read carefully.)
(4) Rodrik, D. 2008. “We Shall Experiment, But How Shall We Learn?”
(5) Passell, Peter. 1993. “Like a New Drug, Social Programs Are Put to the Test.” The New
York Times.
(6) Brooks, David. 2012. “Is Our Adults Learning?” The New York Times.
(7) Reynolds, Gretchen. 2012. “For Weight Loss, Less Exercise May Be More.” The New York
Times.
*Assignment 2 Due: IRB Application
Week 7: RCTs: Common Problems and Solutions (2/25)
What challenges am I likely to encounter when running an RCT? What are some best
practices for addressing those challenges?
(1) SCC Ch. 9-10
(2) Michalopoulos, Charles. 2005. “Precedents and Prospects for Randomized Experiments.” In
Learning More from Social Experiments: Evolving Analytic Approaches, Howard Bloom Ed.
Thousand Oaks, CA: Sage Publications. (Sakai)
(3) Condon, Meghan. “The Effect of School-Based Social Capital on Voter Turnout: An
Experiment in Two Southwestern Cities.” Working Paper. (Sakai)
(4) Gueron, Judith M. 2001. “The Politics of Random Assignment: Implementing Studies and
Impacting Policy.” Manpower Demonstration Research Corporation (MDRC).
Supplemental:
Please see additional readings on Geoffrey Borman’s syllabus “Randomized Trials to Inform
Education Policy”
SPRING BREAK
Week 8: Control: Regression and Matching (3/11)
How can I approximate an experiment if I don’t have experimental data? What are the pros
and cons of these designs? What are some best practices for implementing them?
(1) SCC Ch. 4-5
(2) Cook, T. D., Shadish, W.R., and Wong, V.C. 2008. Three Conditions Under which
Experiments and Observational Studies Produce Comparable Causal Estimates: New Findings
from Within-study Comparisons. Journal of Policy Analysis and Management 27(4) 724-750.
(Sakai)
(3) Donohue, J.J. and S.D. Levitt. 2001. “The Impact of Legalized Abortion on Crime” The
Quarterly Journal of Economics 116(2):379-420. (Sakai)
(4) Bingenheimer, J.B., Brennan, R.T. and Earls F.J. 2005. “Firearm Violence Exposure and
Serious Violent Behavior.” Science 308: 1323-1326. (Sakai)
(5) John F. Witte, Patrick J. Wolf, Joshua M. Cowen, David J. Fleming, Meghan Condon, and
Juanita Lucas-McLean. 2010. “Milwaukee Parental Choice Program Longitudinal Educational Growth
Study: Third Year Report.” University of Arkansas Educational Working Paper
Archive. University of Arkansas, Department of Education Reform. (Sakai)
Supplemental:
• Gelman and Hill, Chapters 9 and 10.
• Heinrich, C.J., Burkhardt, B.C., and Shager, H.M. 2011. “Reducing Child Support Debt and Its Consequences: Can Forgiveness Benefit All?” Journal of Policy Analysis and Management, 30(4): 755-774.
• Shadish, W.R., Clark, M.H. and Steiner, P.M. 2008. “Can nonrandomized experiments yield accurate answers? A randomized experiment comparing random and nonrandom assignments.” Journal of the American Statistical Association, 103(484): 1334-1356.
• Dehejia, R.H. and S. Wahba. 1999. “Causal Effects in Non-Experimental Studies: Reevaluating the Evaluation of Training Programs.” Journal of the American Statistical Association, 94: 1053-1062.
*Assignment 3 Due: Training Materials (Team)
Week 9: Quasi-Experiments I: Difference-in-Differences, IV, and Fixed Effects (3/18)
How can I approximate an experiment if I don’t have experimental data? What are the pros
and cons of these designs? What are some best practices for implementing them?
(1) Kearney, Melissa S. and Phillip B. Levine. 2014. “Media Influences on Social Outcomes: The
Impact of MTV’s 16 and Pregnant on Teen Childbearing.” NBER Working Paper.
(2) Abadie, A. and J. Gardeazabal. 2003. “The Economic Costs of Conflict: A Case Study of the
Basque Country.” American Economic Review 93(1): 113-132. (Sakai)
(3) Rose, R.A. and Stone, S.I. 2011. “Instrumental variable estimation in social work research: A
technique for estimating causal effects in nonrandomized settings.” Journal of the Society for
Social Work and Research, 2(2): 76-88. (Sakai)
(4) Card, David, and Alan B. Krueger. 1994. “Minimum Wages and Employment: A Case Study
of the Fast-Food Industry in New Jersey and Pennsylvania.” American Economic Review 84:
772-793.
Week 10: Quasi-Experiments II: Regression Discontinuity and Interrupted Time Series
(3/25)
How can I approximate an experiment if I don’t have experimental data? What are the pros
and cons of these designs? What are some best practices for implementing them?
(1) SCC Ch. 6-7
(2) Ludwig, J. and Miller, D.M. 2007. “Does Head Start Improve Children’s Life Chances?
Evidence from a Regression Discontinuity Design.” The Quarterly Journal of Economics
122(1): 159-208. (Sakai)
(3) Figlio, David N. 1995. “The Effect of Drinking Age Laws and Alcohol-related Crashes:
Time-series Evidence from Wisconsin.” Journal of Policy Analysis and Management 14(4):
555-566. (Sakai)
Week 11: JC:HEM Workshop Session (And Mini-lecture on Writing Reports and
Communicating Results) (4/1)
Goal: Data Analysis
Week 12: Additional Advanced Topics: Power, Effect Size, and Meta-Analysis (4/8)
(1) SCC Ch. 13
(2) Spybrook et al. 2011. Optimal Design Manual. Read the introduction and play around with
the software. Data will be provided for power analysis practice.
(3) Lipsey, M.W. 1997. What can you build with thousands of bricks? Musings on the
cumulation of knowledge in program evaluation. New Directions for Evaluation 76: 7-23.
(Sakai)
(4) Anderson, C. A., et al. 2010. Violent video game effects on aggression, empathy, and
prosocial behavior in eastern and western countries: A meta-analytic review. Psychological
Bulletin, 136(2): 151-173. (Sakai)
Supplemental:
• Borenstein, M., Hedges, L.V., Higgins, J.P.T., and Rothstein, H.R. 2009. Introduction to Meta-Analysis. John Wiley & Sons, Ltd.
• Sterne, J.A.C. (editor). 2009. Meta-Analysis in Stata: An Updated Collection from the Stata Journal. Stata Press.
*Assignment 4 Due: Analysis Reports
PART III: Evaluation in Action
Week 13: Evaluation in Action (4/15)
(1) Fitzpatrick, Jody, Christina Christie, Melvin M. Mark. 2009. Evaluation In Action:
Interviews with Expert Evaluators. Thousand Oaks, CA: Sage Publications. Parts 1 and 2.
Week 14: Evaluation in Action and Advanced Topic: Students’ Choice (4/22)
(2) Fitzpatrick, Jody, Christina Christie, Melvin M. Mark. 2009. Evaluation In Action:
Interviews with Expert Evaluators. Thousand Oaks, CA: Sage Publications. Parts 3, 4, and 5.
We will also leave time for a lecture on an advanced topic TBD based on class interest. Options
include, but are not limited to: PART, Funding and Funders, Process/Implementation
Evaluation, or an advanced treatment of any of the previous topics.
Week 15: Final Presentations (4/29)
Additional Resources
General Sources for Evaluation Studies and Reports
Online Evaluation Resource Library
Applied Survey Research
Western Michigan University Evaluation Center
Campbell Collaboration
Think Tanks, Research Institutes, and Government Agencies
Rand Corporation
The Urban Institute
The Brookings Institution
The American Enterprise Institute
Cato Institute
Pew Charitable Trusts
The U.S. Government Accountability Office (Reports and Testimonies)