NEW YORK UNIVERSITY
ROBERT F. WAGNER GRADUATE SCHOOL OF PUBLIC SERVICE
PADM-GP 2171/GPH-GU 2371 Section 3
Program Analysis and Evaluation -- Spring 2014
Wednesday 6:45-8:25 pm; Tisch LC9
Adjunct Prof Jodi F Paroff
LAST REVISED – 1/6/13
To contact professor: Jodi.Paroff@nyu.edu, Phone: (973) 934-7214
Adjunct mailbox: 57 (in the Puck Building)
Office hrs: Wed 4:15-6:15pm by appointment, Adjunct Office 3047
(Please use email to request an appointment in advance or to notify the professor of any
papers left in the adjunct mailbox.)
Course Pre-requisites: You must have completed (or waived) P11.1011 (Statistical Methods)
and P11.1022 (Introduction to Public Policy). This course builds on these introductory
courses and lays the foundation for P11.2875 (Evaluation of Health and Social Programs).
Course Description and Objectives: Program evaluation is a critical part of designing and
operating effective programs. Evaluations supply information to program managers and
policymakers that can assist them in making decisions about which programs to fund,
modify, expand or eliminate. Evaluation can be an accountability tool for program
managers and funders. This course serves as an introduction to evaluation methodology
and evaluation tools commonly used to assess publicly funded programs. Students taking
this course can expect to
 become familiar with the concepts, methods and applications of evaluation research;
 learn how to read evaluation research critically;
 understand how to use evaluation results to anticipate or improve program
performance; and
 be able to propose an appropriate evaluation plan to assess the implementation and
effectiveness of a program.
Course Structure: The class includes lecture, readings, and discussion. There is no
specific policy or sector focus to this course, as evaluation tools are used in all policy areas
and by public (government) and private (foundation) funders as well as by public and
private sector program managers. You are encouraged to relate the general material of the
course to your specific policy interests.
Readings: The required textbook for this course is:
Carol H. Weiss (1998) Evaluation: Methods for Studying Programs & Policies
2nd edition. Prentice Hall
An optional and recommended text is:
Peter Rossi, Howard Freeman, and Mark Lipsey (2004) Evaluation: A
Systematic Approach, 7th ed. Sage Publications. (abbreviated in syllabus as
“RFL”)
In addition to the required text, you will need to read one chapter from the optional
textbook and many journal articles. All of the articles are available through Bobst
electronic journals, and I will also post them on NYU Classes. Readings that are book
chapters and are not available for downloading through Bobst can be downloaded directly from
NYU Classes.
There is a sizable body of literature which deals with program evaluation and policy
analysis. The journal Evaluation Review (previously Evaluation Quarterly) is an especially
rich source on the subject, as is the Evaluation Studies Review Annual (Sage, more or less
annually). Evaluation Practice, Evaluation and Program Planning, New Directions for
Program Evaluation, and Journal of Policy Analysis and Management are also recommended.
There are also evaluation journals for specific fields, including Evaluation and the Health
Professions, Evaluation in Education, and Evaluation and Human Services.
Course requirements: Class preparation and participation are important for this “tool-based” course. You need to read the required text and articles in advance and be prepared to
participate in class discussion. In addition to class participation, you will: write two brief
memos, complete one take-home exam, create a poster with your group for peer review, and
write a final evaluation design paper. It is expected that you are familiar with NYU’s honor
code and will uphold the university’s standards for academic honesty in all of your work.
Attendance: No grades are given for attendance, but you should notify the professor of
any days you will be missing. You are responsible for any missed material and for meeting
all assignment deadlines during your absences.
Program Statement Memo, due Feb 26 (5%)
Students will submit a short (1 - 2 pages) description of a selected program, indicating the
problem to be addressed by the intervention, the intended beneficiaries or targets of the
program, the intended benefits, and the causal model/program theory underlying the
program. This memo is a preliminary step in writing the final design paper.
Midterm Examination: take-home essay-style exam, due Mar 12 (40%)
Measurement Memo, due Apr 9 (5%)
Using the program model developed in the first memo, students will specify the research
questions, operational definitions, and specific measures they would use in an evaluation of
the program.
OPTIONAL Evaluation Review (for extra credit), due May 7
It is important to be both a good evaluator and a good consumer of evaluations. Review
one of three selected evaluation articles. In 2-3 pages, summarize the type of evaluation
described, its design and methods, and write a critique of the evaluation.
Poster Sessions & Feedback, due Apr 23 and 30 (10%)
Each group will create a poster that presents the evaluation design proposal of their final
paper/project to be submitted for peer critique. Posters will include program theory and
descriptions and outcome measures from earlier assignments, as well as design rationale,
and data collection strategies. Clarity, application of concepts discussed in class, and the
rationale behind your evaluation design choices will be emphasized.
Final Paper: Impact Evaluation Design, due May 14 (35%)
The final paper builds on earlier assignments. You will design a comprehensive evaluation
plan/design for your chosen program. The design proposal will focus on outcome or impact
evaluation but will include a brief section on process evaluation as well.
Relative Weight of Assignments
Midterm Exam: 40%
Final Paper: 35%
Two memos: 10%
Class Participation: 5%
Poster session: 10%
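For reference, and assuming final grades are combined as a simple weighted average of these components (the syllabus does not spell out the combination method), a student scoring 85 on the midterm, 90 on the final paper, 88 on the two memos, 95 on participation, and 92 on the poster would earn (0.40 × 85) + (0.35 × 90) + (0.10 × 88) + (0.05 × 95) + (0.10 × 92) = 88.25.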
Note: the descriptions above are not sufficient on their own to complete the assignments;
more detailed instructions for each assignment will follow.
Course Schedule
Part I: Planning and Implementation
Jan 29
Class 1: Introduction to the course and the field of program evaluation; stakeholders; ethical considerations
 Weiss Chapter 1; Chapter 5, pp. 100-108
 Rogers, P. (2012). Introduction to Impact Evaluation. Guidance note found here: http://www.interaction.org/document/introduction-impact-evaluation.
 Markiewicz, A. (2005). A balancing act: Resolving multiple stakeholder interests in program evaluation. Evaluation Journal of Australasia, Vol. 4, Nos. 1 & 2, March/April, pp. 13-21.
 Read “Michael Scriven on the Differences between Evaluation and Social Science Research” (www.hfrp.org)
 Oakes, J.M. (2002). Risks and wrongs in social science research: an evaluator’s guide to the IRB. (pp. 443-454)
 Optional: Sieber, J.E. and M.B. Tolich (2013). Planning Ethically Responsible Research. Chapter 5: Journalist Ethics ≠ Social Scientist Ethics. pp. 77-92.
Feb 5
Class 2: Pre-program evaluation activities: needs assessment
 Weiss Chapter 2 (Purposes of evaluation)
 Watkins, R. et al. (2012). A Guide to Assessing Needs: Tools for collecting information, making decisions, and achieving development results. Section 1: Needs Assessment FAQ.
 Berberet, H.M. (2006). Putting the pieces together for queer youth: a model of integrated assessment of need and program planning. (CS)
 http://india.blogs.nytimes.com/2012/07/22/mapping-toilets-in-a-mumbai-slum-yields-unexpected-results/?emc=eta1
Feb 12
Class 3: Explicating and assessing program theory
 Weiss Chapter 3 (Understanding the program)
 Kaplan, S.A. and Garrett, K.E. (2005). The use of logic models by community-based initiatives.
 Chen, W. & Lin (1997). Evaluating the process and outcome of a garbage reduction program in Taiwan. (CS)
 Proscio, Tony (2000). In Other Words. The Edna McConnell Clark Foundation. New York, NY.
 Optional: RFL Chapter 5
 Optional: Cooksy, G. & Kelly (2001). The program logic model as an integrative framework for a multi-method evaluation.
Feb 19
Class 4: Formative evaluation, program monitoring, and implementation analysis
 Curran, Gittelsohn, Anliker, et al. (2005). Process evaluation of a store-based environmental obesity intervention on two American Indian reservations.
 Heinz & Grant (2003). A process evaluation of a parenting group for parents with intellectual disabilities. (CS)
 Weiss Chapter 11 (Qualitative Methods)
 Optional: Onyskiw, Harrison, Spady, & McConnan (1999). Formative evaluation of a collaborative community-based child abuse prevention project.
 Optional: Giorgio, Kantor, Levine, & Arons (2013). Using chat and text technologies to answer sexual and reproductive health questions: Planned Parenthood pilot study. Journal of Medical Internet Research [J Med Internet Res], Sep 20; Vol. 15 (9).
Part II: Measuring the Impacts of Programs
Feb 26
Class 5: Outcome/Impact evaluation: design, internal and external validity
 Weiss Chapter 8 (Design of evaluation, pp. 180-214)
 http://www.ted.com/talks/esther_duflo_social_experiments_to_fight_poverty.html
 DUE: Program memo
Mar 5
Class 6: Outcome/Impact evaluation: randomized experimental design
 Weiss Chapter 9
 Seron, C., Ryzin, G.V., Frankel, M., & Kovath, J. (2001). The impact of legal counsel on outcomes for poor tenants in New York City’s housing court: results of a randomized experiment. (CS)
 Robertson, A., St. Lawrence, J., & Morse, D.T., et al. (2011). Comparison of Health Education and STD Risk Reduction Intervention for Incarcerated Adolescent Females.
 Optional: RFL Chapter 8
 Program memo returned; full midterm question pool shared (actual questions released Sun 3/9 at noon)
Mar 12
Class 7: Outcome/Impact evaluation: quasi-experimental designs with comparison groups
 RFL Chapter 9, pp. 265-286
 Optional: Cumberland, P., Edwards, T., Hailu, G., et al. (2008). The impact of community-level treatment and preventative interventions on trachoma prevalence in rural Ethiopia.
 Optional: Chemin, M. (2008). The benefits and costs of microfinance: evidence from Bangladesh.
 Midterms due to the Professor via email before class!
Mar 19
NYU closed for Spring Break
Mar 26
Class 8: Formulating research questions and measurement
 Weiss, Chapter 6 (Measures); Chapter 4 (Planning the Evaluation), pp. 74-82.
 Litwin, M.S. (2003). How to Assess and Interpret Survey Psychometrics, Ch. 2 (Reliability) & Ch. 3 (Validity)
 http://www.nytimes.com/2011/09/23/opinion/why-cyberbullying-rhetoric-misses-the-mark.html
 Mid-terms returned to you
Apr 2
Class 9: Measurement, continued
 Lester, P. et al. (2012). Evaluation of a family-centered prevention intervention for military children and families facing wartime deployments.
 Optional: Dufrene, R. (2000). An evaluation of a patient satisfaction survey: validity and reliability.
 Optional: Chandrika Ismail, A., et al. (2009). A model of substance abuse risk: Adapting to the Sri Lankan context.
 Optional: Fisher, A. Designing HIV/AIDS Intervention Studies. Chapter 6 (Operational Definitions). See: http://www.popcouncil.org/pdfs/horizons/orhivaidshndbk.pdf
Apr 9
Class 10: Sampling
 Babbie, E. (2013). The Practice of Social Research, Chapter 7.
 Weiss Chapter 7, pp. 163-166.
 Optional: Bartlett III, J.E., Kotrlik, J.W., & Higgins, C.C. (2001). Organizational research: determining appropriate sample size in survey research.
 Optional: Taylor-Powell, E. (1998). Sampling. University of Wisconsin-Extension.
 DUE: Measurement memo
Apr 16
Class 11: Strengthening full coverage and reflexive designs
 Weiss, review Chapter 8 pp. 191-199
 RFL Chapter 9, pp. 289-295
 Ballart, X. & Riba, C. (1995). Impact of legislation requiring moped and motorbike riders to wear helmets. (CS)
 Pettifor, A., Taylor, E., Nku, D., Duvall, S., Tabala, M., Mwandagalirwa, K., Meshnick, S., & Behets, F. (2009). Free distribution of insecticide-treated bed nets to pregnant women in Kinshasa: an effective way to achieve 80% use by women and their newborns. (CS)
 Optional: Cook, C. (2002). The effects of skilled health attendants on reducing maternal deaths in developing countries: testing the medical model.
 DUE: Ungraded optional homework assignment: design ideas and possible comparisons for your final evaluation project.
 Measurement memo returned
Apr 23
Class 12: Final review of research design & Ethical Considerations in Program Evaluation (continued)
 Weiss, Chapter 14 (Ethics)
 Bluestein, J. (2005). Toward a more public discussion of the ethics of federal social program evaluation. (pp. 823-840)
 Oakes, J.M. (2002). Risks and wrongs in social science research: an evaluator’s guide to the IRB. (pp. 460-467)
 Optional: Shaw, I.F. (2003). Ethics in qualitative research and evaluation.
 DUE: Share poster text with designated critical friends (TBA)
Apr 30
Class 13: Poster Session, critique and feedback
May 7
Class 14: Evaluation Synthesis and Meta-Analysis
 Weiss Chapter 10, pp. 235-244 (good design)
 Webb, T.L., Joseph, J., Yardley, L., and Michie, S. (2010). Using the internet to promote health behavior change: a systematic review and meta-analysis of the impact of theoretical basis, use of behavior change techniques, and mode of delivery on efficacy. (CS)
 Gansle, K.A. (2005). The effectiveness of school-based anger interventions and programs: a meta-analysis.
 http://www.nytimes.com/2013/02/02/opinion/health-cares-trick-coin.html by Ben Goldacre
 Optional: Independent Evaluation Group (2010). What Can We Learn from Nutrition Impact Evaluations? Lessons from a Review of Interventions to Reduce Child Malnutrition in Developing Countries. Washington, DC: World Bank.
 Optional: Visher, C.A., Winterfield, L., & Coggeshall, M.B. (2005). Ex-offender employment programs and recidivism: a meta-analysis.
 OPTIONAL Evaluation review memo due (for extra credit)
May 14
Final Paper Due