University of North Carolina at Chapel Hill
School of Social Work
Course:
SOWO 510, Foundations for Evidence-Based Practice and Program Evaluation
Spring 2016, Friday 9:00 – 11:50 A.M., Forsyth County Department of Social Services
Building, Winston-Salem, 5th Floor, Classroom B
Professor:
Mark F. Testa
Spears-Turner Distinguished Professor
School of Social Work
University of North Carolina at Chapel Hill
CB#3550, 325 Pittsboro St., Suite #245-E
Chapel Hill, NC 27599-7590
Phone: (919) 962-6496
mtesta@unc.edu
Office Hours:
By appointment in my Chapel Hill Office 245-E or with Skype: mark.f.testa
COURSE DESCRIPTION:
Students will develop knowledge of evidence-based practice, including skills needed to identify, acquire
and assess appropriate interventions for practice and basic skills required to evaluate their own social
work practice.
COURSE OBJECTIVES:
Social workers, whether they are front-line practitioners, program managers, administrators, or
policymakers, routinely face complex human situations involving individuals who come from diverse
backgrounds. The social interventions social workers have at their disposal vary in their degree of
effectiveness with any given individual, family, group, organization or community. In order to provide the
most effective social work programs, policies and interventions, social workers must be able to determine
if what they are doing is beneficial to the individuals, families, groups, organizations, or communities
they serve. To this end, students will develop knowledge of the purposes of research and evaluation and
the approaches and methodologies necessary to evaluate social work interventions. Upon completion of
this course students will be able to demonstrate:
1. Skill in developing and implementing social intervention evaluations that promote evidence-based social work practice and policy;
2. Skill in evaluating social intervention research and applying findings to social work practice and
policy;
3. Skill in qualitative and quantitative evaluation design, measurement, data analysis, and
knowledge dissemination;
4. Knowledge of the practical, political, and economic issues related to the evaluation of social
interventions;
5. Skill in accessing and assessing public databases and research literature as a foundation for
evidence-based practice;
6. Skill in designing social intervention research that is sensitive to racial, religious, gender, sexual
orientation, social, economic, and other issues of difference; and
7. Ability to apply social work ethics and values to the evaluation of social interventions.
REQUIRED TEXTS:
Rubin, A. & Babbie, E. R. (2016). Essential research methods for social work (4th ed.). Belmont, CA: Brooks/Cole.
The required text is available in the UNC Student Stores. Additional Required and Supplemental readings
are available on Sakai.
POLICY ON INCOMPLETE OR LATE ASSIGNMENTS: Students must notify the instructor at least
12 hours before an assignment is due if an assignment is going to be turned in late. Extensions may be
given at the instructor’s discretion. Students will lose five points for each 24-hour period beyond the due
date and time (including weekends) for unexcused late assignments. Assignments that are more than 5
days late will not be accepted. A grade of “Incomplete” will be given only in extenuating circumstances
and in accordance with School of Social Work and University policies.
POLICY ON ACADEMIC DISHONESTY: Academic dishonesty is contrary to the ethics of the social
work profession, is unfair to other students and will not be tolerated in any form. Please refer to the APA
Style Guide, The SSW Manual, and the SSW Writing Guide for information on attribution of quotes,
plagiarism and appropriate use of assistance in preparing assignments. All written assignments should
contain a signed pledge from you stating that, "I have not given or received unauthorized aid in preparing
this written work.” In keeping with the UNC Honor Code, if reason exists to believe that academic
dishonesty has occurred, a referral will be made to the Office of the Student Attorney General for
investigation and further action as required.
FORMAT FOR WRITTEN WORK: APA format should be used for all written assignments. Students
should refer to the Publication Manual of the American Psychological Association (6th ed.) for
information on APA format. A self-paced APA tutorial can be found at
http://www2.lib.unc.edu/instruct/citations/index.html?section=apa
POLICY ON ACCOMMODATIONS FOR STUDENTS WITH DISABILITIES: Students with
disabilities that affect their participation in the course and who wish to have special accommodations
should contact the University’s Disabilities Services (Voice/TDD 962-8300, 966-4041). Students must
have a formal letter from the University’s Department of Disabilities Services to receive disability-based
accommodations. Students should discuss the need for specific accommodations with their instructor at
the beginning of the semester.
COURSE REQUIREMENTS: The course requirements are as follows:
1. Written Assignments: During Parts I and II of the course, you will complete three written
assignments as follows:
(a) Sources of Practice Knowledge: Your first assignment is intended to help you become
self-aware of your own practice or policy beliefs based on authority, faith, tradition, and
popular opinion. Describe a belief you once held unquestioningly but to which you now feel less
strongly committed, and describe the reasons you changed your mind. Also
identify a social work practice or social justice cause to which you are now strongly
committed but which is not based on rigorous scientific evidence. Discuss your
willingness or unwillingness to be scientific about the validity of your belief and the
reasons for your point of view. This paper (2 pages) should be submitted electronically
on Sakai before midnight on Thursday/Friday, January 28/29.
(b) Qualitative Analysis of Focus Group Transcripts: In lieu of meeting face-to-face in
Week 4, you will devote the scheduled class time (on your own) to analyzing, coding,
and discovering meaningful patterns in several pages of transcripts of focus groups
conducted with kinship foster parents for your second written assignment. The steps you
should follow for completing this assignment are listed as follows: 1) Read the Chicago
Tribune article posted on Sakai by Rob Karwath (1993), "Relatives' care for DCFS wards
carries big price tag." Write a two-paragraph theoretical note (see Rubin & Babbie, 2016,
pp. 365-66), which summarizes the article's perspective on the dimensions and the nature
of the kinship foster care phenomenon in Illinois; 2) Following the guidance in Rubin and
Babbie (2016, pp. 363-366), write two code notes that describe the code labels and
meanings of two key concepts in your theoretical note, e.g. attitudes toward adoption,
financial incentives for kinship foster care, alternatives to foster care, etc.; and 3) Discuss
how well your discovered patterns align with the theoretical note you wrote prior to
beginning your analysis. Your code notes, theoretical note, pattern analysis, and your
discussion (3-4 double-spaced pages in total) should be submitted electronically on
Sakai before midnight on Thursday/Friday, February 11/12. Bring hard copies of the
assignment and your coded transcripts to class.
(c) PICO and Logic Model: Your third assignment asks you to write out the PICO question
for one of six studies, which Van Berkel, Crandall, Eidelman and Blanchar (2015)
describe in Week 6’s assigned reading. You will also construct a logic model of the
condition or intervention they studied using the modified template posted on Sakai under
Resources. Both the PICO question and logic model template are described in Week 6’s
assigned reading by Testa and Poertner. The particular study you will analyze will be
selected by the throw of the dice on the first day of class. There are many special terms
and symbols used in the article. Even though unfamiliarity with these terms and symbols
shouldn’t impede your general understanding of the population, conditions/interventions,
outcomes, and comparisons the authors are studying, they could leave you wondering
what they all mean. Identify in question form any confusing terms and symbols you
encounter in the article, and append to the assignment two to three questions about them.
You should submit the PICO question, logic model, and the two to three clarifying
questions electronically on Sakai before midnight on Thursday/Friday, February 18/19.
Bring hard copies to class.
2. In-Class Midterm and Final Exams: The midterm exam will cover the readings, lectures,
concepts, and other materials of Parts I and II (through week 6) of the course. The final exam will
cover the remainder of the course. Exams will consist of true/false, multiple choice, short answer,
and essay questions. Each exam will be one and one-half (1.5) hours in length and will be
administered during the second half of each of the class periods.
3. CITI Research with Human Subjects Training: You will complete the on-line CITI ethics
training developed by the Collaborative IRB Training Initiative (https://www.citiprogram.org/) to
certify that you are familiar with the ethical principles and guidelines governing research
involving human participants. The CITI Research with Human Subjects Training provides an
opportunity to review historical and current standards for ethical research that involves human
subjects. Allow a minimum of 3 hours for completion of online training. You are required to
submit electronically on Sakai or email a completion certificate no later than April 22 (this
assignment is required, but is not graded).
4. Evaluation Report: During Parts III & IV of the course, you will work on an evaluation project
related to your field placement or a topic of special interest to you. You can choose one of two
options: 1) you can conduct a real evaluation of a program or practice in the field (i.e., design an
evaluation and collect real data), or 2) you can design an evaluation and analyze secondary data
supplied by the instructor. Drafts of written sections b, c, and d may be submitted for feedback on
the dates listed below. The final product will be a report of 2,200-2,400 words (i.e., 9-10 double-spaced pages, excluding the Abstract and References). It is due April 29 and will
contain the following sections:
(a) Abstract: A summary of the project (90-100 words).
(b) Research Question and Significance of the Problem: Each paper will begin with a
“well-built” PICO question that consists of the following components: 1) the target
population about which you wish to draw inferences; 2) the outcome you intend to
achieve or problem you hope to address; 3) the intervention you are interested in
evaluating; and 4) the alternative with which you will draw a comparison. This should be
followed with a brief statement about the significance of the problem or outcome you are
studying. Students who elect to conduct evaluations in the field should meet with the
instructor during week 3 of the course. A first draft of this section (1 double spaced page)
may be submitted for feedback on March 4.
(c) Literature Review: Your review should be based on a computerized search of electronic
databases using keywords from your PICO question. It should bring the audience up to
date on the previous research in the area as described in Rubin and Babbie (2016, p. 390).
Select the strongest two to three (2-3) articles that bear on your topic, and write up a brief
narrative description that also assesses the strength of the studies’ evidence, limitations of
these studies, and their applicability to your population. A first draft of this section (2
double spaced pages) may be submitted for feedback on April 1.
(d) Methods and Results: As described in Rubin and Babbie (2016, pp. 390-391), your
methods section should describe how you measured each variable, and should give
readers enough information to ascertain whether your measurement procedures were
reliable and valid. Also following Rubin and Babbie (2016), the results section should
present quantitative data so that the reader can recompute them, or if you’re doing a
qualitative analysis, you should provide enough detail so your reader acquires a sense of
having made the observations with you. A first draft of this section (4 double-spaced
pages) may be submitted for feedback on April 15.
(e) Discussion and Conclusions: These final sections should present your interpretations of
the implications of your results for your PICO question, draw implications of your
interpretations for practice or policy, discuss the methodological limitations of your
study, draw implications for future research, and conclude with a section that summarizes
the aims, key results, and explicit conclusions from your study. The length of this section
should be 2 to 3 pages.
(f) References: Immediately following the conclusion should be a list of all the references
you cited in the report using APA format.
The project will be graded according to the following criteria and a number grade from 0 – 100
will be assigned:
- Mechanics (grammar, spelling, style, typing)
- Organization
- Logic
- Content
- Ability to summarize and draw conclusions
GRADING

Evaluation Report      25%
Midterm Exam           25%
Assignment Average     25%
Final Exam             25%
Total                 100%

Points      Grade
94 – 100    H
80 – 93     P
70 – 79     L
< 69        F
QUESTIONNAIRES: At the start of the semester you will be assigned a random number that only you
will know. You will use this number when anonymously filling out a questionnaire (pre-test) that assesses
your understanding of numerical expressions, such as probabilities, percentages, and frequencies. In the
remaining sections, you will be asked to indicate the importance or strength of agreement with a list of
statements that have been designed to measure people’s cultural worldviews and moral beliefs. At the
start of Week 3, you will complete a second numeracy assessment (post-test) as well as a questionnaire that
measures your knowledge of a conceptual framework and asks your opinions on a variety of public
issues. The two sets of responses will be linked using the random number you were assigned. We will
draw on these linked datasets periodically in the course to illustrate key research concepts and procedures.
COURSE OUTLINE, READINGS & DUE DATES
(Readings and assignments are to be completed before class.)

PART I: INTRODUCTION TO SCIENTIFIC INQUIRY

Week 1 (Jan 15) — Course Overview & Syllabus

Week 2 (Jan 22) — Evaluation & Evidence-Based Practice
    Readings: Rubin & Babbie, ch. 1; Gambrill (2001), 166-175.

Week 3 (Jan 29) — Bridging the Research to Practice Gap
    Readings: Testa, DePanfilis, Huebner, Dionne, Deakins & Baldwin (2014), 333-353; Children’s Bureau’s Dissemination Materials.
    Due: Assignment 1, Sources of Practice Knowledge, before midnight Jan. 28/29.

PART II: EXPLORATIVE RESEARCH: IDENTIFY & EXPLORE

Week 4 (Feb 5) — Qualitative Analysis (no class; Qualitative Analysis Assignment)
    Readings: Rubin & Babbie, chs. 15, 18.

Week 5 (Feb 12) — Culturally Competent Research
    Readings: Rubin & Babbie, ch. 6; Dionne, Davis, Sheeber & Madrigal (2009), 911-921.
    Due: Assignment 2, Qualitative Analysis of Focus Groups, before midnight Feb. 11/12.

Week 6 (Feb 19) — Problem Formulation
    Readings: Rubin & Babbie, chs. 2, 7; Van Berkel, Crandall, Eidelman & Blanchar (2015), 1207-1222; Testa & Poertner (2010), 75-100.
    Due: Assignment 3, PICO and Logic Model, before midnight Feb. 18/19.

Week 7 (Feb 26) — In-Class Midterm Exam

PART III: FORMATIVE & SUMMATIVE EVALUATION: DEVELOP, TEST, COMPARE & LEARN

Week 8 (Mar 4) — Program Evaluation
    Readings: Rubin & Babbie, ch. 14; Testa & White (2014), 157-172.
    Draft Evaluation Report section (Research Question and Significance of the Problem) may be submitted for feedback.

Week 9 (Mar 11) — Quantitative Data Analysis
    Readings: Rubin & Babbie, chs. 8, 17.1-17.5; Freedman, Pisani & Purves (2007), 202-217.

Week 10 (Mar 18) — Spring Break

Week 11 (Mar 25) — Holiday

Week 12 (Apr 1) — Experimental Designs
    Readings: Rubin & Babbie, ch. 12.1-12.5b; Freedman, Pisani & Purves (2007), 3-11.
    Draft Evaluation Report section (Literature Review) may be submitted for feedback.

PART IV: TRANSLATIVE & CONFIRMATIVE EVALUATION: REPLICATE, ADAPT, APPLY & IMPROVE

Week 13 (Apr 8) — Quasi-Experimental & Single-Case Designs
    Readings: Rubin & Babbie, chs. 12.6-12.13a, 13; Freedman, Pisani & Purves (2007), 12-28.

Week 14 (Apr 15) — Sampling & Statistical Significance Testing
    Readings: Rubin & Babbie, chs. 17.6-17.7, 11.
    Draft Evaluation Report section (Methods and Results) may be submitted for feedback.

Week 15 (Apr 22) — Ethical Issues
    Readings: Rubin & Babbie, ch. 5; The Belmont Report.
    Due: CITI Research with Human Subjects Training certificate.

Week 16 (Apr 29) — Final Exam
    Due: Evaluation Report.
PART I. INTRODUCTION TO SCIENTIFIC INQUIRY
Week 1 (January 15) — COURSE OVERVIEW & SYLLABUS
Week 2 (January 22) — EVALUATION & EVIDENCE-BASED PRACTICE
A key tenet of evidence-based practice is that scientific knowledge is always open to question. Scientific
belief is at best provisional and subject to refutation based on empirical observation and systematic
evaluation. It contrasts with beliefs based on authority, faith, tradition, and popular opinion. These other
ways of “knowing” are how people first learn to function as members of their family, peer group, and
local community. Relying exclusively on these other sources of knowledge to guide social work practice
without ever evaluating alternative ways of solving problems is not only unprofessional but also violates
social work ethics and can potentially cause harm. Evidence-based practice is an approach to problem-solving that attempts to safeguard against such risks and harms by viewing all practice knowledge as
provisional and subject to refutation or improvement through empirical observation, systematic review of
research, rigorous testing, and respect for the dignity and preferences of clients.
An initial step in becoming an evidence-based practitioner is to become self-aware of your own
practice or policy beliefs based on authority, faith, tradition, and popular opinion. This week’s reading
will help you reflect on your own commitments to unscientific sources of social work practice knowledge
and your willingness to allow those beliefs to be questioned and potentially refuted by scientific evidence.
This week’s assignment asks you to recall a belief you once held unquestioningly but to which you now
feel less strongly committed. It asks you to describe the reasons you changed your mind and to
identify a social work practice or social justice cause to which you are now strongly committed but which
is not based on rigorous scientific evidence. Be prepared to discuss in class your willingness or
unwillingness to be scientific about the validity of your belief and the reasons for your point of view.
Required Readings:
Rubin & Babbie, ch. 1.
Gambrill, E. (2001). Social work: An authority-based profession. Research on Social Work Practice,
11(2), 166-175.
Week 3 (January 29) — BRIDGING THE RESEARCH TO PRACTICE GAP
There is a growing emphasis on evidence-based decision-making in social work practice and social
welfare policy. To meet these expectations, it is essential that social work practitioners and researchers
forge research partnerships: 1) to build systematically the scientific basis for practice-informed
innovations and 2) to scale up, with integrity, evidence-supported interventions to better serve the needs of
individual clients and the interests of the general public. Figure 1 illustrates a framework that was
developed by a team of social work practitioners and researchers who were commissioned by the U.S.
Children’s Bureau. Their charge was to conceptualize and articulate a translational framework for
building evidence in child welfare and bringing evidence-supported interventions to scale. Even though it
is geared primarily to child welfare, the framework can be generalized to other fields, such as mental
health, education, gerontology, criminal justice, and community organization.
The framework builds on the premise that social work research serves five distinct but
interrelated purposes: 1) identify & explore: understand a problem, establish the intended outcomes,
describe the target population, develop a theory of change, and select a promising innovation or evidence-supported intervention to effect the desired outcomes; 2) develop & test: operationally define a promising
innovation based on direct practice, data analysis, and theory and evaluate its efficacy in effecting the
desired outcomes; 3) compare & learn: render a summary judgment of the merit and worth of one or
more promising innovations by rigorously comparing the results to what happens in the absence of the
innovations; 4) replicate & adapt: integrate evidence-supported interventions with practitioner expertise,
client preferences, and culturally specific values to achieve desired client outcomes; and 5) apply &
improve: sustain and improve the excellence of evidence-based practice by regularly evaluating its
integrity and validity.
Figure 1—A Framework to Design, Test, Spread, and Sustain Effective Practice
These five interrelated purposes, which medical and social scientists call a “phased model of
increasingly generalizable studies” (Shadish et al., 2002, p. 419), involve a series of multi-level
transactions. The phases move from the micro level of initial implementation and formative evaluation of
a practice-informed innovation to the macro level of full implementation and summative evaluation of the
innovation’s overall impact. If the efficacy of the innovation is empirically supported after rigorous
evaluation, the evidence-supported intervention is generalized back down to the micro level through
translative evaluation and evidence-based practice that replicates or adapts the interventions to different
populations, local contexts, and individual clients. After an evidence-based practice has been
implemented for a period of time, its validity and integrity are periodically reexamined through quality
improvement and confirmative evaluation, which help to decide whether the practice or program should
be maintained as is, changed in some way, or discarded completely, with or without replacement. The
decision to change or replace initiates another phased cycle of evidence building.
This week’s reading by Testa, DePanfilis, Huebner, Dionne, Deakins and Baldwin (2014)
describes the work of the Translational Framework Workgroup. You should also view and study carefully
the video series distributed by the U.S. Children’s Bureau on the use and application of the framework
and be prepared to take a short quiz about the Framework at the start of class.
Required Readings:
Testa, M. F., DePanfilis, D., Huebner, R., Dionne, R., Deakins, B. & Baldwin, M. (2014). Bridging the
gap between research and practice: The work of the steering team for the child welfare research
and evaluation translational framework workgroup. Journal of Public Child Welfare, 8(3), 333-353.
Required Video Series: A Framework to Design, Test, Spread, and Sustain Effective Practice in
Child Welfare
http://www.acf.hhs.gov/programs/cb/capacity/program-evaluation/virtual-summit/framework
View the series of four animated videos that the U.S. Children’s Bureau produced to explain the major
components of A Framework to Design, Test, Spread, and Sustain Effective Practice in Child Welfare.
The series illustrates how the framework can be applied to build evidence and spread effective child
welfare practice. Each brief video builds on the one before it, and together they demonstrate how administrators, evaluators, and
funders can partner systematically to use evaluation, apply findings, and make decisions that will increase
the chances that the programs, policies, and practices will improve outcomes for children and families.
Part 1 – Introducing a New Framework (7 min. 54 sec.)
http://www.acf.hhs.gov/programs/cb/capacity/program-evaluation/virtual-summit/framework/video1
Part 2 – Identify & Explore (10 min. 4 sec.)
http://www.acf.hhs.gov/programs/cb/capacity/program-evaluation/virtual-summit/framework/video2
Part 3 – Develop & Test and Compare & Learn (10 min. 5 sec.)
http://www.acf.hhs.gov/programs/cb/capacity/program-evaluation/virtual-summit/framework/video3
Part 4 – Replicate & Adapt and Apply & Improve (12 min. 22 sec.)
http://www.acf.hhs.gov/programs/cb/capacity/program-evaluation/virtual-summit/framework/video4
PART II: EXPLORATIVE RESEARCH: IDENTIFY & EXPLORE
Week 4 (February 5) — QUALITATIVE ANALYSIS
At the Identify-&-Explore phase of evidence-based practice, researchers may draw from a wide array of
sources and research methods to explore problems, define outcomes, develop a theory of change, and
formulate potential solutions. This week’s reading covers a variety of qualitative methods for conducting
explorative research, including ethnography, case studies, life history and focus groups. Whereas
quantitative research strives to gauge numerically the causal effect of a social intervention on intended
outcomes, qualitative research aims to understand the values and meanings people attach to their own and
other people’s motivations, actions, and choices in usual environments. Qualitative research is especially
informative at the explorative phase of scientific inquiry when precise predictions are challenging to
formulate in advance and social action must be interpreted in order to make sense of how best to effect
meaningful change.
This week the class will not meet face-to-face. Instead you will devote the class time to
analyzing, coding, and discovering meaningful patterns in several pages of transcripts of focus groups
conducted with kinship foster parents as outlined in Assignment 2. Be prepared to discuss your results in
the next class that deals with culturally competent research.
Required Readings:
Rubin & Babbie, chs. 15, 18.
Week 5 (February 12) — CULTURALLY COMPETENT RESEARCH
Culturally competent research and qualitative methods are closely aligned. Both are concerned with
gaining an interpretive understanding of the ways in which culturally-specific factors affect problem
formulation and the design of culturally sensitive innovations. This is especially vital when conducting
research on minority and oppressed groups, which are the focal populations for much of social work
practice. When conducting research on minority and oppressed populations, it is important to acquire an
understanding of each group’s cultural historical experience, as Rubin and Babbie (2016) emphasize:
“including the effects of prejudice and oppression—and how those experiences influence the ways in
which members live and view members of the dominant culture” (p. 109). This understanding can be
acquired through a reading of the literature on the group’s culture and through face-to face interviews,
participant observation, and focus groups.
This week we’ll discuss your results from last week’s assignment on kinship foster parents. We
shall consider how the dominant culture’s perspective on kinship foster care aligns with the perceptions of
the extended families involved in the actual fostering of grandchildren, nieces and nephews. We will
supplement our discussion with an examination of the process by which Renda Dionne (2009) and her
colleagues adapted an evidence-supported intervention developed for the dominant culture to the specific
cultural context of an American Indian community.
Required Readings:
Rubin & Babbie, ch. 6.
Dionne, R., Davis, B., Sheeber, L. & Madrigal, L. (2009). Initial evaluation of a cultural approach to
implementation of evidence-based parenting interventions in American Indian communities.
Journal of Community Psychology, 37(7), 911-921.
Week 6 (February 19) — PROBLEM FORMULATION
The identify-and-explore phase of evidence-based practice begins with the formulation of a well-built
research question that can be parsed into the four components of population, intervention, comparison,
and outcome (PICO). A well-built PICO question emerges from the identify-and-explore phase of
identifying and understanding a problem, constructing a theory of change, researching solutions, and
selecting or developing an intervention for implementation and evaluation. The PICO question can be
elaborated into a logic model that depicts the mediating activities that link interventions and population
conditions to the short-term outputs and proximal outcomes which are predicted to produce the long-term
distal outcomes. A full logic model also specifies the external conditions that prompted concern over the
problem, the underlying theory of change, the end-values for evaluating the ultimate worth of the
resulting change, and any moderating conditions that may affect the generalizability of the intervention’s
efficacy to other populations, settings, cultures, and time periods.
This week’s assignment asks you to specify the PICO question that you think Van Berkel et al.
(2015) address in this week’s assigned reading, construct a logic model of one of the six conditions or
interventions they studied, and spell out two to three questions about the special terms and symbols the
authors use, which you don’t fully understand.
Required Readings:
Rubin & Babbie, chs. 2, 7.
Van Berkel, L., Crandall, C.S., Eidelman, S. & Blanchar, J.C. (2015). Hierarchy, dominance, and
deliberation: Egalitarian values require mental effort. Personality and Social Psychology Bulletin,
41(9), 1207-1222.
Testa, M. F. & Poertner, J. (2010). Fostering accountability: Using evidence to guide and improve child
welfare policy (pp. 75-100). Oxford: Oxford University Press.
Week 7 (February 26) — MID-TERM EXAM
PART III: FORMATIVE & SUMMATIVE EVALUATION:
DEVELOP, TEST, COMPARE & LEARN
Week 8 (March 4) — PROGRAM EVALUATION
At the conclusion of the Identify-&-Explore phase, you will have either selected an evidence-supported
intervention that you can immediately replicate or adapt to the needs of your target population (Replicate-&-Adapt phase), discovered a promising innovation that requires further evaluation to validate its
superiority to services as usual (Compare-&-Learn phase), or decided that you need to develop anew and
test a policy, program, or practice innovation, which, based on direct experience, data analysis, or theory,
you hypothesize may better achieve the outcomes desired by your clients, funders, or the general public
(Develop-&-Test phase). This section of the course will focus on these latter two phases of the framework.
Program evaluation, or what is also known as intervention research, can be sub-divided into
formative and summative evaluations. These are the technical names for the phases the framework labels
“Develop & Test” and “Compare & Learn,” respectively. As described in Rubin and Babbie (2016),
formative evaluations focus on planning a program and improving its initial implementation. Their purpose
is to design a stable set of practice principles, program definitions, or intervention guidelines that are
replicable and show predictable patterns of association between the desired outcomes and measures of
fidelity and intervention dosage. After you are satisfied that an innovation has a good chance of
succeeding, it is then appropriate to subject it to summative evaluation that renders a summary judgment
of its comparative merit and worth among alternative options, including services as usual.
This week’s reading examines the history and purposes of program evaluation. It considers the
challenges of forging a research partnership with practitioners, funders, and the subjects of the evaluation.
We will explore how the success of a program is a product of both the validity of the intervention and the
integrity with which it is implemented. We will consider some of the practical pitfalls in carrying out
experiments and quasi-experiments and the threats they pose to the validity and integrity of program
evaluation in social work agencies.
Required Readings:
Rubin & Babbie, ch. 14.
Testa, M. F. & White, K. R. (2014). Insuring the integrity and validity of social work interventions: The
case of the subsidized guardianship waiver experiments. Journal of Evidence-Based Social Work
11(1-2), 157-172.
Required Video Series: A Framework to Design, Test, Spread, and Sustain Effective Practice in
Child Welfare
http://www.acf.hhs.gov/programs/cb/capacity/program-evaluation/virtual-summit/framework
Part 5 – Tying It Altogether (11 min. 2 sec.)
http://www.acf.hhs.gov/programs/cb/capacity/program-evaluation/virtual-summit/framework/video5
Week 9 (March 11) — QUANTITATIVE DATA ANALYSIS
The goal of most program evaluations is to generate reliable and valid estimates of the average causal
effects of interventions on desired outcomes. The first set of readings examines the first half of this
statement: the reliability and validity of causal estimates. The second set examines the second half: the
statistical methods used to generate these estimates. We will spend most of the class time reviewing the
main ideas involved in analyzing statistical data in quantitative research studies. We will engage in
exercises that help you practice the following: the conversion of data items into numerical codes, the use
of these codes to generate estimates of central tendency (mean, mode and median) and dispersion (range,
variance, and standard deviation), and simple measures of association between outcomes and population
conditions (odds ratios, correlation coefficients) and interventions (effect sizes).
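
To make these computations concrete, here is a minimal sketch in Python using only the standard library. The numbers are invented for illustration and are not course data.

    import math
    import statistics as st

    scores = [3, 7, 7, 8, 10, 12, 15]             # hypothetical outcome scores

    # Central tendency
    print(st.mean(scores), st.median(scores), st.mode(scores))

    # Dispersion: range, sample variance, sample standard deviation
    print(max(scores) - min(scores), st.variance(scores), st.stdev(scores))

    # Odds ratio from a hypothetical 2x2 table of intervention by outcome:
    #               improved   not improved
    # intervention     30           20
    # comparison       15           35
    a, b, c, d = 30, 20, 15, 35
    print((a * d) / (b * c))                      # odds ratio = 3.5

    # Cohen's d, a standardized effect size comparing two group means
    treat, control = [12, 14, 15, 11, 13], [9, 10, 8, 11, 9]
    n1, n2 = len(treat), len(control)
    pooled_sd = math.sqrt(((n1 - 1) * st.variance(treat) +
                           (n2 - 1) * st.variance(control)) / (n1 + n2 - 2))
    print((st.mean(treat) - st.mean(control)) / pooled_sd)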
Required Readings:
Rubin & Babbie, chs. 8, 17.1-17.5.
Freedman, D., Pisani, R. & Purves, R. (2007). The regression line. Statistics (4th ed., pp. 202-217).
New York: W.W. Norton & Company.
Week 10 (March 18) — SPRING BREAK
Week 11 (March 25) — HOLIDAY
Week 12 (April 1) — EXPERIMENTAL DESIGNS
The Identify-&-Explore and Develop-&-Test phases of the framework are concerned mainly with what
researchers call construct validity (i.e., how reliably and validly do the study particulars represent the
higher order PICO constructs of interest?) and statistical conclusion validity (i.e., how large and reliable
is the observed statistical association between the I and the Os?). After an evaluator is reasonably
confident in the construct and statistical conclusion validity of the intervention that is being initially
implemented, the focus turns at the Compare-&-Learn phase to internal validity (i.e., is the observed
statistical association causal or would the same association have been observed without the intervention?).
Answering this last question involves imagining the difference in potential outcomes that would be
obtained if an individual client or population received the intervention of interest compared to if that
identical individual or population did not receive the intervention. This approach to causal inference
defines causal effects as the differences between potential outcomes, which are imaginable but impossible
to observe directly at the individual level. We’ll practice this approach to causal inference by viewing
snapshots from the Sliding Doors movie in which the actress Gwyneth Paltrow alternates between two
potential outcomes based on the two paths her life could take depending on whether or not by chance she
catches a train.
Even though causal effects are unobservable at the individual level (i.e., it is impossible to
observe simultaneously the same individual in both the intervention and comparison states), it is possible
to approximate the difference statistically between potential outcomes at the aggregate level. Quantitative
data analysis at the Compare-&-Learn phase involves different ways of approximating potential outcomes
using a variety of evaluation designs and statistical models. The evaluation design that poses the fewest
threats to internal validity involves random splitting of a target population into intervention and
comparison groups. Randomization helps to ensure that all observed and unobserved characteristics of the
population, on average, are not associated with the intervention (i.e., the distributions of characteristics
between the intervention and comparison groups are statistically equivalent within the bounds of chance
error). Therefore, if any differences in outcomes are later observed between the intervention and
comparison groups, you can be reasonably confident that it was exposure to the intervention that was the
cause of the difference rather than any preexisting differences at baseline (selection), changes that would
have occurred in any event (maturation), happenings that unfold over time (history), or differences in how
the measurements are made (instrumentation).
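
One way to see this potential-outcomes logic at work is a short simulation. The following Python sketch, with invented numbers, gives every unit two potential outcomes, observes only one per unit after random assignment, and shows that the difference in group means still recovers the true average causal effect.

    import random

    random.seed(510)
    N = 10_000

    # Every unit has two potential outcomes; only one is ever observed.
    y0 = [random.gauss(50, 10) for _ in range(N)]     # without the intervention
    y1 = [y + 5 + random.gauss(0, 2) for y in y0]     # with it; true effect ~ +5

    true_effect = sum(b - a for a, b in zip(y0, y1)) / N

    # Randomly split the population into intervention and comparison groups.
    idx = list(range(N))
    random.shuffle(idx)
    treated, comparison = idx[: N // 2], idx[N // 2 :]

    estimate = (sum(y1[i] for i in treated) / len(treated)
                - sum(y0[i] for i in comparison) / len(comparison))

    print(f"true average causal effect: {true_effect:.2f}")
    print(f"randomized estimate:        {estimate:.2f}")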
Required Readings:
Rubin & Babbie, ch. 12.1-12.5b.
Freedman, D., Pisani, R. & Purves, R. (2007). Controlled experiments. Statistics (4th ed., pp. 3-11).
New York: W.W. Norton & Company.
PART IV: TRANSLATIVE & CONFIRMATIVE EVALUATION:
REPLICATE, ADAPT, APPLY & IMPROVE
Week 13 (April 8) — QUASI-EXPERIMENTAL & SINGLE-CASE DESIGNS
After you are reasonably confident in the causal efficacy (internal validity) of a selected intervention,
based on either a systematic review of the literature or on an internal evaluation conducted by your
agency, the focus next turns at the Replicate-&-Adapt phase to the intervention’s external validity (i.e.,
how generalizable is the causal relationship over variations in populations, interventions, times, and
settings?). It is during this phase that we add the time (T) and settings (S) components to the well-built
question acronym to form PICOTS.
Even though random assignment remains the best design for evaluating whether an evidence-based intervention that worked there also works here, other less rigorous methods can also be used to test
the external validity of an intervention. These methods are called quasi-experimental, and they involve
finding two or more existing groups that appear to be similar and using statistical models to compare the
outcomes after one of the groups is exposed to the intervention. The reason that quasi-experimental
methods are considered less rigorous than experimental designs is that there may be other pre-existing
characteristics that are associated with exposure to the intervention, such as differences in age, education,
and motivation, which are confounded with the intervention effect and make it challenging to distinguish
between a spurious association and a true causal effect.
In order to estimate a causal effect in quasi-experimental studies, it is necessary to estimate a
statistical model that purges all important confounding influences from your analysis of the “net”
association between the intervention and the outcomes. There are many statistical methods that have been
developed to eliminate confounding influences, such as matching, contingency table analysis, partial
correlation, and regression adjustment. We will focus on the regression-adjustment approach to statistical
purging, which is the most common method for assessing both internal and external validity in quasi-experimental studies. When your interest is learning whether a causal relationship holds for an individual
client, an individual’s past can sometimes be used as the comparison baseline for examining differences
in outcomes before (pre-test) and after (post-test) an individual is exposed to an intervention. These so-called single-case evaluation designs are best applied to interventions that more rigorous evaluation and
systematic reviews have already shown to be evidence supported. They can be applied in your own
clinical practice with clients to learn whether the same beneficial effects also hold for all, only some, or
none of your particular sample of clients.
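
As a rough illustration of statistical purging, the following Python sketch simulates a confounded quasi-experiment with invented data. For brevity it adjusts by stratifying on a single binary confounder, which is the simplest cousin of regression adjustment rather than the regression approach itself.

    import random

    random.seed(510)
    N = 20_000
    rows = []
    for _ in range(N):
        motivated = random.random() < 0.5             # binary confounder
        # Motivated clients are far more likely to receive the intervention...
        treated = random.random() < (0.8 if motivated else 0.2)
        # ...and do better regardless of it; the true effect is +3.
        outcome = 50 + 10 * motivated + 3 * treated + random.gauss(0, 5)
        rows.append((motivated, treated, outcome))

    def mean_outcome(m_flag, t_flag):
        vals = [y for m, t, y in rows if m == m_flag and t == t_flag]
        return sum(vals) / len(vals)

    # Naive comparison confounds motivation with the intervention (about +9).
    t_all = [y for _, t, y in rows if t]
    c_all = [y for _, t, y in rows if not t]
    naive = sum(t_all) / len(t_all) - sum(c_all) / len(c_all)

    # Purged comparison: difference within each stratum, then averaged (about +3).
    adjusted = 0.5 * ((mean_outcome(True, True) - mean_outcome(True, False)) +
                      (mean_outcome(False, True) - mean_outcome(False, False)))

    print(f"naive difference:    {naive:.2f}")
    print(f"adjusted difference: {adjusted:.2f}")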
Required Readings:
Rubin & Babbie, chs. 12.6-12.13a, & 13.
Freedman, D., Pisani, R. & Purves, R. (2007). Observational studies. Statistics (4th ed., pp. 12-28).
New York: W.W. Norton & Company.
Week 14 (April 15) — SAMPLING & STATISTICAL SIGNIFICANCE TESTING
The concepts of sampling and statistical significance will have come up several times in the readings. We
have held off until now from examining the statistical theory underlying probability sampling and
statistical significance testing because the statistical conclusion validity of a causal inference is something
you should ponder only after you are convinced of the substantive significance of the observed
association between an intervention and an outcome. Substantive significance (also known as practical or
clinical significance) refers to the importance of an observed association from a practical standpoint
(Rubin & Babbie, 2016). With the advent of big data and sample sizes running into the tens of thousands
of cases, almost any difference will pass a test of statistical significance no matter how trivial or
unimportant the difference. Nearly all statistical packages test for statistical significance by estimating the
probability that the observed difference is distinguishable from no difference or exactly zero in the entire
population. If you were able to access data on an entire population, for example from census records, any
amount of difference between two groups could be detectable as “statistically significant” no matter how
trivial the amount.
The best approach to assessing the statistical conclusion validity of a causal inference is to decide,
before statistical significance testing, on a minimum difference that you, funders, experts, clients and
other stakeholders consider practically important. Once you have found a result that exceeds this
threshold, then it becomes important to test whether a difference of this magnitude is real or could have
arisen simply by chance because of the small size of, or the amount of dispersion in, your sample. We will run
a variety of computer simulations in class to illustrate the concepts of the central limit theorem,
confidence limits, statistical power, and null hypothesis significance testing.
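
As a preview of those simulations, here is a minimal Python sketch, with invented data, of two of these ideas: the central limit theorem and a permutation test of whether an observed difference could have arisen by chance alone.

    import random
    import statistics as st

    random.seed(510)

    # 1) Central limit theorem: means of repeated samples from a skewed
    #    population pile up in a near-normal distribution around the true mean.
    population = [random.expovariate(1 / 20) for _ in range(100_000)]
    sample_means = [st.mean(random.sample(population, 50)) for _ in range(2_000)]
    print(f"population mean {st.mean(population):.1f}, "
          f"mean of sample means {st.mean(sample_means):.1f}, "
          f"standard error {st.stdev(sample_means):.2f}")

    # 2) Permutation test: under the null of no difference, group labels are
    #    arbitrary, so reshuffle them and count how often chance alone yields
    #    a difference at least as large as the one observed.
    treat = [14, 18, 11, 16, 15, 17, 13, 19]
    control = [12, 10, 14, 9, 13, 11, 12, 10]
    observed = st.mean(treat) - st.mean(control)

    pooled = treat + control
    extreme = 0
    for _ in range(10_000):
        random.shuffle(pooled)
        diff = st.mean(pooled[:8]) - st.mean(pooled[8:])
        if abs(diff) >= abs(observed):
            extreme += 1
    print(f"observed difference {observed:.2f}, simulated p = {extreme / 10_000:.4f}")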
Required Readings:
Rubin & Babbie, chs. 17.6-17.7 & 11
Week 15 (April 22) — ETHICAL ISSUES
As Rubin and Babbie (2016) emphasize in the readings, social workers have both a professional and an
ethical responsibility to utilize research and contribute to the development of the profession’s knowledge
base. There are times, however, when development of the knowledge base obliges practitioners to adhere
to research protocols that may appear to go against the grain of what “feels” like the right thing to do.
The word “feels” is in quotations because intuition may not always be the best guide for deciding what to
do. For example, Scared Straight was a widely implemented program in the 1970s, which brought
juvenile offenders into prisons where convicts tried to scare youths by portraying the harsh nature of
prison life. Was it ethical to spread this popular program before the efficacy of the innovation had been
rigorously tested through randomized controlled experimentation? Similarly, social work often involves
intrusion into a person’s private life, which the person hasn’t requested. Is it ethical to share information
obtained from such encounters with agency researchers without the person’s consent even if the
information is stripped of personal identifiers? As Rubin and Babbie (2016) note, decisions about the
ethics of research involve subjective value judgments that must weigh the contributions of the research to
social work knowledge against its potential risks to research subjects. This week’s reading examines a
number of ethical issues and dilemmas that arise when trying to balance these competing goals. We shall
consider the topics of informed consent, anonymity, and the inherent conflict between a client’s desire to
receive services and the ethical responsibility to evaluate service effectiveness.
Required Readings:
Rubin & Babbie, ch. 5.
The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research.
(1979). The Belmont report: Ethical principles and guidelines for the protection of human
subjects of research. Washington, DC: U.S. Government Printing Office.
Week 16 (April 29) — FINAL EXAM