THE UNIVERSITY OF NORTH CAROLINA AT CHAPEL HILL
SCHOOL OF SOCIAL WORK
COURSE NUMBER: SOWO 910
COURSE TITLE:
Research Methods for Social Intervention
SEMESTER & YEAR: Fall, 2011
INSTRUCTOR:
Mark Testa, PhD
Spears-Turner Distinguished Professor
School of Social Work
University of North Carolina at Chapel Hill
Tate-Turner-Kuralt Building
325 Pittsboro St., Campus Box 3550
Chapel Hill, NC 27599-3550
Tel: (919) 962-6496 Fax: (919) 962-1486
mtesta@unc.edu
OFFICE HOURS:
Fridays, 12:00 – 1:30 PM or by appointment
COURSE DESCRIPTION:
This course provides an introduction to basic research processes and methods for use in
planning, implementing, evaluating, and improving social interventions at the formative,
summative and translational stages of program implementation and evaluation. Topics include
outcomes monitoring, problem formulation, needs assessment, construct measurement, research
review, human subjects protection, evaluation design, implementation integrity, data analysis,
and the application of findings to practice improvement and theory refinement.
COURSE OBJECTIVES:
This course affords students an opportunity to gain knowledge about the following issues in
social intervention research:
• The need for broadly inclusive processes to plan, implement, and evaluate social interventions at the formative, summative, and translational stages of program implementation and evaluation, and how researchers' approaches to these processes can facilitate or impede research;
• The quantitative-comparative experimental (potential outcomes) paradigm that currently prevails in social intervention research;
• How various policy and implementation constraints sometimes necessitate the use of designs other than fully randomized experiments;
• Special legal and ethical issues pertaining to the protection of human subjects; and
• The need for culturally aware social intervention research that is responsive to the diversity of community values and preferences.
Students taking the course will be able to:
• develop "well-built" research questions for estimating the causal impact of social interventions on desired outcomes for target populations;
• develop logic and other conceptual models to support proposed social interventions and explicate underlying theories of change;
• assess the validity and reliability of alternative qualitative and quantitative measures of constructs in conceptual models that guide social intervention research;
• understand basic aspects of data analysis, sample design, and statistical power analysis;
• critically evaluate experimental, quasi-experimental, and non-experimental research designs by identifying the threats to the validity of each design; and
• prepare an application for IRB approval of human subjects research.
Recommended Prerequisites
SoWo 102 or equivalent
SoWo 292 or equivalent
Relationship to Other Courses in the Curriculum
This course introduces basic concepts and skills that are reinforced and further developed
in Development of Social Intervention Models (SOWO 940) and Advanced Research Methods in
Social Intervention (SOWO 913) during the 2nd and 3rd years of the program. It extends students’
knowledge about basic research processes and orients them to the variety of professional issues,
theoretical perspectives, statistical methods, and advanced methods of intervention research that
they will examine in related doctoral-level courses (SOWO 900, 911, & 914) and can pursue in
courses outside of the School.
REQUIRED TEXTS:
Ayres, I. (2007). Super crunchers: Why thinking-by-numbers is the new way to be smart. New York: Bantam Books.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin Company.
Testa, M. F. & Poertner, J. (2010). Fostering accountability: Using evidence to guide and
improve child welfare policy. Oxford: Oxford University Press.
REQUIRED AND SUPPLEMENTARY READINGS:
Required readings outside of the textbook are listed in the course outline. Supplementary
readings are assigned at the instructor’s discretion.
TEACHING METHODS
The development of a supportive learning environment, reflecting the values of the social
work profession, is essential for the success of this class. A supportive learning environment is
fostered by listening to the ideas and views of others, being able to understand and appreciate a
point of view which is different from your own, articulating clearly your point of view, and
linking experience to readings and assignments. Everyone will appreciate your contributions to
making this a safe and respectful class for learning and growth.
CLASS ASSIGNMENTS
SEMINAR PARTICIPATION
1. Register with JSSWR: Sign up as a subscriber to the Journal of the Society for Social
Work and Research (edited by Mark Fraser) by registering online. Due September 2.
2. Obtain Certification in Protection of Human Participants: Students must complete
the online course developed by the Collaborative IRB Training Initiative (CITI) to
certify that they are familiar with the ethical principles and guidelines governing research
involving human participants. Completing the course requires several hours. Each student
is required to submit a completion certificate electronically on Sakai or by email before
September 9 (this assignment is required but is not graded).
3. Identify potential resources and collaborators: Become familiar with potential sources
of information about obtaining financial support for your research. You may also wish to
make yourself known to potential research collaborators by registering as a member of
the Community of Science. Due September 16.
4. Participate in seminar: Students are expected to read assignments and come to class
prepared to share concepts from the readings, ask questions, and respond to questions
about the material. Each student should prepare a 1-page self-evaluation of his or her
efforts to make this a productive learning experience. Award yourself 1-5 points for those
efforts (see description of grading). This is due the last day of class (December 2) and
should be submitted to the instructor electronically on Sakai or by email.
WRITING ASSIGNMENTS
During the semester, you will write five papers (varying in length from 1 to 7 pages) on
different components of the research process. The final assignment will require revision and
expansion of each of these components into a full IRB application. The written assignments are
as follows:
1. Research Question: After meeting with the instructor to discuss preliminary ideas, each
student will formulate a “well-built” PICO question that consists of the following
components: 1) the target population about which you wish to draw inferences; 2) the
outcome you intend to achieve or problem you hope to address; 3) the intervention you
are interested in evaluating; and 4) the alternative course of action with which you will
draw a comparison (e.g., no intervention, regular services, or different interventions).
This one (1)-page paper, including a brief statement about the significance of the problem or outcome you are studying, will be due September 16.
2. Research Review: Conduct a computerized search of electronic databases using keywords from your PICO question. Select the strongest four to six (4-6) studies that bear on your topic. Write up a research review using narrative descriptions that also assess the strength of the evidence supporting the use of your intervention and identify the limitations of these studies and their applicability to your population. This paper (5-7 pages) will be due October 7.
3. Logic Model: Expand your PICO question into a logic model that lays out the expected causal mechanisms and mediating pathways from the intervention to the desired outcome. Your description of the target population should also list any population conditions that you believe may moderate the intervention's impact on the outcome. The model should also enumerate any background factors and external conditions that contribute to the significance of the problem you are addressing. It should list the key assumptions of the theory of action you are positing will effectuate the desired change. Finally, the model should identify general end-values for reconciling diverse outcomes and evaluating the ultimate worth of the resulting change. A modified Logic Model Template for filling in this one (1)-page figure is available under Resources on Sakai. It will be due October 14.
4. Measurement Review: Identify and assess the relative strengths and weaknesses of alternative measures or approaches to measuring a construct (population, intervention, or outcome) referenced in your research question. The purpose of the exercise is to select the measure (or set of measures) that will yield the most valid and reliable data concerning this construct. This paper (5-7 pages) will be due October 28.
5. Evaluation Design: Based on your research question, you will outline the basic features of an experimental or quasi-experimental design for evaluating the impact of the identified social intervention on your outcome construct. The description should identify the unit(s) of analysis, comparison group(s), and how threats to the validity of your research will be addressed. The discussion should provide a rationale for the method you are proposing and explain how concessions to design constraints may make the research vulnerable to criticism. This 5-7 page paper will be due November 11.
6. IRB Application: This final assignment will follow the instructions issued by the UNC Office of Human Research Ethics for application for IRB approval. We will be reviewing various resources throughout the course that will be helpful in preparing it. It will incorporate material from each of the other written assignments, as well as a description of risks to human subjects and measures to minimize those risks and a discussion of the benefits to subjects and/or society. The application form can be downloaded from http://research.unc.edu/offices/human-research-ethics/researchers/forms/index.htm. The Word document runs 18 pages and, with the narrative, can expand to 25-30 pages. This application will be due on December 2.
WRITING ASSIGNMENT GUIDELINES:
All written assignments must be typed and follow APA format. Several writing resources
are posted on the website. Students should also refer to the following:
• American Psychological Association. (2009). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author.
• Note: You can find a self-paced tutorial for APA style at http://www.lib.unc.edu/instruct/citations/apa/index.html
ONLINE RESOURCES:
Course materials for SOWO 910 will be accessible to you at
https://sakaipilot.unc.edu/portal/. In addition, Angela Bardeen, Behavioral and Social Sciences
Librarian at UNC, has created a SOWO 910 course library website for finding articles, evidence-based practice resources, measurement tools, and human subjects protection resources. It can be
accessed at http://ris.lib.unc.edu/course-guide/124.
COURSE PERFORMANCE ASSESSMENT:
Final grades will be determined on the basis of points earned on each assignment and on
participation in seminars. Letter grades will correspond to the following point totals:
94-100: High Pass
80-93: Pass
70-79: Low Pass
< 70: Fail
Letter grades in this course generally follow a distribution with a mean of 87 and a standard
deviation of 6 points.
Points are apportioned to individual assignments as follows:
1. Seminar participation: 10 points (5 points by instructor and 5 points by self-evaluation)
2. Research Question: 10 points
3. Research Review: 15 points
4. Logic Model: 10 points
5. Measurement Review: 15 points
6. Evaluation Design: 15 points
7. IRB Application: 25 points
POLICY ON INCOMPLETES AND LATE ASSIGNMENTS
The instructor will entertain requests to hand in assignments late only in circumstances of
special hardship or emergency. The possibility of handing in a paper late must be discussed
with the instructor in person at least three days before the assignment is due. If not approved
beforehand, late assignments will lose five points for each day they are late,
including weekends. A grade of Incomplete is given on rare occasions when there is sufficient
reason to warrant it. It is the student’s responsibility to initiate a conversation with the instructor
to request an Incomplete—instructors have no responsibility to give an Incomplete without such
a request.
POLICY ON ACADEMIC DISHONESTY
Please refer to the SSW Writing Resources and References website for information on
attribution of quotes, plagiarism, and the appropriate use of assistance in preparing assignments.
All written assignments should contain a signed pledge from you stating, "I have not given or
received unauthorized aid in preparing this written work."
In keeping with the UNC Honor Code, if reason exists to believe that academic
dishonesty has occurred, a referral will be made to the Office of the Student Attorney General for
investigation and further action as required.
POLICY ON ACCOMMODATIONS FOR STUDENTS WITH DISABILITIES
Students with disabilities that affect their participation in the course and who wish to
have special accommodations should contact the University’s Disabilities Services and provide
documentation of their disability. Disabilities Services will notify the instructor that the student
has a documented disability and may require accommodations. Students should discuss the
specific accommodations they require (e.g. changes in instructional format, examination format)
directly with the instructor.
OUTLINE OF CLASS TOPICS
(Reading assignments are to be completed before class.)

UNIT I: INTRODUCTION TO SOCIAL INTERVENTION RESEARCH
Week 1 (August 26): PICO Questions, Logic Models & Other SHAs (No Class). Start readings and complete the PICO & Logic Model exercise for next week.
Week 2 (September 2): Overview of Social Intervention Research. Readings: Testa & Poertner, pp. 75-100; Kenny et al., pp. 294-324.
Week 3 (September 9): Agency Integrity and Scientific Validity. Readings: Testa & Poertner, pp. 3-13; Shadish, Cook & Campbell, pp. 34-42; Merton, pp. 267-276.

UNIT II: FORMATIVE, SUMMATIVE & TRANSLATIONAL RESEARCH
Week 4 (September 16): Outcomes Monitoring. Readings: Testa & Poertner, pp. 114-135; Goodwin, pp. 100-106.
Week 5 (September 23): Data Analysis. Readings: Ayres, pp. 1-45; Testa & Poertner, pp. 136-147, 153-165; Freedman et al., pp. 202-217.
Week 6 (September 30): Evaluation Designs. Readings: Ayres, pp. 46-80; Testa & Poertner, pp. 269-290; Freedman et al., pp. 3-28.
Week 7 (October 7): Research Reviews. Readings: Ayres, pp. 81-102; Testa & Poertner, pp. 166-194; Shadish, Cook & Campbell, pp. 417-455.

UNIT III: INTERVENTION EVALUATION DESIGNS
Week 9 (October 28): Quasi-experimental Designs. Readings: Testa & Poertner, pp. 205-230; West et al., pp. 1359-1366; Doyle, pp. 1-9.
Week 10 (November 4): Experimental Designs. Readings: Testa & Poertner, pp. 195-205; Boruch et al., pp. 330-353.

UNIT IV: RESULTS-ORIENTED ACCOUNTABILITY
Week 8 (October 14): Statistical Precision & Power Analysis. Readings: Ayres, pp. 192-220; Testa & Poertner, pp. 147-152; Orme et al., pp. 3-10.
Week 11 (November 11): Human Subjects Protections. Reading: The Belmont Report.
Week 12 (November 18): Implementation & Quality Improvement. Readings: Ayres, pp. 103-128; Testa & Poertner, pp. 231-268, 291-327; Moynihan, pp. 203-216.
Week 13 (December 2): Qualitative Research and Reflexive Practice. Readings: Ayres, pp. 156-191; Testa & Poertner, pp. 357-379; D'Cruz et al., pp. 73-90.

(No class on October 21, Fall Break, or November 25, University Holiday.)
UNIT I: INTRODUCTION TO SOCIAL INTERVENTION RESEARCH
Week 1 (August 26)— PICO QUESTIONS, LOGIC MODELS, AND OTHER SHAs
(No Class)
The research process begins with the formulation of a well-built research question that can be
parsed into the four components of population, intervention, comparison, and outcome. Grouped
together under the acronym PICO, the formulation of a well-built question guides the process of
data analysis, computerized research reviews, and the construction of logic models. There are a
variety of designs for constructing logic models. The design used in this course elaborates on the
PICO framework and visually depicts the mediating activities that link interventions and
population conditions to the short-term outputs and proximal outcomes produced by the activities
and the longer-term distal outcomes these intermediate actions are expected to produce. A full
logic model also identifies the external conditions that prompted concern over the problem, the
underlying theory of change, end-values for evaluating the ultimate worth of the resulting
change, and any moderating conditions that may affect the intervention’s impact.
PICO questions and logic models can be considered examples of what Professor James Flynn
calls Shorthand Abstractions (SHAs). These refer to concepts drawn from science which make
people smarter by providing widely applicable templates. Concepts such as control group,
random sample, regression coefficient, the 2SD rule, placebo, and falsifiability are examples of
SHAs that make available complex ideas in unified cognitive chunks that can be used as
elements in critical thinking and debate. As you read the assigned readings for each class, take
note of a concept or two that you think could qualify as a SHA. The concept can be understood
in a broad sense as a valid and reliable way of gaining knowledge as long as it is a rigorous
conceptual tool that may be summed up succinctly in a word or a phrase and has broad
application to interpreting the world. Come prepared to each class with a SHA that you found
improved or challenged your thinking and which you think could also improve the cognitive
toolkit of your fellow classmates and instructor.
In preparation for contributing to the course list of SHAs, you should spend this first week
working on the PICO question (Testa and Poertner, 2010: 81-82) that you think Kenny et al.
(2004) addressed in next week’s reading assignment. Also construct a logic model of the
intervention they studied using the modified template posted on Sakai under Resources and
described in Testa and Poertner (2010: 85-98). Submit both the PICO question and logic model
electronically on Sakai and bring hard copies to next week’s class.
Week 2 (September 2)— OVERVIEW OF SOCIAL INTERVENTION RESEARCH
Social intervention research serves four distinct but interrelated purposes: 1) formulating or
shaping a social intervention to improve practice and policy (formative research); 2) rendering a
summary judgment of the efficacy and effectiveness of a social intervention (summative
research); 3) translating empirically supported interventions to different local contexts and
sub-populations (translational research); and 4) describing and explaining the effects of a social
intervention as a contribution to scientific knowledge and theory. Although the research process
varies somewhat depending on the specific purpose, in general, it builds on a common
foundation that conceives of social intervention research as cycling through the following five
successive stages: 1) outcomes monitoring, 2) data analysis, 3) research review, 4) evaluation
design, and 5) quality improvement.
The course is organized around these five stages of social intervention research, which
align with five principles of agency integrity and four types of scientific validity as illustrated in
Figure 1. A valuable lesson from social intervention research is that agency success in attaining
social work outcomes depends on drawing a distinction between implementation integrity and
intervention validity. Results-oriented accountability involves
holding practitioners and other agents answerable both for the integrity of the actions they take
on behalf of their clients and other principals and for the validity of those interventions in
achieving the outcomes valued by their principals and the public at large. An agency’s failure to
achieve the intended outcomes thus may reflect either a problem with the integrity of the
implementation or a problem with the validity of the intervention itself. During the semester, we
shall examine the inter-relationships between implementation integrity and intervention validity
in conducting socially responsible and culturally sensitive social intervention research.
Your engagement with these issues begins by identifying yourself to potential collaborators by
registering as a member of the Community of Science.
Required Readings:
Testa & Poertner, pp. 75-100.
Kenny, D.A. et al. (2004). Evaluation of treatment programs for persons with severe mental
illness: Moderator and mediator effects. Evaluation Review, 28, 294-324.
Supplementary Readings:
The Evaluation Exchange, 11(2): pp. 2-15.
Figure 1 (Cycle of Results-Oriented Accountability) depicts five stages arranged in a loop: 1) Outcomes Monitoring (scope of interest/construct validity); 2) Data Analysis (transparency/statistical validity); 3) Research Review (evidence-supported/external validity); 4) Evaluation (causality/internal validity); and 5) Quality Improvement (reflexivity/construct validity).
Week 3 (September 9)—AGENCY INTEGRITY AND SCIENTIFIC VALIDITY
(Guest Instructor, Angela Bardeen, Behavioral and Social Sciences Librarian, UNC)
Social intervention research entails working with people: those who design and implement
interventions (agents), and the funders who pay for testing them and the participants who are
affected by them (principals). Best practice involves holding
agents answerable for the integrity and validity of the actions they take on behalf of their
individual and corporate principals. As such, social intervention research is an “agency
relationship” in which agents bear responsibility for abiding by a set of scientific standards and
ethical principles in fulfilling the interests of their principals. In this respect, research
accountability goes beyond accumulating valid evidence of the efficacy and effectiveness of
child welfare interventions to ensuring that the implementation process reliably and responsibly
serves the purposes valued by clients, research sponsors, and the public at large. To be
accountable, social intervention researchers must appreciate the context in which they work and
develop the interpersonal skills as well as the technical knowledge and skills that can make them
both effective and responsible in their roles. Your engagement with these issues begins with
completing the on-line course related to the protection of human participants in research. Each
student will submit a CITI certificate indicating their completion of the course. This certification
and the automatic registration with UNC's Office of Human Research Ethics satisfy the
requirements of many of the research assistantships held by students in the doctoral program.
Scientific validity refers to the best available approximation to the truth or falsity of a given
hypothesis, inference, or conclusion and provides a set of standards by which the quality of the
research can be judged. There are four generally recognized types of scientific validity: 1)
statistical conclusion validity that is concerned with whether there is a statistically significant
association between an intervention and the desired outcome; 2) internal validity that focuses on
whether the statistical association results from a causal relationship between the intervention and
the outcome or is a spurious association; 3) construct validity that assesses the degree of
correspondence between the observational particulars of a population, intervention, comparison,
and outcome (PICO) and their higher-order constructs; and 4) external validity that addresses
how generalizable the particular causal relationships are over variations in PICO. The
demonstration of scientific validity usually proceeds cumulatively and the order of
demonstration varies with the purpose or purposes served by the social intervention research. For
example, for purposes of rendering a summary judgment of the efficacy and effectiveness of a
social intervention, the cumulative order typically proceeds from statistical conclusion validity to
internal validity, construct validity and finally to external validity.
We will be joined this class period by Angela Bardeen, Behavioral and Social Sciences
Librarian, who will provide an overview of library resources for conducting research reviews in
preparation for completing the assignment due October 7.
Required Readings:
Testa & Poertner, pp. 3-13.
Shadish, Cook & Campbell, pp. 34-42.
Merton, R. K. (1996). The ethos of science. In P. Sztompka (Ed.), Robert K. Merton: On social
structure and science (pp. 267–276). Chicago: The University of Chicago Press.
Supplementary Readings:
NASW (1997). Code of Ethics, section 5.02, Evaluation and research (available at
http://www.socialworkers.org/pubs/code/code.asp).
UNIT II: FORMATIVE, SUMMATIVE, AND TRANSLATIONAL RESEARCH
Week 4 (September 16)—OUTCOMES MONITORING
The construction of a well-built research question begins with the identification and
measurement of an outcome or set of outcomes that principals and their agents want to monitor
and, if desired, change. Outcome is synonymous with the effect, result or dependent variable of a
cause, intervention, or independent variable. In mathematical terms, it is y or a function of x: y =
f(x). In this course we will also substitute O for y in keeping with the PICO framework. In order
to estimate the effect of an intervention on an outcome, it is necessary to translate the
higher-order, theoretical construct of the outcome into a lower-order, operational measurement of the
variable. Construct validity refers then to the degree of correspondence between the higher-order
theoretical constructs and the lower-order observational particulars. To demonstrate construct
validity, you need to show evidence that the observational particulars (data) support the
theoretical structure of the construct. How researchers conceptualize this task is evolving, and we
will review some of the latest thinking on these endeavors.
Some researchers oppose the routine use of outcome indicators to improve public management.
They argue that monitoring and evaluating agency units, e.g., schools, departments, and courts,
invariably lead to the corruption of the indicators used to monitor results and to the degradation
of the agency relationships that program evaluation is supposed to improve. It is important to
acknowledge these agency risks and take cognizance of such threats to agency integrity. We will
consider the precautions that can be taken which can help decrease these agency risks and
increase the opportunities for responsible public policy and management.
Required Readings:
Testa & Poertner, pp. 114-135.
Goodwin, L. (2002). Changing conceptions of measurement validity: An update on the new
Standards. Journal of Nursing Education, 41, 100-106.
Controversial Issues:
Pelton, L. (2008). A note contesting Mark Testa's version of national foster care population
trends. Children and Youth Services Review. doi:10.1016/j.childyouth.2008.09.005
Testa, M. (2008). How the bear evolved into a whale: A rejoinder to Leroy Pelton's note
contesting Mark Testa's version of national foster care population trends. Children and
Youth Services Review. doi:10.1016/j.childyouth.2008.10.009
Week 5 (September 23)—DATA ANALYSIS
Observation of a deterioration in outcomes, which is of both practical importance and statistical
significance, does not necessarily indicate that agency performance is deficient or in need of
correction. There could be some other antecedent conditions beyond the practitioners’ and
agents’ immediate control that could be causing the difference. For example, a practically
important and statistically significant difference in post-operative mortality rates between two
hospitals may not necessarily mean that the hospital with the lower rate is the superior
performing hospital. It may instead mean that the hospital with the higher rate admits a sicker
group of patients, on average, than the other. Therefore before a summary judgment can be
rendered about agency performance or the type of improvement needed, it is first important to
identify those factors that may be clouding the comparison with data analysis methods for
purging or adjusting for external confounding influences.
In this week, we will introduce the regression method of adjusting for confounding variables that
can exaggerate or obscure the true causal effect of an intervention on an outcome. Freedman et
al. offer a simple description of the correlation coefficient and how it is easily transformed into a
regression coefficient. We will then replicate the results obtained by one of the first applications
of the regression method to a social policy question: G. Udny Yule’s investigation into the
causes of changes in pauperism in England published in 1899.
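To make the arithmetic concrete, here is a brief sketch in Python (the data are invented for illustration and are not drawn from Yule or the assigned readings) of the identity Freedman et al. develop: the regression slope equals the correlation coefficient rescaled by the ratio of the two standard deviations.

```python
import statistics

# Invented data for illustration: x = hours of service received,
# y = outcome score, for ten hypothetical cases.
x = [2, 4, 5, 7, 8, 10, 12, 13, 15, 16]
y = [20, 24, 23, 28, 30, 33, 35, 34, 40, 41]

n = len(x)
mean_x, mean_y = statistics.mean(x), statistics.mean(y)
sd_x, sd_y = statistics.stdev(x), statistics.stdev(y)

# Pearson correlation coefficient r
r = sum((xi - mean_x) * (yi - mean_y)
        for xi, yi in zip(x, y)) / ((n - 1) * sd_x * sd_y)

# The regression coefficient is r times the ratio of standard deviations.
slope = r * (sd_y / sd_x)
intercept = mean_y - slope * mean_x
print(f"r = {r:.3f}, slope = {slope:.3f}, intercept = {intercept:.3f}")
```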
Required Readings:
Ayres, pp. 1-45.
Testa & Poertner, pp. 136-147, 153-165.
Freedman, D., Pisani, R., & Purves, R. (2007). The regression line. In Statistics (4th ed., pp.
202-217). New York: W.W. Norton & Company.
Supplementary Readings:
Yule, G. U. (1899). An investigation into the causes of changes in pauperism in England, chiefly
during the last two intercensal decades (part I.). Journal of the Royal Statistical Society,
62, (2), 249-295.
Week 6 (September 30)—EVALUATION DESIGNS
The question that clients, practitioners and policymakers really want to answer from social
intervention research is what would happen if people who are given a certain treatment or
intervention option were instead denied this possibility. Of course such “potential outcomes” can
never be compared at the individual level because it is impossible simultaneously to deny and to
offer a treatment option to an individual person. Instead researchers have to fall back on high
quality approximations to this impossible “what if” scenario (what statisticians call the
“counterfactual”) by conducting rigorous studies that allow them to draw causal inferences at the
macro level.
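A short simulation may help fix the idea. The sketch below (with hypothetical data, not drawn from the assigned readings) gives each unit two potential outcomes, only one of which can ever be observed, and shows that random assignment still recovers the average effect at the group level.

```python
import random

random.seed(2011)

# Invented potential outcomes for 1,000 units: y0 if denied the option,
# y1 if offered it. In real data only one of the two is ever observed;
# a simulation lets us write down both.
units = []
for _ in range(1000):
    y0 = random.gauss(50, 10)        # potential outcome without treatment
    y1 = y0 + random.gauss(5, 2)     # potential outcome with it (effect ~ 5)
    units.append((y0, y1))

true_ate = sum(y1 - y0 for y0, y1 in units) / len(units)

# Randomization: a coin flip decides which potential outcome we observe.
treated, control = [], []
for y0, y1 in units:
    if random.random() < 0.5:
        treated.append(y1)           # offered the option; y0 stays hidden
    else:
        control.append(y0)           # denied the option; y1 stays hidden

estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"true average effect {true_ate:.2f}; randomized estimate {estimate:.2f}")
```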
A compelling case can be made for more routine use of randomized controlled experiments in
social work than is currently the practice. But there are situations in which controlled
experimentation is inadvisable, unethical, or just plain impossible. Over the past several
decades, researchers have made tremendous strides in conceptualizing the assumptions that need
to be satisfied in order to draw valid causal inferences from non-experimental research and how
best to approximate the necessary conditions using statistical methods. The material in this
section will help students acquire technical knowledge related to the development and
application of different research designs: experimental, quasi-experimental, and non-experimental.
Required Readings:
Ayres, pp. 46-80.
Testa & Poertner, pp. 269-290.
Freedman, D., Pisani, R., & Purves, R. (2007). Controlled experiments and observational
studies. In Statistics (4th ed., pp. 3-28). New York: W.W. Norton & Company.
Supplementary Readings:
Rubin, D.B. (2004). Teaching statistical inference for causal effects in experiments and
observational studies. Journal of Educational and Behavioral Statistics, 29 (3), 343-367.
Week 7 (October 7)—RESEARCH REVIEWS
Research reviews involve the explicit search and selection of relevant studies, the assessment of
their scientific validity, and a critical synthesis of their findings to reach a tentative conclusion
about the efficacy and effectiveness of social interventions. This stage of research-oriented
accountability is what the field most commonly understands as evidence-based practice (EBP). A
commonly cited definition by Sackett et al. (1997) is that EBP is “the conscientious, explicit and
judicious use of current best evidence in making decisions about the care of individual [clients].”
EBP is emerging also as the guiding paradigm within which social intervention research is
pursued by university-based researchers and applied by various actors in the areas of medicine,
psychology, public health, criminology, social work and public policy. While paradigms provide
a framework within which knowledge can be developed, they can also intellectually straitjacket
their adherents. Given this potential agency risk, there is a need to subject EBP to the same
methods of critical review and reflective assessment that its adherents espouse for validating
clinical practices and public policies.
Required Readings:
Ayres, pp. 81-102.
Testa & Poertner, pp. 166-194.
Shadish, Cook & Campbell, pp. 417-455.
Supplementary Readings:
Chalmers, I. (2003). Trying to do more good than harm in policy and practice: The role of
rigorous, transparent, up-to-date evaluations. Annals of the American Academy of
Political and Social Science, 589, 22-40.
Littell, J. (2008). Ch. 4: How do we know what works? The quality of published reviews of
evidence-based practices. In Lindsey, D. & Shlonsky, A. (Eds.) Child welfare research:
Advances for practice and policy (pp. 66-93). Oxford: Oxford University Press.
Winokur, M., Holtan, A., & Valentine, D. (2009) Kinship care for the safety, permanency, and
well-being of children removed from the home for maltreatment. Cochrane Database of
Systematic Reviews, 1, Art.No.: CD006546. DOI:10.1002/14651858.CD006546.pub2.
Controversial Issues:
Webb, S.A. (2001). Some considerations on the validity of evidence-based practice in social
work. British Journal of Social Work, 31: 57-79.
Gibbs, L. & Gambrill, E. (2002). Evidence-based practice: Counterarguments to objections.
Research on Social Work Practice, 12: 452-476.
UNIT III: INTERVENTION EVALUATION DESIGNS
Week 10 (November 4)—EXPERIMENTAL DESIGNS
The most rigorous method for drawing causal inferences about the potential effects of a
treatment is the randomized controlled experiment. By employing a chance process such as a
lottery, coin flips, or a table of random numbers to decide which persons or groups receive the
treatment option, we can
feel confident that the characteristics of the intervention group (who are offered the option) and
the comparison group (who are denied the option) are statistically equivalent on average within
the bounds of statistical error. The thinking is that if the two groups start out looking statistically
similar at the initiation of the experiment and then if any significant differences later emerge
after implementation, it is reasonable to conclude that the cause of the differences is the
intervention itself rather than any pre-existing group dissimilarities or concurrent policy changes
(which affect both groups).
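The following sketch illustrates that logic with invented data: a coin-flip assignment splits 500 hypothetical cases into two arms whose pre-existing characteristics agree within the bounds of statistical error.

```python
import random
import statistics

random.seed(42)

# Invented baseline severity scores for 500 prospective participants.
severity = [random.gauss(100, 15) for _ in range(500)]

# Assign each case by coin flip, as the text describes.
intervention, comparison = [], []
for score in severity:
    (intervention if random.random() < 0.5 else comparison).append(score)

# The two arms should be statistically equivalent on this pre-existing
# characteristic, up to sampling error.
print(f"intervention arm: n={len(intervention)}, "
      f"mean={statistics.mean(intervention):.1f}")
print(f"comparison arm:   n={len(comparison)}, "
      f"mean={statistics.mean(comparison):.1f}")
```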
Despite these advantages, some social work practitioners find randomized controlled
experiments to be ethically suspect because of their denial of services to the comparison group
and to be of limited use because of the lengthy observational period before a summative
judgment can be confidently rendered. This section will consider the appropriateness of constraining
practitioner discretion by experimental protocols and the conditions under which randomization
constitutes a justifiable interference with agent discretion when the empirical evidence for the
efficacy or effectiveness of a promising intervention is weak.
Required Readings:
Testa & Poertner, pp. 195-205.
Shadish, Cook & Campbell, pp. 279-291.
Boruch, R. F., Victor, T., & Cecil, J. S. (2000). Resolving ethical and legal problems in
randomized experiments. Crime & Delinquency, 46, 330-353.
Supplementary Readings:
Testa, M. (2002). Subsidized guardianship: Testing an idea whose time has finally come. Social
Work Research, 26 (3), 145-158.
Ludwig, J., Liebman, J.B., Kling, J.R., Duncan, G.J., Katz, L.F., Kessler, R.C. & Sanbonmatsu,
L. (2008). What can we learn about neighborhood effects from the Moving to Opportunity
experiment? American Journal of Sociology, 114(1), 144-188.
October 21: Fall Break – No Class
Week 9 (October 28)—QUASI-EXPERIMENTAL DESIGNS
(Guest Instructor, Joseph Doyle, MIT Sloan School of Management)
Although experimental research is the "gold standard" in social work and many related fields such
as medicine, education, mental health, and criminology, randomization in and of itself provides
no guarantee of construct or statistical validity. Sample attrition, failure to receive the intended
intervention, and crossovers from the comparison to the intervention group can result in
misestimates of the actual treatment effect. Many randomized controlled experiments in social
work are actually better understood as “randomized encouragement designs” that involve the
randomization of subjects to an encouragement condition, which is intended to induce
compliance with an intended plan of treatment. As a consequence, studies that start out as
experimental can end up "quasi-experimental" in the end because of differential selection into
alternative compliance states (e.g., compliers, defiers, always treated, never treated).
Evaluation methods that approximate the statistical equivalence that is best obtained through
random assignment are called quasi-experiments. Econometricians and statisticians have
developed a variety of statistical methods for handling selection biases in order to uncover the
genuine causal effect of an intervention on an outcome. The most commonly used approaches for
adjusting for selection biases in the absence of randomization include regression discontinuity
designs, propensity score and other matching methods, and instrumental variable analysis. This
section will also look at recent developments in the estimation of intent-to-treat (ITT) and
treatment-on-treated (TOT) effects in randomized encouragement designs.
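As a rough illustration of the ITT/TOT distinction, the sketch below simulates a hypothetical randomized encouragement design with imperfect take-up and recovers the treatment-on-treated effect with the standard Wald (instrumental variable) ratio: ITT divided by the difference in take-up rates. All numbers are invented.

```python
import random

random.seed(7)

# Invented randomized encouragement design with 2,000 cases. Encouragement
# (z) is randomized; take-up of the treatment (d) is imperfect: 70% of the
# encouraged and 15% of the non-encouraged actually receive it. The
# treatment raises the outcome by 10 points for those who receive it.
rows = []
for _ in range(2000):
    z = random.random() < 0.5
    d = random.random() < (0.70 if z else 0.15)
    y = random.gauss(50, 10) + (10 if d else 0)
    rows.append((z, d, y))

def mean(values):
    return sum(values) / len(values)

encouraged = [(d, y) for z, d, y in rows if z]
others = [(d, y) for z, d, y in rows if not z]

# Intent-to-treat effect: compare outcomes by assignment, ignoring take-up.
itt = mean([y for _, y in encouraged]) - mean([y for _, y in others])

# First stage: how much did encouragement shift take-up?
takeup_gap = mean([d for d, _ in encouraged]) - mean([d for d, _ in others])

# Wald/IV ratio: the effect on those induced into treatment (~10 here).
tot = itt / takeup_gap
print(f"ITT = {itt:.2f}; take-up gap = {takeup_gap:.2f}; TOT = {tot:.2f}")
```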
Required Readings:
Testa & Poertner, pp. 205-230.
West, S. G., Duan, N., Pequegnat, W., Gaist, P., Des Jarlais, D. C., Holtgrave, D., Szapocznik, J.,
Fishbein, M., Rapkin, B., Clatts, M., & Mullen, P. D. (2008). Alternatives to the
randomized controlled trial. American Journal of Public Health, 98(8), 1359-1366.
Doyle, J. J. (2011). Causal effects of foster care: An instrumental-variables approach. Children
and Youth Services Review. doi:10.1016/j.childyouth.2011.03.014
Supplementary Reading:
Angrist, J.D. (2006). Instrumental variables methods in experimental criminological research:
What, why and how. Journal of Experimental Criminology, 2, 23-44.
UNIT IV: RESULTS-ORIENTED ACCOUNTABILITY
Week 11 (November 11)—HUMAN SUBJECTS PROTECTION
(Guest Instructor, Mary Anne Salmon, Clinical Associate Professor, UNC)
This section will provide the “nuts-and-bolts” on how to prepare a UNC application for IRB
approval of research proposals. We will be joined by Mary Anne Salmon who will provide an
overview of the IRB application process at the School. After this, you should have the tools and
knowledge needed to complete the final assignment.
Required Readings:
The National Commission for the Protection of Human Subjects of Biomedical and Behavioral
Research (1979). The Belmont Report: Ethical Principles and Guidelines for the
Protection of Human Subjects of Research.
Supplementary Readings:
White, V. M., Hill, D.J. & Effendi, Y. (2004). How does active parental consent influence the
findings of drug-use surveys in schools? Evaluation Review, 28, 246-260.
Week 8 (October 14)—STATISTICAL PRECISION AND POWER ANALYSIS
When steady progress is being made toward remedying a problem or attaining a desired
outcome, the monitoring process continues with another round of assessment and review. Otherwise,
the detection of a worrisome gap between a valued result and an observed outcome may signal
the need for some corrective action or at least a reasonable accounting for the shortfall. When the
gap is of both practical importance and statistical significance, data analysis can be initiated to
identify the areas for improvement and the risk factors that may be contributing to the result. The
first step in assessing whether a gap is worrisome enough is to decide on the smallest difference
or distance that is deemed important enough to matter. This is best done in conjunction with
practitioners and administrators who are able to draw comparisons from experience, history, an
established standard, or some other reference point. With the availability of large administrative
databases, an assessment of practical importance can proceed without much concern for the
statistical significance of the conclusion. However when information is based on a sample from
a larger population, there is always a chance that the conclusion will be wrong due to the lack of
complete information about the population. Sampling theory provides a basis for evaluating the
chances of error and for taking those risks into account when designing a study.
Statistical power is a concept that has been commanding greater attention in social intervention
research. It refers to the probability that a real difference, effect size, or pattern of association of
a certain magnitude in a population will be detectable in a particular study. Power analysis
proceeds from assumptions about available sample size and the amount of error one is willing to
tolerate. It can also be used to determine sample size. To do this, we set an effect size that we
want to be able to detect and select a high probability (typically, 80%) that a statistical test will
reject the null hypothesis when the hypothesis of no effect is false. In this section, we will look at
some statistical power software that is typically used to determine the sample size necessary to
demonstrate an intervention’s effect.
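As one example of such software, the sketch below uses the statsmodels Python package (assuming it is installed; any comparable power calculator would do) to solve for the per-group sample size needed to detect a medium standardized effect with 80% power.

```python
# A sketch assuming the statsmodels package is available.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Per-group n needed to detect a medium effect (Cohen's d = 0.5) with
# 80% power in a two-sided test at the conventional 5% alpha level.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                   power=0.8, alternative='two-sided')
print(f"required n per group: {n_per_group:.0f}")   # roughly 64

# Conversely, the power a study with only 40 cases per group would have:
achieved = analysis.solve_power(effect_size=0.5, nobs1=40, alpha=0.05)
print(f"power with n = 40 per group: {achieved:.2f}")
```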
Required Reading:
Ayres, pp. 192-220.
Testa & Poertner, pp. 147-152.
Orme, J. G. & Combs-Orme, T. D. (1986). Statistical power and type II errors in social work
research. Social Work Research & Abstracts, 22, 3-10.
Supplementary Readings:
Boruch, R. (2007). The null hypothesis is not called that for nothing: Statistical tests in
randomized trials. Journal of Experimental Criminology, 3, 1–20.
Week 12 (November 18)— IMPLEMENTATION AND QUALITY IMPROVEMENT
Quality improvement requires that researchers move beyond causal description and acquire a
behavioral and deeper interpretative understanding of the intervening processes by which
program resources are disbursed within a social system and transformed into program outputs
and client outcomes. Initially, quality improvement takes the form of repeating a "single-loop"
learning cycle several times as needed to ensure that the intervention is supplied in sufficient
dosage and with adequate fidelity to the original program design. If the results of the formative
or translational evaluations fall short of the desired targets, show no change, or else run contrary
to other end-values, quality improvement takes the form of a "double-loop" learning cycle in
which the existing theory of action and its assumptions and values are questioned and reflexively
changed.
Required Readings:
Ayres, pp.103-128.
Testa & Poertner, pp. 231-268, 291-327.
Moynihan, D. P. (2005). Goal-based learning and the future of performance management. Public
Administration Review, 65(2), 203-216.
Supplementary Readings:
McBeath, B. & Meezan, W. (2009). Governance in motion: Service provision and child welfare
outcomes in a performance-based, managed care contracting environment. Journal of
Public Administration Research and Theory, The State of Agents: Special Issue, 20, i101-i123.
November 25 University Holiday – No Class
Week 13 (December 2)— QUALITATIVE RESEARCH AND REFLEXIVE PRACTICE
Some of the misunderstanding and distrust that occurs during the conduct of research arises from
the failures of researchers to acquire an interpretative understanding of client values and
perspectives and to take account of those preferences. To establish the mutual trust upon which a
research enterprise must rest, it is important to create recurring opportunities for the impulses,
desires, and values of the intended beneficiaries to enter into the design of a social intervention
and continuous revision of existing routines in light of new knowledge about the impact of those
practices. The coordination of the quantitative results with the qualitative feedback from clients,
practitioners, and researchers through peer review and “learning forums” constitutes an essential
ingredient in the integration of ethical, evidentiary, and practical concerns that is necessary for
the validity and integrity of social intervention research.
Required Readings:
Ayres, pp. 156-191.
Testa & Poertner, pp. 357-379.
D’Cruz, H., Gillingham, P. & Melendez, S. (2007). Reflexivity, its meanings and relevance for
social work: A critical review of the literature. British Journal of Social Work 37, 73–90.
Supplementary Readings:
Shadish, W.R. (1995). Philosophy of science and the quantitative-qualitative debates: Thirteen
common errors. Evaluation and Program Planning, 18 (1), 63-75.