Evaluating Disaster Mental Health Programs for Children and Families

CHILD AND FAMILY DISASTER RESEARCH TRAINING AND EDUCATION
Northwest Center for Public Health Practice
Federal Sponsors

• NIMH: National Institute of Mental Health
• NINR: National Institute of Nursing Research
• SAMHSA: Substance Abuse and Mental Health Services Administration
Principal Investigators

• Betty Pfefferbaum, MD, JD (University of Oklahoma Health Sciences Center)
• Alan M. Steinberg, PhD (University of California, Los Angeles)
• Robert S. Pynoos, MD, MPH (University of California, Los Angeles)
• John Fairbank, PhD (Duke University)
Evaluating Disaster Mental Health Programs
Part I

Clark Johnson, PhD

Adapted from materials prepared by Fran Norris, PhD; Craig Rosen, PhD; and Helena Young, PhD; National Center for PTSD
Primary sources for this presentation

• Owen, J. M. (2007). Program Evaluation: Forms and Approaches. New York: Guilford Press.
• Rosen, C., Young, H., & Norris, F. (2006). On a road paved with good intentions, you still need a compass: Monitoring and evaluating disaster mental health services. In C. Ritchie, P. Watson, & M. Friedman (Eds.), Mental Health Intervention Following Disasters or Mass Violence (pp. 206-223). New York: Guilford Press.
Learning Objectives

After completing this module you will be able to:

• Identify evaluation methods that support the intervention program from conception to outcome
• Engage in evaluation activities prior to a disaster
• Recognize the barriers and challenges in conducting evaluations of disaster mental health programs
• Understand the crucial role of both community and agency stakeholders as key informants and participants in all evaluation activities
Let's start with your experience

Give an example (past, present, or future) of a program evaluation, or of a program you wish would be evaluated(!)

Please focus on:

• What is being evaluated?
• Why?
  • What is the objective of this evaluation?
  • What is the "product" this evaluation should generate?
• How?
  • Method(s)
Evaluation: Traditional Perspective

Program evaluation as a "judgment of worth":

• How good is this program?
• Did the program work?
• Was the program worthwhile from a monetary perspective?
Logic of Evaluation

• Establish criteria of worth
  • On what dimensions must the evaluand do well?
• Construct standards
  • How well should the evaluand perform?
• Measure performance and compare with standards
  • How well did the evaluand perform?
• Synthesize and integrate evidence
Steps in Conducting Program Evaluation

1. Engage the stakeholders
2. Describe how the program works
3. Articulate evaluation questions & design
4. Gather credible evidence
5. Justify conclusions
6. Share results
Evaluation: Global Perspective

• Before
  • What is needed?
  • How does this program meet these needs?
• During
  • What is happening in this program?
  • How can we improve this program?
• After
  • How good is this program?
  • Did the program work?
Categories of Evaluative Inquiry

• Proactive
  • Guides the early planning so that it incorporates the views of stakeholders and the accumulated knowledge from previous work in the field
• Clarificative
  • Makes the program's processes, objectives, and underlying assumptions explicit
• Interactive
  • Think of this as evaluation design that enables the program to make "mid-course corrections"
• Impact
  • The "traditional" evaluation category
Proactive Evaluation

Purpose: Synthesis

• What is already known should influence action.

Typical issues:

• What is the "need"?
• What is known about this problem?
  • Experience
  • Relevant literature
  • Conventional wisdom
• What is recognized as best practice in this area?
• Who are the stakeholders, and how do their perspectives differ?
Engaging Stakeholders

Who are the "stakeholders"?

• Program leaders and staff
• Communities who are served by the program
• Funding and administrative agencies

Identifying and engaging stakeholders helps to create a sense of ownership by ensuring that their perspectives are understood and that essential elements of the program are not being ignored.

However, it is also important to identify the primary client at the start of the process: Who will "own" the data, and who gets to put the "spin" on results?
How Stakeholders are Engaged

Evaluators often begin by asking:

• What will this evaluation do for you?
• What is it that you want to know?
• Who do you have to answer to?
• What does that mandating authority care about?

Evaluators often invite discussion about immediate, intermediate, and long-term concerns.

Often evaluators explore policies the stakeholder is attempting to inform or influence and incorporate these choices into the design.
Clarificative Evaluation

Purpose: Clarification

• Define (make explicit) the internal structure and functioning of an intervention or program.

Typical issues:

• Define the program:
  • Outcomes
  • Rationale
  • Methods
• How is the program designed to achieve the outcomes?
• Is the program plausible?
Interactive Evaluation

Purpose: Improvement

• Assist with ongoing service provision and structural arrangements, with a strong emphasis on process.

Typical issues:

• What is this program trying to achieve?
• Is the delivery:
  • Working?
  • Consistent with the program plan?
• How could the delivery be changed to maximize efficiency & effectiveness?
• Is the program reaching the target population?
• Is there a site which needs attention to ensure effective delivery?
Impact Evaluation

Purpose: Learning / accountability

• Assess the effects of a completed program.
• Determine what did (and did not) work, and why.

Typical issues:

• Was the program implemented as planned?
• Did the program achieve its stated goals and objectives?
• What were the unintended outcomes?
So, what is our definition of Program Evaluation?

Program evaluation is more than a "judgment of worth"; it also contributes to:

• Planning
• Fine-tuning
• Execution

This expanded definition emphasizes the production of "useful knowledge for decision making."
Categories of Evaluative Inquiry

• Proactive
• Clarificative
• Interactive
• Impact

Great! But how, and when, is this done? The next slide series will focus on the methods associated with the various categories.
Proactive Evaluation

Major focus: Program context
State of program: None
Key approaches:

• Needs assessment
• Research synthesis (evidence-based practice)
• Review of best practice (benchmarking)
• Generate input from stakeholders, key informants, and the target population
Needs Assessment Sidebar: Focus on Problems, Not Solutions

A sampling of "needs assessment" field notes:

• "We need to minimize psychological trauma following a disaster."
• "For that purpose we need a new health center in the neighborhood."

What kind of need is this?

1. Need as the difference between the pre- and post-disaster state
2. Need as the solution

Always use the "need as discrepancy" definition when conducting a needs assessment (a brief worked sketch follows below).
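Treating need as a discrepancy makes it measurable: the need is the gap between the pre-disaster (or desired) level of an indicator and its current level. Below is a minimal Python sketch of that arithmetic; the indicator names and values are hypothetical, not drawn from any actual assessment.

```python
# "Need as discrepancy": the need is the measured gap between the
# pre-disaster (or desired) state and the current post-disaster state,
# not a preferred solution. All indicator values here are hypothetical.

indicators = {
    # indicator name: (pre-disaster level, post-disaster level), in percent
    "children screening negative for traumatic stress": (92.0, 61.0),
    "families with a usual source of care": (78.0, 55.0),
}

for name, (pre, post) in indicators.items():
    need = pre - post  # the discrepancy any proposed program should address
    print(f"{name}: need = {need:.1f} percentage points")
```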
Key words for Google search (and other useful references)

• Concept mapping
  • Sutherland & Katz (2005). Concept mapping methodology: A catalyst for organizational learning. Evaluation and Program Planning, 28, 257-269.
• Focus groups
  • Strickland (1999). Conducting focus groups cross-culturally: Experiences with Pacific Northwest Indian people. Public Health Nursing, 16(3), 190-197.
• Needs assessment
  • Roth (1990). Needs and the needs assessment process. Evaluation Practice, 11(2), 39-44.
Clarificative Evaluation

Major focus: All elements
State of program: Development
Key approaches:

• Evaluability assessment
  • Stakeholders: identify them and determine their perceptions, concerns, and interests
• Logic development: identify assumed cause-and-effect relationships as well as the interplay of resources and activities
• Ex-ante evaluation
  • An investigation undertaken to estimate the impact of a future situation
Describing How the Program Works

Evaluation is grounded in an understanding of how a program operates, known as "program theory" or a "logic model."
Example Logic Model

[Figure: flow diagram linking the disaster event and community context through inputs, activities, and outputs to outcomes. The box contents are listed below.]

• Event: type of disaster; estimated need
• Community: density, income; age & ethnic distribution
• Inputs: budget; other resources
• Activities: service mix; referrals; training; diversity activities
• Outputs: number of people served; number of counseling contacts; number of minorities served; number of children served
• Outcomes: improved functioning of individuals and families; improved community cohesion & resilience; reduced stigma about seeking treatment; legacy of a public mental health orientation
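For monitoring purposes, it can help to hold the logic model as structured data so each component can later be tied to indicators. Here is a minimal Python sketch using the components listed above; the encoding itself is illustrative and not part of the original materials.

```python
# Illustrative encoding of the example logic model as structured data.
# Component names and entries come from the slide above; holding them in
# a dict makes it easy to attach monitoring indicators to each one later.
logic_model = {
    "event": ["type of disaster", "estimated need"],
    "community": ["density, income", "age & ethnic distribution"],
    "inputs": ["budget", "other resources"],
    "activities": ["service mix", "referrals", "training",
                   "diversity activities"],
    "outputs": ["number of people served", "number of counseling contacts",
                "number of minorities served", "number of children served"],
    "outcomes": ["improved functioning of individuals and families",
                 "improved community cohesion & resilience",
                 "reduced stigma about seeking treatment",
                 "legacy of a public mental health orientation"],
}

for component, elements in logic_model.items():
    print(component.upper())
    for element in elements:
        print("  -", element)
```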
Key words for Google search (and other useful references)

• Evaluability assessment
  • Smith (1989). Evaluability Assessment: A Practical Approach. Norwell, MA: Kluwer.
• Program logic
  • Patton (1997). Utilization-Focused Evaluation (3rd ed.). Thousand Oaks, CA: Sage.
• Ex-ante evaluation
  • Ex-ante Evaluation: A Practical Guide for Preparing Proposals for Expenditure Programmes (http://ec.europa.eu/budget/evaluation/pdf/ex_ante_guide_en.pdf)
Interactive Evaluation (New Program)

Major focus: Delivery
State of program: Development
Key approaches:

• Responsive
• Action research
• Developmental
• Empowerment
• Quality review
Key words for Google search (and other useful references)

• Responsive evaluation
  • Stake (1980). Program evaluation, particularly responsive evaluation. In Dockrell & Hamilton (Eds.), Rethinking Educational Research. London: Hodder & Stoughton.
• Empowerment evaluation
  • Fetterman & Wandersman (2004). Empowerment Evaluation Principles in Practice. New York: Guilford Publications.

Also read the summary overview sections in:

• Owen (2006). Program Evaluation: Forms and Approaches. New York: Guilford Publications, pp. 217-236.
Impact Evaluation

Major focus: Delivery / outcomes
State of program: Settled

A sidebar on study design follows.
Designs for Outcome Evaluation

• Pre-experimental or pre-post
  • In the simplest case, consumers are compared with themselves before and after an intervention.
• Experimental
  • When people are randomly assigned to receive the intervention or not, the groups are equivalent in all ways other than receipt of the intervention, so it is reasonable to attribute differences to the intervention.
• Quasi-experimental
  • Sometimes it is possible to identify a reasonable comparison group even though people were not randomly assigned. When this is not possible, repeated measures are helpful (see the analysis sketch after this list).
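To make the contrast between these designs concrete, here is a brief simulation sketch in Python (hypothetical numbers; assumes numpy and scipy are installed) showing how a pre-post design and an experimental design are each typically analyzed.

```python
# Contrast of a pre-post analysis with an experimental (randomized)
# analysis, using simulated symptom scores. All numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 60

# Pre-post: the same consumers are measured before and after the
# intervention, so each person serves as their own comparison.
pre = rng.normal(40, 6, n)
post = pre - rng.normal(5, 4, n)  # simulated improvement
t_paired, p_paired = stats.ttest_rel(pre, post)
print(f"pre-post paired t = {t_paired:.2f}, p = {p_paired:.3f}")

# Experimental: random assignment makes the groups equivalent except for
# the intervention, so follow-up differences can be attributed to it.
usual_care = rng.normal(38, 6, n)        # service as usual, at follow-up
new_intervention = rng.normal(33, 6, n)  # new intervention, at follow-up
t_ind, p_ind = stats.ttest_ind(new_intervention, usual_care)
print(f"experimental two-sample t = {t_ind:.2f}, p = {p_ind:.3f}")
```

The pre-post test cannot rule out natural recovery over time; the randomized comparison can, which is why the next slides treat it as the "gold" standard.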
Pre-Post Designs: Hypothetical Results

• Longitudinal measures
• Change over time
• Better than retrospective estimates of "change"

[Figure: hypothetical plot of symptom scores over time (y-axis 25-45) at Pre, Post, and Follow-up]
When Are Pre-Post Designs Adequate, and When Not?

• Pre-post designs are adequate to assess an immediate outcome, such as knowledge gained, that normally would not change with time.
• Pre-post designs are typically inadequate for evaluating intermediate or long-term outcomes, because other things not controlled for can account for the change. People receiving the intervention might have improved anyway, since symptoms normally improve over time and people may be most likely to seek help when their distress is at its peak.
• Pre-post designs are often used for pilot testing to justify the cost of experimental designs later.
Experimental and Quasi-Experimental Designs: Hypothetical Results

• The "gold" standard
• Randomized
• Treatment vs. control

[Figure: hypothetical plot of symptom scores over time (y-axis 25-45) at Pre, Post, and Follow-up for two groups: service as usual vs. new intervention]

What can we infer?

• New intervention > service as usual
• A persistent effect
Outcome Evaluation: What Do You Do When There Is No Feasible Comparison Group?

An example: "InCourage," the Baton Rouge Area Foundation's mental health initiative.
Repeated Assessment as a Quasi-Experimental Strategy

• In the BRAF initiative, each client receives "Treatment for Postdisaster Distress," which requires 8-10 sessions. The first two sessions are psycho-education, very much like crisis counseling. The heart of the treatment (including cognitive restructuring, or "CR") begins at Session 3.
• Each client is assessed (briefly) at five time points:
  • At point of referral
  • At enrollment (beginning of first session)
  • At beginning of third session
  • At beginning of last session
  • At follow-up (3 months after completion)

One way these five assessments can be used analytically is sketched below.
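If improvement is concentrated after Session 3, when CR begins, a treatment effect is more plausible than natural recovery, which would instead show steady improvement from referral onward. The Python sketch below simulates that comparison; the data and the analysis choice are illustrative, not the BRAF initiative's actual method.

```python
# Using the five repeated assessments to separate natural recovery from
# a treatment effect. All data below are simulated and hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_clients = 40

# Columns: referral, 1st session, 3rd session, last session, follow-up.
# Simulated pattern: little change before Session 3 (psycho-education),
# a larger drop once cognitive restructuring begins.
baseline = rng.normal(25, 4, (n_clients, 1))
scores = np.hstack([
    baseline,
    baseline - rng.normal(1, 2, (n_clients, 1)),   # 1st session
    baseline - rng.normal(2, 2, (n_clients, 1)),   # 3rd session
    baseline - rng.normal(10, 3, (n_clients, 1)),  # last session
    baseline - rng.normal(11, 3, (n_clients, 1)),  # follow-up
])

pre_phase_change = scores[:, 2] - scores[:, 0]  # referral -> 3rd session
tx_phase_change = scores[:, 3] - scores[:, 2]   # 3rd -> last session

# If the drop is concentrated in the treatment phase, a treatment effect
# is more plausible than "people improve anyway over time."
t, p = stats.ttest_rel(tx_phase_change, pre_phase_change)
print(f"mean change, pre-treatment phase: {pre_phase_change.mean():.1f}")
print(f"mean change, treatment phase:     {tx_phase_change.mean():.1f}")
print(f"paired t = {t:.2f}, p = {p:.3f}")
```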
The treatment effect is plausible if, on average, the data looked something like this:

[Figure: hypothetical plot of total distress scores (y-axis 0-30) at Referral, 1st Session, 3rd Session, Last Session, and Follow-up]

Why?
We'd have less confidence if, on average, the data looked like this:

[Figure: a second hypothetical plot of total distress scores (y-axis 0-30) at the same five assessment points]

Why?
What are the Lessons Here?

• There is middle ground between "clinical trials" and simple "pre-post" designs. It is usually true that "something is better than nothing."
• Although there is only one group, the repeated assessments allow us to evaluate competing explanations of observed improvements.
• Comment: as the example indicates, a quasi-experimental design can be used to demonstrate that no effect exists, but it usually will not provide convincing evidence (beyond plausibility) that an observed effect was "caused" by the intervention.
Let's take a break

When we come back, we'll focus on moving these concepts from theory to practice.
We will start the next session in about 10 minutes and will begin with a discussion of the following text:

"Disaster research is different from most other fields in that much of the work is motivated by a sense of urgency and concern. Disaster research has both benefited and suffered from this. It has benefited because the cadre of researchers is fluid, and new ideas are accepted and welcomed. It has benefited also because the result has been an impressively diverse database that includes samples from all different regions of the United States [...]. However, disaster research has also suffered from this situation. Scholarship is not always the best because studies often are undertaken under conditions where there simply is not time to absorb a literature that is scattered across a variety of journals and is mixed in quality. Concerns about experimental designs and scientific rigor must often take a back seat to provider beliefs, consumer demands, and clinical necessities. Most of the research is atheoretical and little of it is programmatic. On the basis of this review, we will state our opinion unequivocally that we do not need more research that establishes only that severely exposed disaster victims develop psychological disorders or, worse, that barely exposed disaster victims do not. We need carefully conceived and theory-driven studies of basic process that are longitudinal in design. [...] We need more research that addresses the needs of diverse populations. We need more complex studies of family systems and community-level processes. We need to identify and investigate novel approaches to community intervention, where the intervention itself has been designed to produce collective rather than individual improvements."

Source: Norris, Friedman, & Watson (2002). 60,000 disaster victims speak: Part II. Summary and implications of the disaster mental health research. Psychiatry, 65(3), 240-260.