Measurement and Data Collection

Outcomes and Program Improvement: Designing Effective Evaluations
Understanding Measurement Tools and Data Collection
Presented by
Christine A. Ameen, Ed.D.
Ameen Consulting & Associates
(ameenca@ameenconsulting.com)
Agenda
1. Brief overview of Continuous Program Enhancement
2. Levels of measurement
3. Basic measurement methods
4. Technical characteristics of quality tools
5. Steps for finding new tools
6. The last resort: creating tools
7. Tips about using tools
8. Issues related to measuring outcomes
9. Don’t forget to ask the customer!
What is Continuous Program Enhancement (CPE)?
• A commitment to ongoing assessment of service delivery and client outcomes, to achieve the best outcomes possible.
Planning for CPE
1. A program model, which describes the services delivered to participants and the outcomes participants should achieve
2. An objectives model, which defines objectives for the services and objectives for the outcomes
3. An evaluation model, which defines how data will be collected and used to monitor the attainment of process objectives and outcome objectives.
A Conceptual Framework for
Behavior Change
1. Awareness: the initial consciousness, perception, or sense of a concept
2. Knowledge/Skills: understanding or comprehension of a concept, and demonstration of the ability to apply that understanding or comprehension
3. Behavior: performance or conduct in a specified way
4. Modeling behavior: demonstration of a specified behavior to others, most often to teach the behavior to others.
The Levels of Measurement
• Nominal: labels descriptive of a variable, e.g., gender, ethnicity. These have no order and no numerical value. These data are usually reported as percentages, e.g., percentage of shoreline restoration projects completed.
• Ordinal: numerical values are assigned, e.g., a 5-point Likert scale. These have a logical order, but the distance between the points cannot be quantified. These data may be reported using percentages, e.g., percentage of participants whose participation level changes.
The Levels of Measurement
• Interval: numerical values are assigned, e.g., risk points. The distance between points is equal, but there is no true zero. These data may be reported using averages, e.g., average score on a drug use attitude survey.
• Ratio: numerical values are assigned, e.g., restitution hours. The distance between points is equal and there is a true zero. These data may be reported using averages, and averages may be compared, e.g., an average of 26 correct answers on the substance abuse quiz is twice as many as 13.
Why Does the Level of Measurement
Matter?
Level of Measurement | Description | For Services or Activities | For Outcomes
Nominal | Labels used to describe a variable, e.g., gender | Not usually used | Not usually used
Ordinal | Numerical values in a logical order, e.g., Likert scale | Frequency counts; percentages; averages (scales) | Sometimes used in calculating change scores, averages
Interval | Numerical values at equal distance, no true 0, e.g., attitudes | Not usually used | Percentages; change scores; averages
Ratio | Numerical values at equal distance, true 0, e.g., number of acres | Not usually used | Percentages; change scores; averages
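To make the distinction concrete, here is a minimal Python sketch of which summary statistics fit each level. The records and values are hypothetical and only illustrate the reporting rules above.

```python
import statistics

# Hypothetical participant records (illustrative values only).
records = [
    {"gender": "F", "participation": 4, "attitude_score": 62, "restitution_hours": 12.0},
    {"gender": "M", "participation": 2, "attitude_score": 55, "restitution_hours": 0.0},
    {"gender": "F", "participation": 5, "attitude_score": 71, "restitution_hours": 6.5},
    {"gender": "M", "participation": 3, "attitude_score": 58, "restitution_hours": 3.0},
]

# Nominal (gender): report percentages, never averages.
female_pct = 100 * sum(r["gender"] == "F" for r in records) / len(records)

# Ordinal (Likert-style participation rating): percentages are safe; averages are
# sometimes used, even though the distance between points is not truly equal.
active_pct = 100 * sum(r["participation"] >= 4 for r in records) / len(records)

# Interval (attitude score, no true zero): averages are meaningful,
# but ratio statements ("twice as positive") are not.
mean_attitude = statistics.mean(r["attitude_score"] for r in records)

# Ratio (restitution hours, true zero): averages and ratio comparisons both work.
mean_hours = statistics.mean(r["restitution_hours"] for r in records)

print(f"{female_pct:.0f}% female; {active_pct:.0f}% rated 4 or higher on participation")
print(f"Average attitude score: {mean_attitude:.1f}; average restitution hours: {mean_hours:.1f}")
```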
Let’s Try It!
1. A program wants to use a staff behavior checklist, with a 3-point rating scale, to assess the outcome of improved behavioral functioning. What level of measurement is this, and how appropriate is it in this example?
Let’s Try It!
2. A program team wants to use a restitution log to assess the outcome of increased empathy. What level of measurement is this, and how appropriate is it in this example? How might a log be used?
Basic Measurement Methods
• Observation/activity logs: service delivery, participation, etc. EXAMPLE1
• Interviews: satisfaction, risk assessment, etc. EXAMPLE2
• Surveys: self-esteem, life skills, behavioral functioning, etc. EXAMPLE3
• Formal testing or assessment: achievement tests, psychological measures, behavioral functioning, etc. EXAMPLE4
What Methods Fit Best for CPE?
Measurement Method | Examples for Services/Activities | Examples for Outcomes
Observation | Staff behavior ratings; attendance rolls; activity logs | Not usually used; requires inter-rater reliability to be used as an outcome tool
Interviews | Exit interviews; follow-up interviews | Client satisfaction
Surveys | Participant feedback | Empathy survey; environmental care survey
Formal testing or assessment | Not usually used | Achievement testing; skills assessment
The Technical Characteristics of a
Quality Data Collection Tool
1. Has VALIDITY – it measures what the program is believed to be doing or to be impacting. The judgment of how valid a tool is should be made by program staff.
   • Does the content of the tool reflect what happens in the program?
   • If the tool is an outcome measure, will the participants’ responses to the tool change over time?
2. Has RELIABILITY – it measures the program activity or service the same way every time it is used. For scales, reliability is judged against a goal of an alpha coefficient of .75.
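If pilot data are available, the alpha coefficient can be checked with a short script. A minimal sketch, using hypothetical item scores; in practice you would substitute your own pilot responses:

```python
import statistics

def cronbach_alpha(item_scores):
    """Standard Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of totals).
    item_scores: one inner list per item, each holding one score per respondent."""
    k = len(item_scores)
    n = len(item_scores[0])
    item_variances = [statistics.pvariance(item) for item in item_scores]
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    total_variance = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

# Hypothetical pilot data: 4 scale items, 6 respondents, each item scored 1-4.
pilot = [
    [3, 4, 2, 3, 4, 3],
    [3, 4, 2, 2, 4, 3],
    [2, 4, 1, 3, 3, 3],
    [3, 3, 2, 3, 4, 2],
]

alpha = cronbach_alpha(pilot)
print(f"Cronbach's alpha = {alpha:.2f} (goal: .75 or higher)")
```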
The Technical Characteristics of a
Quality Data Collection Tool
3. Can be used relatively easily by the staff
4. Does not require any special credentials to be administered
5. Has a reading level appropriate to the participants (usually between 5th and 7th grade level; a quick check is sketched below)
6. Doesn’t take longer than 10-15 minutes to administer
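For the reading-level check in point 5, one option is a readability formula such as the Flesch-Kincaid grade level. A minimal sketch with a very rough syllable heuristic; the sample item is made up, and a dedicated readability library or a word processor’s readability statistics will be more accurate:

```python
import re

def rough_syllables(word):
    """Very rough syllable estimate: count vowel groups, ignoring a trailing 'e'."""
    word = word.lower()
    if word.endswith("e"):
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(rough_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Hypothetical survey item to check.
item = "I feel confident about my ability to make good choices when I am with my friends."
print(f"Estimated grade level: {flesch_kincaid_grade(item):.1f} (aim for roughly 5th-7th grade)")
```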
The Technical Characteristics of a
Quality Data Collection Tool
7. Uses language and examples that are culturally sensitive
8. Is available at no cost
9. Has a written procedure at the local level to assure appropriate use
Steps for Finding New Measurement
Tools
1. Determine if the measurement is related to service or outcome
2. Determine if the program is already using a tool that might be modified to work
Steps for Finding New Measurement
Tools
3. Discuss the following items with the staff:
   • Define the outcome in your own words
   • Describe the kind of awareness a participant would have to have to reach this outcome
   • Describe the kind of knowledge a participant would have to have to reach this outcome
   • Describe the kind of behavior a participant would have to show to reach this outcome
   • What kinds of questions would you ask the participants that would give you answers that would help you know if the outcome was reached?
Steps for Finding New Measurement
Tools
4. If a tool does not exist, especially an outcome tool, search resources to find something that captures the information you are interested in (more on this in a minute…)
5. If the tool is to be used for a service objective, ask the staff to pilot the tool for at least two weeks to be assured it will work
6. If the tool is to be used for an outcome objective, ask the staff to pilot the tool with 30 participants, making sure to include participants with varying characteristics thought to affect participant response (e.g., gender, age, ethnicity); a quick subgroup check is sketched below
7. When piloting, assure the staff follow the procedures for how data will be collected, under the same conditions, at the same time, etc.
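Once the pilot in step 6 is complete, a quick look at whether scores differ across participant characteristics takes only a few lines. A minimal sketch, assuming the pilot results were entered into a CSV file; the file name (pilot_results.csv) and column names (gender, age_group, score) are hypothetical:

```python
import csv
from collections import defaultdict
from statistics import mean

# Hypothetical file: pilot_results.csv with columns "gender", "age_group", "score".
groups = defaultdict(list)
with open("pilot_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        groups[("gender", row["gender"])].append(float(row["score"]))
        groups[("age_group", row["age_group"])].append(float(row["score"]))

# Large gaps between subgroup averages may mean the tool reads differently
# for different participants and needs another look before full use.
for (characteristic, value), scores in sorted(groups.items()):
    print(f"{characteristic}={value}: n={len(scores)}, average score={mean(scores):.1f}")
```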
Steps for Finding New Measurement
Tools
Begin your search by looking at these resources:
• Youth Development
• Community-based Programs
• Troubled Youth
• Personal change
• Mental Health
Steps for Finding New Measurement
Tools
Begin your search by looking at these resources (continued):
• Youth violence
• Substance abuse
• Psychological skills
• Environmental programs
• Environmental Education
• Outcomes in Community Development
Finding it DOESN’T mean you
can use it!
1. Read very carefully the introductory language on the website describing the tool or tools it has. On some, permission to use them is given right there – here’s an example: http://www.ibr.tcu.edu/pubs/datacoll/intro.html
2. On other sites, you need to review the specific tool to look for evidence of copyright or permission;
3. In general, if a tool is published in its entirety in a journal article, it is in the public domain and may be used;
Finding it DOESN’T mean you
can use it!
4. It is always a good idea, when in doubt, to contact the website or the author of the tool to confirm it is in the public domain;
5. If you decide to contact the author, it is also a good idea to ask him/her for any information they have about the technical quality of the tool;
6. Sometimes the site will not have an email address for the author. If the author is affiliated with a university, I go to that website and look there. If that’s not available, I usually “Google” the person, using their name and the name of the tool – very often you will find an article referencing the tool and can go from there.
What if I can’t find anything from
our resources? Google!
1. Go to: http://www.google.com/advanced_search?hl=en
2. In the “with all of the words” box, enter these words: outcomes measures evaluation instruments
3. In the “with the exact phrase” box, enter the word or words that describe what you’re interested in measuring, e.g., decision-making
4. In the “with at least one of these words” box, enter words that might narrow the search, e.g., delinquent, youth, juvenile, offenders, etc.
More About Google!
5. Don’t be concerned if you end up with thousands of records at first. As you review what you get, you can go back to the search criteria and enter into “without the words” those words that will eliminate things that just don’t apply;
6. You will begin to notice patterns of references that will likely lead you to a potential tool – the same tool name will appear multiple times. This might be worth pursuing. If any of those references mention a publishing company or a book in which the tool is contained, it is likely NOT in the public domain.
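The same advanced search can also be written as a single query string using standard search operators: quotes for the exact phrase, OR for the “at least one of these words” terms, and a leading minus for “without the words.” A small sketch that assembles one; the terms mirror the slides, and the excluded word is only an example:

```python
def build_search_query(all_words, exact_phrase, any_of, without):
    """Combine the advanced-search boxes into one query string using standard operators."""
    parts = [all_words, f'"{exact_phrase}"']
    if any_of:
        parts.append("(" + " OR ".join(any_of) + ")")
    parts.extend(f"-{w}" for w in without)
    return " ".join(parts)

query = build_search_query(
    all_words="outcomes measures evaluation instruments",
    exact_phrase="decision-making",
    any_of=["delinquent", "youth", "juvenile", "offenders"],
    without=["dissertation"],  # hypothetical exclusion; adjust as results come back
)
print(query)
```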
The Last Resort: Creating Tools
1. Avoid using “yes/no” responses in data collection for variables where a scale can be used – a scale representing choices from 1 to 4, for example, will demonstrate change more clearly and usually more quickly
2. In building scales for data collection forms, it is often useful to use an even-numbered scale, e.g., 4 points, to avoid a middle point where the respondent can go to avoid making a choice
3. Most scales have between 4 and 7 points. Fewer than 4 makes it more difficult to show change; more than 7 may be confusing to the respondent
The Last Resort: Creating Tools
4. The points on scales should be anchored with language that helps guide how they’re to be used (e.g., 1 = not at all; 3 = sometimes, etc.). This should be included in written procedures for the tool (see the sketch below)
5. Before you get too far along, ask for feedback to make sure you’re on the right track
6. Make sure to pilot the tool
7. Here’s a great resource that will help: Guide to Developing Tools
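As a small illustration of points 1-4, here is a sketch of a single survey item using an even-numbered, anchored 4-point scale. The item wording and anchor labels are made up for the example:

```python
# Hypothetical survey item with an even-numbered, anchored 4-point scale.
ANCHORS = {
    1: "not at all",
    2: "rarely",
    3: "sometimes",
    4: "most of the time",
}

ITEM = "I think about how my actions affect other people."

def administer(item, anchors):
    """Print the item with its anchored choices and return a validated response."""
    print(item)
    for value, label in anchors.items():
        print(f"  {value} = {label}")
    while True:
        raw = input("Enter a number: ")
        if raw.isdigit() and int(raw) in anchors:
            return int(raw)
        print("Please choose one of the listed numbers.")

if __name__ == "__main__":
    response = administer(ITEM, ANCHORS)
    print(f"Recorded response: {response} ({ANCHORS[response]})")
```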
Some Tips About Tools Used for
Measuring Services or Activities
1. These tools focus on information about the delivery of the service, and often are completed by the staff rather than the participant
2. Use currently existing tools whenever possible.
3. If an existing tool is to be used, review it to assure it contains all of the information desired about the activity
4. Don’t assume staff know how to use it; do some training
5. Write a procedure about how it is to be used
Some Issues to Address Related to
Measuring Outcomes
1. Will service delivery result in the intended impact on the participant?
2. Will service delivery result in the participant being able to maintain a positive impact after leaving the program?
3. An increase or a decrease in something, e.g., empathy, requires the measure be used at the beginning and the end of the program (aka, pre-post test design); see the sketch below
   • Note: It is also possible to ask the respondent to estimate their pre-test status at the time they take the post-test only. This is called a post-retro pre design
4. The achievement of a specific level of an outcome, e.g., the family will be able to name 3 community resources they can utilize, requires the measure be used at the end of the program only (aka, post-test only design)
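For the designs in points 3 and 4, the analysis usually comes down to change scores or to the percentage reaching a target level. A minimal sketch of all three variants, with made-up scores:

```python
from statistics import mean

# Pre-post design: the same empathy scale administered at intake and at exit.
pre = [12, 15, 10, 14, 11]
post = [16, 18, 13, 17, 15]
change = [b - a for a, b in zip(pre, post)]
print(f"Average change (pre-post): {mean(change):.1f} points")

# Post-retro pre design: at exit only, each participant rates where they are now
# and estimates where they were before the program started.
retro_pre = [11, 14, 9, 13, 12]
now = [16, 17, 14, 17, 15]
retro_change = [b - a for a, b in zip(retro_pre, now)]
print(f"Average change (post-retro pre): {mean(retro_change):.1f} points")

# Post-test-only design: percentage achieving a specific level, e.g., naming 3+ resources.
resources_named = [3, 4, 2, 5, 3]
pct = 100 * sum(n >= 3 for n in resources_named) / len(resources_named)
print(f"{pct:.0f}% named at least 3 community resources")
```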
Some Issues to Address Related to
Measuring Outcomes
5. Will instruments such as surveys be read to participants? If read to some, read to all. Because reading to participants requires more staff time and supervision, doing so should be discouraged where it is not needed
6. Instruments should be administered by staff with whom participants have limited or no interaction, to reduce the potential, unintended bias of participants responding based upon their relationship to the staff administering the instrument
Don’t Forget to Ask the Customer!
1. One of the most important tools in program evaluation is customer/client feedback
2. The typical key elements to ask about are:
   • Physical Surroundings
   • Procedures
   • Interactions with Staff
   • Timeliness
   • Understanding of client need
   • Outcomes
   • General Satisfaction
Some Examples of Feedback Surveys
1. Parent Satisfaction
2. Referral Agency Satisfaction
3. Patient Satisfaction Survey
Let’s Try It!
• My program has this objective: 85% of participants who complete the program will demonstrate increased personal self-confidence. What kind of tool would be appropriate for this objective? Provide the rationale for your idea.
Let’s Try It!
• My program has this objective: 65% of participants who complete the program will be able to function independently in the community. What kind of tool would be appropriate for this objective? Provide the rationale for your idea.
Let’s Try It!
• My program has this objective: 85% of participants will experience a positive and supportive relationship with their advocate. Create a tool appropriate to this objective and provide the rationale for the tool.
Let’s Try It!
• My program has this objective: 70% of participants will spend at least two hours weekly in positive extra-curricular activities. Create a tool appropriate to this objective and provide the rationale for the tool.
Thank you for attending my session
today!
If you have any feedback for how I can
improve my training style or content,
please email me your suggestions!
(ameenca@ameenconsulting.com)