Developing Measurable and Appropriate Outcomes

National Disability Advocacy Program Reform: Measurable Performance Outcomes
Thank you for your invitation, time and attention.
What is measurable?
• Numbers - Statistics
• Relations between numbers
• Client Satisfaction
• Staff Satisfaction
• Compliance with Standards
Where comparison of data is important, the data collected has to be valid, comparable to the data other services collect, and collected in a way that can be verified.
Client and Staff satisfaction levels are very
subjective and dependent on a range of
circumstances.
Compliance with Standards is already a
requirement, although the standards and
performance indicators could be improved.
What is Performance?
Performance of service providers has to be measured against standards and performance indicators.
NDAP already has a performance measurement system
with audits required every five years.
Performance should never be measured on statistical
data only, but predominantly on compliance with
standards.
Compliance with standards must be documented in all
areas.
Performance could also be measured on statistical data.
Compliance with performance indicators is part
of an ongoing quality improvement system.
What are Performance Outcomes?
Performance outcomes, measured on statistics
only, may be (ab)used to determine funding
levels.
Outcomes could be defined as, for example (a hypothetical sketch follows this list):
• % of clients from a CALD background;
• Cost per client per year;
• Ratio of advocates to issues resolved;
• Number of clients to be assisted; etc.
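To make concrete what purely statistical outcome measures of this kind look like, here is a minimal Python sketch using invented case records; every field name and figure is an assumption for illustration, not an NDAP data definition or reporting requirement.

```python
# Illustrative sketch only: the record fields and figures below are
# hypothetical assumptions, not an actual NDAP data format or requirement.
from dataclasses import dataclass


@dataclass
class CaseRecord:
    client_id: str
    cald_background: bool  # client from a culturally and linguistically diverse background
    issues_resolved: int   # issues resolved for this client during the year
    annual_cost: float     # cost attributed to this client for the year


def outcome_metrics(records: list[CaseRecord], advocate_count: int) -> dict[str, float]:
    """Compute the kinds of purely statistical outcomes listed above."""
    clients = len(records)
    return {
        "percent_cald_clients": 100.0 * sum(r.cald_background for r in records) / clients,
        "cost_per_client_per_year": sum(r.annual_cost for r in records) / clients,
        "issues_resolved_per_advocate": sum(r.issues_resolved for r in records) / advocate_count,
        "clients_assisted": float(clients),
    }


# Example with invented numbers.
sample = [
    CaseRecord("c1", True, 2, 1800.0),
    CaseRecord("c2", False, 1, 950.0),
    CaseRecord("c3", False, 3, 2400.0),
]
print(outcome_metrics(sample, advocate_count=2))
```

Each figure depends entirely on how a "client", an "issue" and a "resolution" are counted, which is exactly why such numbers on their own are an unreliable basis for funding decisions.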
Mixing of Quantitative and Qualitative Data
Example (a hypothetical sketch follows below):
• Comparison of data about issues and the length of time worked on an issue;
• Linking level of resolution to kind of, or level of, disability.
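As a purely hypothetical sketch of this kind of mixing, the Python example below compares average time worked across issue categories and links resolution levels to kinds of disability; the categories, record layout and figures are all assumptions made up for illustration.

```python
# Hypothetical illustration of mixing quantitative data (time worked)
# with qualitative categories (issue type, disability type, resolution level).
# All records, categories and figures below are invented for illustration.
from collections import defaultdict

# (issue_category, disability_type, days_worked, resolution_level)
cases = [
    ("housing",   "physical",      14, "full"),
    ("housing",   "psychosocial",  30, "partial"),
    ("education", "intellectual",   7, "full"),
    ("education", "physical",      21, "unresolved"),
]

# Average length of time worked per issue category.
days_by_issue: dict[str, list[int]] = defaultdict(list)
for issue, _, days, _ in cases:
    days_by_issue[issue].append(days)
for issue, days in days_by_issue.items():
    print(f"{issue}: average {sum(days) / len(days):.1f} days per issue")

# Resolution level broken down by kind of disability.
resolution_by_disability: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
for _, disability, _, resolution in cases:
    resolution_by_disability[disability][resolution] += 1
for disability, counts in resolution_by_disability.items():
    print(disability, dict(counts))
```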
Qualitative research, broadly defined, means "any kind of
research that produces findings not arrived at by
means of statistical procedures or other means of
quantification" (Strauss and Corbin, 1990, p. 17) and
instead, the kind of research that produces findings
arrived from real-world settings where the
"phenomenon of interest unfold naturally" (Patton,
2001, p. 39).
Unlike quantitative researchers who seek causal
determination, prediction, and generalization of
findings, qualitative researchers seek instead
illumination, understanding, and extrapolation to
similar situations (Hoepfl, 1997).
Reliability - Validity
Examples:
• How do you identify CALD clients?
• How do you identify the category in which to place issues?
• If your funding depended on boosting your numbers, would you?
Joppe (2000) defines reliability as:
…The extent to which results are consistent over time and an accurate representation of the total population under study is referred to as reliability and if the results of a study can be reproduced under a similar methodology, then the research instrument is considered to be reliable. (p. 1)
(Joppe, M. (2000). The Research Process. Retrieved February 25, 2000, from http://www.ryerson.ca/~mjoppe/rp.htm)
Validity determines whether the research truly measures that
which it was intended to measure or how truthful the
research results are. In other words, does the research
instrument allow you to hit "the bull’s eye" of your research
object? Researchers generally determine validity by asking
a series of questions, and will often look for the answers in
the research of others. (p. 1)
Source: Nahid Golafshani, Understanding Reliability and Validity in Qualitative Research, University of Toronto, Toronto, Ontario, Canada. The Qualitative Report, Volume 8, Number 4, December 2003, pp. 597-607. http://www.nova.edu/ssss/QR/QR8-4/golafshani.pdf
Critical Issues:
• What kind of data is collected and for what purpose?
• How (with what criteria) will the data be evaluated?
• How will advocacy services be involved in the development of performance standards and an ongoing quality improvement system, which can be reliably assessed?
• How will advocacy services be involved in the setting of performance outcomes or targets, which may determine the level of funding we receive, and how do we ensure that the outcomes assessed are reliable and valid?
• How will such targets be developed? Will there be consultation with people with disabilities and their support persons?
• If targets are set, what will be the consequences of not meeting such targets?
Proposal
FaCSIA should commission two studies:
1. A review of the existing audit system, based on the recent audit, assessing whether the existing standards and performance indicators are useful for all stakeholders and fulfill their intended purpose;
2. A survey of existing agencies, asking the following questions:
   1. What does your agency count as a program outcome and how do you measure it?
   2. How do you integrate the existing quality improvement system in the daily work of your agency?
   3. What would you change about the standards to make them more applicable to your situation?
The results of both studies should be used to
develop a meaningful performance measurement
system.
In Conclusion
Clearly, in the United States there has been a growing
emphasis on outcomes measurement, performance
outcomes and results from the national to the local levels.
However, perhaps it is now time to take stock of progress
to date in the use of outcomes data. The question I would
like to pose for consideration is to what extent have the
results of outcomes measurement been used in your
countries – do outcomes matter? If so, to whom do they
matter? If not, what needs to be done to assure that they
will matter?
Toward outcomes measurement in the human services, © 2001 Edward J. Mullen, Centre for the Study of Social Work Practice, Columbia University.
DACSSA Inc.
Funded by the Australian Government as part of the National Disability Advocacy Program.
Thank You!
Our contact details are:
Unit 3, 178 Henley Beach Road (enter from Jervois Street)
Torrensville SA 5031
Phone: 8234 56 99
Fax: 8234 60 44
Country: 1800 088 325
Email: drigney@dacssa.org.au
Website: www.dacssa.org.au