Ethics in Program Evaluation

Research vs. Evaluations
Researchers:
- Respect for research subjects
- Honesty with data and money
- Errors in research are not likely to harm people; publication is often the goal
- The path from research study to application is long: there are many opportunities to identify and discard errors
- Poor research is not likely to be published
Evaluators:
- Time is often limited; errors need to be identified throughout the entire evaluation process
- Must provide clear, useful, accurate evaluations to the stakeholders with whom they work
- Poorly done evaluations:
  - affect the provision of services to people
  - disrupt the staff of service organizations
  - may allow harmful programs to continue
  - may cause beneficial programs to be terminated
Guiding Principles for Evaluators
1. Systematic Inquiry:
- Evaluators conduct systematic, data-based
inquiries about whatever is being evaluated.
2. Competence:
- Evaluators provide competent performance
to stakeholders
3. Integrity / Honesty:
- Evaluators ensure the honesty and integrity of the entire evaluation process
4. Respect for People:
- Evaluators respect the security, dignity and
self-worth of respondents, program participants,
clients and other stakeholders with whom they
interact
5. Responsibility for General and Public Welfare:
- Evaluators articulate and take into account
the diversity of interests and values that may be
related to the general and public welfare
Standards for the Practice of Evaluation
1. Ethical Treatment of People
2. Role Conflicts Facing Evaluators
3. Stakeholder Needs
4. Validity of Evaluations
5. Avoiding Possible Negative Side Effects of Evaluation Procedures
1. Ethical Treatment of People
a) Assignment to Program Groups:
- Evaluators must find out whether any harm can come to program participants, because some programs have no effect or a negative effect
(Ex: alcoholics program)
- If the treatment fails, evaluators must ensure that participants receive adequate additional services.
b) Informed Consent:
What is it?
- People can agree to participate in a program being evaluated, but evaluators must make sure that their consent is informed: participants must be given sufficient information about the program to enable them to weigh the risks involved.
- If this has not happened, informed consent was not given, even if the person has signed an agreement to participate.
Why might this pose additional ethical dilemmas for the evaluator?
- Revealing too much information can create expectations on the part of the participants or demoralize those not selected for the program.
- The validity of the evaluation can be threatened when informed consent has the potential to change participants' behavior.
c) Confidentiality
- Information must be treated with the utmost care so as not to invade the privacy of the participants or managers
- Protecting confidentiality:
1) use identifiers that only the particular respondent would recognize, such as first names
2) if contacting respondents is necessary, the evaluator keeps the master list accessible only to the evaluator
3) with very sensitive data, some evaluators have stored names and codes in different countries
- Few evaluators have to deal with such sensitive information, but confidentiality is extremely important
2. Role Conflicts Facing Evaluators
a). Conflict with Program Staff
b). Conflict with Clients
c). Conflict with Stakeholders
What can evaluators do?
- Anticipate and minimize potential conflicts among stakeholders before the evaluation begins.
CLASS EXERCISE
HANDOUT
3. Stakeholder Needs
a) Program Manager:
- concerned with efficiency
b) Program Staff:
- concerned with getting assistance in
service delivery
c) Service Users:
- concerned with effective and appropriate services
d) Community:
- concerned with cost-effective programs
What can evaluators do?
4. Validity of Evaluations
a) Valid Measuring Instruments:
-temptation to use standardized tests
even when they may not be appropriate
to measure outcomes of a program.
-result: obscured or misleading
conclusions regarding programs.
b) Skilled Data Collectors:
- Much data is collected through interviews, and interviewing is a difficult skill.
Interviewers NEED:
- interpersonal skills
- common sense
- maturity
- respect for truth
- the ability to record and report attitudes even when they differ from one's own
- the ability to gain cooperation from participants
- the ability to appear concerned with the person and not just with collecting data
- experience working with children
c) Appropriate Research Design:
- The design must fit the needs of those who might utilize the information sought.
- Conducting an evaluation would not be ethical if it is already known that the evaluation cannot address the stakeholders' questions and concerns.
- Research design: the plan of procedures for collecting and analyzing data to investigate a research question or test a hypothesis.
d) Adequate Description of Program and Procedures:
- Scientific reporting standards should be followed so that others can evaluate the procedures and repeat the research.
- Evaluations that are not described in enough detail for others to understand create difficulties (Ex: implementation issues cannot be examined).
- Reports must present enough detail so that the reader can understand how the evaluator obtained and analyzed the information. Not everyone may want to read this detail, but for those looking into implementing the program it is important.
5. Avoiding Possible Negative Side Effects of Evaluation Procedures
a) Can Someone Be Hurt by Inaccurate Findings?
- Falsely positive: erroneously suggests the program is effective
- Falsely negative: erroneously suggests the program is not effective
- Type I error: a false conclusion that the program worked, due to random statistical variation
- Type II error: a false conclusion that the program did not work; it often occurs when insufficient attention is paid to the design of an evaluation.
Why would this happen?
- Evaluators focus on the wrong variables
- Evaluators use too short a time span to show either positive or negative effects
- Evaluators' enthusiasm for a program may lead to falsely optimistic conclusions.
b) Statistical Type II Error
What is it again?
- Failing to reject the null hypothesis even though the program really had an effect
- Null hypothesis: any observed difference between samples is seen as a chance occurrence resulting from sampling error
- When the null hypothesis is rejected, the conclusion is that something other than chance caused the change
How Does it Affect Evaluations?
- Because evaluations are based on samples, findings will never match the whole population exactly.
- Evaluators may conclude that a valuable program is ineffective.
Do I have to be aware?
- Evaluators work in situations that make Type II errors very likely:
- Few participants are tested because of time restrictions or to limit disruption
- The evaluator may not have been given enough resources to measure the outcome variables on a large number of participants
Ways to reduce Type II errors (illustrated in the sketch below):
- use large samples
- use outcome measures with high reliability
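A minimal simulation sketch (not from the original slides; it assumes a Python environment with numpy and scipy installed, and the function name, effect size, and noise levels are illustrative assumptions) showing how small samples and noisy, low-reliability outcome measures raise the Type II error rate:

import numpy as np
from scipy import stats

def type_ii_error_rate(n_per_group, true_effect=0.4, noise_sd=1.0,
                       alpha=0.05, n_simulations=2000, seed=0):
    """Share of simulated evaluations that fail to detect a real program effect."""
    rng = np.random.default_rng(seed)
    misses = 0
    for _ in range(n_simulations):
        control = rng.normal(0.0, noise_sd, n_per_group)           # comparison group
        treated = rng.normal(true_effect, noise_sd, n_per_group)   # program group
        _, p = stats.ttest_ind(treated, control)
        if p >= alpha:   # the real effect goes undetected: a Type II error
            misses += 1
    return misses / n_simulations

# Larger samples and less noisy (more reliable) measures lower the miss rate.
for n in (10, 30, 100):
    for sd in (1.0, 2.0):
        rate = type_ii_error_rate(n, noise_sd=sd)
        print(f"n per group = {n:>3}, noise SD = {sd}: Type II rate ~ {rate:.2f}")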
c) Unplanned Effects:
Ethical evaluators examine the program as it was designed and as it was implemented. Evaluators work most effectively when they are alert to unexpected negative side effects of a program.
Example:
Desirable goal of Program: raise the quality of public
school teachers
Program: Test teacher competence
Negative Side Effect: Insulted good teachers by implying that meeting a minimum standard defines good teaching
d) Evaluators' Values
- Beyond conflicts between program advocates and the evaluator, other unexamined values may be hidden in the statistical analysis
(Ex: Sesame Street)
- Sesame Street was created with children of low-income families in mind, but appeared more effective with children of upper-income families instead
- Without a reanalysis of the evaluation, this finding would have been overlooked
- When program benefits are uneven, the less privileged group should benefit more
- Merely examining the overall effects of a program endorses a utilitarian ethic and ends up ignoring the disparities among people (a simple subgroup reanalysis is sketched below).
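A minimal subgroup-reanalysis sketch (hypothetical numbers, not the actual Sesame Street data; assumes pandas is available) showing how a healthy overall average gain can hide very uneven benefits across income groups:

import pandas as pd

# Hypothetical pre/post scores for eight children, split by family income group.
scores = pd.DataFrame({
    "income_group": ["low"] * 4 + ["high"] * 4,
    "pretest":  [20, 22, 19, 21, 24, 26, 25, 23],
    "posttest": [23, 24, 22, 24, 33, 36, 34, 31],
})
scores["gain"] = scores["posttest"] - scores["pretest"]

print("Overall mean gain:", scores["gain"].mean())      # looks like the program works for everyone
print(scores.groupby("income_group")["gain"].mean())    # but the upper-income group gains far more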
Institutional Review Boards
and Program Evaluation
Do we need them?
- IRBs are required by federal law to ensure that medical and behavioural research is conducted ethically and that subjects are protected from harm
- Colleges, universities, hospitals and other agencies doing research must have a committee that examines research before it is carried out
- Evaluators are STRONGLY ADVISED to seek
approval
(EX: First Year Student Study)
Ethical Problems Evaluators Report:
1. Changing the evaluation questions after examining the data
2. Promising confidentiality when it could not be guaranteed
3. Making decisions about the evaluation without consulting the clients
4. Carrying out an evaluation without sufficient training
5. Making it easy for partisan groups to delete references to embarrassing program weaknesses from reports
The most serious and frequent ethical problem evaluators reported was the difficulty of presenting findings clearly, completely and fairly. When asked why, evaluators reported feeling pressured by stakeholders to alter the presentation of the findings.
The evaluators' challenge: to get stakeholders to recognize program weaknesses in a spirit of problem solving rather than denial.
THE END!!!
Class Activity
Q&A
Assessment