Evaluation Plan Template - Organizational Development

When to Develop an Evaluation Plan?
Educational activities that should have an evaluation plan:
involve direct instruction (lesson plan, educational objectives);
include multiple offerings or implement curricula;
have an intentional or targeted audience; and
will produce evidence of community-based outcomes.
In contrast, activities that are indirect (such as websites, bulletins, and newsletters), that are one-time only or have public and open attendance (such as booths and ongoing demonstrations), or that build internal capacity of the organization (such as MSUE staff trainings) are typically recorded as outputs and are not evaluated. If such an event is evaluated, it is typically with a process or formative evaluation. Process or formative evaluation differs from outcome-based or summative evaluation: it is focused on program improvement and usually captures only interest in or satisfaction with the activity. These evaluations typically address what is working in the program, what needs to be improved, and how to accomplish that.
Outcome-based or summative evaluations capture individual, organizational, and community-level change. These evaluations examine what results occurred because of the educational event. Short-term outcomes are typically at the KASA level (knowledge, attitudes, skills, and aspirations/motivations to change) and can be examined by audience characteristics and by the conditions under which the changes occurred. Practice and behavioral changes, as well as adoption of new techniques, are often intermediate outcomes.
Conditional or long-term changes take several years to ascertain and
require following people over time or the use of comparison groups.
Evaluation Plan Template
Definitions
Indicators – what outcomes will be measured?
These should be articulated in the program logic model. Development of outcomes often starts with a simple question: What do you want to know? The list is then narrowed by asking: What do you need to know? Balancing these two questions helps set the number and depth of outcomes selected.
Outcomes need to be SMART: specific, measurable, achievable, realistic, and time-bound.
Translating goals into objectives is sometimes a beginning step. For example: what percentage of program participants will increase on X (KASA), and what percentage will do X (practice change)?
Are there existing theories or measures available that can shape the outcomes selected? Are those measures reliable and valid? A reliable measure is one that measures consistently; look for Cronbach's alpha coefficients of .70 and higher on established scales. A valid measure captures what was intended to be measured; look for published results on face, content, and criterion-related validity. Another way to think of reliability and validity is in terms of precision and accuracy: reliability is analogous to precision, while validity is analogous to accuracy.
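As an illustration of checking reliability, the minimal sketch below computes Cronbach's alpha for a multi-item scale; the .70 target follows the guidance above, while the specific item ratings and respondent data are hypothetical.

```python
# Minimal sketch: Cronbach's alpha for a multi-item scale.
# Rows are respondents, columns are scale items (hypothetical data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings (1-5) from six respondents on a three-item scale.
responses = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
    [3, 4, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")  # aim for >= .70
```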
Sample – who will receive education and who will be assessed?
Consider who received the education and determine a sample for the evaluation. Typically, if there are fewer than 100 people, all participants are sampled. If there is a considerably larger number of program participants, a sampling procedure (either random or purposive) is sometimes used. Sampling decisions are often based on the time and resources of the evaluator(s).
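As a minimal sketch of the random option, the example below draws a simple random sample from a hypothetical participant roster; the roster size, sample size, and seed are assumptions for illustration, and a purposive sample would instead be selected against explicit criteria.

```python
# Minimal sketch: drawing a simple random sample of participants for evaluation.
# The roster and sample size are hypothetical.
import random

roster = [f"participant_{i:03d}" for i in range(1, 251)]  # 250 program participants

if len(roster) < 100:
    sample = roster                     # small program: assess everyone
else:
    random.seed(2011)                   # fixed seed so the draw can be reproduced
    sample = random.sample(roster, 75)  # evaluate a random subset

print(f"Assessing {len(sample)} of {len(roster)} participants")
```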
Method/Design – how will we collect the information?
For method, will you collect qualitative data (e.g., focus groups, interviews), quantitative data (e.g., surveys), or both? A common example is a survey with KASA ratings (changes in knowledge, attitudes, skills, and aspirations) followed by open-ended short answers or examples of how the participant applied the information.
What type of evaluation design is appropriate? Think about evaluation costs and time.
The list below includes the six most common designs in the evaluation of Extension programs, ordered from the simplest to the most complex.
1. One-Shot (data collected following program).
2. Retrospective Pretest (participants recall behavior or knowledge before/after the
program at the same time).
3. One Group Pretest-Posttest (traditional, collected prior and following the program).
4. Time Series (data collected prior to, during, and after the program in a follow-up).
5. Pretest-Posttest Control Group (data collected from two different groups for
comparison, typically groups are the program participants and a control group).
6. Post-Only Control Group (two groups for comparison, typically program participants and a control group, with data collected after the program only).
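As one hedged illustration of analyzing data from designs 2 or 3 above, the sketch below compares one group's before and after KASA ratings with a paired t-test; the ratings are hypothetical and the scipy dependency is an assumption, not part of this template.

```python
# Minimal sketch: analyzing one-group pretest-posttest (or retrospective pretest) ratings.
# Hypothetical 1-5 knowledge ratings from the same eight participants; needs numpy and scipy.
import numpy as np
from scipy import stats

pre = np.array([2, 3, 2, 4, 3, 2, 3, 3])    # ratings before the program
post = np.array([4, 4, 3, 5, 4, 3, 4, 4])   # ratings after the program

t_stat, p_value = stats.ttest_rel(post, pre)  # paired t-test on the same participants
print(f"Mean change = {np.mean(post - pre):.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```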
Analysis – what do we ultimately want to say?
Are you collecting the data that will ultimately lead to describing the outcomes indicated in
your program logic model? Usually the short-term outcomes are KASA-related and can be
assessed after an intentional and direct educational program. Mid-term or intermediate
outcomes often take more time to assess and require follow-up to ascertain if behavioral or
practice changes were implemented. Long-term conditional changes are more difficult to
measure and often require extensive tracking of individuals or comparison-group designs.
Currently, our MI PRS (Michigan Planning and Reporting System) asks Extension
professionals to indicate program evaluation data that shows learning (short-term outcomes)
and action (intermediate outcomes). Specifically, MI PRS collects the number of individuals
changed and the number of participants assessed. How “changed” is defined depends on the
indicator of the learning or action outcome. Coding is often needed so that Extension professionals are all measuring change the same way. For example, what does the participant need to indicate on the evaluation (a composite score or a certain rating) to show that a learning or action outcome was achieved?
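For instance, under the hypothetical coding rule in the sketch below, a participant counts as "changed" when their post-program rating increased by at least one point; the threshold, records, and field names are assumptions for illustration only.

```python
# Minimal sketch: coding a learning outcome the same way across evaluators.
# Hypothetical rule: a participant "changed" if post - pre >= 1 on a 1-5 rating.
records = [
    {"id": "p01", "pre": 2, "post": 4},
    {"id": "p02", "pre": 3, "post": 3},
    {"id": "p03", "pre": 4, "post": 5},
    {"id": "p04", "pre": 2, "post": 2},
]

CHANGE_THRESHOLD = 1
changed = [r for r in records if r["post"] - r["pre"] >= CHANGE_THRESHOLD]

print(f"Participants assessed: {len(records)}")
print(f"Participants changed:  {len(changed)}")
```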
If MI PRS does not have all the indicators that you measured in an evaluation plan, you can
create an evaluation report summary and relate (upload) that file in the narratives section under “Impact Summaries and Evaluation Results Related to Logic Models.”
Did you remember to collect demographics that are relevant to the program? All programs
should collect information on some participant demographics. The three variables collected
in the Outputs section of MI PRS are gender, race/ethnicity and the county location of the
participant. Other demographics collected may include age, family household composition,
household income, education, and place of residence. Collecting demographics can help
contextualize program evaluation outcomes. When reporting what the audience learned, you can provide more information about the audience that was reached. Additionally, you can include demographics in the data analysis to see how participant characteristics relate to program outcomes.
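As one possible way to do that, the minimal sketch below relates a hypothetical "changed" outcome flag to gender with a pandas group-by; the data frame, column names, and values are assumptions for illustration.

```python
# Minimal sketch: relating demographics to an outcome by group.
# The data and column names are hypothetical.
import pandas as pd

data = pd.DataFrame({
    "gender":  ["F", "M", "F", "F", "M", "M", "F", "M"],
    "county":  ["Ingham", "Kent", "Ingham", "Wayne", "Kent", "Wayne", "Kent", "Ingham"],
    "changed": [True, False, True, True, True, False, False, True],
})

# Share of participants meeting the outcome within each demographic group.
by_gender = data.groupby("gender")["changed"].mean()
print(by_gender)
```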
Reporting – who will see/use the results?
The last major section of an evaluation plan includes what you plan to do with evaluation
results. At a minimum, most program evaluations inform internal audiences – supervisors,
organizational Directors, and funders. Commonly, evaluation results may be shared with
external audiences like the public. Indicate in this section the planned use for the program
evaluation results.
Does the project require IRB review?
This means submitting an application to the MSU Institutional Review Board. The MSUE IRB differentiates between program evaluation (with intent to inform or improve program implementation), quality assurance or quality improvement (is the program implemented as intended and with fidelity?), and research. Determining whether your planned program evaluation is research depends on the answers to the following questions:
Is the activity a systematic investigation designed to develop or contribute to
generalizable knowledge?
If yes, the activity is defined as research. Next question to consider:
Does the research involve obtaining information about living individuals?
If yes, the research involves human subjects. Next question to consider:
Does the research involve intervention or interaction with the individuals?
If yes, an IRB application is warranted. Most Extension program evaluations fall under the exempt category because the evaluation plan involves research conducted in established or commonly accepted educational settings and involves normal educational practices, including the use of surveys, interviews, observations, or tests.
Most Extension program evaluations that are approved by the IRB under the exempt category can waive collection of a signed informed consent document, because the study presents minimal risk to the subjects and the consent document would often be the only record linking the subject to the research. Keeping your Extension survey voluntary and confidential supports the waiver of signed consent.
All Extension program evaluations should include the elements of voluntary
involvement, confidentiality, and notifying participants regarding the intent of the
research or evaluation. The consent process should be informative and empowering.
Templates of consent forms are available on the MSU IRB website.
http://www.humanresearch.msu.edu/
Evaluation Plan Template
Worksheet
Indicators – what outcomes will be measured?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Sample – who will receive education and who will be assessed?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Method/Design – how will we collect the information?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Analysis – what do we ultimately want to say?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Reporting – who will see/use the results?
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
Developed July 2011 by MSU Extension Evaluation Specialist, Dr. Cheryl Peters, cpeters@anr.msu.edu