Evaluation Planning III: Identifying and Selecting the Evaluation Questions and Criteria

Dr. Suzan Ayers
Western Michigan University
(courtesy of Dr. Mary Schutten)
Evaluation Questions

Evaluations are conducted to answer questions
and to apply criteria to judge the value of
something

Evaluation Questions provide the direction and
foundation for the evaluation

They articulate the focus of the study
Criteria and Standards

Criteria: used to identify the characteristics of a
successful program (measure)

Standards: designate the level of performance
the program must achieve on these criteria to
be deemed a success (performance)

Without standards, the evaluator cannot judge the results; without criteria, the evaluator cannot judge the program itself
Phases of Identifying
and Selecting Questions
Divergent phase: a comprehensive “laundry
list” of potentially important questions and
concerns [many sources, all questions are
listed]
 Convergent phase: evaluators select from the
“laundry list” the most critical questions to be
answered
 Criteria are developed after the convergent
phase

Divergent Phase Sources

Questions, concerns, values of stakeholders
– Policy makers (legislators, board members)
– Administrators, managers (direct program)
– Practitioners (operate program)
– Primary consumers (clients, students, patients)
– Secondary consumers (affected audiences)
– What is their perception of the program? What questions or concerns do they have? How well do they think it is doing? What would they change if given the chance?
Stakeholder Interview Questions (Fig. 12.1)
What is your general perception of the
program? What do you think of it?
 What do you perceive as the purposes?
 What do you think the program theory is?
 What concerns do you have about the
program? Outcomes? Operations?
 What major questions would you like the
evaluation to answer? Why?
 How could you use the information provided by
these questions?

Use of evaluation models/approaches
Objectives-oriented: are goals defined and to
what extent are they achieved?
Management-oriented: questions about CIPP;
context (need), input (design), process
(implementation), product (outcomes)
Participant-oriented: consider all stakeholders and listen to what they have to say; the process of the program is critical
Consumer-oriented: checklists and sets of criteria
to help determine what to study & what
standards to apply
Expertise-oriented: standards and critiques that
reflect the view of the experts in the field


Findings and issues raised in the literature in
the field of the program
– Evaluator should be conversant with salient issues
in the program’s area
– Use existing literature to help develop causative models and questions to guide the evaluation
– A literature search may be a useful start to the planning process

Professional standards, checklists, instruments,
and criteria developed or used elsewhere
– Standards for practice exist in many fields,
including PE and athletics

Views and knowledge of expert consultants
– If they have expertise in the content area, they may provide a neutral and broader view
– They can be asked to generate a list of
questions and can identify previous
evaluations of similar programs
Evaluator’s own professional judgment (p. 244)
– Trained to raise thoughtful questions
– Is the program really serving an important purpose?
– Are goals and objectives consistent with
documented needs?
– What critical elements and events should be
studied and observed?
Summarizing suggestions from multiple sources (pp. 245-246)
Convergent Phase

Three reasons to reduce the range of variables
– There will always be a budget limit
– If the study gets very complicated, it gets harder and harder to manage
– Audience attention span is limited

Who should be involved?
– Evaluator
– Stakeholders
– Sponsor
– Parties affected by the evaluation
Determining Which Questions to Study
(Cronbach, 1980)
Who would use the information? Who wants to know?
Who will be upset if this question is dropped?
Would an answer to the question reduce uncertainty
or give info not now available?
Would the answer to the question yield important
information?
Is this question merely of passing interest or does it
focus on critical issues of continued interest?
Would the scope of the evaluation be seriously limited
if this question were dropped?
Is it feasible to answer this question given the
available financial and human resources? Time?
Methods? Technology?
Convergent Phase
Sit down with the sponsor and/or client and review the laundry list and the items marked as
"doable" (from the Fig. 12.2 matrix)
– Reduce the list via consensus
– An advisory board is the typical format
Provide the new list, with a short explanation of why each question is important, and share it with stakeholders

Matrix for Selecting Questions
Would the evaluation question…
Be of interest to key audiences?
Reduce present uncertainty?
Yield important information?
Be of continuing (not fleeting) interest?
Be critical to the study's scope?
Have an impact on the course of events?
Be answerable in terms of $$, time, methods/technology?
(Fig. 12.2; a minimal scoring sketch follows below)
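The selection matrix lends itself to a simple yes/no scoring pass over the laundry list. The sketch below is only an illustration of that idea: the criteria labels paraphrase Fig. 12.2, while the example questions, the strict "all answers yes" retention rule, and every identifier are assumptions made for the example rather than anything prescribed by the text.

```python
# Minimal sketch of the Fig. 12.2 selection matrix as a yes/no scoring grid.
# Criteria labels paraphrase the slide; the example questions and the
# "keep only if every answer is yes" rule are illustrative assumptions.

CRITERIA = [
    "interest to key audiences",
    "reduces present uncertainty",
    "yields important information",
    "of continuing (not fleeting) interest",
    "critical to the study's scope",
    "impact on the course of events",
    "answerable within budget, time, methods/technology",
]

def keep_question(answers: dict[str, bool]) -> bool:
    """Retain a candidate question only if every criterion is answered 'yes'."""
    return all(answers.get(criterion, False) for criterion in CRITERIA)

# Hypothetical laundry-list entries scored during the convergent phase.
candidates = {
    "Are program goals consistent with documented needs?":
        {c: True for c in CRITERIA},
    "What color are the program brochures?":
        {c: (c == "answerable within budget, time, methods/technology") for c in CRITERIA},
}

for question, answers in candidates.items():
    print(f"{'KEEP' if keep_question(answers) else 'DROP'}: {question}")
```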
Criteria and Standards
Developed to reflect the degree of difference
that would be considered meaningful enough to
adopt the new program
 Absolute Standard: a defined level is met/not
met
– Learn stakeholders’ range of expectations
and determine standards from that
Relative Standard: comparison to other groups or standards
– Typically uses the statistical concepts of significance and effect size to determine whether the program is "that much better" than what is in place (explained on p. 253); a minimal effect-size sketch follows below
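The slides mention significance and effect size only in passing. One common effect-size measure is Cohen's d, a standardized mean difference; the sketch below assumes that choice, along with hypothetical outcome scores and a 0.5 ("medium effect") adoption threshold, none of which come from the source.

```python
# Minimal sketch of a relative standard based on effect size.
# Cohen's d, the sample data, and the 0.5 adoption threshold are
# illustrative assumptions, not specified by the slides.
from statistics import mean, stdev

def cohens_d(new_scores, current_scores):
    """Standardized mean difference between the new and current programs."""
    n1, n2 = len(new_scores), len(current_scores)
    pooled_sd = (((n1 - 1) * stdev(new_scores) ** 2 +
                  (n2 - 1) * stdev(current_scores) ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(new_scores) - mean(current_scores)) / pooled_sd

new_program = [78, 85, 90, 72, 88, 81]      # hypothetical outcome scores
current_program = [70, 75, 80, 68, 74, 77]  # hypothetical comparison scores

d = cohens_d(new_program, current_program)
print(f"Effect size d = {d:.2f}; adopt new program: {d >= 0.5}")
```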
Flexible (not indecisive)
Allow new questions, criteria, and standards to emerge

Remember, the goal for this step is to lay the
foundation to create a meaningful and useful
evaluation