
MERC
Ten Steps to
Designing an
Evaluation for Your
Educational Program
Linda Perkowski, Ph.D.
University of Minnesota Medical School
Readiness Assurance Test
What is Program Evaluation?
Systematic collection of information about a broad
range of topics for use by specific people for a
variety of purposes
Patton, 1986
Definitions
Evaluation -- program
Assessment -- individual
Formative Evaluation -- to improve
Summative Evaluation -- to prove
Outcomes Research -- patient care
Purposes of Program Evaluation
To improve program
To determine next steps/make decisions
• Help decide to replace, develop further,
eliminate, accredit
To determine effectiveness
To document success
To measure outcomes
For curricular purposes,
evaluation helps
Ensure teaching is meeting learners' needs
Identify where teaching can be improved
Inform the allocation of resources
Provide support to faculty and learners
Diagnose and document program strengths and
weaknesses
Articulate what is valued by the institution
Determine whether educational objectives are met
Adapted from Morrison (2003)
Influences on the evaluation
External
• Accrediting agencies
• Public
• Funding priorities
Internal
• Who needs what answers?
• Who gets to pose the questions?
• How will the answers be made known?
Barriers to Program Evaluation
Tension between implementing and evaluating
Lack of skills in conducting applied social science
research
Paucity of funding, time, and publication outlets
Failure to recognize evaluation as scholarship and its place in the literature
Wilkerson, 2000
What is the biggest barrier for you or
your institution to collect and analyze
program evaluation data?
Tension between getting a program implemented
and evaluating it
Lack of skills
Paucity of funding or time
Limited outlets to present or publish findings
Many Models
Goal Oriented/Objective-Based (Tyler)
Goals-free Evaluation (Scriven)
Judicial/Adversary Evaluation
CIPP (Stufflebeam)
Kirkpatrick’s 4-level model
Situated Evaluation
Connoisseurship Evaluation (Eisner)
Utilization-Oriented Evaluation (Patton)
Logic Model
Program Logic Model - MERC
Tyler Model - MERC
Objective 1: Increase their participation in medical education research activities (research presentations and publications) (outcome evaluation)
• Method: Short survey
• Content/Specifics: Retrospective pre/post survey with 12 closed-ended dichotomous items on participation in medical education research activities (i.e., collaborating in a medical education research project, publishing a peer-reviewed publication)
• Frequency/Timing: 6-12 months after completion of MERC
• Person: MERC Evaluation Committee to launch and analyze data

Objective 2: Apply medical education research principles from MERC to their daily work (outcome evaluation)
• Method: Short survey
• Content/Specifics: Open-ended question
• Frequency/Timing: 6-12 months after completion of MERC
• Person: MERC Evaluation Committee to launch and analyze data
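To make the matrix above concrete, here is a minimal Python sketch (illustrative only, not part of the MERC materials) of how a committee might tally one of the 12 dichotomous retrospective pre/post items; the item name and responses are hypothetical.

from collections import Counter

# Each respondent answers each item twice on the same survey:
# "before MERC" and "after MERC", coded 1 = yes, 0 = no.
responses = [
    {"item": "collaborated_on_med_ed_research", "pre": 0, "post": 1},
    {"item": "collaborated_on_med_ed_research", "pre": 1, "post": 1},
    {"item": "collaborated_on_med_ed_research", "pre": 0, "post": 0},
]

n = len(responses)
pre_yes = sum(r["pre"] for r in responses)
post_yes = sum(r["post"] for r in responses)
print(f"Pre:  {pre_yes}/{n} ({pre_yes / n:.0%}) yes")
print(f"Post: {post_yes}/{n} ({post_yes / n:.0%}) yes")

# Paired (pre, post) change patterns, needed later for a paired analysis:
print(Counter((r["pre"], r["post"]) for r in responses))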
Kirkpatrick’s
Four Levels of Outcomes
1. Satisfaction
2. Advance in knowledge,
skills and attitudes
3. Skills used in everyday
environment of the learner
4. Bottom-line
a. Effect on participants’
“learners”
b. Effect on participants’
career
c. Institutional
improvements
Overview of 10 Program
Evaluation Steps (Workplan)
Step 1: Identify Users
Step 2: Identify Uses
Step 3: Identify Resources
Step 4: Identify Evaluation Questions/Objectives
Step 5: Choose Evaluation Design
Step 6: Choose Measurement Methods and
Construct Instruments
Step 7: Address Ethical Concerns
Step 8: Collect Data
Step 9: Analyze Data
Step 10: Report Results
Step 1: Identify Users
Who will use the evaluation?
• Learners
• Faculty
• Workshop developers
• Administrators
• Agencies
• Other stakeholders
What do they want from the evaluation?
Step 2: Identify Uses
Generally both formative and summative
Individual and program decisions
Qualitative and/or quantitative information
Consider specific needs of each user
• Judgments about individuals
• Judgments about project management and
processes
What uses do you have for
program evaluation?
Improving existing or new programs
Proving that a program works
Step 3: Identify Resources
What time is needed from everyone?
What personnel are needed?
What equipment?
What facilities?
What funds?
Step 4: Identify Evaluation
Questions/Objectives
These go back to the model chosen, but generally:
• Relate to specific, measurable objectives for
◦ Learner
◦ Process
◦ Outcomes
• Wise to include some questions that get at what was not anticipated, both as strengths and weaknesses
Step 4: Identify Evaluation
Questions/Objectives – cont.
Evaluation questions should:
• Be clear and specific
• Be congruent with the literature
• Focus on outcomes rather than process
• Outcomes imply change
◦ "Workshop will improve educators' skills"
RATHER THAN
◦ "How the workshop was given" (process)
• Align with goals and objectives
What are the questions?
Process: Ease of use, Efficiency, Relevance, Language
Development: Needs assessment, Objectives, Materials, Staffing
Presentation & Organization: Design, Interaction, Feedback, Clarity, Quality, Organization
Evaluation of Learning
• Pedagogy: Instructional method, Structure, Active learning, Learner differences, Objectives ~ methods
• Outcomes: Knowledge, Attitudes, Behaviors
Interface
Evaluation of Cost
• Implementation: Staff time, Materials, Recruitment, Facilities, Hardware
• Maintenance: Portability, Coordination, Durability, Tech support
Evaluation of Content: Authority, Accuracy, Appropriateness, Breadth, Depth
Adapted from Elissavet & Economides (2003)
Step 5: Choose Evaluation Designs
Which ones are appropriate to the questions?
• Posttest only
◦ Satisfaction/reactions
X --- O
• Retrospective pretest
◦ Attitudes
X --- O
• Pretest-posttest
◦ Changes in knowledge/attitudes
O --- X --- O
• Quasi-experimental
◦ Cross-over
O --- X --- O ------- O
O ------- O --- X --- O
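In this standard design notation, X is the educational intervention and O an observation or measurement. As a rough illustration, the designs above can be encoded as data alongside one plausible analysis each; the design-to-analysis pairings are common choices, not prescriptions from this workshop.

# X = intervention, O = observation. Typical (not mandated) analysis pairings.
designs = {
    "Posttest only": ("X --- O", "descriptive summary of reactions"),
    "Retrospective pretest": ("X --- O", "paired comparison of recalled pre vs. post"),
    "Pretest-posttest": ("O --- X --- O", "paired test of change in knowledge/attitudes"),
    "Cross-over": ("O --- X --- O ------- O\n O ------- O --- X --- O",
                   "between-group comparison at each observation point"),
}
for name, (notation, analysis) in designs.items():
    print(f"{name}:\n {notation}\n -> {analysis}")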
Step 6: Choose Measurement Methods
and Construct/Adapt Instruments
Common methods
• Rating forms
• Self-assessments
• Essays
• Exams
• Questionnaires
• Interviews/focus groups
• Direct observations
• Performance audits
• Existing data (AAMC questionnaires, course evals, JAMA)
Collect appropriate demographics
SOURCES OF DATA
What do we have?
What do we need?
What, realistically, can we do?
Group Assignment 1
During this workshop, you will begin to:
Evaluate the effectiveness of the CORD
program (see handout)
Use your experiences and the information in
the handout to address the first four steps
What would be the best model to use
as we begin to develop our plan?
Goal oriented/objective-based
Kirkpatrick's 4-level model
Logic model
Assignment 2
Take your own project/program and begin filling in
one of the blank matrices
Be prepared to discuss with the group
Step 7: Address Ethical
Concerns
Confidentiality
Access to data
Consent
Resource allocation
Seek IRB approval
Step 8: Collect Data
Timing and response rate
Already existing data collection
Impact on instrument design (e.g., mail vs. web survey)
Assignment of responsibility
Step 9: Analyze Data
Plan at the same time as the rest of the evaluation
Aim for congruence between the question asked and an analysis that is feasible
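For example, for the paired yes/no items sketched earlier, McNemar's test matches a question about change in a dichotomous behavior; it asks whether "no to yes" changes outnumber "yes to no" changes. The counts below are invented for illustration, and statsmodels is just one of several tools that would work.

from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of paired (pre, post) counts (hypothetical data):
#                post = yes   post = no
table = [[20, 3],    # pre = yes
         [12, 15]]   # pre = no

result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(f"statistic = {result.statistic}, p = {result.pvalue:.3f}")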
Step 10: Report Results
Timely
Format fits needs of users
Display results in succinct and clear manner
QUESTIONS???