Evaluating HRD Programs

Chapter 7
Werner & DeSimone (2006)
Learning Objectives
Define evaluation and explain its role/purpose in HRD.
Compare different models of evaluation.
Discuss the various methods of data collection for HRD evaluation.
Explain the role of research design in HRD evaluation.
Describe the ethical issues involved in conducting HRD evaluation.
Identify and explain the choices available for translating evaluation results into dollar terms.
Effectiveness
The degree to which a training program (or other HRD program) achieves its intended purpose.
Measures are relative to some starting point.
Measures how well the desired goal is achieved.
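
To make "relative to a starting point" concrete, here is a toy Python sketch (all numbers invented, not from the text) that expresses effectiveness as progress from a baseline toward a stated goal:

```python
# Invented numbers: effectiveness as progress from a starting point
# toward the desired goal.
baseline_error_rate = 10.0  # % before the program
goal_error_rate = 2.0       # % the program was meant to reach
actual_error_rate = 4.0     # % observed after the program

achieved = (baseline_error_rate - actual_error_rate) / (
    baseline_error_rate - goal_error_rate
)
print(f"Goal achievement: {achieved:.0%}")  # 75%
```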
Evaluation
HRD Evaluation
It is “the systematic collection of descriptive and judgmental information necessary to make effective training decisions related to the selection, adoption, value, and modification of various instructional activities.”
In Other Words…
Are we training:
the right people
the right “stuff”
the right way
with the right materials
at the right time?
Evaluation Needs
Descriptive and judgmental information are both needed
Objective and subjective data
Information gathered according to a plan and in a desired format
Information gathered to provide decision-making information
Purposes of Evaluation
Determine whether the program is meeting the intended objectives
Identify strengths and weaknesses
Determine the cost-benefit ratio
Identify who benefited most or least
Determine future participants
Provide information for improving HRD programs
Purposes of Evaluation – 2
Reinforce major points to be made
Gather marketing information
Determine if the training program is appropriate
Establish a management database
Evaluation Bottom Line
Is HRD a revenue contributor or a revenue user?
Is HRD credible to line and upper-level managers?
Are the benefits of HRD readily evident to all?
How Often Are HRD Evaluations Conducted?
Not often enough!!!
Frequently, only end-of-course participant reactions are collected
Transfer to the workplace is evaluated less frequently
Why HRD Evaluations Are Rare
Reluctance to have HRD programs evaluated
Evaluation requires expertise and resources
Factors other than HRD can cause performance improvements, e.g.:
  Economy
  Equipment
  Policies, etc.
Need for HRD Evaluation
Shows the value of HRD
Provides metrics for HRD efficiency
Demonstrates a value-added approach for HRD
Demonstrates accountability for HRD activities
Make or Buy Evaluation
“I bought it, therefore it is good.”
“Since it’s good, I don’t need to posttest.”
Who says it’s:
  Appropriate?
  Effective?
  Timely?
  Transferable to the workplace?
Models and Frameworks of Evaluation
Table 7-1 lists six frameworks for evaluation
The most popular is that of D. Kirkpatrick:
  Reaction
  Learning
  Job Behavior
  Results
Kirkpatrick’s Four Levels
Reaction
  Focus on trainees’ reactions
Learning
  Did they learn what they were supposed to?
Job Behavior
  Was it used on the job?
Results
  Did it improve the organization’s effectiveness?
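
As a study aid only (the pairings of levels with measures below are illustrative assumptions, not the text's own examples), the four levels can be held in a small Python structure:

```python
# Illustrative only: Kirkpatrick's four levels, each paired with a
# hypothetical example measure (the measures are assumptions, not
# the text's own examples).
KIRKPATRICK_LEVELS = {
    1: ("Reaction", "end-of-course satisfaction ratings"),
    2: ("Learning", "pretest/posttest knowledge scores"),
    3: ("Job Behavior", "supervisor observation of on-the-job use"),
    4: ("Results", "organizational outcomes such as quality or cost"),
}

for level, (name, measure) in KIRKPATRICK_LEVELS.items():
    print(f"Level {level} ({name}): e.g., {measure}")
```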
Issues Concerning Kirkpatrick’s Framework
Most organizations don’t evaluate at all four levels
Focuses only on post-training
Doesn’t treat inter-stage improvements
WHAT ARE YOUR THOUGHTS?
Data Collection for HRD Evaluation
Possible methods:
Interviews
Questionnaires
Direct observation
Written tests
Simulation/Performance tests
Archival performance information
Interviews
Advantages:
  Flexible
  Opportunity for clarification
  Depth possible
  Personal contact
Limitations:
  High reactive effects
  High cost
  Face-to-face threat potential
  Labor intensive
  Trained observers needed
Questionnaires
Advantages:
  Low cost to administer
  Honesty increased
  Anonymity possible
  Respondent sets the pace
  Variety of options
Limitations:
  Possible inaccurate data
  Response conditions not controlled
  Respondents set varying paces
  Uncontrolled return rate
Direct Observation
Advantages:
  Nonthreatening
  Excellent way to measure behavior change
Limitations:
  Possibly disruptive
  Reactive effects are possible
  May be unreliable
  Trained observers needed
Written Tests
Advantages:
  Low purchase cost
  Readily scored
  Quickly processed
  Easily administered
  Wide sampling possible
Limitations:
  May be threatening
  Possibly no relation to job performance
  Measures only cognitive learning
  Relies on norms
  Concern for racial/ethnic bias
Simulation/Performance Tests
Advantages:
  Reliable
  Objective
  Close relation to job performance
  Includes cognitive, psychomotor, and affective domains
Limitations:
  Time consuming
  Simulations often difficult to create
  High costs to develop and use
Archival Performance Data
Advantages:
  Reliable
  Objective
  Job-based
  Easy to review
  Minimal reactive effects
Limitations:
  Criteria for keeping/discarding records
  Information system discrepancies
  Indirect
  Not always usable
  Records prepared for other purposes
Choosing Data Collection Methods
Reliability
  Consistency of results, and freedom from collection-method bias and error
Validity
  Does the device measure what we want to measure?
Practicality
  Does it make sense in terms of the resources used to get the data?
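
As a toy illustration of the reliability criterion (the scores below are invented), one common check is whether two administrations of the same test give consistent results; here, Pearson's r between the two score sets:

```python
from statistics import correlation  # Python 3.10+

# Invented scores for the same trainees on two administrations of the
# same test; a high Pearson r suggests test-retest reliability.
first_run = [72, 85, 90, 64, 78, 88]
second_run = [70, 83, 92, 66, 75, 90]

r = correlation(first_run, second_run)
print(f"Test-retest reliability (Pearson r): {r:.2f}")
```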
Type of Data Used/Needed
Individual performance
Systemwide performance
Economic
Individual Performance Data
Individual knowledge
Individual behaviors
Examples:
  Test scores
  Performance quantity, quality, and timeliness
  Attendance records
  Attitudes
Systemwide Performance Data
Productivity
Scrap/rework rates
Customer satisfaction levels
On-time performance levels
Quality rates and improvement rates
Economic Data
Profits
Product liability claims
Avoidance of penalties
Market share
Competitive position
Return on investment (ROI)
Financial utility calculations
Use of Self-Report Data
Most common method
Pre-training and post-training data
Problems:
  Mono-method bias
    Desire to be consistent between tests
  Socially desirable responses
  Response-shift bias
    Trainees adjust expectations to the training
Research Design
Specifies in advance:
  the expected results of the study
  the methods of data collection to be used
  how the data will be analyzed
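
For instance, a pretest-posttest control-group design can be analyzed by comparing gains across groups; a minimal Python sketch with invented scores:

```python
from statistics import mean

# Invented pretest/posttest scores for a trained group and an
# untrained control group.
trained_pre, trained_post = [60, 55, 70, 65], [80, 75, 85, 82]
control_pre, control_post = [62, 58, 68, 66], [64, 60, 70, 67]

# Estimate the training effect as the trained group's gain minus the
# control group's gain (a difference in differences).
effect = (mean(trained_post) - mean(trained_pre)) - (
    mean(control_post) - mean(control_pre)
)
print(f"Estimated training effect: {effect:.1f} points")
```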
Assessing the Impact of HRD
Money is the language of business.
You MUST talk dollars, not HRD jargon.
No one (except maybe you) cares about “the effectiveness of training interventions as measured by an analysis of formal pretest, posttest control group data.”
HRD Program Assessment
HRD programs and training are investments
Line managers often see HR and HRD as costs – i.e., revenue users, not revenue producers
You must prove your worth to the organization – or you’ll have to find another organization…
Two Basic Methods for Assessing Financial Impact
Evaluation of training costs
Utility analysis
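
For utility analysis, a common formulation in the HR literature is the Brogden-Cronbach-Gleser model; below is a hedged Python sketch with invented inputs (the chapter's exact treatment may differ):

```python
# Brogden-Cronbach-Gleser style utility estimate, a common formulation
# in the HR literature; all inputs below are invented.
def training_utility(n, years, d_t, sd_y, cost_per_trainee):
    """Delta-U = N * T * d_t * SD_y - N * C."""
    return n * years * d_t * sd_y - n * cost_per_trainee

delta_u = training_utility(
    n=50,                  # employees trained
    years=2.0,             # expected duration of the training effect
    d_t=0.4,               # performance difference in SD units
    sd_y=10_000.0,         # SD of job performance, in dollars
    cost_per_trainee=800.0,
)
print(f"Estimated utility: ${delta_u:,.0f}")  # $360,000
```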
Evaluation of Training Costs
Cost-benefit analysis
  Compares the cost of training to the benefits gained, such as improved attitudes, reductions in accidents, reductions in employee sick days, etc.
Cost-effectiveness analysis
  Focuses on increases in quality, reductions in scrap/rework, productivity, etc.
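
A rough sketch of the distinction in Python (all figures invented): cost-benefit compares dollars to dollars, while cost-effectiveness relates cost to outcome units such as defects avoided:

```python
# Invented figures, for illustration only.
training_cost = 25_000.0    # dollars
dollar_benefits = 60_000.0  # e.g., valued reductions in accidents/sick days
defects_avoided = 400       # a nonmonetary outcome

# Cost-benefit: dollars of benefit per dollar of training cost.
print(f"Cost-benefit ratio: {dollar_benefits / training_cost:.2f}")  # 2.40

# Cost-effectiveness: dollars spent per unit of outcome gained.
print(f"Cost per defect avoided: ${training_cost / defects_avoided:.2f}")  # $62.50
```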
Return on Investment
Return on investment = Results/Costs
Calculating Training Return on Investment

Operational Results Area | How Measured | Results Before Training | Results After Training | Differences (+ or –) | Expressed in $
Quality of panels | % rejected | 2% rejected (1,440 panels per day) | 1.5% rejected (1,080 panels per day) | 0.5% (360 panels) | $720 per day; $172,800 per year
Housekeeping | Visual inspection using 20-item checklist | 10 defects (average) | 2 defects (average) | 8 defects | Not measurable in $
Preventable accidents | Number of accidents | 24 per year | 16 per year | 8 per year |
 | Direct cost of the accidents | $144,000 per year | $96,000 per year | $48,000 | $48,000 per year

Total savings: $220,800.00

ROI = Return / Investment = Operational Results / Training Costs = $220,800 / $32,564 = 6.8

SOURCE: From D. G. Robinson & J. Robinson (1989). Training for impact. Training and Development Journal, 43(8), 41. Printed by permission.
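
A quick Python check of the table's arithmetic (the 240 working days per year is implied by the $720-per-day and $172,800-per-year figures):

```python
# Reproduces the Robinson & Robinson (1989) figures in the table above.
quality_savings = 720 * 240   # $720/day over ~240 working days = $172,800
accident_savings = 48_000     # direct accident costs avoided per year
total_savings = quality_savings + accident_savings

training_costs = 32_564
print(f"Total savings: ${total_savings:,} per year")  # $220,800
print(f"ROI: {total_savings / training_costs:.1f}")   # 6.8
```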
Measuring Benefits
Change in quality per unit, measured in dollars
Reduction in scrap/rework, measured in the dollar cost of labor and materials
Reduction in preventable accidents, measured in dollars
ROI = Benefits / Training costs
Ways to Improve HRD Assessment
Walk the walk, talk the talk: MONEY
Involve HRD in strategic planning
Involve management in HRD planning and estimation efforts
  Gain mutual ownership
Use credible and conservative estimates
Share credit for successes and blame for failures
HRD Evaluation Steps
1. Analyze needs.
2. Determine explicit evaluation strategy.
3. Insist on specific and measurable training objectives.
4. Obtain participant reactions.
5. Develop criterion measures/instruments to measure results.
6. Plan and execute evaluation strategy.
Summary
Training results must be measured against costs
Training must contribute to the “bottom line”
HRD must justify itself repeatedly as a revenue enhancer, not a revenue waster