Evaluating HRD Programs

Effectiveness
• The degree to which a training (or
other HRD program) achieves its
intended purpose
• Measures are relative to some starting
point
• Measures how well the desired goal is
achieved
HRD Evaluation
Textbook definition:
“The systematic collection of descriptive
and judgmental information necessary to
make effective training decisions related
to the selection, adoption, value, and
modification of various instructional
activities.”
In Other Words…
Are we training:
• the right people
• the right “stuff”
• the right way
• with the right materials
• at the right time?
Evaluation Needs
• Descriptive and judgmental information
needed
– Objective and subjective data
• Information gathered according to a
plan and in a desired format
• Gathered to provide decision making
information
Purposes of Evaluation
• Determine whether the program is meeting
the intended objectives
• Identify strengths and weaknesses
• Determine cost-benefit ratio
• Identify who benefited most or least
• Determine future participants
• Provide information for improving HRD
programs
Purposes of Evaluation – 2
• Reinforce major points to be made
• Gather marketing information
• Determine if training program is
appropriate
• Establish management database
Evaluation Bottom Line
• Is HRD a revenue contributor or a revenue
user?
• Is HRD credible to line and upper-level
managers?
• Are benefits of HRD readily evident to all?
How Often are HRD Evaluations
Conducted?
• Not often enough!!!
• Frequently, only end-of-course participant
reactions are collected
• Transfer to the workplace is evaluated less
frequently
Why HRD Evaluations are Rare
• Reluctance to have HRD programs evaluated
• Evaluation needs expertise and resources
• Factors other than HRD also drive performance
improvements, making training’s effect hard to isolate – e.g.,
– Economy
– Equipment
– Policies, etc.
Need for HRD Evaluation
• Shows the value of HRD
• Provides metrics for HRD efficiency
• Demonstrates value-added approach for
HRD
• Demonstrates accountability for HRD
activities
• Everyone else has it… why not HRD?
Make or Buy Evaluation
• “I bought it, therefore it is good.”
• “Since it’s good, I don’t need to post-test.”
• Who says it’s:
– Appropriate?
– Effective?
– Timely?
– Transferable to the workplace?
Evolution of Evaluation Efforts
1. Anecdotal approach – talk to other
users
2. Try before buy – borrow and use
samples
3. Analytical approach – match research
data to training needs
4. Holistic approach – look at overall HRD
process, as well as individual training
Models and Frameworks of
Evaluation
• Table 7-1 lists six frameworks for
evaluation
• The most popular is that of D. Kirkpatrick:
– Reaction
– Learning
– Job Behavior
– Results
Kirkpatrick’s Four Levels
• Reaction
– Focus on trainee’s reactions
• Learning
– Did they learn what they were supposed to?
• Job Behavior
– Was it used on job?
• Results
– Did it improve the organization’s effectiveness?
Issues Concerning Kirkpatrick’s
Framework
• Most organizations don’t evaluate at all
four levels
• Focuses only on post-training
• Doesn’t treat inter-stage improvements
• WHAT ARE YOUR THOUGHTS?
A Suggested Framework – 1
• Reaction
– Did trainees like the training?
– Did the training seem useful?
• Learning
– How much did they learn?
• Behavior
– What behavior change occurred?
Suggested Framework – 2
• Results
– What were the tangible outcomes?
– What was the return on investment (ROI)?
– What was the contribution to the
organization?
Data Collection for HRD
Evaluation
Possible methods:
• Interviews
• Questionnaires
• Direct observation
• Written tests
• Simulation/Performance tests
• Archival performance information
Interviews
Advantages:
• Flexible
• Opportunity for
clarification
• Depth possible
• Personal contact
Limitations:
• High reactive effects
• High cost
• Face-to-face threat
potential
• Labor intensive
• Trained interviewers needed
Questionnaires
Advantages:
• Low cost to
administer
• Honesty increased
• Anonymity possible
• Respondent sets the
pace
• Variety of options
Limitations:
• Possible inaccurate
data
• Response conditions
not controlled
• Respondents set
varying paces
• Uncontrolled return
rate
Direct Observation
Advantages:
• Nonthreatening
• Excellent way to
measure behavior
change
Limitations:
• Possibly disruptive
• Reactive effects are
possible
• May be unreliable
• Need trained
observers
Written Tests
Advantages:
• Low purchase cost
• Readily scored
• Quickly processed
• Easily administered
• Wide sampling
possible
Limitations:
• May be threatening
• Possibly no relation to
job performance
• Measures only
cognitive learning
• Relies on norms
• Concern for racial/
ethnic bias
Simulation/Performance Tests
Advantages:
• Reliable
• Objective
• Close relation to job
performance
• Includes cognitive,
psychomotor and
affective domains
Limitations:
• Time consuming
• Simulations often
difficult to create
• High cost to develop and use
Archival Performance Data
Advantages:
• Reliable
• Objective
• Job-based
• Easy to review
• Minimal reactive
effects
Limitations:
• Criteria for keeping/
discarding records
• Information system
discrepancies
• Indirect
• Not always usable
• Records prepared for
other purposes
Choosing Data Collection
Methods
• Reliability
– Consistency of results, and freedom from
collection method bias and error
• Validity
– Does the device measure what we want to
measure?
• Practicality
– Does it make sense in terms of the resources
used to get the data?
Type of Data Used/Needed
• Individual performance
• Systemwide performance
• Economic
Individual Performance Data
• Individual knowledge
• Individual behaviors
• Examples:
– Test scores
– Performance quantity, quality, and timeliness
– Attendance records
– Attitudes
Systemwide Performance Data
• Productivity
• Scrap/rework rates
• Customer satisfaction levels
• On-time performance levels
• Quality rates and improvement rates
Economic Data
• Profits
• Product liability claims
• Avoidance of penalties
• Market share
• Competitive position
• Return on investment (ROI)
• Financial utility calculations
Use of Self-Report Data
• Most common method
• Pre-training and post-training data
• Problems:
– Mono-method bias
• Desire to be consistent between tests
– Socially desirable responses
– Response shift bias:
• Trainees’ internal standards shift after training, distorting pre/post self-ratings
Research Design
Specifies in advance:
• the expected results of the study
• the methods of data collection to be used
• how the data will be analyzed
Research Design Issues
• Pretest and Posttest
– Shows trainee what training has
accomplished
– Helps eliminate pretest knowledge bias
• Control Group
– Compares performance of group with training
against the performance of a similar group
without training
Recommended Research
Design
• Pretest and posttest with control group
• Whenever possible:
– Randomly assign individuals to the test group
and the control group to minimize bias
– Use “time-series” approach to data collection
to verify performance improvement is due to
training
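The arithmetic behind this design can be sketched in a few lines. The scores below are purely hypothetical; the point is that subtracting the control group’s gain from the trained group’s gain removes improvement that would have occurred without training.

```python
# Minimal sketch: pretest/posttest with control group.
# All scores below are hypothetical and purely illustrative.

trained_pre  = [62, 58, 70, 65, 60]   # test group before training
trained_post = [78, 75, 85, 80, 77]   # test group after training
control_pre  = [61, 59, 68, 66, 63]   # control group before
control_post = [64, 60, 71, 67, 65]   # control group after (no training)

def mean(xs):
    return sum(xs) / len(xs)

# Gain for each group, then the difference between gains.
# Subtracting the control group's gain removes improvement that
# would have happened anyway (economy, equipment, policies, etc.).
trained_gain = mean(trained_post) - mean(trained_pre)
control_gain = mean(control_post) - mean(control_pre)
training_effect = trained_gain - control_gain

print(f"Trained-group gain: {trained_gain:.1f}")
print(f"Control-group gain: {control_gain:.1f}")
print(f"Estimated effect attributable to training: {training_effect:.1f}")
```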
Ethical Issues Concerning
Evaluation Research
• Confidentiality
• Informed consent
• Withholding training from control groups
• Use of deception
• Pressure to produce positive results
Assessing the Impact of HRD
• Money is the language of business.
• You MUST talk dollars, not HRD jargon.
• No one (except maybe you) cares about
“the effectiveness of training
interventions as measured by an
analysis of formal pretest, posttest
control group data.”
HRD Program Assessment
• HRD programs and training are investments
• Line managers often see HR and HRD as
costs – i.e., revenue users, not revenue
producers
• You must prove your worth to the organization
– Or you’ll have to find another organization…
Evaluation of Training Costs
• Cost-benefit analysis
– Compares the cost of training to benefits gained,
such as improved attitudes, reduction in accidents,
reduction in employee sick days, etc.
• Cost-effectiveness analysis
– Focuses on increases in quality, reduction in
scrap/rework, productivity, etc.
Return on Investment
• Return on investment = Results/Costs
Calculating Training Return On
Investment
Operational Results Area | How Measured | Results Before Training | Results After Training | Differences (+ or –) | Expressed in $
---|---|---|---|---|---
Quality of panels | % rejected | 2% rejected (1,440 panels per day) | 1.5% rejected (1,080 panels per day) | 0.5% (360 panels) | $720 per day; $172,800 per year
Housekeeping | Visual inspection using 20-item checklist | 10 defects (average) | 2 defects (average) | 8 defects | Not measurable in $
Preventable accidents | Number of accidents; direct cost of each accident | 24 per year; $144,000 per year | 16 per year; $96,000 per year | 8 per year; $48,000 | $48,000 per year

Total savings: $220,800.00

ROI = Return / Investment = Operational Results / Training Costs = $220,800 / $32,564 = 6.8

SOURCE: From D. G. Robinson & J. Robinson (1989). Training for impact. Training and Development Journal, 43(8), 41. Printed by permission.
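A short sketch of the ROI arithmetic behind the table, using the Robinson & Robinson figures. The 240 working days per year is an assumption inferred from the $720 per day and $172,800 per year quality savings; everything else comes from the table.

```python
# Sketch of the ROI calculation behind the table above, using the
# Robinson & Robinson figures (quality and accident savings only;
# housekeeping improvements were not measurable in dollars).

quality_savings_per_day = 720        # $ saved per day from fewer rejected panels
working_days_per_year = 240          # assumption implied by $720/day -> $172,800/year
quality_savings = quality_savings_per_day * working_days_per_year  # $172,800
accident_savings = 144_000 - 96_000                                # $48,000

operational_results = quality_savings + accident_savings  # $220,800 total savings
training_costs = 32_564                                    # total cost of the program

roi = operational_results / training_costs
print(f"ROI = {operational_results:,} / {training_costs:,} = {roi:.1f}")  # about 6.8
```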
Types of Training Costs
• Direct costs
• Indirect costs
• Development costs
• Overhead costs
• Compensation for participants
Direct Costs
• Instructor
– Base pay
– Fringe benefits
– Travel and per diem
• Materials
• Classroom and audiovisual equipment
• Travel
• Food and refreshments
Indirect Costs
• Training management
• Clerical/Administrative
• Postal/shipping, telephone, computers,
etc.
• Pre- and post-learning materials
• Other overhead costs
Development Costs
• Fee to purchase program
• Costs to tailor program to organization
• Instructor training costs
Overhead Costs
• General organization support
• Top management participation
• Utilities, facilities
• General and administrative costs, such as HRM
Compensation for Participants
• Participants’ salary and benefits for time
away from job
• Travel, lodging, and per-diem costs
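To arrive at a total training cost (the denominator of the ROI calculation), the five cost categories above are simply summed. A minimal sketch follows; the categories mirror the slides, but every dollar amount is hypothetical.

```python
# Illustrative sketch only: the cost categories mirror the slides above,
# but every dollar amount is hypothetical.

training_costs = {
    "direct": {                      # instructor pay/benefits, travel, materials,
        "instructor": 6_000,         # classroom/AV, food and refreshments
        "materials": 2_500,
        "facilities_av": 1_500,
        "travel_food": 2_000,
    },
    "indirect": 3_000,               # training management, clerical, shipping, etc.
    "development": 5_000,            # program purchase, tailoring, instructor training
    "overhead": 2_500,               # general support, utilities, G&A allocation
    "participant_compensation": 10_000,  # salary/benefits for time away, travel, per diem
}

def total_cost(costs):
    """Sum a nested dict of cost categories into one total."""
    total = 0
    for value in costs.values():
        total += total_cost(value) if isinstance(value, dict) else value
    return total

print(f"Total training cost: ${total_cost(training_costs):,}")  # $32,500 in this example
```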
Measuring Benefits
– Change in quality per unit measured in dollars
– Reduction in scrap/rework measured in dollar
cost of labor and materials
– Reduction in preventable accidents measured
in dollars
– ROI = Benefits/Training costs
Utility Analysis
• Uses a statistical approach to support
claims of training effectiveness:
– N = Number of trainees
– T = Length of time benefits are expected to last
– dt = True performance difference resulting from
training
– SDy = Dollar value of untrained job performance (in
standard deviation units)
– C = Cost of training
• U = (N)(T)(dt)(SDy) – C
Critical Information for Utility
Analysis
• dt = difference in performance (in units) between trained
and untrained employees, divided by the standard
deviation of units produced by the trained group
• SDy = standard deviation of job performance expressed in
dollars, typically estimated from the overall productivity of
the organization
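A minimal sketch of the utility calculation with hypothetical inputs; in practice N, T, and C come from program records, while dt and SDy must be estimated as described above.

```python
# Sketch of the utility formula U = (N)(T)(dt)(SDy) - C.
# All inputs are hypothetical; in practice dt comes from comparing trained
# and untrained performance, and SDy is estimated in dollars.

N = 50           # number of trainees
T = 2.0          # years the benefit is expected to last
dt = 0.6         # true performance difference in standard deviation units
SDy = 10_000     # dollar value of one SD of job performance
C = 150_000      # total cost of training all 50 trainees

U = N * T * dt * SDy - C
print(f"Estimated utility: ${U:,.0f}")   # $450,000 in this example
```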
Ways to Improve HRD
Assessment
• Walk the walk, talk the talk: MONEY
• Involve HRD in strategic planning
• Involve management in HRD planning and
estimation efforts
– Gain mutual ownership
• Use credible and conservative estimates
• Share credit for successes and blame for
failures
HRD Evaluation Steps
1. Analyze needs.
2. Determine explicit evaluation strategy.
3. Insist on specific and measurable training
objectives.
4. Obtain participant reactions.
5. Develop criterion measures/instruments to
measure results.
6. Plan and execute evaluation strategy.
Summary
• Training results must be measured
against costs
• Training must contribute to the “bottom
line”
• HRD must justify itself repeatedly as a
revenue enhancer, not a revenue
waster