Part 5
Staffing Activities: Employment
Chapter 11: Decision Making
Chapter 12: Final Match
McGraw-Hill/Irwin
Copyright © 2012 by The McGraw-Hill Companies, Inc. All rights reserved.
Staffing Organizations Model

- Organization: mission, goals and objectives
- Organization strategy and HR/staffing strategy
- Staffing policies and programs
  - Support activities: legal compliance, planning, job analysis
  - Core staffing activities
    - Recruitment: external, internal
    - Selection: measurement, external, internal
    - Employment: decision making, final match
- Staffing system and retention management
Chapter Outline

- Choice of Assessment Method
  - Validity Coefficient
  - Face Validity
  - Correlation with Other Predictors
  - Adverse Impact
  - Utility
- Determining Assessment Scores
  - Single Predictor
  - Multiple Predictors
- Hiring Standards and Cut Scores
  - Description of Process
  - Consequences of Cut Scores
  - Methods to Determine Cut Scores
  - Professional Guidelines
- Methods of Final Choice
  - Random Selection
  - Ranking
  - Grouping
  - Ongoing Hiring
- Decision Makers
  - HR Professionals
  - Managers
  - Employees
- Legal Issues
  - Uniform Guidelines on Employee Selection Procedures
  - Diversity and Hiring Decisions
Learning Objectives for This Chapter

- Be able to interpret validity coefficients
- Estimate adverse impact and utility of selection systems
- Learn about methods for combining multiple predictors
- Establish hiring standards and cut scores
- Evaluate various methods of making a final selection choice
- Understand the roles of various decision makers in the staffing process
- Recognize the importance of diversity concerns in the staffing process
Discussion Questions for This Chapter

- Your boss is considering using a new predictor. The base rate is high, the selection ratio is low, and the validity coefficient is high for the current predictor. What would you advise your boss, and why?
- What are the positive consequences associated with a high predictor cut score? What are the negative consequences?
- Under what circumstances should a compensatory model be used? When should a multiple hurdles model be used?
- What are the advantages of ranking as a method of final choice over random selection?
- What roles should HR professionals play in staffing decisions? Why?
- What guidelines do the UGESP offer to organizations when it comes to setting cut scores?
Choice of Assessment Method

- Validity coefficient
- Face validity
- Correlation with other predictors
- Adverse impact
- Utility
Validity Coefficient

- Practical significance: the extent to which the predictor adds value to the prediction of job success
  - Assessed by examining the sign and magnitude of the coefficient
    - Validities above .15 are of moderate usefulness
    - Validities above .30 are of high usefulness
- Statistical significance
  - Assessed by probability (p) values
  - A reasonable level of significance is p < .05

Face Validity
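The rules of thumb above can be sketched in code. This is a minimal illustration, not part of the text: the `pearson_r` helper and the applicant data are hypothetical, and the usefulness thresholds are the chapter's .15/.30 guidelines.

```python
# Hypothetical sketch: compute a validity coefficient (Pearson r between
# predictor scores and a job-success criterion) and classify its
# practical significance using the chapter's rules of thumb.
import math

def pearson_r(x, y):
    """Plain-Python Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def usefulness(r):
    """Chapter's rules of thumb: > .15 moderate, > .30 high."""
    if abs(r) > .30:
        return "high"
    if abs(r) > .15:
        return "moderate"
    return "low"

predictor = [3, 5, 2, 4, 5, 1, 4, 2]                  # e.g., interview ratings
criterion = [2.8, 4.1, 2.5, 3.9, 4.4, 2.0, 3.5, 3.0]  # job performance

r = pearson_r(predictor, criterion)
print(round(r, 2), usefulness(r))
```

Statistical significance would be assessed separately, e.g., with the p value that `scipy.stats.pearsonr` reports.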
Correlation With Other Predictors

- To add value, a predictor must add to the prediction of success above and beyond the forecasting powers of current predictors
- A predictor is more useful the
  - smaller its correlation with other predictors, and
  - higher its correlation with the criterion
- Predictors are likely to be highly correlated with one another when their content domains are similar
Adverse Impact

- Role of a predictor: to discriminate between people in terms of the likelihood of their job success
- When a predictor discriminates by screening out a disproportionate number of minorities and women, adverse impact exists, which may result in legal problems
- Issues
  - What if one predictor has high validity and high adverse impact?
  - What if another predictor has low validity and low adverse impact?
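A common screen for adverse impact is the four-fifths rule from the UGESP (discussed later in the chapter): a group's selection rate below 4/5 of the highest group's rate is taken as evidence of adverse impact. A minimal sketch, with hypothetical group labels and counts:

```python
# Hypothetical sketch of the UGESP "four-fifths rule" screen for
# adverse impact. Group names and applicant counts are made up.

def selection_rate(hired, applicants):
    return hired / applicants

def adverse_impact(rates):
    """Flag groups whose selection rate is below 4/5 of the highest rate."""
    top = max(rates.values())
    return {group: rate / top < 0.8 for group, rate in rates.items()}

rates = {
    "group_a": selection_rate(48, 80),  # 0.60
    "group_b": selection_rate(12, 30),  # 0.40
}
# group_b's ratio is 0.40 / 0.60, below 4/5, so it is flagged
print(adverse_impact(rates))
```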
Utility Analysis

- Taylor-Russell tables
  - Focus on the proportion of new hires who turn out to be successful
  - Require information on:
    - Selection ratio: number hired / number of applicants
    - Base rate: proportion of current employees who are successful
    - Validity coefficients of the current and "new" predictors
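The three Taylor-Russell inputs are simple ratios; a quick sketch with hypothetical counts (the table itself, not reproduced here, then gives the expected proportion of successful new hires):

```python
# Hypothetical sketch: the inputs needed for a Taylor-Russell table
# lookup. All counts and validities below are made-up examples.

def selection_ratio(n_hired, n_applicants):
    """Number hired divided by number of applicants."""
    return n_hired / n_applicants

def base_rate(n_successful, n_employees):
    """Proportion of current employees who are successful."""
    return n_successful / n_employees

sr = selection_ratio(10, 50)   # 0.20
br = base_rate(60, 100)        # 0.60
validity_current, validity_new = .20, .40
print(sr, br, validity_current, validity_new)
```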
Utility Analysis (continued)

- Economic gain formula
  - Focuses on the monetary impact of using a predictor
  - Requires a wide range of information on current employees, validity, number of applicants, cost of testing, etc.
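One widely used version of an economic gain formula is the Brogden-Cronbach-Gleser utility model: dollar payoff equals (number hired x average tenure x validity x dollar standard deviation of job performance x average standardized predictor score of those hired) minus total testing cost. A sketch with hypothetical figures:

```python
# Sketch of the Brogden-Cronbach-Gleser economic gain model.
# Every number below is a made-up illustration, not data from the text.

def economic_gain(n_hired, tenure_yrs, validity, sd_y, mean_z_hired,
                  cost_per_applicant, n_applicants):
    """Dollar benefit of the predictor minus total testing cost."""
    benefit = n_hired * tenure_yrs * validity * sd_y * mean_z_hired
    cost = cost_per_applicant * n_applicants
    return benefit - cost

gain = economic_gain(n_hired=10, tenure_yrs=2, validity=.30,
                     sd_y=8000, mean_z_hired=1.0,
                     cost_per_applicant=25, n_applicants=200)
print(gain)  # benefit of 48,000 minus 5,000 in testing costs
```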
Limitations of Utility Analysis

- While most companies use multiple selection measures, utility models assume the decision is whether to use a single selection measure rather than select applicants by chance alone
- Important variables are missing from the model
  - EEO/AA concerns
  - Applicant reactions
- Utility formulas rest on simplistic assumptions
  - Validity does not vary over time
  - Non-performance criteria are irrelevant
  - Applicants are selected in a top-down manner, and all job offers are accepted
Discussion Questions

- Your boss is considering using a new predictor. The base rate is high, the selection ratio is low, and the validity coefficient is high for the current predictor. What would you advise your boss, and why?
Determining Assessment Scores

- Single predictor
- Multiple predictors
  - Three models: compensatory, multiple hurdles, and combined
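The two models named in the discussion questions can be contrasted in a short sketch: a compensatory model combines scores into a weighted sum, so a high score can offset a low one, while a multiple hurdles model requires passing every predictor's cut score. Weights, cut scores, and applicant scores below are hypothetical.

```python
# Hypothetical sketch contrasting two models for combining multiple
# predictors. All scores, weights, and cut scores are made up.

def compensatory(scores, weights):
    """Weighted sum of predictor scores; high scores offset low ones."""
    return sum(weights[p] * s for p, s in scores.items())

def multiple_hurdles(scores, cuts):
    """Applicant advances only by passing every predictor's cut score."""
    return all(scores[p] >= cuts[p] for p in cuts)

scores = {"test": 80, "interview": 55, "work_sample": 90}
weights = {"test": .5, "interview": .2, "work_sample": .3}
cuts = {"test": 60, "interview": 60, "work_sample": 60}

print(compensatory(scores, weights))   # 40 + 11 + 27 = 78
print(multiple_hurdles(scores, cuts))  # False: interview (55) misses its cut
```

Note how the same applicant passes under the compensatory model (a strong work sample offsets a weak interview) but is rejected under multiple hurdles.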
Relevant Factors: Selecting the Best Weighting Scheme

- Do decision makers have considerable experience and insight into selection decisions?
- Is managerial acceptance of the selection process important?
- Is there reason to believe each predictor contributes relatively equally to job success?
- Are there adequate resources to use involved weighting schemes?
- Are the conditions under which multiple regression is superior satisfied?
Ex. 11.4: Combined Model for Recruitment Manager
Hiring Standards and Cut Scores

- Issue: What is a passing score?
- The score may be a
  - single score from a single predictor, or
  - total score from multiple predictors
- Description of process
  - Cut score: separates applicants who advance from those who are rejected
Ex. 11.5: Consequences of Cut Scores
Hiring Standards and Cut Scores (continued)

- Methods to determine cut scores
  - Minimum competency
  - Top-down
  - Banding
- Professional guidelines
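Two of the methods above can be sketched directly: minimum competency fixes a passing score judged to represent acceptable proficiency, while top-down lets the number of openings set the cut by taking the highest scorers. The applicant scores and cut are hypothetical.

```python
# Hypothetical sketch of two cut-score methods from the chapter.
# Applicant scores, the cut score, and the opening count are made up.

def minimum_competency(scores, cut):
    """Everyone at or above a fixed passing score advances."""
    return [s for s in scores if s >= cut]

def top_down(scores, n_openings):
    """Rank scores and take the top n; openings determine the cut."""
    return sorted(scores, reverse=True)[:n_openings]

applicants = [72, 88, 64, 91, 55, 70]
print(minimum_competency(applicants, cut=70))  # [72, 88, 91, 70]
print(top_down(applicants, n_openings=2))      # [91, 88]
```

Banding, the third method, would instead treat scores within a band (e.g., 88 to 91) as equivalent before choosing among them.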
Ex. 11.6: Use of Cut Scores in Selection Decisions
Discussion Questions

- What are the positive consequences associated with a high predictor cut score? What are the negative consequences?
- Under what circumstances should a compensatory model be used? When should a multiple hurdles model be used?
Methods of Final Choice

- Random selection: each finalist has an equal chance of being selected
- Ranking: finalists are ordered from most to least desirable based on results of discretionary assessments
- Grouping: finalists are banded together into rank-ordered categories
- Ongoing hiring: all acceptable candidates are hired as they become available for open positions
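Three of these methods lend themselves to a short sketch; the finalist names and discretionary assessment scores are hypothetical. (Ongoing hiring has no scoring logic: each acceptable candidate is simply hired as an opening appears.)

```python
# Hypothetical sketch of final-choice methods. Names and scores are
# made up; "scores" are results of discretionary assessments.
import random

finalists = {"Ana": 92, "Ben": 85, "Cruz": 85, "Dee": 78}

def random_selection(names, rng):
    """Every finalist has an equal chance of being chosen."""
    return rng.choice(sorted(names))

def ranking(scores):
    """Order finalists from most to least desirable."""
    return sorted(scores, key=scores.get, reverse=True)

def grouping(scores):
    """Band finalists into rank-ordered categories by score."""
    bands = {}
    for name, s in scores.items():
        bands.setdefault(s, []).append(name)
    return [bands[s] for s in sorted(bands, reverse=True)]

print(ranking(finalists))   # ['Ana', 'Ben', 'Cruz', 'Dee']
print(grouping(finalists))  # [['Ana'], ['Ben', 'Cruz'], ['Dee']]
print(random_selection(list(finalists), random.Random(0)))
```

Grouping shows why ties matter: Ben and Cruz land in the same band, so some further (often discretionary) basis is needed to choose between them.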
Ex. 11.8: Methods of Final Choice
Decision Makers

- Role of HR professionals
  - Determine the process used to design and manage the selection system
  - Contribute to outcomes based on initial assessment methods
  - Provide input regarding who receives job offers
- Role of managers
  - Determine who is selected for employment
  - Provide input regarding process issues
- Role of employees
  - Provide input regarding selection procedures and who gets hired, especially in team approaches
Discussion Questions

- What are the advantages of ranking as a method of final choice over random selection?
- What roles should HR professionals play in staffing decisions? Why?
Legal Issues

- Legal issues of importance in decision making
  - Cut scores or hiring standards
    - Uniform Guidelines on Employee Selection Procedures (UGESP)
      - If there is no adverse impact, the guidelines are silent on cut scores
      - If adverse impact occurs, the guidelines become applicable
  - Choices among finalists
Discussion Questions

- What guidelines do the UGESP offer to organizations when it comes to setting cut scores?
Ethical Issues

- Issue 1: Do you think companies should use banding in selection decisions? Defend your position.
- Issue 2: Is clinical prediction the fairest way to combine assessment information about job applicants, or are the other methods (unit weighting, rational weighting, multiple regression) more fair? Why?