Chapter Six - Seminole State College of Florida

© 2010 Cengage Learning

Characteristics of Effective Selection Techniques


Optimal Employee Selection Systems

• Are Reliable

• Are Valid

– Based on a job analysis (content validity)

– Predict work-related behavior (criterion validity)

• Reduce the Chance of a Legal Challenge

– Face valid

– Don’t invade privacy

– Don’t intentionally discriminate

– Minimize adverse impact

• Are Cost Effective

– Cost to purchase/create

– Cost to administer

– Cost to score


Reliability

• The extent to which a score from a test is consistent and free from errors of measurement

Methods of Determining Reliability

– Test-retest (temporal stability)

– Alternate forms (form stability)

– Internal reliability (item stability)

– Scorer reliability


Test-Retest Reliability

• Measures temporal stability

• Administration

– Same applicants

– Same test

– Two testing periods

• Scores at time one are correlated with scores at time two

• Correlation should be above .70
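The correlation itself is straightforward to compute. A minimal sketch with hypothetical time-1 and time-2 scores (requires Python 3.10+ for statistics.correlation):

```python
# Test-retest reliability as the Pearson correlation between two administrations.
# The score lists below are hypothetical.
import statistics

time1 = [12, 15, 9, 20, 17, 14]
time2 = [13, 14, 10, 19, 18, 13]

r = statistics.correlation(time1, time2)  # Pearson r (Python 3.10+)
print(f"test-retest reliability = {r:.2f}")  # acceptable if above .70
```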


Test-Retest Reliability

Problems

• Sources of measurement errors

– Characteristic or attribute being measured may change over time

– Reactivity

– Carryover effects

• Practical problems

– Time consuming

– Expensive

– Inappropriate for some types of tests


Alternate Forms Reliability

Administration

• Two forms of the same test are developed, and to the highest degree possible, are equivalent in terms of content, response process, and statistical characteristics

• One form is administered to examinees, and at some later date, the same examinees take the second form


Alternate Forms Reliability

Scoring

• Scores from the first form of test are correlated with scores from the second form

• If the scores are highly correlated, the test has form stability


Alternate Forms Reliability

Disadvantages

• Difficult to develop

• Content sampling errors

• Time sampling errors


Internal Reliability

• Defines measurement error strictly in terms of consistency or inconsistency in the content of the test.

• Used when it is impractical to administer two separate forms of a test.

• With this form of reliability, the test is administered only once; it measures item stability.


Determining Internal Reliability

• Split-Half method (most common)

– Test items are divided into two equal parts

– Scores for the two parts are correlated to get a measure of internal reliability.

• Spearman-Brown prophecy formula:

corrected reliability = (2 x split-half reliability) ÷ (1 + split-half reliability)


Spearman-Brown Formula

(2 x split-half correlation) ÷ (1 + split-half correlation)

If we have a split-half correlation of .60, the corrected reliability would be:

(2 x .60) ÷ (1 + .60) = 1.2 ÷ 1.6 = .75
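A sketch of the same correction in code, reproducing the slide's example:

```python
# Spearman-Brown prophecy formula: corrects a split-half correlation upward to
# estimate the reliability of the full-length test.
def spearman_brown(split_half_r: float) -> float:
    return (2 * split_half_r) / (1 + split_half_r)

print(spearman_brown(0.60))  # 0.75, as in the slide example
```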


Common Methods for Estimating Internal Consistency

• Cronbach’s Coefficient Alpha

– Used with ratio or interval data.

• Kuder-Richardson Formula

– Used for tests with dichotomous items (yes–no, true–false)
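A minimal sketch of Cronbach's alpha on a hypothetical item-response matrix; with 0/1 items, the same computation serves as the Kuder-Richardson (KR-20) estimate:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# Rows are examinees, columns are items; the data below are hypothetical.
import statistics

def cronbach_alpha(scores):
    k = len(scores[0])  # number of items
    item_vars = [statistics.pvariance(col) for col in zip(*scores)]
    total_var = statistics.pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
]
print(round(cronbach_alpha(responses), 2))  # 0.67 for this hypothetical data
```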


Interrater Reliability

• Used when human judgment of performance is involved in the selection process

• Refers to the degree of agreement between two or more raters
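Agreement can be summarized with Cohen's kappa, which corrects raw agreement for chance. A sketch with hypothetical pass/fail ratings from two raters:

```python
# Cohen's kappa = (observed agreement - chance agreement) / (1 - chance agreement).
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in c1) / n**2  # chance agreement
    return (observed - expected) / (1 - expected)

r1 = ["pass", "pass", "fail", "pass", "fail"]
r2 = ["pass", "fail", "fail", "pass", "fail"]
print(round(cohens_kappa(r1, r2), 2))  # 0.62 for these hypothetical ratings
```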


Rate the Waiter’s Performance

(Office Space – DVD segment 3)


Reliability: Conclusions

• The higher the reliability of a selection test, the better; reliability should be .70 or higher

• Reliability can be affected by many factors

If a selection test is not reliable, it is useless as a tool for selecting individuals


Validity

• Definition

The degree to which inferences from scores on tests or assessments are justified by the evidence

• Common Ways to Measure

– Content Validity

– Criterion Validity

– Construct Validity


Content Validity

• The extent to which test items sample the content that they are supposed to measure

• In industry, the appropriate content of a test or test battery is determined by a job analysis


Criterion Validity

• Criterion validity refers to the extent to which a test score is related to some measure of job performance called a criterion

Established using one of the following research designs:

– Concurrent Validity

– Predictive Validity

– Validity Generalization


Concurrent Validity

• Uses current employees

• Range restriction can be a problem


Predictive Validity

• Correlates test scores with future behavior

• Reduces the problem of range restriction

• May not be practical


Validity Generalization

• Validity Generalization is the extent to which a test found valid for a job in one location is valid for the same job in a different location

• The keys to establishing validity generalization are meta-analysis and job analysis


Typical Corrected Validity Coefficients for Selection Techniques

Method                 Validity     Method                        Validity
Structured interview     .57        College grades                  .32
Cognitive ability        .51        References                      .29
Job knowledge            .48        Experience                      .27
Work samples             .48        Conscientiousness               .24
Assessment centers       .38        Unstructured interviews         .20
Biodata                  .34        Interest inventories            .10
Integrity tests          .34        Handwriting analysis            .09
Situational judgment     .34        Projective personality tests    .00


Construct Validity

• The extent to which a test actually measures the construct that it purports to measure

• Is concerned with inferences about test scores

• Determined by correlating scores on a test with scores from other tests


Face Validity

• The extent to which a test appears to be job related

• Reduces the chance of legal challenge

• Increasing face validity


Locating Test Information

Exercise 6.1


Utility

The degree to which a selection device improves the quality of a personnel system, above and beyond what would have occurred had the instrument not been used.


Selection Works Best When...

• You have many job openings

• You have many more applicants than openings

• You have a valid test

• The job in question has a high salary

• The job is neither easily performed nor easily trained for


Common Utility Methods

• Taylor-Russell tables

• Proportion of correct decisions

• The Brogden-Cronbach-Gleser model


Utility Analysis

Taylor-Russell Tables

• Estimates the percentage of future employees who will be successful

• Three components

– Validity

– Base rate (successful employees ÷ total employees)

– Selection ratio (hired ÷ applicants)


Taylor-Russell Example

• Suppose we have

– a test validity of .40

– a selection ratio of .30

– a base rate of .50

• Using the Taylor-Russell tables, what percentage of future employees would be successful?
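The published Taylor-Russell values assume that test scores and job performance follow a bivariate normal distribution whose correlation is the test's validity, so they can be approximated directly. A sketch (assumes scipy is installed):

```python
# Approximate a Taylor-Russell value: P(successful | hired) under a bivariate
# normal model of test scores (X) and performance (Y) with correlation = validity.
from scipy.stats import multivariate_normal, norm

def taylor_russell(validity, selection_ratio, base_rate):
    x_cut = norm.ppf(1 - selection_ratio)  # z cutoff that passes the top SR
    y_cut = norm.ppf(1 - base_rate)        # z cutoff separating successful workers
    # P(X > x_cut, Y > y_cut) equals the CDF at (-x_cut, -y_cut), because
    # (-X, -Y) has the same correlation.
    joint = multivariate_normal(
        mean=[0, 0], cov=[[1, validity], [validity, 1]]
    ).cdf([-x_cut, -y_cut])
    return joint / selection_ratio

print(f"{taylor_russell(0.40, 0.30, 0.50):.2f}")  # approximately .70 for this example
```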


[Taylor-Russell table, 50% base rate: rows are validity coefficients (.00 to .90), columns are selection ratios (.05 to .95), and each cell gives the expected proportion of successful future employees. For the preceding example (validity .40, selection ratio .30), the tabled value is about .70.]

Proportion of Correct Decisions

• Proportion of correct decisions with test

(correct rejections + correct acceptances) ÷ total employees

= (Quadrant II + Quadrant IV) ÷ (Quadrants I + II + III + IV)

• Baseline of correct decisions

successful employees ÷ total employees

= (Quadrants I + II) ÷ (Quadrants I + II + III + IV)

[Scatterplot: employees plotted by test score (x-axis, 1–10) and criterion score (y-axis, 1–10); the test cutoff and criterion cutoff divide them into Quadrants I–IV.]

Proportion of Correct Decisions

• Proportion of correct decisions with test

(Quadrant II + Quadrant IV) ÷ (Quadrants I + II + III + IV)

= (10 + 11) ÷ (5 + 10 + 4 + 11) = 21 ÷ 30 = .70

• Baseline of correct decisions

(Quadrants I + II) ÷ (Quadrants I + II + III + IV)

= (5 + 10) ÷ (5 + 10 + 4 + 11) = 15 ÷ 30 = .50
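A sketch of the same quadrant analysis on a list of (test score, criterion score) pairs; the cutoffs and data would come from the scatterplot:

```python
# Classify each employee into Quadrants I-IV, then compute the proportion of
# correct decisions with the test and the baseline proportion.
def correct_decision_rates(employees, test_cutoff, criterion_cutoff):
    """employees: iterable of (test_score, criterion_score) pairs."""
    q1 = q2 = q3 = q4 = 0
    for test, criterion in employees:
        successful = criterion >= criterion_cutoff
        passed = test >= test_cutoff
        if successful and passed:
            q2 += 1  # correct acceptance
        elif successful:
            q1 += 1  # erroneous rejection
        elif passed:
            q3 += 1  # erroneous acceptance
        else:
            q4 += 1  # correct rejection
    total = q1 + q2 + q3 + q4
    return (q2 + q4) / total, (q1 + q2) / total  # (with test, baseline)
```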


Computing the Proportion of Correct Decisions

Exercise 6.3


[Scatterplot for Exercise 6.3: employees plotted by test score (x-axis, 1–9) and criterion score; cutoffs divide them into Quadrants I–IV.]

Answer to Exercise 6.3

• Proportion of correct decisions with test

(Quadrant II + Quadrant IV) ÷ (Quadrants I + II + III + IV)

= (8 + 6) ÷ (4 + 8 + 6 + 2) = 14 ÷ 20 = .70

• Baseline of correct decisions

(Quadrants I + II) ÷ (Quadrants I + II + III + IV)

= (4 + 8) ÷ (4 + 8 + 6 + 2) = 12 ÷ 20 = .60


Brogden-Cronbach-Gleser Utility Formula

• Gives an estimate of utility by estimating the amount of money an organization would save if it used the test to select employees.

Savings = (n)(t)(r)(SDy)(m) − cost of testing

• n = number of employees hired per year

• t = average tenure in years

• r = test validity

• SDy = standard deviation of job performance in dollars

• m = mean standardized predictor score of selected applicants


Components of Utility

• Selection ratio

– The ratio of the number of openings to the number of applicants

• Validity coefficient

• Base rate of current performance

– The percentage of employees currently on the job who are considered successful

• SDy

– The difference in performance (measured in dollars) between a good and an average worker (workers one standard deviation apart)


Calculating m

• For example, we administer a test of mental ability to a group of 100 applicants and hire the 10 with the highest scores. The average score of the 10 hired applicants was 34.6, the average test score of the other 90 applicants was 28.4, and the standard deviation of all test scores was 8.3. The desired figure would be:

• (34.6 - 28.4) ÷ 8.3 = 6.2 ÷ 8.3 = ?


Calculating m

• You administer a test of mental ability to a group of 150 applicants, and hire 35 with the highest scores. The average score of the 35 hired applicants was 35.7, the average test score of the other 115 applicants was 24.6, and the standard deviation of all test scores was 11.2. The desired figure would be:

– (35.7 - 24.6) ÷ 11.2 = ?


Standardized Selection Ratio

SR      m
1.00    .00
.90     .20
.80     .35
.70     .50
.60     .64
.50     .80
.40     .97
.30    1.17
.20    1.40
.10    1.76
.05    2.08
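The m column is the average standardized test score of the applicants who are hired. Under a normal distribution it can be computed from the selection ratio alone; a sketch (assumes scipy):

```python
# m = E[Z | Z > cutoff] = normal ordinate at the cutoff / selection ratio.
from scipy.stats import norm

def mean_standardized_score(selection_ratio):
    cutoff = norm.ppf(1 - selection_ratio)  # z cutoff that passes the top SR
    return norm.pdf(cutoff) / selection_ratio

for sr in (0.50, 0.30, 0.20, 0.10, 0.05):
    print(f"SR = {sr:.2f}  m = {mean_standardized_score(sr):.2f}")
# Close to the tabled values above; published tables differ slightly in rounding.
```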


Example

– Suppose:

• we hire 10 auditors per year

• the average person in this position stays 2 years

• the validity coefficient is .40

• the average annual salary for the position is $30,000

• we have 50 applicants for ten openings.

– Our utility would be (SDy estimated as 40% of salary = $12,000; SR = 10 ÷ 50 = .20, so m = 1.40; testing cost implied at $10 per applicant):

(10 x 2 x .40 x $12,000 x 1.40) – (50 x $10) = $134,400 – $500 = $133,900
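A sketch of the full savings computation, reproducing the example above (the 40%-of-salary SDy estimate and the $10-per-applicant cost are read off the slide's numbers, not part of the formula itself):

```python
# Brogden-Cronbach-Gleser savings = n * t * r * SDy * m - cost of testing.
def bcg_savings(n_hired, tenure, validity, sd_y, m, n_applicants, cost_per_applicant):
    return n_hired * tenure * validity * sd_y * m - n_applicants * cost_per_applicant

savings = bcg_savings(n_hired=10, tenure=2, validity=0.40,
                      sd_y=0.40 * 30_000,   # SDy estimated as 40% of salary
                      m=1.40,               # from the m table: SR = 10/50 = .20
                      n_applicants=50, cost_per_applicant=10)
print(f"${savings:,.0f}")  # $133,900
```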


Exercise 6.2: Utility


1. Selection ratio: 200 ÷ 400 = .50
   Base rate: 800 ÷ 1000 = .80
   Validity: .30
   % of future successful employees: 87%

[Taylor-Russell table, 80% base rate: rows are validity coefficients (.00 to .90), columns are selection ratios (.05 to .95), and each cell gives the expected proportion of successful future employees. For validity .30 and selection ratio .50, the tabled value is .87.]

2. Answer: Current Test

– Components:

• We will hire 200 people

• The average person in this position stays 4 years

• The validity coefficient is .25

• The average annual salary for the position is

$50,000

• We have 400 applicants for 200 openings.

– Our utility would be (SDy = 40% of $50,000 = $20,000; SR = 200 ÷ 400 = .50, so m = .80; testing cost implied at $8 per applicant):

(200 x 4 x .25 x $20,000 x .80) – (400 x 8) =

$3,200,000 - $3,200 = $3,196,800


3. Answer: New Test

– Components:

• We will hire 200 people

• The average person in this position stays 4 years

• The validity coefficient is .30

• The average annual salary for the position is

$50,000

• We have 400 applicants for 200 openings.

– Our utility would be (SDy = 40% of $50,000 = $20,000; SR = 200 ÷ 400 = .50, so m = .80; testing cost implied at $4 per applicant):

(200 x 4 x .30 x $20,000 x .80) – (400 x 4) =

$3,840,000 - $1,600 = $3,838,400


4. Savings Over Old Test

Test                                      Utility
New Test: Reilly Statistical Logic Test   $3,838,400
Old Test: Robson Math                     $3,196,800
Savings                                   $641,600


Standardized Selection Ratio

SR      m
1.00    .00
.90     .20
.80     .35
.70     .50
.60     .64
.50     .80
.40     .97
.30    1.17
.20    1.40
.10    1.76
.05    2.08


Typical Corrected Validity Coefficients for Selection Techniques

Method                 Validity     Method                        Validity
Structured interview     .57        College grades                  .32
Cognitive ability        .51        References                      .29
Job knowledge            .48        Experience                      .27
Work samples             .39        Conscientiousness               .24
Assessment centers       .38        Unstructured interviews         .20
Biodata                  .34        Interest inventories            .10
Integrity tests          .34        Handwriting analysis            .02
Situational judgment     .34        Projective personality tests    .00


1. Selection ratio: .50
   Base rate: .80
   Validity: .57
   % of future successful employees: .91 (rounding r down to .50) or .94 (rounding r up to .60)

[Taylor-Russell table, 80% base rate, repeated for this exercise: for selection ratio .50, the tabled values are .91 at validity .50 and .94 at validity .60, bracketing the structured interview's validity of .57.]

2. Answer: Unstructured Interview

– Components:

• We will hire 200 people

• The average person in this position stays 4 years

• The validity coefficient is .20

• The average annual salary for the position is

$50,000

• We have 400 applicants for 200 openings.

– Our utility would be (SDy = 40% of $50,000 = $20,000; SR = 200 ÷ 400 = .50, so m = .80; testing cost implied at $15 per applicant):

(200 x 4 x .20 x $20,000 x .80) – (400 x 15) =

$2,560,000 - $6,000 = $2,554,000


3. Answer: Structured Interview

– Components:

• We will hire 200 people

• The average person in this position stays 4 years

• The validity coefficient is .57

• The average annual salary for the position is

$50,000

• We have 400 applicants for 200 openings.

– Our utility would be (SDy = 40% of $50,000 = $20,000; SR = 200 ÷ 400 = .50, so m = .80; testing cost implied at $15 per applicant):

(200 x 4 x .57 x $20,000 x .80) – (400 x 15) =

$7,296,000 - $6,000 = $7,290,000


4. Savings Over Old Test

Test                                Utility
New Test: Structured Interview      $7,290,000
Old Test: Unstructured Interview    $2,554,000
Savings                             $4,736,000


Definitions

• Test Bias

– Technical aspects of the test

– A test is biased if there are group differences in test scores (e.g., race, gender) that are unrelated to the construct being measured (e.g., integrity)

• Test Fairness

– Includes bias as well as political and social issues

– A test is fair if people of equal probability of success on a job have an equal chance of being hired


Adverse Impact

Occurs when the selection rate for one group is less than 80% of the selection rate for the group with the highest rate (the four-fifths rule)

                  Male    Female
Applicants         50       30
Hired              20       10
Selection ratio    .40      .33

.33 ÷ .40 = .83 > .80 (no adverse impact)


Adverse Impact - Example 2

                  Male    Female
Applicants         40       20
Hired              20        4
Selection ratio    .50      .20

.20 ÷ .50 = .40 < .80 (adverse impact)
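A sketch of the four-fifths check, reproducing both slide examples:

```python
# Four-fifths (80%) rule: compare the lower group's selection rate to the higher's.
def adverse_impact(hired_a, applicants_a, hired_b, applicants_b):
    rate_a = hired_a / applicants_a
    rate_b = hired_b / applicants_b
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio < 0.80  # True = adverse impact indicated

print(adverse_impact(20, 50, 10, 30))  # False: .33 / .40 = .83
print(adverse_impact(20, 40, 4, 20))   # True:  .20 / .50 = .40
```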


Standard Deviation Method

1. Compute the standard deviation:

   SD = √[(female applicants ÷ total applicants) x (male applicants ÷ total applicants) x total hired]

2. Multiply the standard deviation by 2

3. Compute the expected number of females to be hired:

   (female applicants ÷ total applicants) x total hired

4. Compute the confidence interval (expected ± 2 SD)

5. Determine whether the number of females hired falls within the confidence interval


Standard Deviation Example

1. Compute the standard deviation:

   √[(10 ÷ 50) x (40 ÷ 50) x 20] = √(.20 x .80 x 20) = √3.2 = 1.79

2. Multiply the standard deviation by 2: 1.79 x 2 = 3.58

3. Compute the expected number of females to be hired:

   (10 ÷ 50) x 20 = .2 x 20 = 4

4. Compute the confidence interval: 4 ± 3.58 = (0.42, 7.58)

5. Determine whether the number of females hired falls within the confidence interval
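A sketch of the whole procedure, reproducing the worked example:

```python
# Standard deviation method: flag hiring counts outside expected +/- 2 SD.
import math

def sd_method_interval(female_applicants, male_applicants, total_hired):
    total = female_applicants + male_applicants
    p_female = female_applicants / total
    sd = math.sqrt(p_female * (male_applicants / total) * total_hired)
    expected = p_female * total_hired
    return expected - 2 * sd, expected + 2 * sd

low, high = sd_method_interval(10, 40, 20)
print(f"({low:.2f}, {high:.2f})")  # (0.42, 7.58); hiring 1-7 females is within range
```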


Other Fairness Issues

• Single-Group Validity

– Test predicts for one group but not another

– Very rare

• Differential Validity

– Test predicts for both groups but better for one

– Also very rare


Linear Approaches to Making the Selection Decision

• Unadjusted Top-down Selection

• Passing Scores

• Banding


The Top-Down Approach

Who will perform the best?

A “performance first” hiring formula

Applicant   Sex   Test Score
Drew         M        99
Eric         M        98
Lenny        M        91
Omar         M        90
Mia          F        88
Morris       M        87


Top-Down Selection

Advantages

• Higher quality of selected applicants

• Objective decision making

Disadvantages

• Less flexibility in decision making

• Adverse impact = less workforce diversity

• Ignores measurement error

• Assumes test score accounts for all the variance in performance (Zedeck, Cascio, Goldstein & Outtz, 1996).


The Passing Scores Approach

Who will perform at an acceptable level?

A passing score is a point in a distribution of scores that distinguishes acceptable from unacceptable performance (Kane, 1994).

Uniform Guidelines (1978) Section 5H:

Passing scores should be reasonable and consistent with expectations of acceptable proficiency


Passing Scores

Applicant   Sex   Score
Omar         M     98
Eric         M     80
Mia          F     70 (passing score)
Morris       M     69
Tammy        F     58
Drew         M     40


Passing Scores

Advantages

• Increased flexibility in decision making

• Less adverse impact against protected groups

Disadvantages

• Lowered utility

• Can be difficult to set


Five Categories of Banding

• Top-down (most inflexibility)

• Rules of “Three” or “Five”

• Traditional banding

• Expectancy bands

• SEM banding (standard error of measurement)

  – Tests differences between scores for statistical significance

• Pass/Fail bands (most flexibility)


Top-Down Banding

Applicant   Sex   Test Score
Drew         M        99
Eric         M        98
Lenny        M        91
Omar         M        90
Mia          F        88
Morris       M        87


Rules of “Three” or “Five”

Applicant   Sex   Test Score
Drew         M        99
Eric         M        98
Lenny        M        91
Omar         M        90
Jerry        F        88
Morris       M        87


Traditional Bands

• Based on expert judgment

• Administrative ease

• e.g. college grading system

• e.g. level of job qualifications


Expectancy Bands

Band    Test Score    Probability
A       522 – 574        85%
B       483 – 521        75%
C       419 – 482        66%
D         0 – 418        56%


SEM Bands

“Ranges of Indifference”

• A compromise between the top-down and passing scores approach.

• It takes into account that tests are not perfectly reliable (error).


SEM Banding

• Compromise between top-down selection and passing scores

• Based on the concept of the standard error of measurement

• To compute you need the standard deviation and reliability of the test

Standard error = SD x √(1 − reliability)

• Band is established by multiplying 1.96 times the standard error
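A sketch of the band computation, checked against both examples on the later "Banding Example" slide:

```python
# SEM band width = 1.96 * SD * sqrt(1 - reliability); scores within this many
# points of the top score are treated as statistically indistinguishable.
import math

def sem_band_width(sd, reliability):
    sem = sd * math.sqrt(1 - reliability)  # standard error of measurement
    return 1.96 * sem

print(round(sem_band_width(9.1, 0.80), 2))   # 7.98 -> band of about 8 points
print(round(sem_band_width(12.8, 0.90), 2))  # 7.93 -> band of about 8 points
```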


Applicant    Score
Armstrong     99
Glenn         98
Grissom       94
Aldren        92
Ride          88
Irwin         87
Carpenter     84
Gibson        80
McAuliffe     75
Carr          72
Teshkova      70
Jamison       65
Pogue         64
Resnick       61
Anders        60
Borman        58
Lovell        57
Slayton       55
Kubasov       53

[The original slide also listed each applicant's sex and marked Bands 1–4, with the hires made from each band.]

Applicant   Score
Clancy       97
King         95
Koontz       94
Follot       92
Saunders     88
Crichton     87
Sanford      84
Dixon        80
Wolfe        75
Grisham      72
Clussler     70
Turow        65
Cornwell     64
Clark        61
Brown        60

Standard error = 12.8 x √(1 − .90) = 12.8 x √.10 = 12.8 x .316 = 4.04
Band = 4.04 x 1.96 = 7.92 ≈ 8

[The original slide also listed each applicant's sex and marked Bands 1–5, with the hires made from each band.]

Types of SEM Bands

• Fixed

• Sliding

• Diversity-based

  – Females and minorities are given preference when selecting from within a band


Pass or Fail Bands

(just two bands)

Applicant   Sex   Score
Omar         M     98
Eric         M     80
Mia          F     70 (cutoff)
Morris       M     69
Tammy        F     58
Drew         M     40


Advantages of Banding

• Helps reduce adverse impact, increase workforce diversity, and increase perceptions of fairness (Zedeck et al., 1996).

• Allows you to consider secondary criteria relevant to the job (Campion et al., 2001).


Disadvantages of Banding

(Campion et al., 2001)

• Lose valuable information

• Lower the quality of people selected

• Sliding bands may be difficult to apply in the private sector

• Banding without minority preference may not reduce adverse impact


Factors to Consider When Deciding the Width of a Band

(Campion et al., 2001)

• Narrow bands are preferred

• Consequences of errors in selection

• Criterion space covered by selection device

• Reliability of selection device

• Validity evidence

• Diversity issues


Legal Issues in Banding

(Campion et al., 2001)

Banding has generally been approved by the courts:

• Bridgeport Guardians v. City of Bridgeport, 1991

• Chicago Firefighters Union Local No. 2 v. City of Chicago, 1999

• Officers for Justice v. Civil Service Commission, 1992

Minority Preference


What the Organization Should do to Protect Itself

• The company should have established rules and procedures for making choices within a band

• Applicants should be informed about the use and logic behind banding in addition to company values and objectives (Campion et al., 2001).


Banding Example

• Sample test information

  – Reliability = .80

  – Mean = 72.85

  – Standard deviation = 9.1

• The standard error

  SD x √(1 − reliability) = 9.1 x √(1 − .80) = 9.1 x √.20 = 9.1 x .447 = 4.07

• The band

  Band = standard error x 1.96 = 4.07 x 1.96 = 7.98 ≈ 8

• Example 1

  – We have four openings

  – We would like to hire more females

• Example 2

  – Reliability = .90

  – Standard deviation = 12.8

Using Banding to Reduce Adverse Impact

Exercise 6.4


1. Standard error: 3.06

2. Band: 3.06 x 1.96 ≈ 6 points

3. Hire using nonsliding band: McCoy, Crane, Robinette, Carmichael

4. Hire using sliding band: McCoy, Crane, Carmichael, Ross

5. Hire using a passing score of 80: McCoy, Crane, Carmichael, Ross

Applicant    Score
McCoy         97
Crane         95
Robinette     94
Schiff        94
Carmichael    91
Carver        89
Ross          89
Cutter        88
Kincaid       87
Cabot         86
Stone         86
Lewin         85
Shore         83
Branch        80
Sack          78

Standard error = 7.43 x √(1 − .83) = 7.43 x √.17 = 7.43 x .412 = 3.06
Band = 3.06 x 1.96 = 5.99 ≈ 6

[The original slide also listed each applicant's sex and marked Bands 1–5, with the hires made from each band.]

Should the top scorers on a test always get the job?


Applied Case Study:

Thomas A. Edison’s Employment Test


Focus on Ethics

Diversity Efforts


What Do You Think?

• To increase diversity, it is often legal to consider race or gender as a factor in selecting employees. Although legal, do you think it is ethical that race or gender be a factor in making an employment decision? How much of a role should it play?

• Is it ethical to hire a person with a lower test score because he or she seems to be a better personality fit for an organization?

• If an I/O psychologist is employed by a company that appears to be discriminating against Hispanics, is it ethical for her to stay with the company? What ethical obligations does she have?
