NILG 2012 EEO Analysis Bootcamp (7-6-2012)

2012 Industry Liaison Group
National Conference
EEO Analysis Bootcamp
August 28, 2012, 9:00 a.m. - 12:00 p.m.
Drill Instructors: Dan A. Biddle, Ph.D.
Patrick M. Nooren, Ph.D.
Biddle Consulting Group, Inc.
Contact Information
Biddle Consulting Group, Inc.
193 Blue Ravine, Suite 270
Folsom, CA 95630
916.294.4250
www.biddle.com | www.bcginstitute.org
Dan A. Biddle, Ph.D. (dbiddle@biddle.com)
Patrick M. Nooren, Ph.D. (pnooren@biddle.com)
Copyright © 2012 BCG, Inc.
Overview of Biddle Consulting Group, Inc.
Affirmative Action Plan (AAP) Consulting and Fulfillment
• Thousands of AAPs developed each year
• Audit and compliance assistance
• AutoAAP™ Enterprise software
HR Assessments
• AutoGOJA™ online job analysis system
• TVAP™ test validation & analysis program
• CritiCall™ pre-employment testing for 911 operators
• OPAC™ pre-employment testing for admin professionals
• Video Situational Assessments (General and Nursing)
EEO Litigation Consulting / Expert Witness Services
• More than 200 cases in EEO/AA (both plaintiff and defense)
• Focus on disparate impact/validation cases
Compensation Analysis
• Proactive and litigation/enforcement pay equity studies
• COMPare™ compensation analysis software
Publications/Books
• EEO Insight™: Leading EEO Compliance Journal
• Adverse Impact (3rd ed.) / Compensation (1st ed.)
BCG Institute for Workforce Development
• 4,000+ members
• Free webinars, EEO resources/tools
Nation-Wide Speaking and Training
• Regular speakers on the national speaking circuit
BCG Institute for Workforce Development
BCGi Standard Membership (free)
• Online community
• Monthly webinars on EEO compliance topics
• EEO Insight Journal (e-copy)
BCGi Platinum Membership (paid)
• Fully interactive online community
• Includes validation/compensation analysis books
• EEO tools, including validation surveys and AI calculator
• EEO Insight Journal (e-copy and hardcopy)
• Members-only webinars, training, and much more…
www.BCGinstitute.org
We have four (4) primary topics to cover in 90 minutes, each capable of being its own full day of training . . . you did sign up for a bootcamp!!
Agenda
Comparison of Incumbency to Availability: What We Do Look Like Compared to What We “Should” Look Like
Comparison of Selection Rates: How We Got to Where We Are Today
Logistic Regression and Hiring: I Understand There is a Significant Difference in Hiring Rates . . . But It’s Based on Job-Related Practices, Procedures and/or Tests (PPTs)
Multiple Regression and Compensation: I Understand There is a Significant Difference in Pay, But It’s Based on Job-Related Criteria
Comparison of Incumbency to Availability
(i.e., What We Do Look Like v. What We “Should” Look Like)
What We Do Look Like . . .
• Availability analyses are conducted for each location (or FAAP) by “job group”
• Job groups are aggregations of jobs that are similar in “content, wage rate, and opportunity”
• Job groups are used to:
― Increase sample size to yield meaningful results
― Reduce the number of analyses conducted
• Job groups should never cross EEO categories
Important Note: Be thoughtful when creating job groups. You could be artificially creating problems!
Job Group Analysis
What We “Should” Look Like . . .
• What we “should” look like is referred to as the “final
availability.”
– It is an estimate of the number of qualified minorities or
women available for employment in a given job group
– It’s a combination of both internal and external
comparison data (i.e., factors) used to identify what the
composition of those qualified to work in the job group
is “supposed” to look like
– In the “comparison of incumbency to availability”
analysis it will be compared to the job group headcounts
to determine the existence of underutilization
What We “Should” Look Like . . .
• External Factor:
– Step 1: Define local labor area
– Step 2: Identify/select census occupation codes (472)
– Step 3: Mathematically weight census codes based upon representation within each job group (see the sketch below)
– Step 4: Identify relevant data other than local (e.g., state/national), if any
Important Note: Results are only as good as the amount of effort put into this process!
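To make Step 3 concrete, here is a minimal sketch of the census-code weighting in Python. The job group, census codes, headcounts, and availability percentages are all invented for illustration:

```python
# Hypothetical Step 3: weight census occupation codes by their representation
# within a job group, then combine the codes' availability figures.

# Incumbent headcount by census code within one job group (invented numbers).
incumbents_by_code = {"43-4051": 30, "43-9061": 10}

# External (census) female availability for each code (invented percentages).
female_availability = {"43-4051": 0.62, "43-9061": 0.55}

total_incumbents = sum(incumbents_by_code.values())

# Weight each code by its share of the job group and sum the weighted availability.
weighted_external_availability = sum(
    (count / total_incumbents) * female_availability[code]
    for code, count in incumbents_by_code.items()
)

print(f"Weighted external availability: {weighted_external_availability:.1%}")
# (30/40) * 0.62 + (10/40) * 0.55 = 0.6025 -> 60.3%
```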
What We “Should” Look Like . . .
What We “Should” Look Like . . .
• Internal Factor:
– Positions are not always filled via external sources
– Necessary to identify internal sources of availability information
– Step 1: Identify “Feeders” for all jobs/job groups
– Step 2: Weight feeders based on historical promotions data (for starters . . . with a heavy dose of personal review and approval)

Target Job Group: 1A – Management
  Feeder Job Group                        Weight
  1B – Middle Management (Directors)      75.0
  1C – Managers/Supervisors               25.0
What We “Should” Look Like . . .
What We “Should” Look Like . . .
• Factor Weights:
– The weight given to the internal and external availability data (i.e., factors) for each job group
– Identifies the relative “importance” of each set of data
• Assigning factor weights requires the user to ask the following question (see the sketch below):
– “Out of 100 hypothetical movements into this job group, what number do I expect to come from a local recruitment area, reasonable recruitment area, or an internal pool?”
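A minimal sketch of how the factor weights roll up into final availability; the weights and availability figures below are invented for illustration:

```python
# Hypothetical final-availability calculation: combine the external and internal
# factor availabilities using the assigned factor weights (invented numbers).

external_availability = 0.603   # e.g., females available in the reasonable recruitment area
internal_availability = 0.450   # e.g., females available in the internal feeder pool

external_weight = 0.80          # "out of 100 movements, 80 come from outside..."
internal_weight = 0.20          # "...and 20 come from the internal feeder pool"

assert abs(external_weight + internal_weight - 1.0) < 1e-9  # weights must sum to 100%

final_availability = (external_weight * external_availability
                      + internal_weight * internal_availability)

print(f"Final availability: {final_availability:.1%}")
# 0.80 * 0.603 + 0.20 * 0.450 = 0.5724 -> 57.2%
```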
What We “Should” Look Like . . .
Comparison of Incumbency to Availability (Overview)
External Census Data + Internal Availability Data → Assign Factor Weights → Final Availability Data
Final Availability Data + Actual Representation (Incumbency/Headcount) Data → Compare Representation (Incumbency) to Availability
If a Potential Problem Area Exists → Create Goal/Action-Oriented Program
Comparison of Incumbency to
Availability
• Regulations require contractors to compare the
percentage of minorities and women in each job group
with the availability for those job groups determined in
the availability analysis
• When the percentage of minorities or women employed
in a particular job group is less than would reasonably
be expected . . . the contractor must establish a
placement goal and create action-oriented programs
associated with that goal
Comparison of Incumbency to
Availability
• What is “less than would reasonably be expected”?
― Any Difference: Is there any difference between incumbency and availability?
― Whole Person Rule: Is the difference between incumbency and availability at least one whole person?
― 80% Rule: Is incumbency at least 80% of availability?
― Statistical Significance: Is the difference between incumbency and availability statistically significant?
Bonus Question: What is the name of the statistical test used to evaluate incumbency v. availability? Fisher Exact? Chi Square? Impact Ratio Analysis? . . . Exact Binomial!
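A minimal sketch of all four checks, including the exact binomial test, for a hypothetical job group; the headcounts and availability below are invented, and SciPy's `binomtest` is assumed to be available:

```python
# Hypothetical job group: 120 incumbents, 30 of them women, final availability 32%.
from scipy import stats  # assumes SciPy >= 1.7 for stats.binomtest

incumbents = 120
female_incumbents = 30
availability = 0.32                           # final availability for women

incumbency = female_incumbents / incumbents   # 25.0%
expected = availability * incumbents          # 38.4 women "expected"

any_difference = incumbency < availability
whole_person = (expected - female_incumbents) >= 1    # shortfall of at least one whole person
eighty_percent = incumbency >= 0.80 * availability    # 80% rule

# Exact binomial: probability of seeing this few (or fewer) women among 120
# incumbents if each slot independently had a 32% chance of being filled by a woman.
p_value = stats.binomtest(female_incumbents, incumbents, availability,
                          alternative="less").pvalue

print(f"Incumbency {incumbency:.1%} v. availability {availability:.1%}")
print(f"Any difference: {any_difference} | whole person: {whole_person} | "
      f"80% rule met: {eighty_percent} | exact binomial p = {p_value:.3f}")
```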
Comparison of Incumbency to
Availability
Statistical Significance
• Least proactive
• Legally-oriented
• Least goals
Any Difference
• Proactive
• Diversity-Oriented
• Most goals/misleading?
80% Test
• Has historical value
• Misleading?
Whole Person
• Focus on tangible issues
• Good with small orgs/job groups
• Balanced
Important Note: Identifying underutilization is NOT a
declaration of discrimination. Choose a rule that best
represents your organizational size/structure and how it
views/perceives affirmative action.
Comparison of Selection Rates:
How We Got to Where We Are
Today
Current Representation = (+) Hires (+) Transfers Into (+) Promotions Into (-) Promotions From (-) Transfers From (-) Involuntary Terms (-) Voluntary Terms
Adverse/Disparate Impact:
Legal Overview
DISPARATE IMPACT
An unlawful employment practice based on disparate impact is established only if:
1. a complaining party demonstrates that a respondent uses a particular employment practice that causes an adverse impact, and
2. the respondent fails to demonstrate that the challenged practice is job-related for the position in question and consistent with business necessity; or
3. the complaining party makes the demonstration described above with respect to an alternative employment practice, and the respondent refuses to adopt such alternative employment practice.
Title VII Disparate Impact
Discrimination Flowchart
How selection processes
are challenged . . .
Practice, Procedure, or Test (PPT)
Plaintiff Burden: Difference in rates?
  NO → END (Defendant Prevails)
  YES → Defense Burden: Is the PPT valid?
    NO → Plaintiff Prevails
    YES → Plaintiff Burden: Alternative employment practice?
      NO → Defendant Prevails
      YES → Plaintiff Prevails
Selection Rate Comparison
Does a practice, procedure, or test (PPT) result in disproportionate selection rates by gender, race/ethnicity, or age group?
[Chart: selection rates for males v. females]
Selection Rate Comparison
• 2 x 2 Table Comparison
• Impact Ratio Analysis (IRA)
• Fisher Exact / Chi-Square / 80% Test
Note: The chi-square estimates the probability and becomes more accurate as the sample size increases. This is why Fisher’s Exact (adj) is required with small sample sizes. However, BCG recommends Fisher’s Exact (adj) at all times.

           Pass   Fail   Passing Rate
Men         50     50        50%
Women       25     75        25%
These tests return a value indicating whether the observed difference in rates is likely due to chance (i.e., whether it is statistically significant).
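A minimal sketch of the three checks on the 2 x 2 table above (the 50/50 v. 25/75 example), assuming SciPy is available:

```python
# The 2 x 2 table from this slide: men 50 pass / 50 fail, women 25 pass / 75 fail.
from scipy import stats

table = [[50, 50],   # men:   pass, fail
         [25, 75]]   # women: pass, fail

men_rate = 50 / 100                       # 50% passing rate
women_rate = 25 / 100                     # 25% passing rate
impact_ratio = women_rate / men_rate      # 0.50 -> fails the 80% test

odds_ratio, fisher_p = stats.fisher_exact(table)              # Fisher's Exact (two-tailed)
chi2, chi2_p, dof, expected = stats.chi2_contingency(table)   # chi-square (Yates-corrected)

print(f"Impact ratio: {impact_ratio:.2f} "
      f"({'passes' if impact_ratio >= 0.80 else 'fails'} the 80% test)")
print(f"Fisher exact p = {fisher_p:.4f} | chi-square p = {chi2_p:.4f}")
```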
Statistical Significance
• Statistical Significance (Thresholds):
― 5%
― 0.05
― 1 chance in 20
― 2.0 Standard Deviations (actually 1.96)
• Statistical Significance (Outputs):
― Lower p-values = higher SD (or “Z”) values
― For example:
• P-value: .05 = 1.96 SDs
• P-value: .01 = 2.58 SDs
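The p-value-to-SD conversion above is just the two-tailed normal quantile; a minimal sketch, assuming SciPy:

```python
# Convert two-tailed p-values to "standard deviation" (Z) units and back.
from scipy.stats import norm

for p in (0.05, 0.01):
    z = norm.ppf(1 - p / 2)               # two-tailed p-value -> Z
    print(f"p = {p:.2f} -> {z:.2f} SDs")  # 1.96 and 2.58

z = 2.0                                    # and back: Z -> two-tailed p-value
p = 2 * (1 - norm.cdf(z))
print(f"{z:.2f} SDs -> p = {p:.3f}")       # about 0.046
```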
Adverse Impact:
The Typical Approach
Statistical Significance and Power
• Statistical significance: The point at which differences become large enough that one can claim a trend exists.
• Statistical power: The ability to see those trends if, in fact, they do exist.
• Statistical power is directly related to effect size and sample size (see the sketch below):
― Effect size: The size of the difference in selection rates between two groups . . . the larger the difference, the fewer transactions necessary to detect statistical significance
― Sample size: With larger numbers of transactions it becomes much easier to detect statistical significance
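A quick illustration of the sample-size side of power: the same 50% v. 40% selection-rate gap tested at two hypothetical sample sizes (SciPy assumed):

```python
# Same effect size (50% v. 40% selection rates), two different sample sizes:
# the disparity only reaches statistical significance once n gets large enough.
from scipy import stats

for n_per_group in (20, 500):
    men = [round(0.50 * n_per_group), n_per_group - round(0.50 * n_per_group)]    # pass, fail
    women = [round(0.40 * n_per_group), n_per_group - round(0.40 * n_per_group)]  # pass, fail
    _, p = stats.fisher_exact([men, women])
    print(f"n = {n_per_group} per group: Fisher exact p = {p:.3f}")
```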
Statistical Significance and Power
• Enforcement agencies have no control over effect size (i.e., the difference in selection rates), but they do have some control over sample size . . . which is why they often request two (2) years’ worth of data to analyze.
• However, simply aggregating all applicants and all hires across strata (as is typically done) can sometimes result in incorrect/misleading findings.
Statistical Significance and Power
[Chart: four example companies (Company A through Company D) with values of 0.343, 0.168, 0.088, and 0.048]
Adverse Impact:
The Typical Approach
• Analyses by AAP job group regardless of different:
― Job titles
― Selection processes
― Hiring managers
― Basic qualifications
― Locations (perhaps)
― Applicant pools for separate requisitions (perhaps)
• Typically an aggregation of 12 months’ (sometimes 18/24 months’) worth of transactions into a single 2x2 table
• Considers everyone who applied throughout the year as available for every hire throughout the year
Adverse Impact:
The Typical Approach
ALL applicants and ALL hires for a 12-month period, aggregated into a single 2 x 2 table (Men/Women by Pass/Fail).
There is nothing wrong with this approach . . . as an initial inquiry only.
Sometimes this approach is used as the basis for a Notice of Violation
(NOV) or plaintiff class action litigation; however, it is up to the employer to
provide rebuttal analyses that may reflect reality more accurately (e.g.,
accounting for recordkeeping issues, etc.)
Simpson’s Paradox
Testing Year                  Group    Applicants (#)   Selected (#)   Selection Rate (%)
2010 Test                     Men      400              200            50.0%
2010 Test                     Women    100              50             50.0%
2011 Test                     Men      100              20             20.0%
2011 Test                     Women    100              20             20.0%
2010 + 2011 Tests Combined    Men      500              220            44.0%
2010 + 2011 Tests Combined    Women    200              70             35.0%

• Fisher Exact Test: SD = 2.16 (Significant)
• Mantel-Haenszel: SD = 0.024 (NOT Significant)
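A minimal sketch of the comparison above, using the table's numbers: a pooled Fisher exact on the combined data versus a hand-rolled Mantel-Haenszel that keeps the two testing years separate (SciPy assumed; values may differ slightly from the deck's software depending on the corrections used):

```python
# The two testing years from the table above.
import math
from scipy import stats

strata = [
    # (men_selected, men_rejected, women_selected, women_rejected)
    (200, 200, 50, 50),   # 2010 test: both groups selected at 50%
    (20, 80, 20, 80),     # 2011 test: both groups selected at 20%
]

# Pooled (aggregated) analysis -- the "typical approach".
men_sel = sum(s[0] for s in strata); men_rej = sum(s[1] for s in strata)
wom_sel = sum(s[2] for s in strata); wom_rej = sum(s[3] for s in strata)
_, pooled_p = stats.fisher_exact([[men_sel, men_rej], [wom_sel, wom_rej]])

# Mantel-Haenszel: compare observed v. expected female selections within each year.
observed = expected = variance = 0.0
for m_sel, m_rej, w_sel, w_rej in strata:
    n = m_sel + m_rej + w_sel + w_rej            # applicants in this stratum
    selected = m_sel + w_sel                     # total selected in this stratum
    women = w_sel + w_rej                        # female applicants in this stratum
    observed += w_sel
    expected += women * selected / n
    variance += women * (n - women) * selected * (n - selected) / (n ** 2 * (n - 1))

mh_z = (observed - expected) / math.sqrt(variance)

print(f"Pooled Fisher exact p = {pooled_p:.4f}")   # significant -- Simpson's Paradox
print(f"Mantel-Haenszel Z = {mh_z:.2f}")           # ~0: no disparity within either year
```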
Adverse Impact:
The Right Way
Single Event v. Multiple Events
Single event: ALL applicants and ALL hires throughout the time period in one 2 x 2 table (Men/Women by Pass/Fail) = Chi-Square or Fisher’s Exact (adj)
Multiple events: Req. 1 + Req. 2 + Req. 3, each requisition analyzed as its own 2 x 2 table (Men/Women by Pass/Fail)
Mantel-Haenszel (MH) Defined
• In the context of selection rate analyses, the MH:
– is a statistical tool that allows researchers to appropriately
combine separate and distinct selection processes (e.g.,
requisitions) into a single analysis
– appropriately allows for the statistical benefits of
increased sample size while controlling for Simpson’s
Paradox
– is a useful tool for evaluating whether the employer has a
“pattern and practice” that is possibly discriminatory
– may (if appropriate) be used to rebut allegations based on
single aggregated analyses over time
Mantel-Haenszel (MH) Defined
• The Mantel-Haenszel is not for every situation. It requires separate and distinct pools of applicants/test takers, etc. For example, combining:
o Requisitions
o Locations
o Different jobs in same job group
o Different hiring seasons and/or groups
• Not applicable for “pooled requisitions” where the requisition stays “open” and applicants are just regularly selected from the pool.
Component “Step”
Analyses
Component “Step” Analyses
Title VII of 1964/1991 Civil Rights Act
An unlawful employment practice based on disparate impact
is established under this title only if a complaining party
demonstrates that a respondent uses a particular
employment practice that causes a disparate impact . . .
Important Note: Enforcement agencies have every right to
investigate the practices, procedures, and tests contractors
use to screen applicants. However, in the past, due to
resource constraints they wouldn’t typically do so unless
there was adverse impact in the overall hiring process.
Times have changed!
Component “Step” Analyses
Male v. Female

Steps                      Starting                   Completing                Result
Overall (App vs. Hired)    Male - 100, Female - 100   Male - 50, Female - 30    2.81 SD
1. Basic Qualifications    Male - 100, Female - 100   Male - 79, Female - 77    0.25 SD
2. Test                    Male - 79, Female - 77     Male - 65, Female - 35    4.80 SD
3. Interview               Male - 65, Female - 35     Male - 60, Female - 32    0.18 SD
4. Final Selection         Male - 60, Female - 32     Male - 50, Female - 30    0.00 SD
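A minimal sketch that recomputes a step-level disparity figure from the table above using a simple pooled two-proportion z-statistic. The deck's SD values were produced with different software and conventions (e.g., a Fisher's Exact adjustment), so the numbers will not match exactly; a negative z here simply means women completed that step at a higher rate than men:

```python
# Step-by-step disparity check using a pooled two-proportion z-statistic.
import math

steps = [
    # (label, men_start, women_start, men_complete, women_complete)
    ("Overall (App vs. Hired)", 100, 100, 50, 30),
    ("1. Basic Qualifications", 100, 100, 79, 77),
    ("2. Test",                  79,  77, 65, 35),
    ("3. Interview",             65,  35, 60, 32),
    ("4. Final Selection",       60,  32, 50, 30),
]

for label, m_n, w_n, m_pass, w_pass in steps:
    p_men, p_women = m_pass / m_n, w_pass / w_n
    pooled = (m_pass + w_pass) / (m_n + w_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / m_n + 1 / w_n))
    z = (p_men - p_women) / se            # negative: women passed at a higher rate
    print(f"{label}: men {p_men:.1%}, women {p_women:.1%}, z = {z:.2f}")
```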
Before you start freaking out, there
is a tool. Just join BCGi . . .
Adverse Impact Toolkit™
Logistic Regression and Hiring: I
Understand There is a Significant
Difference in Hiring Rates . . . But It’s
Based on Job-Related Practices,
Procedures, or Tests
Logistic Regression
• The goal is not to instruct how to perform proper
logistic regression analyses.
• The goal is to inform you that there is a way to
analyze whether applicant differences in job-related
criteria are the “real” reason why there is a disparity.
• Particularly useful when/if the OFCCP claims
discrimination but you know that there is a legitimate
reason for the disparity.
Logistic Regression
• Classic adverse impact analyses can only determine if
the numerical difference in passing rates between two
groups is significantly different.
• Logistic Regression (LR) can identify if that numerical
difference in passing rates is due to applicant
differences in job-related criteria (e.g., experience or
education).
Logistic Regression
[Diagram: Gender, Relevant Experience, and Education as predictors of the Hiring Decision(s)]
Logistic Regression
• LR needs to be applied to job-related factors that were actually used or considered in the selection process (see the code sketch below)
• LR is useful for weighing the practical importance of job-related factors in the hiring or promotion process
• LR can potentially “pin” the impact on specific job-related criteria
• LR can also be useful for determining “shortfall” calculations
― For example, how many women would have been hired “but for” the possible discrimination?
― What is the total shortfall for women, given what the model can explain?
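A minimal sketch of the idea, using synthetic data (not the deck's case) and statsmodels: fit hiring on gender alone, then on gender plus a job-related qualification, and watch whether the gender coefficient stays significant:

```python
# Synthetic example: hiring depends only on experience, but experience differs
# by gender, so gender looks "significant" until the qualification is added.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
female = rng.integers(0, 2, n)                              # 1 = female applicant
experience = rng.normal(5 - 1.5 * female, 2, n).clip(0)     # correlates with gender here
p_hire = 1 / (1 + np.exp(-(-4 + 0.7 * experience)))         # hiring driven by experience only
hired = rng.binomial(1, p_hire)

df = pd.DataFrame({"hired": hired, "female": female, "experience": experience})

gender_only = smf.logit("hired ~ female", data=df).fit(disp=False)
with_quals = smf.logit("hired ~ female + experience", data=df).fit(disp=False)

print(gender_only.summary().tables[1])   # gender alone: likely significant
print(with_quals.summary().tables[1])    # with the qualification: gender should fade
```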
Gender Alone

            B       S.E.   Wald    df   Sig.     Exp(B)
GENDER      1.39    .41    11.61   1    0.0006   4.03
Constant    -3.23   0.38   70.26   1    0.00     0.039

• Gender is significant in the absence of any job-related explanatory variables.
Gender and Qualification Factors

                  B       S.E.   Wald    df   Sig.    Exp(B)
GENDER            0.36    0.45   0.69    1    0.414   1.24
QUALIFICATION 1   0.47    0.36   1.68    1    0.194   1.59
QUALIFICATION 2   0.915   0.68   1.78    1    0.18    2.49
QUALIFICATION 3   1.87    0.93   4.04    1    0.04    6.51
QUALIFICATION 4   0.21    0.34   0.37    1    0.53    1.23
Constant          -5.07   0.76   43.71   1    0.00    0.006

• After controlling for gender differences in the job-related variables, gender is no longer significant.
Multiple Regression and Compensation: I Understand There is a Significant Difference in Pay . . . But It’s Based on Job-Related Criteria
Multiple Linear Regression
Multiple Regression: Used to create a “model” to determine whether differences in compensation are due to “legitimate job-related factors” or (perhaps) an employee’s gender or ethnicity.
[Diagram: Gender, Experience, Job Market Factors, Performance, Tenure, and Education as potential drivers of Differences in Compensation]
Correlation
[Scatter plot: Customer Service Representative compensation v. time with company, correlation coefficient r = .35]
Correlation
[Diagram: Compensation v. Employee Gender, r = .35]
Percent of compensation variance explained by gender: r² = .35 x .35 = 12.3%
The Correlation Coefficient
Range
• Always between -1.00 and +1.00
Size
• Close to + or – 1.00: stronger the relationship
• Close to 0.00: weaker the relationship
• 0.00: no relationship
Direction
• Negative: variables move in the opposite direction
• Positive: variables move in the same direction
Coefficient of Determination
• Square the correlation coefficient to get the percent of one variable that is accounted for by the other variable
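A tiny worked example of r and r² on invented tenure/compensation numbers (these are not the deck's data, so r will not equal .35):

```python
# Correlation coefficient and coefficient of determination on toy data.
import numpy as np

tenure = np.array([1, 2, 3, 5, 8, 10, 12, 15], dtype=float)             # years with company
compensation = np.array([38, 41, 40, 45, 55, 52, 60, 63], dtype=float)  # salary in $000s

r = np.corrcoef(tenure, compensation)[0, 1]   # correlation coefficient
r_squared = r ** 2                            # share of variance accounted for

print(f"r = {r:.2f}, r-squared = {r_squared:.1%}")
```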
Correlation and Multiple Regression
[Scatter plot: Customer Service Representative compensation v. time with company (r = .35), with a regression/prediction line; illustrative point shown at 8 years / $55,000]
The Regression “Model”
• All variables together become the basis for a prediction
“model” known as a regression model.
• The regression model predicts a certain percentage of what
makes up an employee’s compensation.
[Diagram: Experience, Time in Company, and Performance Score predicting Compensation; R = .67, R² = 45%]
The Regression “Model”
Q: So how does regression help to identify
discrimination in pay?
A: If the prediction model becomes significantly
better after including the protected variable.
[Diagram: Experience, Time in Company, Performance Score, and Gender predicting Compensation]
R² = 45% without gender
R² = 51% with gender
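A minimal sketch of that with/without comparison on synthetic pay data (all variable names and figures invented), using statsmodels OLS:

```python
# Synthetic compensation model: compare R-squared with and without the protected variable.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
female = rng.integers(0, 2, n)
experience = rng.normal(8, 3, n).clip(0)
tenure = rng.normal(5, 2, n).clip(0)
perf = rng.normal(3, 1, n).clip(1, 5)

# Pay is driven by the job-related factors plus a gender effect to be detected.
pay = (40_000 + 2_000 * experience + 1_500 * tenure + 3_000 * perf
       - 4_000 * female + rng.normal(0, 5_000, n))

df = pd.DataFrame({"pay": pay, "female": female, "experience": experience,
                   "tenure": tenure, "perf": perf})

base = smf.ols("pay ~ experience + tenure + perf", data=df).fit()
with_gender = smf.ols("pay ~ experience + tenure + perf + female", data=df).fit()

print(f"R-squared without gender: {base.rsquared:.3f}")
print(f"R-squared with gender:    {with_gender.rsquared:.3f}")
print(with_gender.summary().tables[1])   # inspect the gender coefficient and its p-value
```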
Before you start freaking out
(again), there is another tool. Just
join BCGi . . .
COMPare™
Summary and Conclusion
It is imperative that practitioners (at least conceptually) “know” these four statistics:
1. Comparison of Incumbency to Availability
― What is availability and how is it calculated?
2. Adverse Impact Analyses
― Typical v. how to do it right
― Component “step” analyses
3. Logistic Regression – Can job-related factors justify significant differences in selection?
4. Multiple Regression – Can job-related factors justify significant differences in compensation?
Questions
Answers