Can Artificial Intelligence be Used to Reduce Employment Discrimination?
Matthew Vallejo
The University of Arizona Global Campus
PHI103 Informal Logic
Professor Wise
May 16, 2022
Can Artificial Intelligence Be Used to Reduce Employment Discrimination?
An increasingly popular topic for debate is whether artificial intelligence (AI) has a
place in employment screening. Some have argued that using AI to decide which candidates
advance in the screening process promotes discrimination and adds even more bias
to an already flawed system. In contrast, others believe AI can be used to reduce the
bias and discrimination already present in the system we currently use. With the use of AI
becoming more prevalent, we must tackle this issue before it snowballs into a problem
too big to handle. This paper presents strong arguments on both sides of the issue, offers
my own argument for whether AI should be used in employment screening, and analyzes
each argument along with an objection to my own.
Argument that AI Increases Employment Discrimination
Below is an argument whose premises present research findings supporting the conclusion
that AI increases employment discrimination. The argument is listed in standard form as follows:
Premise 1: The U.S. Equal Employment Opportunity Commission (n.d.) defines employment
discrimination as treating someone differently or less favorably based on race, color, religion,
sex, national origin, disability, age, or genetic information.
Premise 2: AI systems amplify the biases of real-world data. (Shaw, 2019)
Premise 3: Programmers sometimes create models that produce more favorable results for a
particular group of individuals to compensate for bias present in the data. (Ferrer et al., 2021)
Premise 4: Machine learning programs favor European-American names over African-American
names. (Hadhazy, 2017)
Premise 5: Amazon’s artificial intelligence tool favored male over female candidates. (Hamilton,
2018)
Premise 6: Microsoft's and Face++'s facial recognition software rates Black people as
angrier and more contemptuous than white people. (Rhue, 2018)
Conclusion: Artificial intelligence systems increase discrimination and bias when screening
candidates for employment.
Argument that AI Decreases Employment Discrimination
On the other side of the debate are those who believe AI has the potential to decrease
employment discrimination. This argument is supported by a comparable body of research
and is listed in standard form as follows:
Premise 1: AI systems can be designed to eliminate unconscious human bias as long as certain
specifications are met. (Polli, 2019)
Premise 2: If AI systems are transparent in their codebase and ethical design, they will produce
fairer results in employment screening. (Trindel et al., n.d.)
Premise 3: AI can be trained to ignore a candidate's name, skin color, race, or gender when
analyzing applications, which is something a human cannot do. (Xie, 2019)
Premise 4: Vermont’s Artificial Intelligence Task Force (2020) states that a permanent AI
commission and the adoption of a code of ethics would help keep AI systems fair.
Premise 5: Researchers at Penn State and Columbia University have created an artificial
intelligence tool to detect racial and gender discrimination. (LaJeunesse, 2020)
Conclusion: Artificial intelligence systems decrease discrimination and bias when screening
candidates for employment.
Analysis of Both Arguments
Both of the arguments listed above are relatively strong. Each provides a substantial
body of research to support its conclusion. However, the two arguments are not exact
opposites of each other; both can be true at the same time. The first argument concludes
that AI increases employment discrimination, while the second concludes that it decreases it.
Both can be true because research has shown that AI can do each of these things under
different circumstances. The question we should now be asking is whether AI does more harm
than good when it comes to employment discrimination.
Further analysis shows that one argument may be stronger than the other. The first
argument has a flaw in its reasoning: its premises may commit the fallacy of hasty
generalization. Premises five and six state that Microsoft's, Amazon's, and Face++'s AI
systems produced discrimination during employment screening. While these premises are true,
it is an overgeneralization to conclude that because three companies created AI systems with
discriminatory results, every company's AI system will produce the same negative results. A
better approach is to analyze the pitfalls these companies faced and learn from them. If
businesses can understand why Amazon's, Microsoft's, and Face++'s systems failed, they can
build AI systems that produce fairer results.
However, this is not to say that the first argument's conclusion is false and the second
argument is undoubtedly better. Each argument has specific cases in which it appears
correct. Even so, the second argument seems slightly stronger, since it offers a solution
for fighting employment discrimination, whereas the first offers no solution beyond noting
that using AI to screen applicants can cause problems.
My Argument
I believe the strongest argument is the one presented in my own words below. It draws on
pieces of the first and second arguments, along with my own proposed solution to the issue
of AI use in employment screening:
Premise 1: The U.S. Equal Employment Opportunity Commission (n.d.) states that AI has great
potential to improve lives, including in employment screening.
Premise 2: When misused, AI has the potential to reinforce biases. (Shaw, 2019)
Premise 3: Traditional methods of hiring are incredibly biased. (Polli, 2019)
Premise 4: If a set of standards is created and followed, then AI will produce fairer results
than traditional methods of employment screening.
Premise 5: If organizations create open-source, auditable AI systems, then these systems can
be tested for any discriminatory behavior.
Premise 6: If AI systems can be tested for any discriminatory behavior, then AI systems will
decrease employment discrimination.
Conclusion: Therefore, AI systems can decrease employment discrimination if suitable systems,
methods, and checks and balances are put into place to limit how organizations use AI systems.
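A minimal sketch of what "tested for any discriminatory behavior" (premise five) can look like in practice is the four-fifths rule of thumb from the EEOC's Uniform Guidelines on Employee Selection Procedures: a group whose selection rate falls below 80% of the highest group's rate is treated as evidence of adverse impact. The group labels and counts below are hypothetical.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_check(rates: dict) -> bool:
    """Return True if every group's selection rate is at least 80%
    of the highest group's rate (the 'four-fifths' rule of thumb)."""
    top = max(rates.values())
    return all(rate / top >= 0.8 for rate in rates.values())

# Hypothetical audit: 100 applicants per group, different selection counts.
rates = {
    "group_A": selection_rate(30, 100),  # 0.30
    "group_B": selection_rate(18, 100),  # 0.18
}
print(four_fifths_check(rates))  # False: 0.18 / 0.30 = 0.6, below the 0.8 threshold
```

An open-source, auditable system could publish exactly this kind of check alongside its code, which is the kind of transparency the argument's premises call for.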
Response to an Objection to the Argument
While I have presented a well-supported argument with research backing my premises,
objections can still be raised. One of the strongest objections targets the fourth
premise of my argument, which states that if a set of standards is created and followed,
then AI will produce fairer results than traditional employment screening methods. However,
one can argue that crafting standards under which AI produces unbiased results would require
clean data with no biases present.
Finding unbiased data is a highly challenging problem that many data scientists
struggle with when creating fair algorithms. Many AI systems are built to rate applicants
highly when they match a set of qualities representing the ideal candidate for a position.
Most of the time, these qualities are drawn from high-performing employees so the system can
find applicants who resemble them. The issue arises when the existing workforce is skewed and
underrepresents certain groups of people. If, for example, a company has a predominately
white male workforce, then any AI system it uses will likely favor white male applicants.
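The skewed-workforce problem can be made concrete with a toy sketch. In the invented data below, the scoring step deliberately ignores group membership, yet the majority-group candidate still scores higher because a correlated proxy feature (which school they attended) carries the bias of the incumbent workforce. All groups, numbers, and features here are hypothetical.

```python
# Hypothetical incumbent "high performers": (group, years_experience, school).
# The workforce is skewed: four of five incumbents belong to group A,
# and all group A incumbents attended school "X".
incumbents = [
    ("A", 5, "X"), ("A", 6, "X"), ("A", 4, "X"),
    ("A", 7, "X"), ("B", 5, "Y"),
]

def score(candidate):
    """Average similarity to incumbents, using only the 'neutral' features
    (experience and school) and deliberately ignoring group membership."""
    _, exp, school = candidate
    total = 0
    for _, inc_exp, inc_school in incumbents:
        # One point for a matching school, one for similar experience.
        total += (school == inc_school) + (abs(exp - inc_exp) <= 1)
    return total / len(incumbents)

# Two equally experienced candidates from different groups.
cand_a = ("A", 5, "X")
cand_b = ("B", 5, "Y")
print(score(cand_a), score(cand_b))  # 1.6 1.0 -> group A wins despite equal experience
```

Even though group membership never enters the score, the school feature acts as a stand-in for it, which is exactly how a skewed workforce reproduces itself.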
The task of creating a fair algorithm for AI systems is a catch-22: to build an algorithm
that produces a diverse workforce, you must train it on data from an already diverse
workforce. In other words, to create a system that produces fair results, you must already
have a system that produces fair results. The best response to this objection is to create
an AI system that ignores race, age, sex, gender, and other characteristics that can be
grounds for discrimination. (Barton, 2019) If this information is filtered out and there is
enough time to keep adjusting the algorithm, we should eventually see a system capable of
producing a diverse workforce.
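The filtering step described here can be sketched as simple preprocessing that withholds protected characteristics before the scoring model ever sees a record. The field names are hypothetical, and a real system would also need to handle correlated proxy variables, which field removal alone does not address.

```python
# Hypothetical protected fields to withhold from the scoring step.
PROTECTED_FIELDS = {"name", "race", "age", "sex", "gender"}

def strip_protected(candidate: dict) -> dict:
    """Return a copy of the candidate record with protected fields removed,
    so the downstream scoring model never sees them."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}

candidate = {
    "name": "J. Doe", "age": 34, "gender": "F",
    "years_experience": 6, "certifications": 3,
}
print(strip_protected(candidate))
# {'years_experience': 6, 'certifications': 3}
```

Because proxies such as school or zip code can leak the removed information back in, this step only works alongside the continued monitoring and adjustment the surrounding text calls for.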
The main thing to consider when creating an AI system to screen applicants is that it
cannot be built and perfected overnight. At first, some discrimination may be present in
the algorithm, but companies must be willing to watch the system closely and correct any
errors in its judgment. Only then can a system that produces fair results be created.
Conclusion
In conclusion, there are still unsolved issues in building artificially intelligent
systems that produce fair results. Some believe the risks are too great and that using AI
in employment screening could produce worse outcomes for individuals. However, the evidence
suggests the opposite may be true. A human cannot parse the thousands of applications that
may come their way, so AI will inevitably be used to filter applicants before they reach a
hiring manager. The big question that must be answered is how AI can be used to decrease
the discrimination already present in employment screening. This is not an easy question to
answer, and there are many ways to view and tackle the issue, but one thing remains true:
to solve challenging issues like these, we must keep an open mind toward all solutions. If
we fail to critically assess all sides of an argument, we are unlikely to arrive at an
optimal solution.
References
Lee, N. T., Resnick, P., & Barton, G. (2019, May 22). Algorithmic bias detection and mitigation:
Best practices and policies to reduce consumer harms. Brookings.
https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/
Ferrer, X., Nuenen, T. van, Such, J. M., Cote, M., & Criado, N. (2021). Bias and discrimination in AI:
A cross-disciplinary perspective. IEEE Technology and Society Magazine, 40(2), 72–80.
https://doi.org/10.1109/MTS.2021.3056293
Hadhazy, A. (2017, April 18). Biased bots: Artificial-intelligence systems echo human prejudices.
Princeton University. https://www.princeton.edu/news/2017/04/18/biased-bots-artificial-intelligence-systems-echo-human-prejudices
Hamilton, I. A. (2018, October 10). Amazon built an AI tool to hire people but had to shut it down
because it was discriminating against women. Business Insider.
https://www.businessinsider.com/amazon-built-ai-to-hire-people-discriminated-against-women-2018-10
LaJeunesse, S. (2020, September 3). Using artificial intelligence to detect discrimination. Penn
State University. https://www.psu.edu/news/research/story/using-artificial-intelligence-detect-discrimination/
Polli, F. (2019, October 29). Using AI to eliminate bias from hiring. Harvard Business Review.
https://hbr.org/2019/10/using-ai-to-eliminate-bias-from-hiring
Rhue, L. (2018). Racial influence on automated perceptions of emotions. SSRN Electronic Journal.
https://doi.org/10.2139/ssrn.3281765
Shaw, J. (2019, December 6). Artificial intelligence and ethics. Harvard Magazine.
https://www.harvardmagazine.com/2019/01/artificial-intelligence-limitations
Trindel, K., Polli, F., & Glazebrook, K. (n.d.). Using technology to increase fairness in hiring.
U.S. Equal Employment Opportunity Commission. (n.d.-a). EEOC launches initiative on artificial
intelligence and algorithmic fairness. US EEOC. Retrieved May 2, 2022, from
https://www.eeoc.gov/newsroom/eeoc-launches-initiative-artificial-intelligence-and-algorithmic-fairness
U.S. Equal Employment Opportunity Commission. (n.d.-b). What is employment discrimination? US
EEOC. Retrieved April 19, 2022, from https://www.eeoc.gov/youth/what-employment-discrimination
Vermont Artificial Intelligence Task Force. (2020, January). Artificial intelligence task force.
https://legislature.vermont.gov/assets/Legislative-Reports/Artificial-Intelligence-Task-Force-Final-Report-1.15.2020.pdf
Webber, C., & Gerleman, S. (2022, January 28). “Regulation of AI hiring tools is a work in
progress,” Law360 Employment Authority. Cohen Milstein.
https://www.cohenmilstein.com/update/%E2%80%9Cregulation-ai-hiring-tools-work-progress%E2%80%9D-law360-employment-authority
Xie, Y. (2019, September 6). AI for candidate screening: Eliminating or reinforcing bias. Medill
Reports Chicago. https://news.medill.northwestern.edu/chicago/ai-for-candidate-screening-eliminating-or-reinforcing-bias/