Criminal justice is one of the oldest institutions in society, carrying the weighty responsibility of making decisions about the lives and liberties of individuals and thereby shaping the very fabric of our collective conscience. In today's society, artificial intelligence is the rising technology flowing into more and more industries, making work easier, faster, and more efficient. Consequently, the question arises whether it should also be applied in criminal justice to make critical decisions. Even though AI is in many cases a useful and practical tool, decisions that shape people's and families' lives should not be entrusted entirely to it. Since criminal justice affects governments, companies, and individuals alike, it requires serious consideration and precise decision making. This essay will detail serious concerns regarding the use of artificial intelligence in criminal justice, namely its inability to recognize human values, its biased decisions, and its lack of transparency, ultimately demonstrating why it should not be used to make significant and far-reaching decisions about human lives and liberties.
Because of the differences between human and machine intelligence, AI lacks the ability to evaluate decisions that depend on capacities only humans possess, such as those involving emotions and values. Making critical decisions in a legal context is a complex and lengthy process that requires the court to weigh many considerations before passing judgment. Among these, emotional consideration is crucial to making decisions in accordance with the human values of society. Since machine intelligence cannot identify human emotions or properly weigh these values, it cannot take them into account in its decision-making process. In the article "Cyborg justice and the risk of technological-legal lock-in," published by the Columbia Law Review Forum, Rebecca Crootof (2019) states: "Human beings and machine systems process information and reach conclusions in fundamentally different ways, with AI being particularly ill-suited for the rule application and value balancing often required of human judges" (pp. 234-235). AI is therefore unable to perform the essential role of value consideration, which directly affects the outcome of a trial. Furthermore, while a human judge can tailor the treatment of each case to its emotional circumstances, the rules and patterns AI follows form a fixed schema that cannot accommodate the particular needs of each situation. As Crootof puts it, "The decisionmaker is assessing the meaning of the facts and the meaning of the law in the situation in the context of larger social norms and goals" (p. 242). As both stability and legal evolution are critical to the legitimacy of law, the balance between time-tested rules and their flexible application is essential to accommodate the different social circumstances of particular cases. Since AI cannot apply this flexibility, it is unsuited to making critical judgments in the legal system.
Given that AI is trained on huge amounts of data from the internet, it is likely that its training data includes biased material, leading the system to make discriminatory decisions. Prejudice is already a major problem in legal decision making, and there is ample evidence that it cannot be solved by replacing human judges with machines. In the article "Aspects of artificial intelligence on e-justice and personal data limitations," Fotios Spyropoulos and Evangelia Androulaki (2023) of the University of West Attica explore the faults of AI decisions, highlighting the risk of biased judgments as the main issue. As they claim, "In criminal cases there is also the chance of discriminatory treatment, given that these tools, which are manufactured and interpreted by humans, may replicate unjustifiable and already existing inequalities in a particular system of criminal justice" (p. 5). The article further argues that introducing machine intelligence into the legal environment may legitimize problematic, biased policies instead of correcting them. As the preexisting data used to train artificial intelligence is encoded into AI systems, a portion of the information may be lost, producing an incomplete and simplified context from which bias emerges. Full reliance on such unfair evaluations would increase inequality and corrupt the fundamental values of society. In the Journal of Internet Law, Kristian P. Humble and Dilara Altun (2020) express deep concern about the involvement of artificial intelligence in legal decision making. As they claim, "Machine learning system goals create self-fulfilling markers of success and in turn reinforce patterns of inequality or issues arising from using non-representative or biased datasets. These factors can lead to biased, inaccurate, and unfair outcomes which result in discrimination" (p. 13). Broadening the frame, the authors also note that policies governing the use of AI have not been implemented in international law, even though all forms of discrimination are strictly prohibited at the global level. The article raises these concerns and urges the United Nations to establish a regulatory framework around the issue.
The lack of transparency in AI decisions leads to confusion about the reasoning behind key rulings, making it difficult, and in many cases impossible, to detect errors or substantiate claims. In a real case published by the Criminal Law Forum, Jiahui Shi (2022) details the faults of AI observed during its implementation in the Chinese criminal justice system. In his article "Artificial intelligence, algorithms and sentencing in Chinese criminal justice: problems and solutions," he not only raises concerns about the mistakes made by AI judges but also questions how those errors could be detected. In a strict legal environment, every critical decision requires strong reasoning and foundation, which in many cases simply cannot be provided. During real trials of AI judges in China, serious mistakes were made because machine intelligence misapplied the law. "Algorithms work accurately only if they are based on accurate data. Yet, the quality of decisions published in 'China Judgements Online' cannot be guaranteed. First, some decisions published in 'China Judgements Online' applied the law wrongly and therefore should not be considered" (Shi, p. 135). Where the stakes are so high that human lives are affected, such mistakes simply cannot be tolerated; consequently, the use of machine intelligence in this context is entirely irresponsible. Furthermore, the lack of transparency and AI's inability to cite its actual sources result in undetectable errors that lead to unlawful judgments. In her article "Technological Tethereds: Potential Impact of Untrustworthy Artificial Intelligence in Criminal Justice Risk Assessment Instruments," Sonia M. Gipson Rankin (2023) argues that "Scholars and engineers acknowledge that the artificial intelligence that is giving recommendations to law enforcement, prosecutors, judges, and parole boards lacks the common sense of an eighteen-month-old child" (p. 648), critically assessing the extent to which artificial intelligence undermines the very principles that legal systems aim to uphold.
Artificial intelligence is a complicated technology that draws on a wide range of sources provided by the internet. Even though AI can be a useful tool in many fields, in criminal justice, a strict legal environment, it cannot meet the required standards. First, the differences between human and machine intelligence create multiple problems, such as AI's inability to include emotional consideration and value balancing in its decisions. Second, the sources AI relies on are often inauthentic and unreliable, meaning that it can base decisions on biased data and apply discrimination in its judgments. Finally, the lack of transparency and accountability in AI decision making makes it hard to trace the logical sequence AI follows and to find the reasons behind its decisions. This raises the risk of undetectable errors and makes it hard, or even impossible, to identify and correct them.