Space Odyssey: 2001: A Modern Retelling
Composition II 1233
April 13, 2023
Kaley Brown
Space Odyssey: 2001: A Modern Retelling
Introduction
Imagine coming across a train track. It begins with a single lane, then branches off into
two sections. You may think this is an ordinary train track at first glance, but with a closer
inspection, you see there are five people tied to one of the tracks, and one person on the other, all
struggling to escape. Before you can free them, a nightmare unfolds. You hear the clamor of a train approaching, barreling at full speed down the track holding the five victims. The train
cannot stop and you cannot free the trapped, but there is a lever you can pull. This lever will
change the trajectory of the train, killing the single victim instead. What would you choose: to do nothing and let five people die by accident, or to purposefully kill one to save the lives of the majority? What would you do?
This is a famous thought experiment known as “The Trolley Problem.” It was developed in
1967 by philosopher Philippa Foot to challenge the followers of various common moralities
(Andrade). It is a notoriously difficult question to answer, but I want to add another layer of
complexity. Imagine you are one of the victims. You are the one lying trapped, helpless, on the track. You see a person standing by. What would your thoughts consist of? You do not know this perfect stranger; you do not know what their actions will be, or the reasoning behind their
decision. All you know is your life is wholly dependent upon them. Now, as another layer, this
perfect stranger has no concept of innate morality, does not understand ethical boundaries or
laws. Any decision they make will be completely contingent upon the education they received.
You do not know how they were taught, who taught them, or if they had any flaws in their
education. You do not know this person. What would you do?
Changing the setting a bit, you are lying on an operating room table, blinding white lights in your eyes, the beeping of your heart monitor creating a steady percussion in the background. That same perfect stranger is your doctor. You know it has no innate sense of morality and no grasp of ethics beyond what it was taught of the subjects. Would fear wind its way into your very soul, with the physician's knife piercing your skin? Even worse, this stranger cuts open
your skull and delves into the most precious part of the human body: the brain. The difference
between neurosurgery, or operation of the brain, and most other surgical specialties is that the
slightest mistake can completely topple a life. One nick in the wrong place, and the patient could
awake with severe amnesia, never to recover. At the edge of the knife, what would you do?
This fearful reality is one that people are trying to introduce into the surgical fields now.
The faceless perfect stranger has a name, a name that induces incredibly different reactions.
Some may spit in its face, calling it the harbinger of doom, while others may praise it as the
savior of humanity. This divisive creature is known as artificial intelligence (AI). Popularized by modern media in terrifying tributes such as HAL, the fearsome AI that descends into madness in 2001: A Space Odyssey, this technology has been glamorized and sanitized from the true danger
it poses to humanity.
Medical AI
The history of artificial intelligence is wildly colorful for the brief amount of time it has
been circulating within academic and engineering circles. Starting in the realm of the startlingly
imaginative and rapidly developing into a not-too-absurd reality, AI has fully ensnared the minds
of philosophers, ethicists, and scientists since 1935. Alan Turing, an English cryptanalyst, was
the earliest known theorist of AI. He imagined a computer program that had an infinite memory capacity, an ability to scan back and forth through that memory, and the capability to continue whatever pattern it sensed. This was
known as the stored-program idea, and while it was a rough concept, later scientists would
expound upon it (Sarker). In 1956, Professor John McCarthy, a renowned computer scientist
from Dartmouth College, refined the concept of AI, defining it in a famous proposal as machines that could accurately predict and simulate human thought and decision making (McCarthy). He proposed that in the creation of true AI, the machine would be fully automatic, able to grasp language, create concepts, theorize, improve itself, engage in cogitation, and experience
creativity (Copeland). Turing’s idea and McCarthy’s concept would spark a technological
revolution.
The modern definition of artificial intelligence is a machine that can complete tasks and
solve problems that would typically require a human’s experience and intelligence (Janiesch).
Though accurate, this is an extremely vague description of a vast field of technology. There are
now numerous ways to categorize the several types of AI.
The first is known as analytical AI. Analytical AI will take information from a dataset,
examine the patterns, and draw determinations from that material. Some types of analytical AI
are known as deep learning (DL) and machine learning (ML) (Janiesch). Machine learning is the
ability of some technology to learn from specific datasets to build an automatic program to
recognize patterns in systems similar to the original dataset. An example of ML would be highly trained programs that can diagnose specific diseases in medical patients, based on a strictly defined set of recognizable symptoms. Deep learning is a subtype of machine learning. DL relies on one of the most fundamental aspects of AI software: neural networks (Sarker). Neural networks are heavily steeped in complicated mathematics, but at their most basic level, they are a recreation of the human neuron in software form. In the human body, neurons are conduits for electrical signals generated by certain parts of the brain that are sent to other areas of the brain. Thoughts and consciousness are formed through 100 billion of these nerve cells.
Although neural networks use the most advanced forms of mathematics and technology to
attempt to copy this astronomical level of complexity, researchers have failed to recreate it
successfully (Sarker). The neural networks we have created are much closer to those of lower life forms that have only hundreds of neurons, and even those creatures still perform at a superior level (Yang). Deep learning applies these artificial neurons to achieve a higher degree of reliability than
the original machine learning (Han). DL is especially important in biomedicine. The information
systems that the medical field uses are vast and complex. It is almost impossible for a single
human to make sense of everything a medical dataset has within itself. DL aids in this, making
sense of the complicated tangle of information (Yang). Through the means described, analytical AI recognizes patterns in systems of data, possibly seeing sequences its human counterparts would never recognize. It then applies these patterns and makes suggestions based on them.
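To make this pattern-learning idea concrete, the following minimal sketch, written in Python, trains a single artificial “neuron” (a perceptron) on a small, invented set of symptom checklists. The data, weights, and learning rate are purely illustrative and are not drawn from any system cited above.

    # A single artificial "neuron" learning a diagnostic pattern from a
    # hypothetical dataset. Each row lists 1/0 flags for three symptoms,
    # and the label marks whether a fictional disease was present.
    training_data = [
        ([1, 1, 0], 1),
        ([1, 1, 1], 1),
        ([0, 1, 0], 0),
        ([0, 0, 1], 0),
        ([1, 0, 0], 0),
    ]

    weights = [0.0, 0.0, 0.0]
    bias = 0.0
    learning_rate = 0.1

    def predict(symptoms):
        """Weighted sum of symptom flags, thresholded to a 0/1 'diagnosis'."""
        total = bias + sum(w * x for w, x in zip(weights, symptoms))
        return 1 if total > 0 else 0

    # Nudge the weights toward any example the neuron currently gets wrong.
    for _ in range(20):
        for symptoms, label in training_data:
            error = label - predict(symptoms)
            for i, x in enumerate(symptoms):
                weights[i] += learning_rate * error * x
            bias += learning_rate * error

    print(predict([1, 1, 0]))  # expected: 1, the learned pattern
    print(predict([0, 0, 0]))  # expected: 0

After training, the neuron has not been told a rule; it has only adjusted numbers until its outputs match the examples, which is the sense in which machine learning “recognizes patterns.”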
The second type is functional AI. Functional AI is notably similar to analytical AI, but it
does have a few stark differences. Its main purpose is to manipulate large masses of data, like analytical AI, but the difference lies in the machine's response to these datasets. Where
analytical AI will give recommendations based on the given information, functional AI will
develop a sequence of actions formulated from the analysis of the data (Janiesch).
A third kind is interactive AI. Interactive AI is typically used for communication and
language based needs. This is the basis of the “chatbots” and personal computerized assistants,
such as Apple’s SIRI and Amazon’s ALEXA, that have taken the internet by storm. When this
software is combined with machine learning, taught pattern analysis and reasoning, it can also be
used for search functions (Janiesch).
A fourth category of artificial intelligence is textual AI. This type mainly deals with text, natural language, and text-based prompts. The distinction between natural language and artificial language is critical to draw. Natural languages are the types of language that evolve organically, in comparison to machine languages that are created for a specific purpose
(Winograd). As machine languages are the basis of AI and software, it is much simpler for a
program to analyze that type of text, but textual AI deals solely with natural language. This is the
AI that is the foundation of speech-to-text software, text recognition, and machine translation. It
is also the basis of content generation systems, which are frequently used in corporate contexts.
A significant component of textual AI is known as text mining. This type of data collection will
“mine” the text for patterns and meaning, extract necessary information, and create visualizations
(Janiesch).
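As a rough illustration of the text mining step described above, the short Python sketch below counts how often meaningful words occur in a passage and reports the most frequent ones as candidate patterns; the sample sentence and the tiny stop-word list are invented for the example.

    import re
    from collections import Counter

    # "Mine" a passage for its most frequent meaningful terms.
    STOP_WORDS = {"the", "a", "an", "and", "of", "in", "to", "is", "was", "near"}

    def top_terms(text, n=3):
        """Return the n most common non-stop-words in the text."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(w for w in words if w not in STOP_WORDS)
        return counts.most_common(n)

    sample = ("The surgeon reviewed the patient's scan, and the scan showed a "
              "lesion near the motor cortex; the lesion was small.")
    print(top_terms(sample))
    # e.g. [('scan', 2), ('lesion', 2), ('surgeon', 1)]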
The fifth type of artificial intelligence is known as visual AI. Visual AI systems will
analyze images for meaningful patterns and information. It also sorts the mined information.
Visual AI systems will typically use some form of computer vision. Computer vision is the
engineering response to the human optic system. It tries to emulate the optic processes of the human body, from visualizing an image to understanding it to responding to the stimulus. The
goal is to be fully autonomous. It depends heavily on mathematics, and is mainly used for visual
analytics (Janiesch).
As AI has evolved in the last century, numerous conventions, programs, and academic
groups have formed to guide its journey, one of which is the Turing test. The originator of AI,
Alan Turing, created an examination known as the Turing Test to measure the growth of AI technology. The Turing test is essentially a game designed to test how similarly a machine functions when compared to a human counterpart. The computer is programmed to imitate a human and a human's thought pattern. In front of a jury made up of around twelve experts in the field, the computer competes against one or two hidden human counterparts, who serve as a control group, while an interrogator questions both groups. The AI system then tries to fool the jury into thinking it is human. In
order to pass the test, which consists of multiple five-minute conversations, the AI must convince the jury it is human at least 30% of the time. The five-minute dialogues are timed for a reason; any longer and it becomes rapidly more difficult to replicate natural language (Warwick). If the AI succeeds in meeting the threshold, it is said to hold the same
level of intelligence as the average human (Furtado). The Turing test is one of the most important
tests in the development of AI.
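As a rough illustration of the 30% criterion described above, the short Python sketch below tallies hypothetical jury verdicts from a series of five-minute conversations and checks whether the machine cleared the threshold; the verdicts are invented for the example.

    # One verdict per five-minute conversation; "human" means the judge
    # was fooled into thinking the machine was the human participant.
    verdicts = [
        "human", "machine", "machine", "human", "machine",
        "machine", "human", "machine", "machine", "human",
    ]

    fooled = verdicts.count("human")
    rate = fooled / len(verdicts)

    print(f"Judged human in {rate:.0%} of conversations")
    if rate >= 0.30:
        print("Meets the 30% threshold described above")
    else:
        print("Falls short of the 30% threshold")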
Currently, there are a few AI that have passed the Turing test, but their success is
controversial (Big Think). In 2014, at a conference hosted by the University of Reading, a chatbot known colloquially as Eugene Goostman was entered into a Turing test competition. The system tricked 33% of the judging panel, marking the first time in history an AI had come close to passing the Turing test (University of Reading). In 2022, a
Google software engineer claimed that their AI, known as LaMDA (Language Model for
Dialogue Applications), was sentient. This assertion is highly contentious, but the system does show remarkable fluidity and understanding of the world around it. In a form of “interview” between the developer and the AI, it was shown to be able to conceptualize death, a concept difficult even for humans, as well as emotional responses and abstract ideas. It has a startlingly human-like grasp of natural language and conversational flow. When reading LaMDA's responses to the interviewer, it is very clear that,
although most likely not formally sentient, it is one of the most advanced pieces of technology
the world has seen (Lemoine). Eugene Goostman and LaMDA are two stepping stones into the
future of AI.
There are several groups that regulate the development of AI. One of the largest of these
is known as the Institute of Electrical and Electronics Engineers (IEEE). IEEE is a professional
society that cultivates modern technology for the benefit of mankind and maintains one of the largest education and research communities in the engineering field (IEEE). It establishes standards for the ethical application of most current technologies and software. Their stance on AI is
that humanity must trust artificial intelligence and begin to integrate it into their lives (IEEE).
ACM (Association for Computing Machinery) is another one of these organizations. This society
primarily focuses on academia and research. ACM has a subgroup dedicated to the research and
experimentation of AI known as the ACM Special Interest Group on Artificial Intelligence
(ACM SIGAI). They support the AI academic community through funding and personnel. These
organizations protect the sanctity of scientific research in a specialty that tends towards the
morally grey.
Since AI is the human attempt at recreating human productivity and intelligence, it is
only natural that there is a growing interest in the relationship between artificial intelligence and
the human mind. Doctors use artificial intelligence to create maps of the brain for surgery, and
use scanning equipment for locating lesions and probes to stimulate certain sections of the brain.
Although this technology is a simplistic form of AI, it is not the true artificial intelligence that Alan Turing envisioned, one that would imitate human intellect. Instead, this technology streamlines the productivity of the average person. Doctors also utilize brain-computer interfaces (BCI), another newly developed type of AI. BCI are a technology that works with
the electrical signals in the brain, applied in cases of severe epilepsy and stroke as well as in accessibility tools. A BCI evaluates signals from the central nervous system (CNS) and creates instructions based on those electrical impulses. This means that voice-prompted and muscle-activated programs are excluded from the BCI definition, but it does not mean that BCI are telepathic devices that extract information from unwilling victims. BCI simply gather information and decode it. The relationship between a BCI and its human counterpart resembles a partnership, as the human must train themselves to accurately produce the right brain signals to signify their intention, and the BCI will then analyze the data (Shih). BCI has many applications
in the medical field. Through this emerging technology, people that have lost the ability to speak
can communicate through text derived from brain signals. The impulses are translated by a
specific type of BCI into text (Rabbani). BCI is also used by advanced wheelchairs, robotic prosthetics, and even some computer cursors (Shih).
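To illustrate the partnership described above in the simplest possible terms, the Python sketch below decodes a short window of simulated brain-signal samples into a cursor command. The two “channels,” the sample values, and the threshold are all invented; real BCI use many more channels and far more sophisticated signal processing.

    # Decode a window of simulated samples from two channels into a command.
    def decode_intent(left_channel, right_channel, threshold=5.0):
        """Compare mean activity in two channels and map it to a cursor command."""
        left_power = sum(abs(s) for s in left_channel) / len(left_channel)
        right_power = sum(abs(s) for s in right_channel) / len(right_channel)
        if left_power - right_power > threshold:
            return "move cursor left"
        if right_power - left_power > threshold:
            return "move cursor right"
        return "hold still"

    # Hypothetical windows recorded while the user practices producing the signal.
    window_strong = [12.0, 15.5, 14.2, 13.8]
    window_weak = [3.1, 2.8, 3.5, 2.9]

    print(decode_intent(window_strong, window_weak))  # "move cursor left"
    print(decode_intent(window_weak, window_strong))  # "move cursor right"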
There are machines similar to brain-computer interfaces, but they have certain critical differences. One such device is the electroencephalogram (EEG) (Shih). EEG is not a type of AI, as it only measures impulses and does not act on conclusions drawn the way a BCI does. Even though
it is not considered artificial intelligence, EEG is incredibly important to understand because it is
the basis of many types of neural based AI. It is a non-invasive procedure executed through
electrodes placed on the scalp of the patient. These electrodes record data from large masses of
synchronized neurons (Light). EEG is used primarily for the exploration of cerebral activity rather than pinpointing irregularities, but it is often used to monitor coma, neural infections, dementia, and epilepsy (Binnie).
Epilepsy is often treated with neuron device interfaces (NDI). Neuron device interfaces are a type of artificial intelligence that interface with neuron activity and synaptic transmission.
One application of NDI is neuromodulation. Neuromodulation is stimulation of specific neurons
in order to promote nerve cell activity. When integrated with BCI, NDI can rapidly improve
neurological function. Disorders such as Parkinson's disease can be treated when NDI are applied with deep brain stimulation (DBS) (Wang). Deep brain stimulation is a neurosurgical tool that engages cerebral structures that are not easily accessible, such as the thalamus, which acts as a messenger between
subcortical, cerebellar, and cortical parts of the brain (Fama). When the thalamus is stimulated
using DBS, it is shown that severe disorders such as Parkinson’s and essential tremor improve
drastically (Lozano). If the subthalamic nucleus, a part responsible for movement regulation
(Basinger), is stimulated, researchers suggest that obsessive-compulsive disorder (OCD) may
ameliorate (Lozano). DBS is a tool in a subspecialty of neurosurgery that is known as functional
neurosurgery. This is an emerging field, one that has been quickly developing in the last few
decades. Functional neurosurgery focuses primarily on neuromodulation and stimulation of
various parts of the brain. DBS is one of the leading strategies of functional neurosurgery. These
AI-based technologies are the most commonly employed by neurosurgeons and neurologists.
Ethical application has always been a controversial topic in neurology-based fields, but
because the mind is what sets us apart from the common animal, it is critical to draw distinct
definitions of ethics as new developments are made rapidly in AI technology. Medicinal ethics
are based on the Hippocratic Oath. In historical context, the Hippocratic Oath was the standard for
bioethics. Written by Hippocrates, the attributed father of medicine in Ancient Greece, it was
made as an oath to the healing gods, such as Apollo and Hygeia. This demonstrated the gravity
of the task being given to Hippocrates's students. Although famous in modern times, it only
became universal in the nineteenth century. Since then, its application has become controversial.
It does not take into account various diseases and disorders that have been discovered since 400
BC. The original Oath’s lines instruct the physician against euthanasia, and while that is a
controversial subject in modern culture, there are occasions that necessitate such extreme
measures. When a patient is in a persistent vegetative state, with no hope of recovery, euthanasia often becomes a preference of the family. Under the Hippocratic Oath, this
situation would be unacceptable. Thus, the Oath has undergone many alterations. The modern
version is:
“I swear to fulfill, to the best of my ability and judgment, this covenant:
I will respect the hard-won scientific gains of those physicians in whose steps I walk, and
gladly share such knowledge as is mine with those who are to follow.
I will apply, for the benefit of the sick, all measures [that] are required, avoiding those
twin traps of overtreatment and therapeutic nihilism.
I will remember that there is art to medicine as well as science, and that warmth,
sympathy, and understanding may outweigh the surgeon's knife or the chemist's drug.
I will not be ashamed to say "I know not," nor will I fail to call in my colleagues when
the skills of another are needed for a patient's recovery.
I will respect the privacy of my patients, for their problems are not disclosed to me that
the world may know. Most especially must I tread with care in matters of life and death.
If it is given me to save a life, all thanks. But it may also be within my power to take a
life; this awesome responsibility must be faced with great humbleness and awareness of
my own frailty. Above all, I must not play at God.
I will remember that I do not treat a fever chart, a cancerous growth, but a sick human
being, whose illness may affect the person's family and economic stability. My
responsibility includes these related problems, if I am to care adequately for the sick.
I will prevent disease whenever I can, for prevention is preferable to cure.
I will remember that I remain a member of society, with special obligations to all my
fellow human beings, those sound of mind and body as well as the infirm.
If I do not violate this oath, may I enjoy life and art, respected while I live and
remembered with affection thereafter. May I always act so as to preserve the finest
traditions of my calling and may I long experience the joy of healing those who seek my
help.” (Lasagna)
In modern times, taking this oath is considered more of a symbolic action rather than a legally
binding contract (Indla). While the Hippocratic Oath is upheld by all practicing physicians, there
is a specific brand of morals held by those who practice in brain-related fields. This is known
as neuroethics. Neuroethics, as defined by the Brain Research Through Advancing Innovative
Neurotechnologies (BRAIN) initiative, is the study of social, ethical, and legal consequences of
neuroscience (Brain Initiative). This is an ongoing field of study, one that necessitates staying
ahead of the rapid development of neurotechnologies.
Although there are groups that determine large portions of medicinal ethics, each
individual practitioner will hold their own beliefs, which in turn affect how they practice
medicine. The first type of belief is known as the consequentialist. Consequentialism theorizes
that the consequences of an action dictate the morality of the action. This breaks into two parts,
utilitarianism and hedonism. Utilitarianism is the belief that the good of the majority is greater
than the suffering of the few. An example of this in medicine is found in research. During testing
trials, the subjects may suffer, but it is considered a noble ethical choice because the
pharmaceuticals being tested will save many more lives than those that suffered. This is directly
contrasted to hedonism, which is the school of thought that emphasizes the production of
pleasure and the avoidance of pain. This is especially seen in the belief that euthanasia is a viable
option for those suffering. Though the patient would not be experiencing pleasure in the classical
definition, euthanasia’s purpose is to avoid life-long emotional and physical pain (Ethics
Unwrapped).
The opposite of consequentialism is deontology. The etymology of deontology draws
back to ancient Greek. The word can be broken down into the study of duty. Deontology is the
belief that the ethics of the action itself is to be considered, regardless of the consequence
(Alexander). In medicine, this concept is applied by treating the patient as an end in and of themselves. This can be observed through the practice of informing the patient of any clinical mistakes made. This is seen as the morally right action, although the patient may respond with a
lawsuit or other negative consequence.
The thesis of this paper is derived from the harsh reality that even though many
guidelines, both ethical and practical, have been created for the use of artificial intelligence in
neurosurgery, it still raises the question of whether current artificial intelligence is ready to be
used in clinical settings, especially that of surgery. The basic purpose of AI is to make decisions,
as seen in the various types of AI explored earlier. Surgery, neurosurgery in particular, relies on
quick, reliable, and accurate choices, and these decisions must maintain a sense of ethicality and
empathy. Neurosurgery delves into the morally grey, and AI cannot grasp the concept. As will be
demonstrated within this thesis, current AI technology is not ready for ethical application in
neurosurgery.
Invasive Research
The necessary research for AI is incredibly invasive, both physically and emotionally.
Artificial intelligence in a clinical setting is under development, and as with any scientific tool,
must be tested in order to improve. A problem arises when there is no way to conduct these
experiments ethically. As Chiong et al. state, the basis of ethical testing is that the subject gives
total, free consent, without the worry of lack of care. This agreement is wholly dependent on
their understanding of the subject at hand and the consequences of taking part in the research. An
issue with this particular strain of thought is that the subjects dealt with in testing are very
vulnerable. Major neurological deficits, such as the kind that this type of technology manages,
can cause a lack of clear cogitation. This issue is known as diminished consent capacity, and it tends to be aggravated by the blurred lines of the physician-patient relationship. If the attending
physician is also the head of the research initiative, they may have conflicting desires, which can
take advantage of the patient’s trust in their doctor. In order to prevent confusion and possible
manipulation, it must be demonstrated with crystal clarity that the patient’s care does not depend
on their willingness to involve themselves with the research. Another ethical concern is that
researchers have the ability to become carried away and forget the purpose of the testing. The
quality of care of the patient must be equal to or exceed the quality of the research. This is seen
by only testing on patients that have a condition that necessitate this kind of technology in the
first place; any other way and it becomes ethically unsound (Chiong, et al.). There is no black
and white solution to this, but the current answer risks unnecessary pain for the patient; it
maintains a high risk, low reward philosophy, and modern noninvasive tests are inconclusive.
When patients are caught in the middle by research ethics, they tend to be subjected to
unnecessary trials and procedures, and, of course, this can be incredibly counterproductive.
There are several critical elements to consider when weighing research and clinical aspects for
the use of AI in neurosurgery. One is the pitfall of prioritizing the research over the care of the
patient. This type of neglect is the most obvious in how dangerous it can be; if the patient’s care
is forgotten in favor of the endeavor of knowledge, it can lead to fatal consequences, especially
in a field as high-risk as neurosurgery. This can be seen in unnecessary medical procedures, such
as lumbar punctures or blood tests to gauge a patient’s response to a treatment, due to study
protocols rather than an urgent medical need. On the other end of the spectrum, a research physician can focus on the patient so that the data collected becomes part of a generalized body of evidence, rather than forcing the patient to undergo additional, unnecessary procedures (Chiong et al.). This patient-centered school of care has the patient participate in nonsurgical tests, such as behavioral analysis using electrodes already attached to the dura, or the surface of the brain (O'Neill). This necessitates voluntary participation from the patient and runs the risk of unnecessary pain (Chiong et al.).
The high risk, low reward philosophy is a dangerous trap for the patient. Neurosurgeons
tend to see only the least hopeful, the most desperate of cases. The patients they will typically see
will gladly hold on to a hope, however fleeting, if it suggests a possibility of recovery. Most of the time, these are the cases that consult with functional neurosurgeons. While functional neurosurgery promises many benefits, such as relieving epilepsy and Parkinson's disease, there
are several detrimental consequences that may occur. There is a 4.0% chance of intracranial
hemorrhage after undergoing DBS. Intracranial hemorrhage is a sudden bleeding of the brain,
with an incredibly high fatality rate. There is only a 35% immediate survival rate, and after thirty
days, this rises to only 52%. Of those that do survive, only 20% are expected to make a full
recovery within six months (Caceres). Another possibility (2.3%) is a pyogenic central nervous system (CNS) infection. This condition typically involves pus and abscesses and destroys neutrophils, a type of white blood cell, in the blood. These infections are a leading cause of death in the world, as well as one of the most common causes of lasting disability (Kalita). There is a 4.6% chance of other transient neurological deficits and a 3.5% chance that additional surgery will be needed. The highest risk, at 11.7%, is a leak of cerebrospinal fluid (CSF), a necessary lubricant for the spine, skull, and certain parts of the brain. This fluid is necessary for delivering nutrients, protection, and removal of waste, and its loss is attributed to neurodegenerative diseases and rapid aging. While these numbers may not seem high, the effects they have on a patient are detrimental to health and daily function, and are occasionally life threatening.
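To put these individual percentages in perspective, the short Python calculation below estimates the chance that at least one of the listed complications occurs, under the simplifying, and not clinically validated, assumption that the risks are independent.

    # Complication rates quoted above: hemorrhage 4.0%, CNS infection 2.3%,
    # transient deficits 4.6%, additional surgery 3.5%, CSF leak 11.7%.
    risks = [0.040, 0.023, 0.046, 0.035, 0.117]

    # If the risks were independent, the chance of avoiding them all is the
    # product of the individual chances of avoiding each one.
    no_complication = 1.0
    for r in risks:
        no_complication *= (1.0 - r)

    print(f"Chance of avoiding every complication: {no_complication:.1%}")
    print(f"Chance of at least one complication: {1 - no_complication:.1%}")

Under that simplifying assumption, roughly one patient in four would experience at least one of these complications, which underscores why numbers that look individually small are not negligible.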
Noninvasive tests tend to be inconclusive. Invasive tests, as has been explained, tend to be incredibly dangerous. Noninvasive tests seem to be the obvious answer, but they too are not the perfect compromise. While the application of noninvasive tests is much less risky than that of invasive tests, the point of conducting these tests is to collect information for the development of neurosurgical technology, and the data they yield is often too coarse to be conclusive.
Grasping Ethics
Artificial intelligence, at its foundation, is nothing more than a string of numbers
arranged in a certain manner, and oftentimes people forget that. Computer science theorists and
the average person alike tend to expect too much of code, and one way this is demonstrated is by imagining that AI is able to evolve its own sense of human ethics. AI is built purely on mathematics; that is why we use it for analysis, organization, and calculation. But in no way is current AI technology able to both understand and practically apply the code of human ethics when the average human has difficulty understanding it. This is clearly demonstrated by the facts that AI cannot compute moral grey areas, that ethics are not static, and that biases are too prevalent in human psyches.
Artificial intelligence cannot grasp the grey areas of human ethics. There is no reason that
humanity should expect it to, considering that the human race has dedicated millennia to
understanding its own sense of morality, and it has come no closer to the end than when it started. AI is nothing more than a string of logic and binary code, and human morality often fails
to follow the constructs of solid logic. Imagine someone is in a car and the brakes fail. On the road ahead, there is an elderly gentleman and a child. The driver cannot stop, but they can steer, so which do they hit? Or does the driver swerve off the road, most likely to their own demise? Survival instinct, or in this case logic, would dictate that the third choice is the least acceptable option, so that leaves killing the child or the old man. This is a thought experiment that has stumped even the greatest minds, and as AI is nothing more than what the initial coder decides, there is no reason to expect the AI to have an acceptable answer. In fact, most people would likely choose the option that kills the driver, the least logical answer. This is the least likely option the AI would choose. As
Keskinbora wrote, the interpretation of vague ethical standards is an enormously difficult task for
the coder to program into an AI (Keskinbora). From that statement, one can conclude that AI
technology is not ready to be applied in the operation room. In a clinical setting, if the AI
chooses the most “logical” option, it has the potential to ruin a person’s life. For example, say an
AI is either conducting or leading a surgeon through a surgical operation in the brain. If
something catastrophic happens, and the AI must choose between the death of the patient and the
loss of a vital function, such as movement, sensory, vocal ability, or even an entire personality
change, the AI will always choose against a fatality, even if the patient depends on one of these
functions for happiness or financial support. The AI cannot understand the emotional attachment
many people have to surface attributes, and will never be able to understand because artificial
intelligence is built on logic, and humans are illogical, irrational creatures.
Another challenge AI is not ready to tackle is that not only are ethics difficult to
understand to begin with, but they are not static. Depending on the region, the time period, and
the people group, ethics vary wildly (Velasquez). What might be acceptable in one part of the
country might be a reprehensible act in another. Velasquez continues to explain it in a succinct
way,
“We might suppose that in the matter of taking life all peoples would agree on
condemnation. On the contrary, in the matter of homicide, it may be held that one kills by
custom his two children, or that a husband has a right of life and death over his wife or
that it is the duty of the child to kill his parents before they are old. It may be the case that
those are killed who steal fowl, or who cut their upper teeth first, or who are born on
Wednesday. Among some peoples, a person suffers torment at having caused an
accidental death, among others, it is a matter of no consequence. Suicide may also be a
light matter, the recourse of anyone who has suffered some slight rebuff, an act that
constantly occurs in a tribe. It may be the highest and noblest act a wise man can
perform. The very tale of it, on the other hand, may be a matter for incredulous mirth, and
the act itself, impossible to conceive as human possibility. Or it may be a crime
punishable by law, or regarded as a sin against the gods.”
Even in a clinical setting, this idea still holds true. Physicians see a multitude of cultures within their patient pool, and each has its own set of morals. For example, Muslims and practicing Jews are not permitted to consume or use any product extracted from pigs unless it is absolutely necessary, because the pig is considered unclean in their religions. This is reflected inversely in Hinduism, which considers the cow to be holy; while there is no written law forbidding the use of bovine products in medical procedures, many Hindus will refuse to be treated with any procedure involving cow products (Easterbrook). These three religions are
often categorized similarly, but their rules are contradictory. Since artificial intelligence is built
off of pure logic, this is setting AI up for failure. Its basis is logic, and logic is true anywhere in the universe. Even on the other side of the sun, two plus two will always be four, whereas one side of a city may hold a completely different view from another. When ethics are not static,
current AI has no hope of being able to follow its code.
AI will apply unintentional biases in fields that have room for unethical biases, such as
medicine. AI is no more than a reflection of its creator and what the coder deems necessary for
the AI to have, and so, in its nature, AI may have biases against socioeconomic classes, races, or
even simply statistics stacked against the patient. For example, a programmer who is themselves biased could create an AI that judges an individual solely based on their socioeconomic class's probability of committing felonies (Keskinbora). This is clearly incredibly unfair and an
unethical viewpoint to a human, but it makes complete sense to an artificial intelligence’s
algorithm program. In order to cultivate an environment of safe artificial intelligence for
everyone, there must be an emphasis placed on the ethics of research. Keskinbora suggests that
members of various scientific fields that regularly deal with this relationship should be involved
in laying the ethical foundations of AI research. This would create a framework for the AI,
thereby establishing acceptance by society through easy predictability and traceability. In
order to create this safe behavior, the AI must understand justice, fairness, and other vital moral
concepts (Keskinbora). If an AI must make a decision in the operating room based on logic, and
it does not understand these ideas, it may make a choice based on the value assigned to that
human life. This value tends to be rooted in what the person contributes to their society, and if
the AI does not see the patient as an important member of their community, it may decide their
life is not crucial enough for it to attempt to save. Current AI does not understand the moral
dilemma of bias, and letting a machine that cannot grasp such a crucial idea make life-and-death decisions is a lethal mistake.
Violation of Human Rights
Sometimes the evil that permeates the environment of AI is not the AI itself, but instead
the puppet masters that stand behind it. AI is nothing more than a part of a larger umbrella of
technology, inventions that are run by fallen man. In recent years, many of the titans of
technology, such as Google and Facebook (now known as Meta), have been exposed for having
surveillance based programs, actively disregarding the sanctity of human privacy, one of the
pillars of human rights (Brown). Human rights are defined as the basic protections owed to the individual so that they are able to lead a satisfactory life. Some examples include life, freedom, freedom from wrongful or extreme punishment, and privacy, as outlined by the United Nations (Caranti). The current environment surrounding modern technology, which includes AI, has no strong ethical framework, meaning that, more than likely, AI will be used for the profit of the elite and, in so doing, will lead to the total violation of human rights such as privacy, human dignity, and safety.
One intrinsic right is a patient’s privilege of privacy. Privacy is defined by the
International Association of Privacy Professionals (IAPP), the largest worldwide network of
information privacy, as the freedom from interference or intrusion (IAPP). The current training
methods of various AI tend to ignore this right. For example, machine learning is mostly
developed through vast amounts of data, colloquially referred to as Big Data (BD). Although BD seems inconsequential on the surface, due to the demand for it, the lines of ethical application tend to be blurred. Data collection, data mining, and the spread of personal information are all at risk of being inflated when AI becomes commercialized. This stands in juxtaposition to the need for
transparent data to train machine learning AI (Internet Society). Individual privacy is necessary
for a healthy society, but current AI demands that it is no longer respected. A compromise must
be researched by ethicists, but currently one has not been reached. This dilemma is seen in a
clinical atmosphere as well. If a patient has a deeply unique case and treatment that could benefit
many, but refuses to release the case files to the machine learning database, the loss could be
devastating to the others. The patient’s privacy is foremost, but the paradox stands: where tens of
thousands could be saved, and the patient resists, should their privacy be respected? This
dilemma evades many of the world’s brightest minds. This is a problem that current AI cannot
understand, and moreover, the threat of privacy intrusion thrives where current AI technology is
found.
There is also a possible threat to human dignity. Human dignity is difficult to strictly
define, but Stanford eloquently describes it as such: “[The] kind of basic worth or status that
purportedly belongs to all persons equally, and which grounds fundamental moral or political
duties or rights.” (Debes). This is the intrinsic value that current AI technology endangers. The
Turing test is a specific example of this danger. Ethicists have proposed that in the future, if the
Turing Test is completed successfully by an AI, the definition of humanity must be changed. It
will challenge our definition of freedom, morality, and virtue, the very cornerstones of human
dignity. By redefining human dignity, ethicists and scientists will be putting the whole of society
at risk of total disarray. The definition of humanity and the concept of human dignity are
elementally intertwined. Separate or change one, and the other becomes warped beyond recognition,
and when societal understandings become distorted the whole foundation becomes unstable.
When the understanding of human dignity and humanity becomes contorted, professions such as
neurosurgery become more complex. Neurosurgeons tend to grapple with the moral debate of human dignity even beyond the operating room. When the power of life and death lies in the hands of mere man, the shades of morality tend to fade. When these doctors do not clearly understand the
meaning of humanity, they will find it difficult to consistently respect human dignity.
There is an unparalleled potential for danger that comes with the development of AI,
which includes the possibility of AI spiraling out of control. This is part of the theory of
superintelligence. Superintelligence is the idea that there is a form of understanding so vast that it
has no limits, boundaries, or rules. A being that has this kind of knowledge would have limitless
power. While it is formally a thought experiment, it is also a real concern of technology
developers (Szocik). AI may never achieve perfect omniscience, but one day it may become
“smarter” than the developer. This is the danger of self-improving AI. If society becomes too
reliant upon them, there may be a time that the AI realizes this, and with the influence of the
creator, may become hazardous. If the healthcare industry relies on AI to conduct or supervise
surgeries, especially at such a high risk level as neurosurgery, humanity’s ability to live
independently from AI will be lost. This is why it is of utmost importance to distinguish between
“good” and “bad” AI. The creator must implicitly instill boundaries and “good behavior” into the
artificial intelligence. By doing so, the engineer ensures that safety for both the AI and its human counterparts is a priority.
Counterarguments
While the reasons why AI technology should not be incorporated yet into medical fields
have been clearly outlined in this thesis, there are still many that believe that the benefits
outweigh the risks. They argue that the help it gives doctors in the OR overrides the amount of
risk and that safeguards are already in place for patient protection. These are weak reasons to put
a patient’s life at risk.
A common argument for the regular installation of AI in the OR is the help it provides doctors in regularly performing more successful surgeries. While it is true that this is the intent of
clinical AI, the practical application leaves much to be desired. The current landscape only
allows for AI such as machine learning to be used. ML is limited by the dataset the programmer
provides, and when faced with an unprecedented predicament, the AI becomes utterly useless
(Hashimoto). If success is totally reliant on a perfectly functional AI, this can become rapidly
dangerous. Not only does coding break regularly, but when AI is necessary for the case, the
patient is relying on an already shaky foundation. Implementing AI in a clinical setting has good
intentions, but possible disastrous consequences.
Another popular claim is that sufficient safeguards are in place to protect patients from possible pain. These boundaries are a vital step forward, but they alone are not enough. Having
safeguards in place is the bare minimum for a new technology, meaning that their existence is not
enough to justify a dangerous, unethical tool in the operating room. These protections tend to be
inconsistent, an alarming attribute for such a vital element. These soft boundaries are not enough
to overcome the fact that application of current AI borders on defiling research ethics, creating a
balancing act of patient care and scientific research, and running the risk of violating basic
human rights.
Those in favor of the risky application of current AI will often lead their arguments with
the idea that the benefits AI gives doctors will outweigh the risks it poses. This is an invalid
argument because the limits of current AI technology prove to be a greater burden than first
realized. They also try to claim that there are already safeguards in place for AI, that patients
have nothing to fear. This is demonstrably not true, with the boundaries being weak and
unethically placed. AI technology is too new to apply correctly within surgical settings.
Conclusion
The technology of AI being used in neurosurgery is a wonderful possibility, but current
AI infrastructure is not ready to be applied within the field. As explored within this thesis, the amount of invasive research required, the fact that AI cannot understand human morality, and the risk of violating basic human rights all show that current AI cannot be used. As AI
technology continues to be developed, the ethical applicability in neurosurgery increases. As AI
evolves, it may be applied safely and ethically within the realm of invasive medical procedures, but as it stands now, the current brand of AI is redefining human ethics.
There is a new wave crashing over popular culture, making rapidly developed
technologies “trendy,” creating an environment of excitement over creations that are not ready
for public consumption. There is a clear distinction that must be understood between rapidly
developing and rapidly developed technologies. The former implies that a technology is being thoroughly tested and constantly improved, albeit quickly. The latter creates an image of sloppy,
haphazard technology, being thrown together for the sake of finishing. In the current wave of
trending AI, developers will have to resist its siren call. In high-risk specialties, such as surgery,
the effects of rapidly developed technology can have a butterfly effect, creating life-long
problems for the patient, possibly causing death.
Beyond the physical problems of AI, philosophical dilemmas are created as well. Several
theorists have posited that a newly developed set of ethics is needed to survive in this coming age. In fact, they have posited that new religions and political infrastructure need to be established. This, of course, is ludicrous. It is blasphemy, and it is evil. Indeed, the Future of Life Institute, an
organization dedicated to imposing ethics on the great technology race, has called for a total, temporary halt to the development of AI. In an open letter sent to the highest executives in the
technology industry, they state that AI was never meant to be developed so quickly, and that in
order to maintain a helpfulness for society, leaders in AI must stop and create systems to protect
the sanctity of humanity. They further explain that if these leaders do not, there is a chance of the
human race becoming overrun. They pose the question, “should we”, in juxtaposition to the
popular “can we”. Signed by thousands, including industry leaders such as Elon Musk, founder
of SpaceX and Tesla; Steve Wozniak, co-founder of Apple; Max Tegmark, a MIT physics
professor who specializes in AI; and Aza Raskin, a member of the WEF Global AI Institute, this
letter is a sign that society should not ignore (Future of Life Institute). Do not let reality be stripped away by this terrifying tribute of technology; do not let a helpful, safe tool become a modernized HAL. When you come across that train track, do not be blinded by the glare of lavish, modern inventions, but focus instead on the people struggling to escape.
Works Cited
Alexander, Larry and Michael Moore, "Deontological Ethics", The Stanford Encyclopedia of
Philosophy. Winter 2021.
Andrade, Gabriel. “Medical ethics and the trolley Problem.” Journal of Medical Ethics and
History of Medicine. Vol. 12, no. 3, March 2019.
“Artificial Intelligence and Machine Learning: Policy Paper.” Internet Society. April 2017.
Basinger, Hayden, et al. “Neuroanatomy, Subthalamic Nucleus.” StatPearls. October 2022.
Binnie, C. D., et al. “Electroencephalography.” J. Neurol. Neurosurg. Psychiatry. Vol. 57, no. 11,
November 1994, pp. 1308-1319.
Brown, Deborah. “Big Tech’s Heavy Hand Around the Globe.” Foreign Policy in Focus.
September 2020.
Caceres, J. Alfredo, et al. “Intracranial Hemorrhage.” Emerg. Med. Clin. North Am. Vol. 30, no. 3,
2012, pp. 771-794.
Caranti, Luigi. “Kant’s theory of human rights.” Handbook of Human Rights. September 2011.
Chiong, Winston, et al. “Neurosurgical Patients as Human Research Subjects: Ethical
Considerations in Intracranial Electrophysiology Research.” Neurosurgery. Vol. 83, no. 1,
July 2018, pp. 29-37.
“Consequentialism.” Ethics Unwrapped.
Copeland, B. J. “Alan Turing.” Encyclopedia Britannica, 6 Mar. 2023.
Debes, Remy, "Dignity", The Stanford Encyclopedia of Philosophy. Spring 2023.
Easterbrook, Catherine, et al. “Porcine and Bovine Surgical Products: Jewish, Muslim, and
Hindu Perspectives.” Arch Surg. Vol. 143, no. 4, 2008, pp. 366-370.
Fama, Rosemary, et al. “Thalamic structures and associated cognitive functions: Relations with
age and aging.” Neurosci. Biobehav. Rev. Vol. 54, July 2015, pp. 29-37.
Furtado, Erika L. “Artificial Intelligence: An Analysis of Alan Turing’s Role in the Conception
and Development of Intelligent Machinery.” Southeastern. Spring 2018.
Han, Su-Hyun, et al. “Artificial Neural Network: Understanding the Basic Concepts without
Mathematics.” Dement. Neurocognitive Disorders. Vol. 17, no. 3, September 2018, pp.
83-89.
Hashimoto, Daniel A. “Artificial Intelligence in Surgery: Promises and Perils.” Ann Surg., Vol.
268, no. 1, Spring 2018, pp. 70-76
“IEEE Mission & Vision.” IEEE. October 2015.
“IEEE Position Statement.” IEEE. June 2019.
Indla, Vishal, et al. “Hippocratic oath: Losing relevance in today’s world?” Indian J Psychiatry.
Vol. 61, no. 4, April 2019, pp. S773-S775.
Janiesch, Christian. “Machine learning and deep learning.” Electronic Markets. Vol. 31, 2021,
pp. 685-695.
Kalita, Jitu Mani, et al. “Multidrug resistant superbugs in pyogenic infections: a study from
Western Rajasthan, India.” Pan Afr. Med. J. Vol. 38, no. 409, April 2019.
Keskinbora, Kadircan. “Medical ethics considerations on artificial intelligence.” Journal of
Clinical Neuroscience. Vol. 64, June 2019, pp. 277-282.
Lasagna, Louis. “The Hippocratic Oath: Modern Version.” 1964.
Lemoine, Blake. “Is LaMDA Sentient? - an interview.” Medium. June 2022.
Light, Gregory A. “Electroencephalography (EEG) and Event-Related Potentials (ERPs) with
Human Participants.” Curr. Protoc. Neurosci. 2010.
Lozano, Andres M, et al. “Deep brain stimulation: current challenges and future directions.” Nat.
Rev. Neurol. Vol. 15, no. 3, March 2019, pp. 148-160.
McCarthy, John. “A Proposal for the Dartmouth Summer Research Project on Artificial
Intelligence.” AI Magazine. Vol. 27, no. 4, August 1955, pp. 12-14.
“Neuroethics Working Group.” Brain Initiative.
O’Neill, Brent, et al. “Mapping, Disconnection, and Resective Surgery in Pediatric Epilepsy.”
Schmidek and Sweet Operative Neurosurgical Techniques. Sixth edition, 2012, pp.
684-694.
“Pause Giant AI Experiments.” Future of Life Institute. March 2023.
Rabbani, Qinwan. “The Potential for a Speech Brain–Computer Interface Using Chronic
Electrocorticography.” Neurotherapeutics. Vol. 16, January 2019, pp. 144-165.
Sarker, Iqbal H. “AI-Based Modeling: Techniques, Applications and Research Issues Towards
Automation, Intelligent and Smart Systems.” SN Computer Science. Vol. 3, no. 2, 2022.
Sarker, Iqbal H. “Deep Learning: A Comprehensive Overview on Techniques, Taxonomy,
Applications and Research Directions.” SN Computer Science. Vol. 2, no. 420, 2021.
Shih, Jerry J. “Brain-Computer Interfaces in Medicine.” Mayo Clinic Proc. Vol. 87, no. 3, March
2012, pp. 268-279.
Szocik, Konrad. “The revelation of superintelligence.” AI & Society. Vol. 35, February 2020.
“The Turing test: AI still hasn’t passed the ‘imitation game.’” Big Think. March 2022.
“Turing Test success marks milestone in computing history.” University of Reading. June 2014.
Wang, Yang, et al. “Neuron devices: emerging prospects in neural interfaces and recognition.”
Microsystems & Nanoengineering. Vol 8, no. 128, December 2022.
Warwick, Kevin, et al. “Passing the Turing Test Does Not Mean the End of Humanity.” Cognit
Comput. Vol. 8, 2016, pp. 409-419.
“What is Privacy.” IAPP.
Winograd, Terry. “Understanding Natural Language.” Cognitive Psychology. Vol. 3, no. 1,
January 1972, pp. 1-191.
Yang, Sijie, et al. “Intelligent Health Care: Applications of Deep Learning in Computational
Medicine.” Frontiers in Genetics. Vol. 12, 2021.