Can AI Replace Physicians?
Miroslav Březík
20th June 2020
1 Introduction
Artificial intelligence (AI) systems have undergone a significant transformation in recent years.
This is due to substantial advances in machine learning (ML) and its subsequent application
in medicine-related fields. Access to high-quality public datasets has also proven to be of
great importance, as more complex models require training datasets of considerable size. In
some cases, an accuracy on par with the performance of a trained professional, or even at
super-human levels, can be achieved. However, this often comes with an array of caveats.
Issues of interpretability of results, human-machine interaction, and adaptability must be
addressed in the future. The move towards computer-based diagnosis also carries a number
of challenges that are already present in human-based diagnosis. Finally, substantive changes
concerning regulations and accepted standards need to be introduced for the successful
implementation of AI systems. All of this will be discussed in the following sections in
further detail.
2 State-of-the-art systems
One of the main tendencies in the medical applications of AI is the move towards data-based
instead of traditional knowledge-based systems [11]. This approach allows for capturing
nuances not otherwise perceivable by a human practitioner, and it has led to highly
performant systems such as convolutional neural networks used to classify skin cancer [6]
and radiographs [13].
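As a rough illustration of what such a data-based system involves, the sketch below fine-tunes a pretrained convolutional network on a hypothetical folder of labelled lesion images using PyTorch. It is only a minimal example under those assumptions, not the pipeline used in [6] or [13]; the dataset path, class layout, and hyperparameters are placeholders.

    # Minimal sketch: fine-tuning a pretrained CNN for image classification.
    # Not the architecture or training setup of [6] or [13]; the dataset
    # directory and hyperparameters below are hypothetical placeholders.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),                   # input size expected by ResNet
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406],      # ImageNet statistics
                             [0.229, 0.224, 0.225]),
    ])

    # Hypothetical folder of labelled dermoscopic images, one subfolder per class.
    train_set = datasets.ImageFolder("data/skin_lesions/train", transform=preprocess)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    model = models.resnet18(pretrained=True)             # transfer learning from ImageNet
    model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:                        # one pass over the training data
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

In practice, the systems cited above rely on far larger datasets, validation against expert panels, and architecture and training choices well beyond this outline.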
An often-stated obstacle to the further expansion of their use is poor interpretability [12].
This black-box problem has resulted in a decrease in trust within the medical community.
A great amount of work has been done to tackle this issue; in particular, methods for better
interpretability have been developed [1] [19]. While these might not be sufficient to render
the presented AI systems trustworthy as standalone diagnostic methods, they may still prove
essential in aiding clinicians' day-to-day decision-making.
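To make the interpretability point concrete, one simple and generic technique is a gradient-based saliency map, which highlights the input pixels that most influence a prediction. The sketch below illustrates the idea under the assumption of a trained image classifier like the one above; it is not the specific method of [1] or [19].

    # Gradient-based saliency map for a trained image classifier.
    # One generic interpretability technique among many; [1] and [19]
    # describe more elaborate approaches.
    import torch

    def saliency_map(model, image, target_class):
        """Return |d score(target_class) / d input| per pixel."""
        model.eval()
        image = image.clone().unsqueeze(0).requires_grad_(True)  # add batch dim
        score = model(image)[0, target_class]
        score.backward()                                          # gradient w.r.t. the input
        # Max over colour channels gives one heat value per pixel.
        return image.grad.abs().max(dim=1)[0].squeeze(0)

    # Usage (hypothetical): overlay `heat` on the input image to show which
    # regions most influenced the predicted class.
    # heat = saliency_map(model, some_image_tensor, predicted_class)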
3 Health care availability
When it comes to replacing physicians with AI systems, there is still a lot of ground to
cover. However, there are already scenarios in which computers could fill the role of a
physician to a large extent. A large proportion of people in many countries around the
globe have limited access to health care [16], which often coincides with physician shortages [2].
AI systems could provide valuable solutions alleviating some of these issues; possible
applications include diagnosis, treatment selection, and epidemic monitoring [18].
4 Regulation
Apart from the above-mentioned challenges, one major aspect has been hindering the
progress of implementing AI in medical settings: regulation. Regulatory institutions often
lack the procedures to properly assess rapidly evolving digital systems. ML models are
often expected to adapt over time, and such scenarios are yet to be accommodated by many
agencies [9] [7]. Although regulatory procedures generally fail to keep up with advancements
in modern technology, their existence is integral to the proper implementation of such radical
changes. We have observed instances of the circumvention of approval procedures, which
could have resulted, and in some cases did result, in dire consequences [4].
Another major obstacle that currently stands in the way of the wide adoption of AI in medicine
is the characteristics of the underlying data. ML models generally require substantial numbers
of data points to achieve high accuracy and robustness, and medical data has historically not
fulfilled this requirement. One of the reasons is the slow shift from traditional paper records
to a fully digitized scheme [15]. A possibly more difficult task is that of data anonymization [10]:
medical records are highly sensitive and prone to misuse. Even if complete digitization of
medical records is achieved, health care systems still need to earn public trust in their handling
of records in order to ultimately make them available for ML tasks.
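To illustrate how far a naive approach falls short, the toy sketch below strips a few direct identifiers from a clinical note with regular expressions. Real de-identification, as discussed in [10], must cover many more identifier types and guard against re-identification through combinations of seemingly innocuous fields; the patterns and placeholder names here are purely hypothetical.

    # Toy illustration of stripping direct identifiers from a clinical note.
    # Real de-identification is far more involved; patterns are hypothetical.
    import re

    ID_PATTERNS = {
        "mrn":   re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),   # record numbers
        "date":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),       # dates
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),   # phone numbers
    }

    def deidentify(note: str) -> str:
        """Replace matched identifiers with typed placeholders."""
        for label, pattern in ID_PATTERNS.items():
            note = pattern.sub(f"[{label.upper()}]", note)
        return note

    print(deidentify("MRN: 123456, seen on 03/14/2019, call 555-123-4567."))
    # -> "[MRN], seen on [DATE], call [PHONE]."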
5 Bias in medicine
The promise of integrating computer-based decision-making does not automatically mitigate
biases present in the health care industry. Implicit bias with respect to age, gender, ethnicity,
race, and other characteristics has long been observed in physicians and the health care
system at large [5] [8]. Unfortunately, this can influence the treatment chosen for individuals,
which can then result in insufficient care. The fallacy that algorithms cannot produce biased
outputs has been thoroughly disproved.
Natural language processing (NLP) is of utmost importance in AI applications in medicine.
It provides the capability to analyze medical questionnaires, extract essential information
from patients' health records, and serve other prospective uses. Some ML applications,
especially those depending on NLP methods, have been shown to contain implicit bias
[14] [3]. This persistent disparity must be taken into account when training ML models, as
the data used might be unbalanced or even incorrectly labeled.
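One small, concrete step in auditing such data is simply measuring how samples and labels are distributed across demographic groups before training. The sketch below does this with pandas; the file name and column names are hypothetical, and passing such a check by no means guarantees an unbiased model.

    # Quick check of label balance across demographic groups in a training set.
    # Column names and the CSV path are hypothetical placeholders.
    import pandas as pd

    df = pd.read_csv("training_data.csv")   # expects columns: group, label

    # Per-group sample counts and positive-label rates.
    summary = df.groupby("group")["label"].agg(n="count", positive_rate="mean")
    print(summary)

    # Flag groups under-represented relative to a naive uniform share.
    expected = len(df) / df["group"].nunique()
    under = summary[summary["n"] < 0.5 * expected]
    if not under.empty:
        print("Under-represented groups:\n", under)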
6 Conclusion
The debate about AI systems, in their current state, suddenly replacing jobs in the medical
field is ahead of the curve. Promising results in the areas of AI-assisted surgery, ML-based
diagnosis, and drug discovery have been presented. However, the relevant systems often
require human medical professionals to function alongside them. Some of the areas discussed
that could help broaden the scope of their use are better interpretability of models, robust
and quick regulatory procedures, thorough and transparent anonymization mechanisms, and
mitigation of implicit bias. These measures could then increase trust in society as a whole,
which is often essential for advancements concerning such sensitive issues [17]. Finally, even
if we are able to overcome these obstacles, there are still medical specialties in which
human-to-human interaction is essential [20] and might not be replaced by AI systems.
References
[1] Muhammad Aurangzeb Ahmad, Carly Eckert, and Ankur Teredesai. “Interpretable machine learning in healthcare”. In: Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics. 2018, pp. 559–560.

[2] Thomas S Bodenheimer and Mark D Smith. “Primary care: proposed solutions to the physician shortage without training more physicians”. In: Health Affairs 32.11 (2013), pp. 1881–1886.

[3] Joy Buolamwini and Timnit Gebru. “Gender shades: Intersectional accuracy disparities in commercial gender classification”. In: Conference on Fairness, Accountability and Transparency. 2018, pp. 77–91.

[4] John Carreyrou. Bad Blood: Secrets and Lies in a Silicon Valley Startup. London: Picador, 2019. ISBN: 9781509868087.

[5] Elizabeth N Chapman, Anna Kaatz, and Molly Carnes. “Physicians and implicit bias: how doctors may unwittingly perpetuate health care disparities”. In: Journal of General Internal Medicine 28.11 (2013), pp. 1504–1510.

[6] Andre Esteva et al. “Dermatologist-level classification of skin cancer with deep neural networks”. In: Nature 542.7639 (2017), pp. 115–118.

[7] Food and Drug Administration. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper. 2019.

[8] William J Hall et al. “Implicit racial/ethnic bias among health care professionals and its influence on health care outcomes: a systematic review”. In: American Journal of Public Health 105.12 (2015), e60–e76.

[9] Jianxing He et al. “The practical implementation of artificial intelligence technologies in medicine”. In: Nature Medicine 25.1 (2019), pp. 30–36.

[10] Sharona Hoffman and Andy Podgurski. “Big bad data: law, public health, and biomedical databases”. In: The Journal of Law, Medicine & Ethics 41 (2013), pp. 56–60.

[11] Werner Horn. “AI in medicine on its way from knowledge-intensive to data-intensive systems”. In: Artificial Intelligence in Medicine 23.1 (2001), pp. 5–12.

[12] Polina Mamoshina et al. “Applications of deep learning in biomedicine”. In: Molecular Pharmaceutics 13.5 (2016), pp. 1445–1454.

[13] Ju Gang Nam et al. “Development and validation of deep learning–based automatic detection algorithm for malignant pulmonary nodules on chest radiographs”. In: Radiology 290.1 (2019), pp. 218–228.

[14] Ziad Obermeyer et al. “Dissecting racial bias in an algorithm used to manage the health of populations”. In: Science 366.6464 (2019), pp. 447–453.

[15] Niels Peek, John H Holmes, and J Sun. “Technical challenges for big data in biomedicine and health: data sources, infrastructure, and analytics”. In: Yearbook of Medical Informatics 23.01 (2014), pp. 42–47.

[16] David H Peters et al. “Poverty and access to health care in developing countries”. In: Annals of the New York Academy of Sciences 1136.1 (2008), pp. 161–171.

[17] Michael J Rigby. “Ethical dimensions of using artificial intelligence in health care”. In: AMA Journal of Ethics 21.2 (2019), pp. 121–124.

[18] Brian Wahl et al. “Artificial intelligence (AI) and global health: how can AI contribute to health in resource-poor settings?” In: BMJ Global Health 3.4 (2018), e000798.

[19] Quanshi Zhang, Ying Nian Wu, and Song-Chun Zhu. “Interpretable convolutional neural networks”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018, pp. 8827–8836.

[20] Donna M Zulman et al. “Practices to foster physician presence and connection with patients in the clinical encounter”. In: JAMA 323.1 (2020), pp. 70–81.