This publication was produced with the financial support of the European Union (PI/2020/418-042). Its contents are the sole responsibility of the
implementing partners led by GFA Consulting Group GmbH and do not necessarily reflect the views of the European Union. The views expressed
may not in any circumstances be regarded as stating an official position of the European Commission.
Healthcare as a Use Case for
Standardisation in the Field of
Artificial Intelligence
July 2024
Acknowledgment and Disclaimer
This document has been prepared by Prof.dr. Nick Guldemond MD PhD Eng, independent
expert for the EU-funded International Outreach for a human-centric approach to AI
(InTouchAI.eu) project (contract no. PI/2020/418-042).
The InTouchAI.eu project is grateful for the guidance and valuable input provided by
the European Commission's colleagues from the Directorate-General for Communications
Networks, Content and Technology (CNECT) and the Service for Foreign Policy Instruments
(FPI) during reviews of intermediate results and consultation meetings. The information and
views set out in this report are those of the experts and do not necessarily reflect the official
opinion of the European Commission, which does not guarantee the accuracy of the data
included in this study. Neither the European Commission nor any person acting on its behalf
may be held responsible for the use which may be made of the information contained therein.
Copyright: European Commission © 2024
info@intouchai.eu
InTouchAI.eu
@InTouchAIeu
https://digital-strategy.ec.europa.eu/en/policies/international-outreach-ai
TABLE OF CONTENTS

Executive Summary
1. Introduction
   1.1 Methodology
   1.2 Readers Guidance and Report Structure
2. Healthcare
   2.1 Introduction
       2.1.1 Human Centricity in Healthcare and Public Health
       2.1.2 Integrated Services to Ensure Person-centricity
       2.1.3 Health System Resilience and Sustainability
       2.1.4 The Healthcare Paradigm Shift
   2.2 The Impact of Innovation and Technology on Health Expenditure
       2.2.1 Overutilisation
   2.3 The Shift from Fee-for-Service to Outcome-based Financing and Procurement
       2.3.1 Value-based Procurement
   2.4 Relevant Legislation and Regulation
       2.4.1 EU Laws and Regulations in Relation to Healthcare
       2.4.2 The Oviedo Convention
       2.4.3 The Clinical Trials Regulation
       2.4.4 Soft Law and Regulation in Healthcare Practice
   2.5 Key Conclusions about Healthcare
3. Artificial Intelligence
   3.1 Introduction
       3.1.1 Artificial Intelligence
       3.1.2 Generative Artificial Intelligence
   3.2 Adaptive Algorithms
       3.2.1 Fixed Rule-based Algorithms versus Adaptive, AI-driven Algorithms
       3.2.2 Machine Learning Principles Used in Development and Application of Adaptive Algorithms
       3.2.3 Standards and Protocols for the Development of Adaptive Algorithms
   3.3 Methodological, Mathematical, Statistical and Practical Limitations of AI
       3.3.1 The Importance of Data Quality
   3.4 Providing (Ecological) Validation and Evidence for AI-Systems
       3.4.1 More Quality Data and Computational Power Will not Solve Problems of Uncertainty and Unpredictability
   3.5 AI-based Decision-making
       3.5.1 The Clinical Decision-making Process
       3.5.2 Clinical Decision Support
       3.5.3 Virtual Agents with Autonomous Decision-making Capabilities
       3.5.4 Human Agency and Oversight of AI
   3.6 Key Conclusions about Artificial Intelligence
4. Medical Technology
   4.1 Introduction
       4.1.1 Emerging Technologies
   4.2 Medical Technology Sector and Market
   4.3 Categories of Intelligent Medical Devices
       4.3.1 Medical Devices Used in Hospitals
       4.3.2 Hospital-to-home Concepts
       4.3.3 Assisted Technology and Devices
       4.3.4 Interconnected Implantable Medical Devices
   4.4 Components and Functions of Interconnected Intelligent Medical Devices
       4.4.1 Intelligent Device (1 in fig. 12)
       4.4.2 (Graphic) User Interface (2 in fig. 12)
       4.4.3 Inter-connectivity and Cybersecurity (3 in fig. 12)
       4.4.4 Cloud-based Technology (4 in fig. 12)
       4.4.5 Implications for Medical Devices
   4.5 Relevant EU Legislation
       4.5.1 Background
       4.5.2 Main Changes Introduced by the New MDR and IVD Regulation
       4.5.3 Medical Device Software
   4.6 Artificial Intelligence: The AI Act
       4.6.1 A Layered Framework for Risk Assessment and Classification
       4.6.2 Implications for Medical Devices
   4.7 Data Protection and Patient Privacy
       4.7.1 Implications for Medical Devices
   4.8 The European Health Data Space
       4.8.1 Primary and Secondary Use of Data
       4.8.2 Regulated Control and Data Access
   4.9 Development, Validation, and Implementation of AI for Medical Devices
       4.9.1 Development
       4.9.2 Validation
       4.9.3 Market Access
       4.9.4 Procurement
       4.9.5 Use
       4.9.6 Post-Market Surveillance
       4.9.7 Improvement and Innovation
   4.10 Key Conclusions about Medical Technology
5. Standards
   5.1 Introduction
       5.1.1 Standards
   5.2 International Organisation for Standardisation (ISO) & International Electrotechnical Commission (IEC)
       5.2.1 Published (all 35.020)
       5.2.2 In Development
       5.2.3 Quantitative and Qualitative Analysis of Standards
       5.2.4 Results of Quantitative and Qualitative Analysis of Standards so far
   5.3 Institute of Electrical and Electronics Engineers (IEEE)
       5.3.1 Standards Applicable to Medical Devices
   5.4 Other Relevant Standardisation Initiatives
       5.4.1 World Health Organisation
       5.4.2 International Medical Device Regulators Forum
       5.4.3 European Coordination Committee of the Radiological, Electromedical and Healthcare IT Industry
       5.4.4 STANDING Together
       5.4.5 EQUATOR
   5.5 Overview of Regional and National Standardisation Initiatives
       5.5.1 United Kingdom
       5.5.2 United States of America
       5.5.3 China
       5.5.4 Japan
       5.5.5 India
       5.5.6 Brazil
       5.5.7 The Netherlands
   5.6 Key Conclusions about Standardisation
6. Strategy
   6.1 Strategic Context
   6.2 Standardisation Strategy Development
       6.2.1 A European Strategic Infrastructure for AI-based Solutions
   6.3 Healthcare-specific Standardisation Actions
   6.4 Conclusions and Recommendations
Annex: Glossary
Literature
EXECUTIVE SUMMARY
This study explores the implications of Artificial Intelligence (AI) in healthcare, focusing on
standardisation to ensure human-centred, efficient, and equitable care. The context is framed
by European regulation and legislation, which emphasise the role of standards in
meeting requirements and facilitating AI integration in healthcare.

Standardisation is essential for developing, testing, implementing and using AI-systems
consistently and safely across health and social care applications. It should ensure that
AI-systems are reliable, interoperable, and trustworthy, facilitating regulatory harmonisation
and reducing adoption barriers. The study aims to:
 Explain the context of current healthcare challenges and opportunities in relation to the use of AI and medical technology.
 Provide an overview of AI and standardisation in healthcare, particularly focusing on medical devices.
 Present an inventory of international standardisation activities related to AI and health.
 Identify gaps in current standards for medical devices and suggest areas for future standardisation.
 Offer conclusions and recommendations for a human-centric approach to AI in a global context.
Information in this report was derived from extensive desk research and the methodology
follows a holistic and human-centric approach to AI in healthcare. This means that the use
of AI, medical technology and implications for standardisation are researched from the
perspective of how health systems should provide digitally enabled, integrated, person-centred care.
This report consists of five chapters: Healthcare, Artificial Intelligence, Medical Technology,
Standards and Strategy.
The Healthcare Chapter outlines key concepts and developments relevant to AI and
standardisation. It discusses the shift towards person-centric, digitally enabled healthcare
and the need for resilient and sustainable health systems amid rising demand for care,
demographic change, economic pressure and limited resources.
The report highlights the paradoxical impact of innovation and technology on healthcare
costs, emphasising the importance of standards to mitigate negative effects and promote
positive outcomes. Over-diagnosis and over-treatment potentially induced by AI-systems can
significantly increase healthcare expenditure, but technology could also improve outcomes
and save costs when developed and used properly.
A hallmark of most health and social care systems is fragmentation. The lack of coordination
among providers and systems leads to inefficiencies and poor outcomes. Multiple
incompatible systems hinder seamless information exchange and care continuity.
Fragmented financing and dispersed decision-making complicate resource allocation and
equitable access.
Integrated care and value-based healthcare are two important concepts which address the
fragmentation of service provision and pursue better health outcomes through
multidisciplinary collaboration, ideally supported by digital solutions such as AI.
Financing and procurement play an essential role in adopting modern technologies and
service models. Value-based procurement is a more holistic approach, which
considers not only the price of the services being procured but also their impact on patient
outcomes and the overall healthcare system; this makes it more suitable for facilitating
sustainable, integrated, digitally enabled, person-centred services.
Healthcare legislation and regulation are critical for integrating AI-systems and medical
technology into safe and efficient person-centred services. Various paragraphs address
relevant EU laws, including those related to medical devices, clinical research, digital health,
and data protection. This chapter also addresses the importance of soft law and regulation
in healthcare practice, which are relevant for the implementation and adoption of system
improvements through standardisation and harmonisation strategies.
The chapter related to Artificial Intelligence provides an overview of the definitions,
concepts, methods, and challenges associated with AI in healthcare. This chapter elaborates
upon the principles and applications of AI in healthcare, focusing on generative AI, adaptive
algorithms, and the methodological and practical limitations of AI. Further elaboration on the
development, validation and implementation of AI-systems is discussed in the chapter on
Medical Technology.
Developing and validating AI-systems tailored for healthcare are paramount given the unique
characteristics and requirements of AI in this context. There is much optimism about AI's
potential in healthcare, but many of the current applications are standalone solutions rather
than integrated and comprehensive systems that could have a significant impact on health
systems. Ideally, the introduction of AI technologies should improve cost-efficiency and
patient outcomes.
The effectiveness of AI-systems depends on the quality of data and the validation process,
which are crucial for ensuring accurate and reliable AI applications. The report stresses the
importance of data quality and the need for evidence in AI-systems. This points to the need
for standards which could guide the process of development and validation as well as set
criteria for the safe and human-centric use of AI.
Artificial Intelligence systems must demonstrate ecological validity, meaning they should
perform well in real-world situations beyond their training data. This requires significant
investment in digital infrastructure, standards for data collection, interoperability, and
cybersecurity. Ensuring that AI models can be generalised to new contexts is essential to their
successful implementation in diverse healthcare environments.
Advanced AI algorithms can function as autonomous agents, making independent decisions
based on their data sources and goals. These multi-agent systems interact in a distributed
network, exhibiting emergent properties and adaptive behaviour. However, the unpredictable
nature of these interactions necessitates monitoring and control to ensure proper functioning
and to prevent adverse outcomes.
Accordingly, human oversight is critical in AI-based decision-making to ensure that AI outputs
are accurate and reliable. Over-reliance on AI can lead to erroneous decisions, especially
when clinicians are under high workloads and cannot critically appraise AI recommendations.
Ensuring that AI-systems are transparent and that humans can understand their decisions is
essential to maintain trust and efficacy in clinical settings.
Despite advancements, AI in healthcare faces several challenges. These include the need
for high-quality data, the potential for bias in training data and the difficulty in transferring AI
models between different institutions and settings. AI models often perform well in controlled
environments but may struggle in real-world applications due to these limitations.
The Medical Technology Chapter provides a comprehensive overview of the medtech
sector, highlighting its diverse applications, emerging trends and the key stakeholders
involved in its advancement. It covers emerging technologies in the medical field, market
dynamics and the categories of intelligent medical devices. It discusses hospital-to-home
concepts, assisted technology, and interconnected implantable devices. Each of these
topics presents new challenges and opportunities with the use of AI.
This chapter further outlines the regulations that govern the development, validation,
implementation, and market access of medical devices, with a specific focus on AI-driven
systems. Relevant EU regulations and legislation, including the Medical Device Regulation,
In-Vitro Diagnostic Medical Device Regulation, EU AI Act, and European Health Data Space
regulations are discussed. Implementation of these regulations is still under development
while specifications for standards in various healthcare domains need to be defined. Ideally,
practical use-cases should provide information on how these regulations take effect in
everyday care and which adaptations for improvement can be made.
The ongoing trend in medical technology involves increased connectivity and the use of
data-driven solutions, accelerated by advancements in AI and machine learning. This evolution
means that medical devices are becoming part of adaptive and evolutionary processes rather
than remaining static standalone devices. Standards should define approaches for
continuous monitoring, data collection and performance assessment of AI-systems in
real-world healthcare environments to detect potential safety or efficacy issues.
Harmonising data governance practices across countries is necessary. This includes
establishing common principles and frameworks for data sharing, data protection and patient
privacy to facilitate secure and responsible data sharing and collaboration.
The development, implementation, and maintenance of AI in medical devices bring additional
responsibilities for users, suppliers, and developers. These responsibilities include ensuring
safety, efficacy, and quality throughout the device lifecycle.
The development of international standards is essential for promoting EU-wide and global
interoperability, facilitating the adoption of AI-based medical devices across different regions.
The need for robust standards to ensure safety, effectiveness and interoperability of medical
devices is highlighted.
It is essential to establish clear standards for ongoing monitoring of AI-based medical
devices. This includes strategies for collecting and analysing real-world data, detecting
potential issues, and conducting necessary updates or recalls to ensure patient safety and
device effectiveness.
The Standards Chapter provides an overview of regional and national standardisation
initiatives, identifying key gaps and proposing recommendations for future standardisation
efforts. It underscores the importance of a human-centric approach to AI, aligning with EU
values and promoting global consensus.
Current AI standards predominantly cover foundational aspects like vocabulary and
definitions, with limited application-specific standards for health and social care. The
fragmented AI standardisation landscape in healthcare requires a dedicated approach,
considering the complexity of AI-driven medical devices. Recommendations include
developing new standards, updating existing ones and creating compliance management
instruments tailored to AI-specific risks and requirements.
Artificial Intelligence in medical devices raises concerns about data privacy, patient safety
and algorithmic transparency. Existing frameworks often lack specificity for AI, necessitating
the development of standards addressing explainability, transparency and bias mitigation.
Ensuring AI algorithms do not perpetuate biases as well as promoting ethical AI use are
essential for gaining patient trust and regulatory compliance.
Current gaps in international harmonisation include differences in regulatory frameworks,
approval processes, post-market surveillance requirements, and ethical considerations.
Updating existing standards and developing new ones is necessary to assess the conformity
of AI-integrated medical devices with relevant regulations and societal values. Developing
internationally recognised ethical frameworks and guidelines is necessary to ensure ethical
practices across borders. These should address transparency, explainability, fairness,
privacy, and the responsible use of AI technology.
International collaboration is crucial for developing standards to ensure EU-wide and global
interoperability. Current international harmonisation efforts have several shortfalls, such as
differing regulatory frameworks, approval processes and post-market surveillance
requirements. Ethical considerations and data governance practices vary across countries,
necessitating common ethical frameworks and data governance principles. Technical
standards for interoperability and data exchange are essential but currently lack
harmonisation, requiring collaborative efforts to develop common standards.
This report concludes with the Strategy Chapter, addressing the need to develop a
shared vision as the basis for a strategy with goals, objectives, actions and related planning,
as well as a human-centric AI framework for the standardisation and harmonisation of AI-based
solutions in healthcare. Harmonisation efforts should consider cultural, linguistic, and societal
factors that impact the development, deployment, and acceptance of AI technologies in
healthcare.
Such a strategy should support the development of comprehensive standards that address
the ethical, legal, and technical aspects of AI, ensuring that AI-systems are safe, effective,
and aligned with human values. It should advocate the strengthening of ongoing initiatives
and the creation of a Europe-wide network of innovation ecosystems and living labs
dedicated to the development, testing, validation, and application of AI-solutions.
Collaborative efforts are needed to develop and adopt common technical standards to ensure
interoperability, data exchange and compatibility of AI-systems across different regions.
Harmonisation efforts should account for disparities in healthcare resource availability and
infrastructure across EU countries to ensure equitable access to AI-driven healthcare
solutions.
Addressing intellectual property challenges and facilitating fair and transparent sharing of AI
innovations is essential for global deployment. Harmonisation efforts should promote
collaboration while protecting intellectual property rights.
1. INTRODUCTION
Access for every citizen to high-quality, safe, responsive, efficient, and affordable care
is a hallmark of European values and a key guiding principle in national health and social
policies1. Person-centric care provision is an intensely information-driven process with many
actors, complex processes, and critical decision-making. Artificial Intelligence holds the
promise to facilitate and enable better care at lower cost. The impact and consequences of
the use of AI in healthcare are potentially extensive. Accordingly, the requirements for
standards guiding the design, development, and application of AI in healthcare should meet
the values, principles, and functionalities for optimal human-centredness, appropriateness,
efficiency, effectiveness, safety, and equity.
The context of this study is therefore the European Commission's
(EC) legal proposal for an Artificial Intelligence (AI) Act2. While the proposed AI Act lays
down requirements for specific AI use cases, standards are expected to play an essential
role in providing harmonised technical solutions that will make compliance with these
requirements possible. The application of AI in the field of healthcare could offer a useful
case study for standardisation processes, regarding its state of play, future prospects,
and needs.
To contribute to the ongoing debate on the regulatory, legal and ethical challenges posed by the
greater use of AI at global level, the Service for Foreign Policy Instruments (FPI) of the
European External Action Service (EEAS) and the Directorate-General for Communications
Networks, Content and Technology (DG Connect) launched the International Outreach for
a Human-Centric Approach to Artificial Intelligence project (InTouchAI.eu). It aims to
“contribute to the setting up of a framework for ethics and trust to enable the growth of AI in
accordance with EU and universally recognised values and prepare the ground for global
consensus building in this field.” This study serves the purpose of providing evidence for the
promotion of the European Union's (EU) human-centric approach to AI in a global context.
International standardisation based on human-centric values can maximise the potential
benefits of AI in healthcare while minimising its potential risks.
Accordingly, this study aims toward the following objectives:
1. To provide an overview of relevant aspects in relation to AI and standardisation with a focus on medical devices;
2. To present an inventory of relevant past/ongoing international standardisation work and activities related to AI and health, which shall include more than an analysis of medical devices and related software;
3. To indicate medical device-specific gaps that future international or regional (notably European) standardisation may be called to fill;
4. To provide conclusions and recommendations for a Human-Centric Approach to AI, specifically focusing on the debate on AI standards in a global context.

1 https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/economy-works-people/jobs-growth-and-investment/european-pillar-social-rights/european-pillar-social-rights-20-principles_en
2 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
Information in this report was derived from extensive desk research, guided by key
publications reporting on comprehensive studies on the topic of interest. This information
was combined and summarised in a manner that informs the subsequent parts of the report.
1.1 METHODOLOGY
This report is based on a review of the literature and various web sources reporting on
existing AI, medical devices, and standardisation. Rapid reviews are increasingly used to
inform policy evidence needs and decisions due to their usefulness for generating prompt,
actionable and reliable evidence summaries (Irma et al., 2023). First, a PubMed search was
conducted to identify existing systematic reviews of horizon scanning systems. The search
strings were based on various keywords for AI, medical devices, and standardisation. To be
eligible for inclusion, publications had to be written in English, German, French, Spanish or
Dutch, published within the last ten years, and report on medical technologies. These reviews
were analysed and information relevant for guiding this report was summarised. Furthermore,
the primary studies synthesised in these reviews were revisited and searched for updates on AI,
medical devices, and standardisation.
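The report does not reproduce its exact search strings. Purely as an illustration of the kind of query described above (keyword blocks for AI, medical devices and standardisation, restricted to the last ten years), a search against PubMed's public E-utilities API could look as follows in Python. The terms, field tags and date window here are illustrative assumptions, not the study's actual protocol.

    import requests

    # Hypothetical search string combining the three keyword blocks named in the text.
    QUERY = (
        '("artificial intelligence"[Title/Abstract] OR "machine learning"[Title/Abstract]) '
        'AND ("medical device"[Title/Abstract] OR "software as a medical device"[Title/Abstract]) '
        'AND (standard*[Title/Abstract] OR standardisation[Title/Abstract])'
    )

    # Query the PubMed esearch endpoint, limited to the last ten years of publication dates.
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={
            "db": "pubmed",
            "term": QUERY,
            "retmode": "json",
            "datetype": "pdat",  # filter on publication date
            "mindate": "2014",
            "maxdate": "2024",
            "retmax": 200,       # cap the number of returned PubMed IDs
        },
        timeout=30,
    )
    resp.raise_for_status()
    ids = resp.json()["esearchresult"]["idlist"]
    print(len(ids), "candidate records for screening")

Records retrieved this way would still have to pass the language and scope criteria listed above before inclusion.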
Additionally, scientific and grey literature search engines were explored for topics on which
the existing literature reviews did not provide sufficient information. The desk research was
performed on relevant policy documents and ‘white and grey’ literature through databases
such as IEEE Xplore, Science Direct, PubMed as well as specific databases, e.g., EUR-Lex,
ISO, ProQuest and SSRN. Search terms were extracted from relevant publications
(literature, trend and industry reports and policy documents) related to the Human-Centric
Approach to AI in the context of healthcare and related areas. The selection was based on
criteria for high-risk AI applications’ conformity assessments as described in relevant AI
regulation and legislation, e.g., quality of data sets; technical documentation; record-keeping;
transparency and the provision of information to users; human oversight; robustness,
accuracy, and cybersecurity.
The information obtained through the desk research was assessed for possible gaps in
standardisation in the context of healthcare and for which gaps preferably could and should be
addressed in the future.
1.2 READERS GUIDANCE AND REPORT STRUCTURE
Standardisation often involves breaking down themes into specific definitions and
specifications. However, the understanding of the context in which standardisation activities
take place is important to determine the overall direction of a standardisation strategy and its
priorities to ensure that the standards developed are relevant, practical, and beneficial for the
citizens in the EU.
The nature of healthcare in Europe is changing rapidly because of ageing societies, rising
demand for more complex care, fewer resources, rapid technological advancements,
geopolitical changes and increased international market competition. This report aims to
provide a broader context for the role of AI-systems in relation to medical devices and
in-vitro diagnostics (IVD). The results of the research and their elaboration are structured
according to four interrelated domains which are relevant for the development of a strategy
for standardisation activities in relation to AI-systems for medical devices and IVDs:
Healthcare, Artificial Intelligence, Medical Technology and Standards (see figure 1 below).
Figure 1 Four interrelated domains translate into a strategy
Accordingly, the remainder of this report is structured in five interrelated chapters:

Chapter 2 Healthcare describes the unique characteristics of the sector, specific challenges
and points of attention which are relevant for the use of AI-systems in daily care processes
as well as their development, procurement, and financing.

Chapter 3 Artificial Intelligence explains the definitions, concepts and methods used in the
development and validation of AI-systems as well as the challenges with the implementation
and application of AI-systems in the healthcare context.

Chapter 4 Medical Technology elaborates on the medical technology sector, with an explanation
of the typical digital technologies and data, products, and services as well as important
technology developments and market dynamics. This chapter also integrates healthcare aspects,
Artificial Intelligence and medical technology as well as the relevant regulations and
legislation. Because of the unique characteristics of AI and its use in healthcare, the
development, validation, and use of AI-systems are discussed in a more elaborate and
integrated way here than in Chapter 3.

Chapter 5 Standards provides an overview of standards relevant for healthcare, medical
technology, and AI in relation to medical devices and IVDs; relevant international
initiatives are also described.

Chapter 6 Strategy describes the potential implications of the previous chapters for the
development of a European vision, strategy, and related standardisation activities. Finally,
conclusions and recommendations are provided.
2. HEALTHCARE
This chapter introduces the reader to the key concepts, developments and terminology from
a healthcare perspective that are relevant for AI and standardisation with a focus on medical
devices.
In the introduction, the meaning of European values and guiding principles for providing
person-centred, integrated, digitally enabled health and social care is elaborated. The
difference between the concepts of person-centredness and human-centricity is explained.
The introduction also describes the need for transformation to achieve better health system
resilience and sustainability under the pressure of an increasing demand for care but fewer
resources due to ageing and multi-crises.
The chapter addresses the impact of innovation and technology (and potentially the negative
impact of AI) on health expenditure due to overutilisation of healthcare and social care
resources. In this section the paradoxical effects of innovation and technology are
highlighted, which are important for understanding how AI-systems should be used and how
standards might play a role in preventing negative impacts while facilitating better outcomes.
Another key topic in health and social care is the developments in procurement, funding
and financing, which are relevant for AI-system integration and implementation. In particular,
the shift from cost-based to value-based financing is discussed in the context of
person-centredness and human-centricity.
There are also sections dedicated to healthcare legislation and regulation relevant for AI-systems and medical technology, as well as to soft law and regulation in healthcare practice.
2.1 INTRODUCTION
Healthcare or medical care refers to the maintenance and improvement of health through the
diagnosis, treatment, and prevention of illness, disease, injury, and other physical and mental
impairments in human beings. Healthcare encompasses a wide range of services provided
by healthcare professionals, such as doctors, nurses, pharmacists, therapists, and other
healthcare practitioners, as well as healthcare facilities such as hospitals, clinics,
laboratories, and other healthcare settings.
In Europe, each member state has its own policies and regulations for the governance,
financing, and management of its healthcare system, while the EU issues directives and
regulations related to specific areas of healthcare3 such as medicinal products, medical
devices, clinical research4, digital health5 (including eHealth, Well-Being & Ageing) and data
protection6, which aim to ensure high standards of safety, quality and accessibility of
healthcare across member states.

3 Directorate-General for Health and Food Safety https://commission.europa.eu/about-european-commission/departments-and-executive-agencies/health-and-food-safety_en
4 European Medicines Agency https://www.ema.europa.eu/en
5 https://health.ec.europa.eu/ehealth-digital-health-and-care_en
6 https://commission.europa.eu/about-european-commission/departments-and-executive-agencies/communications-networks-content-and-technology_en
European Union member states share the same values and guiding principles by which each
respective health system is organised. The key guiding principles are:
 Availability refers to ensuring that health services and essential health commodities are physically accessible and consistently available to those who need them. This includes having an adequate number of health facilities, trained health workers and necessary medical supplies and equipment in place to provide health services.
 Accessibility involves ensuring that health services are affordable, equitable and geographically accessible to all individuals and communities, including those in remote or underserved areas: this includes removing financial, geographical, cultural, and other barriers that may prevent people from accessing health care services.
 Acceptability means that health services are delivered in a manner that is respectful of individuals' culture, gender, age, and other social and personal characteristics. It includes ensuring that health services are provided with dignity, without discrimination and in a way that respects individuals' rights and preferences.
 Appropriateness refers to delivering health services that are evidence-based, of high quality and meet the needs of the population being served. This includes adhering to clinical guidelines, using effective and safe interventions and providing care that is relevant and responsive to the health needs of individuals and communities.
 Accountability should ensure that health systems are transparent, responsive, and responsible for delivering quality health services. This includes having effective governance and management structures in place, monitoring and evaluating the performance of health systems and holding health providers and policymakers accountable for their actions and decisions.
Furthermore, in terms of digital health, the Commission wants to ensure people are
empowered to fully enjoy the opportunities that the digital era brings. It has therefore proposed
a set of European digital rights and principles that reflect EU values and promote a sustainable,
human-centric vision for the digital transformation7. These include:
 People at the centre - Technology should serve and benefit all people living in the EU and empower them to pursue their aspirations. It should not infringe upon their security or fundamental rights.
 Solidarity and inclusion - Everyone should have access to technology, which should be inclusive and promote our rights. The declaration proposes rights in several key areas to ensure that nobody is left behind by the digital transformation, making sure that we take extra effort to include elderly people, people living in rural areas, persons with disabilities and marginalised, vulnerable or disenfranchised people and those who act on their behalf.
 Freedom of choice - Everyone should be empowered to make their own, informed choices online, including when interacting with Artificial Intelligence and algorithms. The declaration seeks to guarantee this by promoting human-centric, trustworthy, and ethical Artificial Intelligence systems, which are used in line with EU values. It pushes for transparency around the use of algorithms and Artificial Intelligence.
 Participation - Digital technologies can be used to stimulate engagement and democratic participation. Everyone should have access to a trustworthy, diverse, and multilingual online environment and should know who owns or controls the services they are using. This encourages pluralistic public debate and participatory democracy.
 Safety and security - Everyone should have access to safe, secure, and privacy-protected digital technologies, products, and services. The digital principles commit to protecting the interests of people, businesses and public services against cybercrime and confront those who seek to undermine the security and integrity of our online environment.
 Sustainability - The digital and green transitions are intricately linked. While digital technologies offer many solutions for climate change, we must ensure they do not contribute to the problem themselves. Digital products and services should be designed, produced, and disposed of in a way that reduces their impact on the environment and society. There should also be more information regarding the environmental impact and energy consumption of such services.

7 https://digital-strategy.ec.europa.eu/en/policies/digital-principles
Healthcare is intrinsically linked to planetary health, while human health and its determinants
are strongly influenced by the social-economic, political, and cultural situation at regional,
European, and international level. The COVID-19 pandemic and climate change show that
the environment and animal health are also directly linked to the health of people (Dye, 2022).
Biomedical factors and clinical care contribute relatively little, i.e. 15-20%, to the overall
health of citizens (Tarlov, 1999).
Figure 2 The relations between the environment, human and animal health
Figure 2 summarises the relations between the environment, human and animal health, as
well as the social-economic, political, and cultural context, and shows the relevant policy
levels and strategic aspects for healthcare improvement.
Health in all policies (HiAP) is a collaborative approach that integrates actions across all
sectors on the wider determinants of health: the social, environmental, economic, and
commercial conditions in which people live. By considering health in all policies, policy
actions can create the conditions for healthy lives and reduce health inequalities (Ståhl and
Koivusalo, 2020).
The EU is actively implementing the health in all policies approach in its health policy. The
EC Directorate-General for Health and Food Safety (DG SANTE) supports the efforts of EU
countries to protect and improve the health of their citizens and to ensure the accessibility,
effectiveness, and resilience of their health systems8. This is done through various means,
including proposing legislation, providing financial support, and coordinating and facilitating the
exchange of best practices between EU countries and health promotion activities9.

8 https://health.ec.europa.eu/eu-health-policy_en
9 https://health.ec.europa.eu/eu-health-policy/overview_en
The EU Global Health Strategy consists of a wide variety of policies aiming to
support planetary health goals, with supportive actions to facilitate research, digitalisation,
and capacity building10.
The EU4Health programme provides funding to improve health in the member states and
beyond, tackle cross-border health threats, improve the availability and affordability of
medicinal products, medical devices and crisis-relevant products, and improve the
health systems’ resilience11. Other EU programmes also invest in healthcare systems, health
research, infrastructure, or wider health-related aspects12.
The European Health Union will focus on both urgent and long-term health priorities, from
the response to the COVID-19 crisis and resilience to cross-border health threats, to
Europe’s Beating Cancer Plan, the Pharmaceutical Strategy for Europe and digital health13.
These actions demonstrate the EC's commitment to the health in all policies approach,
aiming to improve health and health equity across all sectors and policies. Accordingly, a
strategy for the development of standards for healthcare and the use of AI-systems requires
this broader context.
2.1.1 HUMAN CENTRICITY IN HEALTHCARE AND PUBLIC HEALTH
‘Human’ and ‘centricity’ have specific meanings in healthcare. In the context of ‘planetary
health’, ‘public health’, and ‘one health’, the term ‘human’ is used to distinguish human
health from animal and environmental health, while human centricity in this context does not
always carry positive connotations.
In healthcare, patient- or person-centred care is the more common expression. Patient- or
person-centredness refers to an approach that prioritises the needs, preferences, and
well-being of patients and caregivers in the process of health and social care delivery. It
emphasises a compassionate approach, where healthcare providers focus on understanding
and addressing the needs of each individual patient along the whole cycle of care, taking
into consideration their unique values, beliefs, culture and social context (Lawal et al., 2016,
Donabedian, 1988).
Due to ageing and the related rise of chronic conditions and co-morbidities, patient needs
become more complex. Accordingly, a better person-centred, integrated and coordinated
approach from a variety of actors in health and social systems is required to produce better
health outcomes.
Health outcomes are understood in this context as meaningful outcomes perceived by
patients and not results in (bio-)medical terms such as physiological functions and
pathological indicators (Wolf, 2016). Patient-Reported Outcome Measures (PROMs) and
Patient-Reported Experience Measures (PREMs) provide information
directly from patients about their health, symptoms, functional status, quality of life and
well-being. PROMs and PREMs are typically used in healthcare to assess the effectiveness of
interventions, treatments, and healthcare services from the perspective of the patients
themselves: a key aspect of patient-centred care, as they provide insights into how patients
perceive their own health.

10 EU Global Health Strategy to improve global health security and deliver better health for all https://ec.europa.eu/commission/presscorner/api/files/document/print/en/ip_22_7153/IP_22_7153_EN.pdf
11 https://health.ec.europa.eu/publications/factsheet-european-union-actions-ensure-better-health-all-changing-world_en
12 https://commission.europa.eu/news/european-health-union-four-years-acting-together-peoples-health-2024-05-22_en
13 https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/promoting-our-european-way-life/european-health-union_en
The implication is that person-centred care and related outcomes are the result of a total care
process and not separate and single actions, interventions, or technologies. The care
process or care-cycle comprises a sequence of events that a patient goes through in the
process of receiving healthcare services, from diagnosis to treatment to follow-up and
beyond (Desmedt et al., 2016, Carolan et al., 2022).
2.1.2 INTEGRATED SERVICES TO ENSURE PERSON-CENTRICITY
Integrated care is a health and social care service delivery model that focuses on
anticipating individual health and social needs through delivering digitally enabled, integrated
and high-quality care services (Frisch and Rabinowitsch, 2019, Baxter et al., 2018). The
integrated care approach emphasises the process of integration, coordination, and
communication, while Value-Based HealthCare (VBHC) focuses on improving outcomes
relative to the costs incurred as its primary driver. VBHC highlights the
value of care, which is defined as the patient-relevant outcomes (what matters most)
achieved per costs incurred over the whole care-cycle. Both integrated care and VBHC aim
to improve patient outcomes, enhance patient experience, and optimise resource utilisation
by providing care that is evidence-based, technology-enabled, patient-centred, integrated,
and efficient (figure 3).
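Expressed schematically (our rendering of the definition above, consistent with the value-based healthcare literature rather than a formula printed in this report):

    \[ \text{Value} \;=\; \frac{\text{patient-relevant outcomes achieved over the whole care-cycle}}{\text{costs incurred over the whole care-cycle}} \]

Both numerator and denominator are defined over the full care-cycle rather than over single interventions, which is why person-centred outcomes cannot be attributed to isolated actions, interventions or technologies.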
Figure 3 Delivery of person-centred services over the whole care-cycle to produce outcomes
The use of AI, often related to or through medical devices, has the potential to facilitate the
delivery of health and social care services by supporting integrated care, improving patient
outcomes, and reducing costs. Artificial Intelligence can help to personalise care for
individual patients based on their specific needs and medical history through more efficient
and effective use of data. Possible AI applications are (Davenport and Kalakota, 2019,
Cingolani et al., 2022):
 Artificial Intelligence-based chatbots or conversational agents similar to ChatGPT (Thirunavukarasu et al., 2023) could facilitate understanding and accessibility of health services for patients while helping professionals with symptom inventory, history taking and triage (Li et al., 2023).
 Predictive analytics, such as the establishment of complicated risk-profiles or discharge planning, through the analysis of enormous amounts of historical data and pattern recognition on patients with similar health issues. This can help healthcare providers to predict and prevent health problems before they occur, allowing for earlier and more effective interventions (de Hond et al., 2022); see the illustrative sketch after this list.
 Artificial Intelligence can help to coordinate care based on real-time data between different healthcare providers to ensure that patients receive the right care at the right time. This can be particularly beneficial for patients with complex care needs who require input from multiple professionals (Lebcir et al., 2021).
 Decision support based on AI could help health professionals to make more accurate and informed decisions about patient care, such as identifying potential drug interactions or suggesting treatment options based on a patient's medical history (Walker et al., 2022, Bajgain et al., 2023).
 Artificial Intelligence can be used to monitor patients remotely, providing healthcare providers with real-time data about a patient's health status. This can help to identify potential problems early on, allowing for timely interventions (Dubey and Tiwari, 2023).
 Reducing administrative burden through automation of time-demanding tasks, often related to the management of patients in multi-disciplinary teams, with the help of AI, as well as detection of (administrative) errors (Iqbal et al., 2022).
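As a minimal illustration of the predictive analytics application above, the following Python sketch trains a simple risk-stratification model on synthetic patient data. The cohort, features, coefficients and the 30% follow-up threshold are hypothetical assumptions standing in for the large historical datasets the text refers to, not a clinically validated model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    # Hypothetical patient features: age, number of chronic conditions,
    # admissions in the previous year.
    X = np.column_stack([
        rng.normal(70, 10, n),
        rng.poisson(2, n),
        rng.poisson(1, n),
    ])
    # Synthetic ground truth: readmission risk rises with all three features.
    logit = -8 + 0.06 * X[:, 0] + 0.5 * X[:, 1] + 0.7 * X[:, 2]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Score unseen patients and flag the highest-risk group for early,
    # preventive follow-up by the care team.
    risk = model.predict_proba(X_test)[:, 1]
    print("discrimination (AUC):", round(roc_auc_score(y_test, risk), 2))
    print("patients above 30% predicted risk:", int((risk > 0.3).sum()))

In practice such a model would be trained on curated clinical data, validated across sites and monitored after deployment, which is exactly where the standardisation questions discussed in this report arise.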
2.1.3 HEALTH SYSTEM RESILIENCE AND SUSTAINABILITY
Almost all health systems in the EU and beyond struggle with longstanding structural
fragmentation in terms of services, technology, governance, and finances. This
fragmentation has led to a variety of challenges in healthcare systems all over the world
that became strikingly apparent during the COVID-19 pandemic (Dal Mas et al., 2023).
Healthcare system fragmentation is due to several factors:
 (Hyper)specialisation, different providers, and varying levels of care. Patients may receive care from multiple healthcare providers across different settings, such as primary care, hospitals, social care, rehabilitation, and long-term care facilities. The lack of coordination and communication among physicians, specialists, nurses, managers, social care workers, and professionals from other organisations results in fragmented care, leading to inefficiencies, medical errors, and poor patient outcomes.
 A fragmented technology landscape due to the presence of numerous systems, including medical devices, digital platforms, and apps that do not always communicate or integrate well with each other. Electronic health records (EHRs) used by different providers may not be compatible, hindering the seamless exchange of patient information. This impedes information sharing, care coordination and continuity of care.
 Healthcare governance refers to the policies, regulations and organisational structures that govern the healthcare system. Lack of coordination and alignment among government agencies, insurance companies, healthcare providers and professional associations, as well as dispersed decision-making between central and de-central authorities, results in conflicting policies, inconsistent standards, and inefficiencies in resource allocation.
 Healthcare financing is often fragmented, with multiple sources of funding (also research and innovation), such as private insurance, government programs, out-of-pocket payments, and various reimbursement models. This fragmentation causes complexity in billing, reimbursement processes, and administrative burdens for both providers and patients. Hence, it contributes to disparities in access to care and financial burdens for individuals and populations with inadequate insurance coverage and reimbursement, e.g., for medical devices.
Meanwhile, the pressure caused by an increasing demand for more complex care due to an
ageing population is becoming unmanageable (Hiroshima, 2023).
The long-standing problems with coordination and collaboration, as well as inadequate
human, financial and technical resources, became apparent in a hampered response to an
instant public health crisis: the COVID-19 pandemic (figure 4)14. Accordingly, most
healthcare systems are considered not resilient to shocks such as infectious outbreaks or
other societal health threats such as anti-microbial resistance, pollution, and climate change.

The lack of sustainability of healthcare systems is explained by a variety of factors, including
insufficient financial and human resources and a limited ability of healthcare systems to adapt
to changing needs and circumstances. One of the biggest challenges for healthcare systems
is the rising cost of healthcare within a context of decreasing resources and a rising
demand for care.
In addition to financial sustainability, healthcare systems must also consider their environmental impact. The healthcare industry can be a significant contributor to greenhouse gas emissions and waste generation, which can have a negative impact on public health and the environment. By adopting sustainable practices, such as energy-efficient buildings, waste reduction programs and sustainable procurement policies, healthcare systems can reduce their environmental impact while also improving their long-term sustainability.
Overall, the sustainability of healthcare systems is an ongoing concern that requires sustained attention and investment. By addressing key challenges related to cost, access and environmental impact, healthcare systems can work towards a more sustainable future.
14 Strengthening health systems resilience: key concepts and strategies. https://eurohealthobservatory.who.int/publications/i/strengthening-health-system-resilience-key-concepts-and-strategies
Figure 4 Challenges of health and social care systems
2.1.4 THE HEALTHCARE PARADIGM SHIFT
To address the challenges described above, healthcare systems may need to implement new models of care that prioritise prevention and primary care rather than expensive and reactive treatments. With the rising cost of care, staff shortages, evolving patient expectations, and persistent inequities in access to quality care, the current hospital-centric model of care delivery is under growing pressure. Digital technology enables care that has traditionally been delivered mono-disciplinarily in hospitals to become a multi-disciplinary provision across organisations in lower-cost settings such as primary care centres in communities and/or at home, as illustrated in figure 5 (Gray et al., 2021).
Figure 5 Healthcare paradigm shift
Integrated multi-disciplinary person-centred care provided in communities is impossible
without adequate digitally enabled services and seamless information exchange between
citizens, health and social care professionals and relevant agencies.
Digital platforms with integrated services and technology to support care provision
The COVID-19 pandemic limited physical face-to-face interaction between people and health professionals, and in response an increased use of digital communication was observed in many countries: it showed the potential of digital technology as an alternative form of interaction in healthcare at scale. Developments in technology, devices, digital solutions, data science and AI hold the promise to make care more personalised, timely and efficient. However, available digital applications are mostly stand-alone or fragmented solutions addressing specific issues of health and disease, usually not integrated into a coherent functional approach with other digital health solutions, including medical devices.
Digital platforms could potentially facilitate the integration of online and offline care services, also characterised as hybrid or blended care models. Such digital health service platforms should integrate a range of health-related services and support, such as symptom assessment, triage, appointment scheduling, online interaction, prescription management, telemedicine and monitoring (including medical devices). By bringing these services together in one platform, it becomes easier for patients to manage their health, potentially reducing healthcare costs and improving patient outcomes, while also providing healthcare providers with valuable tools for managing patient care, including real-time patient data, analytics, decision support, medical record keeping, administrative support and communication tools.
2.2 THE IMPACT OF INNOVATION AND TECHNOLOGY ON HEALTH EXPENDITURE
Expenditure on healthcare innovation, such as new pharmaceuticals and medical technology (including health IT such as hospital information systems), can vary significantly across member states, healthcare systems and the nature of new products, services, and processes.
This generally includes the costs associated with the purchase or lease of solutions, as well as costs related to training, maintenance, and upgrades. It may also encompass costs associated with the research and development of new innovations. In health expenditure containment it is essential to balance the costs and benefits of innovation, ensuring the accessibility and affordability of novel solutions while optimising healthcare outcomes for patients. Therefore, the evaluation of the cost-effectiveness and value for patients of these new solutions is an important consideration. Average spending on medical technology as part of the total health expenditure budget in Europe varies between 3% and 7%, spending on pharmaceuticals takes about 8% to 15%, and the remainder, up to 80%, goes to operational healthcare costs.
Whereas healthcare innovations potentially have a positive impact on health outcomes by improving the diagnostic process, enhancing treatment options, enabling personalised care, facilitating disease management, and supporting rehabilitation and assistive technologies, some developments tend to increase healthcare expenditure unsustainably. Although medical technology has the potential to improve patient outcomes and reduce costs through more accurate diagnoses, less invasive procedures and more efficient care delivery, it can also drive up costs and do harm, particularly if it is overused or misused15, leading to increased healthcare costs without necessarily improving patient outcomes.
For example, the Netherlands National Institute for Public Health and the Environment estimates that the introduction of new pharmaceuticals and medical technologies accounts for two-thirds of the increase in healthcare costs: figure 6.
15 OECD/European Union (2022), Health at a Glance: Europe 2022: State of Health in the EU Cycle, OECD Publishing, Paris. https://doi.org/10.1787/507433b0-en
Figure 6 Effect of ‘innovation’ on healthcare spending
It is important to note that the impact of medical technology on healthcare costs is complex and multifactorial. Medical technology can increase healthcare costs or lead to improved patient outcomes and cost savings. Evidence-based decision-making and careful consideration of the value and cost-effectiveness of medical technologies are therefore essential for managing healthcare costs while optimising patient care when introducing AI-systems in healthcare.
2.2.1 OVERUTILISATION
An often overlooked or underestimated cost factor in Health Technology Assessment (HTA) is that new diagnostic or treatment solutions generate more, and often unnecessary, utilisation of these solutions. Overutilisation, or over-diagnosis and over-treatment, can be due to increased awareness among patients, healthcare providers and insurers and the belief that the new solution is superior to existing options. It is important for HTA to consider not only the benefits of innovative solutions but also their potential costs and unintended consequences, including the potential for increased utilisation.
Overutilisation can occur in various healthcare settings, including screening programs,
routine medical exams, imaging tests and laboratory tests i.e., typically, procedures likely
to be supported by AI-systems. It can lead to unnecessary medical interventions, such as
surgeries, medications, or other treatments, which can include associated risks, costs, and
potential harm, including adverse effects, complications, psychological distress, and
medicalisation. There are several factors that can contribute to overdiagnosis, including:
 Screening programs, such as those for cancer, may identify small or slow-growing tumours that would not have caused harm during a person's lifetime. However, once detected, these tumours may be treated, even though they may not have posed a threat to the person's health or longevity.
 Diagnostic testing: advances in medical imaging and laboratory testing may lead to the detection of incidental findings that do not have clinical significance or require treatment. For example, small, asymptomatic abnormalities detected on imaging tests, such as CT scans or MRI, may lead to further investigation and treatment, even though they may not be clinically relevant.
 Expanded disease definitions: changes in disease definitions or diagnostic criteria may result in the identification of more individuals as having a condition or disease, even if they do not experience symptoms or require treatment. For example, the lowering of diagnostic thresholds for conditions like hypertension or diabetes may lead to overdiagnosis and overtreatment of individuals who may not actually benefit from aggressive interventions.
 Defensive medicine: the fear of medical malpractice claims or litigation may lead healthcare providers to over-diagnose and overtreat patients out of overcautiousness, when the clinical evidence for a specific condition is uncertain.
 Patient demand and expectations in relation to medical testing and treatment, driven by factors such as health anxiety, information from the internet, or a desire for reassurance, may contribute to overdiagnosis, as healthcare providers may feel pressured to order unnecessary tests or treatments to meet patient expectations.
Increased utilisation can lead to unnecessary costs and potentially harm patients (see figure 7): when a new diagnostic test is introduced that is more sensitive than existing tests, it may produce more false positives, which can result in unnecessary testing and procedures. Similarly, if a new drug or an AI-based diagnostic algorithm is introduced, it may be used more frequently than necessary, leading to increased costs and potential side-effects.
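A small worked example makes the false-positive mechanism concrete. The sketch below, in Python, uses Bayes' rule to compare the positive predictive value (PPV) of two hypothetical tests; the sensitivity, specificity and prevalence figures are illustrative assumptions, not data from this report.

```python
# Illustrative only: how a more sensitive test can increase false positives.
# All numbers are assumptions for the sake of the example.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive result is a true positive (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Screening a population for a condition with an assumed 1% prevalence.
old_test = positive_predictive_value(sensitivity=0.80, specificity=0.95, prevalence=0.01)
new_test = positive_predictive_value(sensitivity=0.99, specificity=0.90, prevalence=0.01)

print(f"Old test PPV: {old_test:.1%}")  # ~13.9%
print(f"New test PPV: {new_test:.1%}")  # ~9.1%: more positives, but a larger share are false
```

With these assumed numbers, the more sensitive test flags more cases, yet a smaller share of its positives are true positives, which is exactly the overutilisation risk described above.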
There is also a significant financial incentive for various stakeholders in the healthcare system, which can contribute to overutilisation. Investments related to the purchase of a new solution, e.g., equipment, must be paid off or provide a return on investment.
Figure 7 Over-diagnosis, over-treatment, and over-use
Hence, novel solutions are frequently used in business models with the objective to generate profit. Especially in a fee-for-service or activity-based reimbursement system, healthcare providers are paid for each service they provide, regardless of whether it is necessary. This can create an incentive to perform more tests, procedures, and treatments than necessary, leading to overutilisation and potentially to overdiagnosis. Accordingly, responsible adoption of AI solutions in health and social care requires that (autonomous) AI-systems are financially reimbursed and incentivised to contribute to cost-effective, affordable and sustainable services (Abramoff et al., 2022, Siala and Wang, 2022).
Drivers of over-diagnosis and over-utilisation are well described. Advancing technology allows detection of disease at earlier stages or 'pre-disease' states. Well-intentioned enthusiasm and vested interests combine to lower treatment and intervention thresholds so that ever larger sections of the healthy population acquire diagnoses, risk factors or disease labels (Heath, 2013). This process is supported by medico-legal fear and by payment and performance indicators that reward over-activity. It has led to a guideline culture that has unintentionally evolved to squeeze out nuanced, person-centred decision making. Meanwhile, populistic narratives around the supposed benefits of early detection and intervention are difficult for professionals and public alike to reverse (Moynihan et al., 2013).
In 2015 the Royal College of General Practitioners Council passed a policy paper on overdiagnosis16 with a recommendation for five 'tests' to be applied to College output to reduce the risk of overmedicalisation: approved proposals and standards for screening, clarity about which populations benefit, shared decision making, patient involvement and declaration of (financial) interests.
16 RCGP Standing Group on Overdiagnosis. For shared decisions in healthcare. http://www.rcgp.org.uk/policy/rcgp-policy-areas/~/media/Files/Policy/A-Z-policy/2015/C72 Standing Group on Over-diagnosis - revise 2.ashx
2.3 THE SHIFT FROM FEE-FOR-SERVICE TO OUTCOME-BASED FINANCING AND PROCUREMENT
To counter overutilisation and fragmentation and to incentivise person-centred care services, there is a trend towards outcome-based or pay-for-performance financing. Performance-based financing comprises value-based payment models (also called Value-Based Agreements and Value-Based Pricing), which shift payments from volume-based to performance- or value-based payments, where value is defined as health outcomes achieved per unit of cost (value = health outcomes / costs). They align reimbursement with the achievement of value-based care in a defined population, in which healthcare service providers (in partnership with patients and healthcare organisations) are held accountable for achieving financial goals and health outcomes that matter to patients (Menser and McAlearney, 2018).
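A minimal sketch, assuming invented outcome scores and costs, of how this value ratio could be computed when comparing two care pathways:

```python
# Minimal sketch of the value logic (value = health outcomes / costs).
# The outcome scores and costs below are invented for illustration.

def value_score(outcome_score: float, total_cost: float) -> float:
    """Outcomes that matter to patients achieved per unit of cost."""
    return outcome_score / total_cost

# Two hypothetical care pathways for the same defined population.
fee_for_service = value_score(outcome_score=70.0, total_cost=12_000.0)  # volume-driven
integrated_care = value_score(outcome_score=78.0, total_cost=10_500.0)  # coordinated

print(f"{fee_for_service:.4f} vs {integrated_care:.4f}")
# Under value-based payment, the provider network with the higher ratio is
# rewarded, rather than whichever pathway performed more activities.
```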
Performance-based financing and value-based payment facilitate integrated, digitally enabled, person-centred care by encouraging risk-sharing and coordination across providers to improve health and social outcomes for both individuals and populations17. Providers specialising in value-based care have become attractive to investors because of the distinctive quality of care that they can provide18. In performance-based financing, providers are incentivised to meet or exceed specific performance targets or quality standards in order to receive payment, in contrast to fee-for-service or activity-based financing models, which tend to incentivise overutilisation (Eijkenaar, 2011, de Bruin et al., 2011).
In performance-based financing, the responsible provider (or provider network, in the case of integrated care with collaborating providers) is paid for the outcome achieved and not for the activity or service performed (Conrad, 2015). Therefore, it is the responsibility of the provider to decide which resources, activities, equipment, consumables, and medications are used to deliver the best outcomes for a reasonable price. Consequently, technology such as medical devices and software is considered part of a product-service combination, or the integration of technology into the process of care, and not seen as a separate item for reimbursement (Whaley et al., 2014, Mantovani et al., 2023).
As explained in the previous paragraphs, outcome according to person-centred care means patient-relevant outcomes and not necessarily achieved medical outcomes. Consequently, the logic of person-centred care is that products and services should be integrated to meet the needs of individual patients and to achieve the outcomes that matter to people, i.e., value; accordingly, providers are paid for their performance19: figure 8.
17 Talking Value: A taxonomy on Value-Based Healthcare. EU Alliance for Value in Health. November 2022. https://www.europeanallianceforvalueinhealth.eu/library/talking-value-a-taxonomy-on-value-based-healthcare/
18 https://www.mckinsey.com/industries/healthcare/our-insights/investing-in-the-new-era-of-value-based-care
2.3.1 VALUE-BASED PROCUREMENT
Value-based procurement is an approach to procurement in which the focus is on gaining
the best possible value for money in terms of both the cost and the quality of the products
or services being procured. It involves taking a more holistic approach to procurement, which
considers not only the price of the services being procured but also their impact on patient
outcomes and the overall healthcare system.
Conventional procurement tends to prioritise the lowest possible price of the (single) products and activities being procured. This means that its primary goal is to find the cheapest viable option that meets the basic requirements of the procurement, without necessarily considering other factors such as quality, long-term cost-effectiveness, or patient outcomes.
Figure 8 Product-Service integration: a pre-condition to create value
In the context of healthcare, value-based procurement seeks to prioritise the selection of the best product-service combinations, i.e., those that provide the greatest value to patients and the healthcare system. This may include considering factors such as patient outcomes, safety, and long-term cost-effectiveness, in addition to the upfront price of the product or service: figure 9.
19 Porter ME, Lee TH. The Strategy That Will Fix Health Care. Harvard Business Review. https://hbr.org/2013/10/the-strategy-that-will-fix-health-care
Figure 9 From price to a value-based procurement
Value-based procurement also involves a collaborative approach between suppliers and healthcare providers to ensure that the products and services being procured meet the needs of patients and the healthcare system. This may involve communicating with suppliers to understand the benefits and limitations of their products and exploring ways to improve the overall value of the procurement; this applies to the procurement of medical devices as well. Accordingly, procurement arrangements between suppliers and healthcare providers, as well as procurement arrangements between healthcare providers and public/private payers, are changing.
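As a hedged illustration of how such arrangements can be operationalised, the sketch below scores bids against weighted award criteria; the criteria, weights and bid scores are assumptions for the example, not prescribed values.

```python
# Illustrative weighted-criteria scoring for value-based procurement.
# Criteria, weights and bid scores (0-10) are invented for this example.

CRITERIA_WEIGHTS = {
    "patient_outcomes": 0.35,
    "safety": 0.20,
    "long_term_cost_effectiveness": 0.25,
    "upfront_price": 0.20,  # price is one criterion among several, not the only one
}

def weighted_score(bid: dict) -> float:
    """Aggregate a bid's criterion scores using the agreed weights."""
    return sum(CRITERIA_WEIGHTS[c] * bid[c] for c in CRITERIA_WEIGHTS)

# A higher upfront_price score means a cheaper bid.
cheapest_product = {"patient_outcomes": 5, "safety": 6,
                    "long_term_cost_effectiveness": 4, "upfront_price": 9}
product_service_bundle = {"patient_outcomes": 8, "safety": 8,
                          "long_term_cost_effectiveness": 8, "upfront_price": 6}

print(weighted_score(cheapest_product))        # 5.75
print(weighted_score(product_service_bundle))  # 7.6
```

Under the assumed weights, the product-service bundle outscores the cheapest single product, which is the intended behaviour of value-based procurement compared with lowest-price procurement.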
2.4 RELEVANT LEGISLATION AND REGULATION
Legislation and regulation establish legal requirements that healthcare providers and
organisations must comply with. This may include requirements related to patient safety,
quality of care, data privacy, and other issues. Healthcare providers and organisations that
fail to comply with these regulations may be subject to penalties or sanctions, such as fines,
license revocation, or legal action. There is specific legislation and regulation in healthcare
which might have implications for the development of standards for AI-based solutions.
2.4.1 EU LAWS AND REGULATIONS IN RELATION TO HEALTHCARE
The European Union (EU) has several laws and regulations that apply to healthcare,
including:
 The Treaty on the Functioning of the European Union (TFEU)20, which sets out the framework for EU law and includes provisions related to public health. It gives the EU the power to adopt measures to protect human health and prevent diseases and to support the development of medical research.
 The Cross-Border Healthcare Directive, which provides EU citizens with the right to access healthcare services in other EU countries and to be reimbursed for those services. The subject of Directive 2011/24/EU is the harmonisation of the health systems of the Member States to achieve a high level of protection of public health21. To this end, the Directive lays down rules to facilitate access to safe and high-quality cross-border healthcare and promotes cooperation in the field of healthcare between Member States. The Directive applies alongside the Directives on medical devices (93/42/EEC) and on in vitro diagnostic medical devices (98/79/EC); the MDR and IVDR are accordingly aligned.
 The Patient Mobility Directive, which describes the rules for patients who wish to receive medical treatment in another EU country, including the right to reimbursement for certain treatments22.
 The Directive on the Recognition of Professional Qualifications, which defines rules for the recognition of professional qualifications, including those in the healthcare sector, across the EU23.
Note: The Medical Devices Regulation (MDR) and the General Data Protection Regulation (GDPR) will be discussed in the chapter 'Medical Technology'. Further, the European Medicines Agency (EMA) is a regulatory body responsible for evaluating and supervising medicines for use in the EU. It operates under the framework of EU legislation and is responsible for ensuring that medicines are safe, effective and of high quality.
2.4.2 THE OVIEDO CONVENTION
The Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine, also known as the Oviedo Convention, is a legally binding international treaty that outlines ethical standards for the use of medical and biological technologies. The Oviedo Convention was adopted by the Council of Europe and has been signed and ratified by many EU member states. As a result, the Oviedo Convention is considered a relevant international legal instrument in the EU, and its provisions are often considered in the development of EU law and policy related to healthcare.
The Oviedo Convention primarily deals with ethical issues related to human biology and medicine, rather than the regulation of medical devices and in-vitro diagnostic (IVD) devices. However, the Oviedo Convention does contain provisions related to the use of medical devices and IVDs (and related data24) in the context of medical research and clinical practice. For example, Article 17 of the Oviedo Convention states that the use of medical devices and IVDs in research and clinical practice should be subject to appropriate regulation and oversight and that the risks and benefits of their use should be carefully considered. The article also states that the use of these devices should be in accordance with established medical practice, i.e., clinical guidelines, and with respect for the dignity and rights of the individual. In addition, the Oviedo Convention recognises the importance of protecting patients from harm in the context of medical research and clinical practice, which can be achieved in part through appropriate regulation and oversight of medical devices and IVDs.
20 Consolidated version of the Treaty on the Functioning of the European Union. http://data.europa.eu/eli/treaty/tfeu_2012/oj
21 Directive 2011/24/EU on the application of patients' rights in cross-border healthcare, article 2. http://data.europa.eu/eli/dir/2011/24/oj
22 Directive 2011/24/EU of the European Parliament and of the Council of 9 March 2011 on the application of patients' rights in cross-border healthcare. http://data.europa.eu/eli/dir/2011/24/oj
23 https://single-market-economy.ec.europa.eu/single-market/services/free-movement-professionals/recognition-professional-qualifications-practice_en
On 7 June 2022, the Council of Europe's Steering Committee for Human Rights in the fields of Biomedicine and Health (CDBIO) issued a new report on the impact of Artificial Intelligence on the doctor-patient relationship. The report examines AI-systems with regard to the doctor-patient relationship in relation to human rights principles and the impact of AI according to six themes:
1 Inequality in access to high quality healthcare;
2 Transparency to health professionals and patients;
3 Risk of social bias in AI-systems;
4 Dilution of the patient's account of well-being;
5 Risk of automation bias, de-skilling, and displaced liability; and
6 Impact on the right to privacy.
While the Oviedo Convention does not specifically address the regulation of medical devices
and IVDs in detail, it provides a broader ethical framework for the use of these devices in
healthcare, emphasising the importance of patient safety, informed consent, and respect for
human dignity. These principles are relevant to the regulation of medical devices and IVDs
in the EU, which is governed by separate legislation such as the EU Medical Devices
Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR).
2.4.3 THE CLINICAL TRIALS REGULATION
In addition, the Oviedo Convention has influenced the development of EU regulations related to healthcare, such as the GDPR and the EU Clinical Trials Regulation (CTR)25. Both regulations incorporate principles of the Oviedo Convention, such as the right to privacy and informed consent. The Oviedo Convention plays a key role in shaping ethical standards and legal frameworks for healthcare in the EU, and its provisions are an important reference point for EU law and policy in this area.
24 Oviedo Convention and its Protocols, Chapter 1, General provisions, Article 11: Medical confidentiality and data and requirements on their processing. https://www.coe.int/en/web/bioethics/oviedo-convention
25 Regulation (EU) No 536/2014. https://health.ec.europa.eu/medicinal-products/clinical-trials/clinical-trials-regulation-eu-no-5362014_en
The CTR, which came into application on 31 January 2022, governs the conduct of clinical trials of medicinal products for human use within the EU and is intended to streamline the regulation of clinical trials across the EU, ensuring patient safety and facilitating the conduct of multinational clinical trials. Although the CTR primarily applies to clinical trials of medicinal products for human use, medical devices that are used in the context of clinical trials may be subject to certain provisions of the CTR. Accordingly, the application of AI and machine learning and the development of algorithms may be subject to CTR rules.
Artificial Intelligence can be used in many ways in clinical trials, such as to identify suitable participants, to monitor patients' safety and to analyse trial data. When AI is used in the context of clinical trials, it must comply with the general principles of the good clinical practice (GCP) guidelines26 outlined in the CTR, which include the need for informed consent, the protection of study participants and the reporting of adverse events. In addition, the use of AI in clinical trials may raise specific ethical and regulatory issues, such as the need to ensure the fairness and transparency of AI algorithms and the need to protect patient data (Kleinberg et al., 2018).
26 https://www.ema.europa.eu/en/ich-e6-r2-good-clinical-practice-scientific-guideline
The CTR establishes a centralised system of regulatory oversight for clinical trials conducted within the EU, with the European Medicines Agency (EMA) playing a key role in the authorisation and oversight of clinical trials. The CTR sets out the process for obtaining authorisation to conduct a clinical trial, including the requirements for submitting a clinical trial application and the criteria that must be met for authorisation to be granted.
The CTR contains requirements for the conduct of clinical trials, including the need for informed consent, the protection of trial participants, the reporting of adverse events and the use of good clinical practice. The CTR requires the public registration of all clinical trials conducted within the EU, as well as the reporting of their results in the EU Clinical Trials Register. The CTR also includes provisions to facilitate the conduct of multinational clinical trials, such as the mutual recognition of clinical trial data across the EU. Hence, the EU Clinical Trials Regulation provides a consistent and transparent framework for the conduct of clinical trials across the EU, including the development and use of AI related to medical devices, ensuring patient safety and facilitating the development of novel solutions.
Overall, these laws and regulations provide a framework for healthcare within the EU, aimed
at ensuring patient safety and access to high-quality healthcare services.
2.4.4 SOFT LAW AND REGULATION IN HEALTHCARE PRACTICE
In healthcare practice, provisions and regulations also exist as soft law. The term 'soft law' refers to instruments that are legally non-binding but whose provisions are often followed in practice; thus, even if non-binding, soft law has important implications for daily care practice. Soft law includes standards, directives, and quality frameworks. Soft law is not limited to any of the hierarchical steps of legislation but occurs at every level of healthcare practice. Accordingly, the rules and conventions must have such expressiveness that, despite the absence of legal status, they are observed by the professional field. For example, professional organisations may issue codes of conduct or ethical and clinical guidelines to help healthcare providers navigate complex situations or to promote certain standards of care.
Soft law is typically established by professional groups structured according to their speciality and organised in associations. This applies to both registered medical professionals and non-medical support staff, e.g., IT and financial managers. These associations act at a state/regional level, such as in Germany and Spain, as well as at a national, European, and/or international level. Professional groups are important for their speciality advocacy but also have an essential role in ensuring the values and principles of safe and high-quality care through knowledge sharing, training, continuous education, accreditation and anticipating future developments in medicine. As such, professional groups are paramount in (clinical) guideline and standard development (Beauchemin et al., 2019).
Soft law and regulation may work together to promote safe and effective healthcare practice: a regulatory authority, payers or insurers may issue formal regulations that establish minimum standards of care, while professional organisations may issue guidelines or recommendations that promote higher standards of care or address emerging issues not covered by the regulations (Shachar, 2022).
Therefore, it is important that, besides patient representatives, professional groups are involved in the development of legislation, regulation, and soft law, because these professionals are responsible for the eventual implementation, adherence, and monitoring of the related provisions. The attitudes of health professionals towards AI vary widely, as AI is a rapidly evolving technology with potential applications across many aspects of healthcare. Some health professionals are enthusiastic about the potential benefits of AI, such as improved accuracy and efficiency in diagnosis and treatment, while others are more cautious and concerned about the risks and ethical considerations associated with AI in healthcare (Prakash et al., 2022). The perception that AI will substitute health professionals should be avoided; instead, it is important that healthcare professionals learn how to work with AI applications to generate benefits for the improvement of care (Mittelman et al., 2018).
The principle of 'not doing harm', also known as non-maleficence, is a fundamental principle in healthcare and medical ethics. Medical practitioners are required to ensure they do not harm or allow harm to come to a patient through neglect. This concept is intricately linked to beneficence, another fundamental principle in medical ethics that focuses on promoting the well-being of patients27. Non-maleficence means "do no harm" and obligates healthcare professionals to refrain from causing harm to patients. This harm can be intentional or unintentional, for example through the use of algorithms and AI in diagnosis.
27 Patient safety - World Health Organization (WHO). https://www.who.int/news-room/fact-sheets/detail/patient-safety
This principle of 'not doing harm' is applied in many ways. For example, it guides healthcare professionals to avoid treatments or procedures that could cause harm or pose unreasonable risks to patients. It also obligates them to take precautions to prevent harm from occurring, such as by following safety protocols and guidelines28. One of the challenges with this
principle is that some treatments may have both beneficial and harmful effects. In such
cases, healthcare professionals must weigh the potential benefits against the potential harms
to make the best decision for the patient. Non-maleficence is closely related to other
principles of medical ethics, such as autonomy (respecting the patient's right to make their
own decisions), and justice (treating all patients fairly and equitably)29.
The principle of 'accountability' in healthcare is a key aspect of professional ethics and governance. Accountability in healthcare refers to the obligation of healthcare professionals and organisations to answer for their actions, whether positive or negative, to accept responsibility for them and to disclose the results in a transparent manner. The concept of accountability contains three essential components:
1 There are multiple parties in healthcare that can be held accountable or hold others accountable, e.g., companies;
2 Parties in healthcare can be held accountable for various activities, including professional competence, legal and ethical conduct, financial performance, adequacy of access, public health promotion and community benefit; and
3 Accountability involves formal and informal procedures for evaluating compliance with these domains and for disseminating the evaluation and the responses by the accountable parties.
Accountability is applied in several ways in healthcare. For example, healthcare
professionals are accountable for the quality of care they provide and for their professional
and ethical conduct. Healthcare organisations, on the other hand, are accountable for
providing safe, effective, and high-quality care. One of the challenges with this principle is
ensuring transparency and openness in healthcare practices. This includes being open
about mistakes and learning from them to improve patient care.
Doctors who are employed and instructed by organisations or senior stakeholders
implementing new technology are to a great extent shielded from liability risks, though they
can often feel trapped in a complex web of distributed accountability (Ong et al., 2018).
Existing healthcare legislation, regulation and soft law are relevant for developing standards
affecting healthcare practise. This is not only important for the quality and safety of AI-based
solutions for patients, but also for the adoption of these technologies by professionals and
the effect on the healthcare system.
28 https://medicalschoolexpert.co.uk/the-four-pillars-of-medical-ethics/
29 The Four Principles of Biomedical Ethics - Healthcare Ethics and Law. https://www.healthcareethicsandlaw.co.uk/intro-healthcare-ethics-law/principlesofbiomedethics
2.5 KEY CONCLUSIONS ABOUT HEALTHCARE
 Human health is intrinsically linked to planetary health as well as to social-economic, political, and cultural factors.
 A strategy for the development of standards on the use of AI-systems requires a broader health context.
 Human centricity in healthcare is care provided by collaborating professionals and organisations who deliver integrated, technology-enabled services according to the individual needs of people.
 Due to demographic developments, there is an increasing demand for care by patients with complex needs: combined social, mental, and physical health challenges.
 In the context of decreasing resources and economic contraction, substantial cost-efficiency improvements benefit the resilience and sustainability of health and social systems.
 The introduction of modern technologies could result in over-diagnosis and over-treatment, i.e., overutilisation, and increasing healthcare expenditure. While technology can contribute to increased healthcare costs, it can also lead to improved patient outcomes and cost savings.
 Payments are shifting towards outcome- and value-based models, which require the integration of both (digital) products and services.
 Along with the shift towards product-service integration, the scope of financing is broadening with requirements regarding sustainability and environmental and societal responsibilities.
 The specific legislation and regulation in healthcare might have implications for the development of standards for AI-based solutions, besides the existing regulations for medical devices and AI.
3. ARTIFICIAL INTELLIGENCE
This chapter briefly explains the definitions, concepts and methods used in the development and validation of AI-systems, as well as the challenges with the implementation and application of AI-systems in the healthcare context. After introducing generative AI, the mechanisms of adaptive algorithms are explained, including their development and machine learning. Special attention is given to the methodological, mathematical, statistical, and practical limitations of AI and to the importance of data quality, as well as the validation process. The chapter concludes by elaborating on the application of AI-systems in the clinical decision-making process and decision support.
Note: a further elaboration on the development, validation and implementation of AI-systems is given in the chapter 'Medical Technology'. This is because of the unique characteristics of AI development and its use in healthcare and medical devices.
3.1 INTRODUCTION
3.1.1 ARTIFICIAL INTELLIGENCE
Artificial Intelligence is an umbrella term for advanced statistical methods that are particularly suitable for analysing and identifying patterns in complex and large data sets. Increasingly, healthcare collects larger amounts of data that can be used for the development of algorithms and further training with AI techniques. Artificial Intelligence comprises an interdisciplinary field dealing with models and systems for the performance of functions generally associated with human intelligence, such as reasoning and learning30; hence, a set of methods or automated entities that together build, optimise and apply a model so that the system can, for a given set of predefined tasks, compute predictions, recommendations and/or decisions31.
Both in medical research and in healthcare practice, algorithms and AI applications have already been used for many years. For example, more than 15 years ago, AI and machine learning (ML) were already being used to detect breast cancer from radiographic images as well as for the analysis and diagnosis of laboratory samples (Jiang et al., 2017). Although the application of AI in health is not new, there are many new and rapidly successive developments, resulting in prolific applications in various healthcare domains (Bohr and Memarzadeh, 2020).
The progress in development and application in healthcare has led to much optimism and even overhyped expectations about the use of AI in life sciences, health, and social care. However, the current applications in healthcare are fragmented, mostly single, standalone solutions related to very specific topics rather than comprehensive solutions that could make a significant difference from a health systems perspective (Benjamens et al., 2020, Sechopoulos and Mann, 2020). Typical applications are detecting abnormalities in medical images, prioritising information in patient records and providing support during medical decision-making (Tekkesin, 2019, Jiang et al., 2017).
30 ISO/IEC 2382:2015 Information technology — Vocabulary
31 ISO/IEC 22989:2022 Information technology — Artificial intelligence — Artificial intelligence concepts and terminology
Various techniques can be used to develop ML models, such as linear models, Bayesian models, decision trees and deep learning, either separately or in combination; they are not further elaborated in this report. These techniques produce adaptive algorithms. While traditional AI mainly consists of predictive models that perform a specific task, such as estimating a number, classifying data, or deciding between a set of options, generative AI creates original content.
3.1.2 GENERATIVE ARTIFICIAL INTELLIGENCE
Generative AI is an emerging technology that leverages deep-learning algorithms to create – or generate – new content such as text, audio, video, or programming code. The utilisation of content generated by AI models, like large language models (LLMs) such as ChatGPT and GPT-3 and image generation models, is growing quickly. LLMs are adaptive computational models known for their ability to perform general-purpose language generation and other natural language processing tasks. LLMs acquire their abilities by learning statistical relationships from extensive amounts of text during a computationally self-supervised and semi-supervised training process. These models acquire knowledge about the syntax, meaning and structures characteristic of human language, but they also inherit inaccuracies and biases present in the data they are trained on.
In the context of healthcare, generative AI can be used for various purposes. Generative AI models can create or modify text, images, audio, and video based on the data on which they were trained. The potential application of generative AI is quite diverse:
 Automating Clinical Documentation - Generative AI can enhance patient interactions by converting clinician dictations into organised notes using conversational language. For instance, after a clinician records a patient visit using a mobile app, the platform adds real-time patient information and prompts the clinician to fill any gaps. This automation simplifies manual and time-consuming administrative tasks.
 Analysing Unstructured Data - Healthcare operations produce large volumes of unstructured data, like insurance claims, clinical notes, diagnostic images, and medical charts. Generative AI can analyse this data and provide insights and recommendations.
 Applications in Diagnostics and Treatment - Generative AI assists doctors by analysing patient data from medical records, lab results and medical imaging (such as MRIs and X-rays). It identifies potential issues and recommends additional testing or treatment options.
 Chatbots - A chatbot is a program or application designed to interact in real-time with, e.g., professionals and patients, answering common questions and providing personalised recommendations and health information (Li et al., 2023). These chatbots use natural language processing and machine learning algorithms to understand and respond to user queries or questions. As such, a chatbot functions as an interface between the user and the software programme. A chatbot can have an informative, conversational, and/or prescriptive functionality. A chatbot can also automate the process of scheduling appointments and send reminders to patients to take their medication. Some chatbots are created to evaluate symptoms and direct patients to the right level of care (triage); a minimal triage sketch follows this list. They can also offer help with mental health issues such as depression and anxiety, providing coping strategies and engaging in therapeutic conversations. Chatbots can be integrated in various platforms, including websites, mobile apps, medical devices, and electronic health records, making them accessible to a wide range of users.
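To make the chatbot concept concrete, here is a minimal, purely illustrative triage sketch based on keyword matching; the keywords, care levels and advice strings are invented, and a production chatbot would rely on natural language processing models rather than fixed keywords.

```python
# Minimal sketch of a triage-style chatbot, assuming a simple keyword match.
# Keywords and care levels are invented; real systems use NLP models.

TRIAGE_RULES = [
    ({"chest pain", "shortness of breath"}, "Call emergency services now."),
    ({"fever", "persistent cough"}, "Book a primary care appointment."),
    ({"anxiety", "low mood"}, "Here are some coping strategies; consider mental health support."),
]

def respond(message: str) -> str:
    """Match the user's message against the rules and return triage advice."""
    text = message.lower()
    for keywords, advice in TRIAGE_RULES:
        if any(keyword in text for keyword in keywords):
            return advice
    return "Please tell me more about your symptoms."

print(respond("I have had a fever for three days"))
# -> "Book a primary care appointment."
```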
Three major factors have contributed to the recent advancements in generative models: the
increase in training data available on the internet, enhancements in training algorithms and
the rise in computing power for training the models (Shao et al., 2022). Industry, not academia, dominates the development of generative AI technology (Ahmed et al., 2023).
Large Language Models have the potential to bring about significant changes, but they need to be approached with great care due to their distinct training compared to regulated AI-based medical technologies (please see below), especially in the critical context of patient care (Meskó and Topol, 2023).
Generative AI models are built on extensive neural networks trained with vast amounts of
raw data in which they learn statistical patterns in the data and use these patterns to generate
new content. The quality of the generated content depends on both the quantity and quality
of the training data and the architecture of the neural network. The development and training
of LLMs typically involves the following steps:
 Data Collection - The first step is to collect a large amount of data. This text could come from books, websites, or other sources for a language model, or from images, audio, etc., depending on the type of model being built.
 Pre-processing - The raw data is then processed to make it suitable for training. This could involve cleaning the data, removing irrelevant information, and converting the data into a format that the model can understand.
 Selection of Model Architecture - The choice of a neural network's architecture is made on the basis of the intended application and the available training data, e.g., a transformer model for language tasks or a convolutional neural network for image tasks.
 Training - A model is trained on the pre-processed data using a method called backpropagation, which adjusts the weights of the neural network to minimise the difference between the model's predictions and the actual data.
 Evaluation and Fine-tuning - The model's performance is assessed using a distinct validation dataset and modifications are applied to the model's parameters to enhance its performance.
 Generation - Once the model is trained, it can generate new content. For a language model, this could involve giving the model a prompt and having it generate text that continues from the prompt.
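As a deliberately simplified illustration of these steps, the sketch below trains a word-bigram model by counting co-occurrences; a real LLM learns with neural networks and backpropagation, but the stages (collect, pre-process, train, generate) are the same. The toy corpus is invented.

```python
# Toy illustration of the LLM pipeline using a word-bigram model.
# Counting bigram frequencies is a deliberate simplification standing in
# for neural network training; the corpus below is invented.
import random
from collections import Counter, defaultdict

corpus = "the patient reports pain . the patient reports fever . the doctor orders tests ."

# Data collection + pre-processing: tokenise the raw text.
tokens = corpus.split()

# Training: learn statistical relationships (next-word counts).
model = defaultdict(Counter)
for current_word, next_word in zip(tokens, tokens[1:]):
    model[current_word][next_word] += 1

# Generation: continue from a prompt by sampling likely next words.
def generate(prompt: str, length: int = 6) -> str:
    words = [prompt]
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:
            break
        candidates, weights = zip(*followers.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the patient reports fever . the"
```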
Generative AI has the potential to enhance the use and development of medical devices. Generative AI can be used to create innovative designs for medical devices based on specific constraints and requirements. It can generate multiple design scenarios, allowing engineers to choose the most efficient and effective one. Generative AI can support the personalisation of medical device functionality by using patient-specific data. For example, AI can generate interface designs that are tailored to individual needs and thereby improve the usability, adherence and, consequently, the effectiveness of the device. Generative AI can also be used for predictive maintenance, i.e., to predict when a medical device might fail or require repair or replacement. This can help prevent device failures and ensure that the device is functioning optimally and safely. Finally, AI can generate virtual models of devices and simulate their performance under various conditions, i.e., simulation and testing. This can enhance the testing process and help identify potential issues before the device is built.
3.2 ADAPTIVE ALGORITHMS
3.2.1 FIXED RULE-BASED ALGORITHMS VERSUS ADAPTIVE, AI-DRIVEN ALGORITHMS
A rule-based algorithm is a systematic procedure that uses a set of predefined rules to make decisions or perform actions, such as in a game of chess. These rules are static and do not change over time or based on new data. For example, a rule-based algorithm for diagnosing a disease may rely on a fixed set of symptoms and criteria that must be met to make a diagnosis. One of the advantages of rule-based algorithms is that they can be easily explained and understood, as their rules are predefined. This makes them ideal for tasks where transparency is important, such as in healthcare. However, rule-based systems are relatively slow and inflexible, and they can be limited in their ability to deal with more complex situations.
In contrast to rule-based algorithms, adaptive algorithms can adjust their behaviour or parameters in response to changes in the environment or input data. They are capable of learning and self-tuning, allowing them to improve their performance over time based on feedback or the latest information. One of the greatest strengths of AI and machine learning approaches in healthcare is that their performance can be continually improved based on updates from automated learning from data (Gilbert et al., 2021). They are designed to adapt to changing conditions, uncertainties, and dynamic environments, making them suitable for problems where the optimal solution may vary over time or is unknown in advance. Accordingly, the concept of autonomy can ultimately be extended to an algorithm's ability to be governed by its own rules as the result of self-learning32: improving its performance over time.
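The contrast can be illustrated with a small sketch: a fixed, human-written diagnostic rule next to a model whose parameters are learned from data and can be re-fitted as new data arrives. The thresholds and the tiny labelled dataset are invented for illustration.

```python
# Contrast sketch: a fixed rule-based classifier versus an adaptive, learned one.
# Thresholds and the tiny dataset are invented for illustration.
from sklearn.linear_model import LogisticRegression

def rule_based_diagnosis(temperature_c: float, heart_rate: float) -> bool:
    # Static, human-written rule: never changes, easy to explain.
    return temperature_c > 38.0 and heart_rate > 100

# Adaptive alternative: parameters are learned from (and can be re-fitted to) data.
X = [[36.8, 72], [38.5, 110], [39.1, 120], [37.0, 80], [38.2, 105], [36.5, 65]]
y = [0, 1, 1, 0, 1, 0]  # 1 = condition present, as labelled by clinicians

model = LogisticRegression().fit(X, y)

print(rule_based_diagnosis(38.6, 112))   # True, by the fixed rule
print(model.predict([[38.6, 112]])[0])   # 1, by the learned decision boundary
# Retraining on new data shifts the learned boundary; the fixed rule stays the same.
```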
3.2.2 MACHINE LEARNING PRINCIPLES USED IN DEVELOPMENT AND APPLICATION OF ADAPTIVE ALGORITHMS
Adaptive algorithms have wide-ranging applications, including pattern recognition, speech
and image processing, robotics, control systems, anomaly detection and predictive
modelling. Each application might use a different type of adaptive algorithm principle or
combination of adaptive algorithm principles for the most effective machine learning. Typical
machine learning principles are (Hastie et al., 2009, Bishop, 2016) 33:
 Supervised learning algorithms learn from labelled training data, where the correct output is provided for each input. They adjust their parameters to minimise the prediction error on the training data and can generalise to make predictions on unseen data (see the sketch after this list).
 Unsupervised learning algorithms learn from unlabelled data, where the correct output is not provided. They adapt their parameters to uncover patterns, structures, or representations in the data, such as clustering or dimensionality reduction algorithms.
 Reinforcement learning algorithms learn from interacting with an environment and receiving feedback in the form of rewards (success) or penalties (failure). They adapt their actions based on the feedback to maximise the cumulative reward over time and are commonly used in decision-making and control problems.
 Evolutionary algorithms are inspired by the process of natural selection and evolve a population of candidate solutions to a problem. They adapt the population through genetic operators such as mutation and crossover and select the fittest individuals based on a fitness function, leading to improved solutions over generations.
 Online learning algorithms learn from data in an incremental and online manner, updating their parameters as new data becomes available. They are suitable for problems where data arrives sequentially and needs to be processed in real-time, such as online data on health and disease progression, recommendation systems, or financial markets.
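As a brief illustration of the first two principles, the sketch below fits a supervised classifier on labelled toy data and then lets an unsupervised algorithm find structure in unlabelled toy data; all numbers and labels are invented.

```python
# Minimal sketches of two of the principles above, using invented toy data.
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Supervised learning: labelled examples (input -> correct output).
X_labelled = [[5.1, 1.4], [4.9, 1.3], [6.7, 4.7], [6.3, 4.9]]
y_labels = ["benign", "benign", "malignant", "malignant"]
classifier = DecisionTreeClassifier().fit(X_labelled, y_labels)
print(classifier.predict([[6.5, 4.8]]))  # generalises to unseen data

# Unsupervised learning: no labels; the algorithm uncovers structure itself.
X_unlabelled = [[1.0, 1.1], [0.9, 1.0], [8.0, 8.2], [8.1, 7.9]]
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X_unlabelled)
print(clusters)  # e.g. [0 0 1 1]: two groups found without any labels
```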
The automated optimisation process of adaptive algorithms is mostly not transparent (often described as a black box) and is difficult to understand. Hence, exceptional care should be given to assessing the impact of data characteristics on the learning process of adaptive algorithms, as well as the ability of the software to change continuously once it is put into practice. Consequently, adaptive algorithms should be monitored through post-market surveillance. Post-market surveillance of medical device software or algorithms is laid down as a provision in the MDR, with the need for a Post-Market Clinical Follow-up (PMCF) plan34.
32 ISO/IEC TR 24028:2020 Information technology - Artificial intelligence - Overview of trustworthiness in artificial intelligence
33 ISO/IEC TR 24372 Overview of computational approaches for AI systems, 8.4 Machine Learning
There is much debate about the need for explainability of AI (Loh et al., 2022). 'Explainable' means that it must be possible to explain why, where, when, and how AI has come to a certain action, diagnosis, or recommendation. Some algorithms are so complex that they are beyond human capabilities to interpret and are therefore unexplainable (Price, 2018). These are so-called 'black-box' algorithms. Their inexplicability raises the question of whether AI can be trusted and whether these algorithms should be used in healthcare. At the same time, several ethicists argue that explainability is not always necessary to trust an algorithm (Duran and Jongsma, 2021, Richardson et al., 2022).
3.2.3 STANDARDS AND PROTOCOLS FOR THE DEVELOPMENT OF ADAPTIVE ALGORITHMS
The High-Level Expert Group on Artificial Intelligence (HLEG) released a set of guidelines called the "Ethics Guidelines for Trustworthy AI". These guidelines were developed by a group of experts appointed by the European Commission and aim to provide a framework for the development and deployment of AI-systems that are ethical, transparent and respectful of fundamental rights35.
The HLEG's guidelines outline seven key requirements that AI-systems should meet to be considered trustworthy:
 Human agency and oversight - AI should augment human decision-making and not reduce or substitute human autonomy. People should have the ability to understand and challenge AI outcomes.
 Technical robustness and safety - AI-systems should be secure and resilient against both intentional attacks and unintentional failures. They should be designed to minimise risks of harm to individuals and society.
 Privacy and data governance - AI-systems should ensure privacy and protect personal data throughout their lifecycle. They should also be transparent about data collection, use and storage practices.
 Transparency - The data, algorithms and decisions made by AI-systems should be explainable and understandable. Users should be able to access meaningful information about the AI system's capabilities, limitations, and intentions.
 Diversity, non-discrimination, and fairness - AI-systems should be designed to be inclusive and avoid unfair bias. They should not discriminate against individuals or groups based on characteristics such as gender, race, or socioeconomic status.
 Societal and environmental well-being - AI should contribute to sustainable development and the well-being of individuals and society. It should be deployed in ways that respect the environment and promote social good.
 Accountability - There should be mechanisms in place to ensure responsibility and accountability for AI-systems. This includes clear lines of responsibility, redress mechanisms and audits to ensure compliance with ethical standards.
34 MDR, Article 61(11) and Annex XIV part B. Annex III 1.1(b) 10th indent: "Post-Market Surveillance plan shall cover at least: [...] a PMCF plan as referred to in Part B of Annex XIV, or a justification as to why a PMCF is not applicable".
35 https://altai.insight-centre.org/
It is challenging to define the criteria and requirements for the development or design of an adaptive algorithm. There are so-called change protocols for machine learning-based medical device software (Feng et al., 2021b). Change protocols describe the processes and procedures that should be in place to ensure that any changes or updates made to the software do not compromise the safety, efficacy, or performance of the device. These change protocols involve a series of steps to evaluate and validate the changes before they are implemented, e.g., in medical device software.
The Artificial Intelligence Medical Devices Working Group of the International Medical Device Regulators Forum (IMDRF) has formulated specific terms and definitions to characterise the changes/adaptations in AI, such as cause, effect, trigger, domain, and effectuation:
 Cause is an adaptation that is triggered by a specific cause, such as a change in the environment or a change in user behaviour;
 Effect is an adaptation that is driven by a desired effect, such as improving the accuracy of a prediction or reducing the number of false positives;
 Trigger is an event or condition that initiates an adaptation, such as a change in the input data or a deviation from expected results;
 Domain is an adaptation that is specific to a particular domain or context, such as medical imaging or natural language processing;
 Effectuation is an approach to adaptation that involves continually adjusting the algorithm in response to feedback from users and other sources, with the goal of maximising the desired outcomes.
These attributes help to describe what changes, as well as why, where, when, and how the machine learning model generates a change to the behaviour of algorithms36. When updating a machine learning model used in a medical device, a change protocol may involve the following steps:
 Define the change: this is the characterisation of the machine learning model that is responsible for the behaviour of the algorithm(s), along with any potential risks or issues that may arise.
 Test the change: the model is tested using a representative dataset to evaluate its dynamics or behaviour and its impact on the accuracy and performance of the model.
 Evaluate the results: the results of the testing are analysed to determine whether the model behaviour is acceptable and whether any additional changes or adjustments are needed.
 Document the change: the model behaviour is characterised and documented, including any associated risks or issues and the steps taken to evaluate and validate the future performance.
 Implement the model: the machine learning model is implemented in the production environment and its impact is monitored and evaluated to ensure that it is functioning as expected.
36 International Medical Device Regulators Forum 2022. https://www.imdrf.org/documents/machine-learning-enabled-medical-devices-key-terms-and-definitions
These change protocols are important for establishing the explainability of AI and for ensuring that machine learning-based software in medical devices is reliable, safe, and effective. Any updates or changes to the algorithms must be carefully evaluated and validated before they are released and implemented.
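To make the gating step of such a change protocol concrete, the sketch below shows a minimal pre-release check in Python. It is an illustration only: the `evaluate` helper, the accuracy metric and the `max_drop` tolerance are assumptions made for the example, not values drawn from any regulation or guidance.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

Predictor = Callable[[Sequence[float]], int]

@dataclass
class ChangeRecord:
    """Documentation artefact for one update ('Document the change' step)."""
    description: str
    baseline_accuracy: float
    candidate_accuracy: float
    approved: bool

def evaluate(predict: Predictor, features, labels) -> float:
    """Accuracy on a fixed, representative reference dataset ('Test the change')."""
    return sum(predict(x) == y for x, y in zip(features, labels)) / len(labels)

def gate_change(baseline: Predictor, candidate: Predictor, features, labels,
                description: str, max_drop: float = 0.01) -> ChangeRecord:
    """'Evaluate the results': approve the candidate only if its accuracy does
    not fall more than max_drop below the currently released baseline."""
    base_acc = evaluate(baseline, features, labels)
    cand_acc = evaluate(candidate, features, labels)
    return ChangeRecord(description, base_acc, cand_acc,
                        approved=cand_acc >= base_acc - max_drop)

# Toy usage: a trivial threshold classifier before and after an update.
X, y = [[0.2], [0.4], [0.6], [0.8]], [0, 0, 1, 1]
print(gate_change(lambda x: int(x[0] > 0.5), lambda x: int(x[0] > 0.45),
                  X, y, "Lowered decision threshold from 0.50 to 0.45"))
```

In practice the acceptance criteria, the reference dataset and the documentation format would be specified in the manufacturer's predetermined change control plan rather than chosen ad hoc as above.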
3.3 METHODOLOGICAL, MATHEMATICAL, STATISTICAL AND PRACTICAL LIMITATIONS OF AI
There are considerable mathematical, statistical, methodological and practical limitations in processing health data through AI-based analytics, given the numerous potential sources of variation. The main methodological, mathematical, and statistical limitations are:
• Data bias: AI-systems are only as good as the data they are trained on, and there are numerous causes of biased data (please see below). Poor quality data can lead to inferior performance of AI models; hence, if the data used to train an AI algorithm is biased or incomplete, the algorithm may not be able to generalise accurately to new data;
• When training data is biased, the AI-system will produce biased results: AI models may be biased towards the training data they were trained on, leading to inferior performance on new, unseen data;
• Overfitting occurs when a model is too complex and captures noise or random variation in the data instead of recognising the underlying pattern, which may limit the generalisability of the algorithm (see the sketch after this list);
• Underfitting occurs when a model is too simple and cannot capture the underlying pattern in the data, resulting in deficient performance on both the training and test data;
• Lack of transparency and interpretability: AI models, especially deep learning models, are inherently black boxes, making it difficult to understand the logic by which they arrive at their predictions or decisions. This can be a limitation in situations where interpretability is important, such as in healthcare;
• Limited context awareness: AI-systems can be limited in their ability to understand the context of a situation, such as situations in healthcare with unique patient characteristics, where AI may not be able to apply common sense reasoning.
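The over- and underfitting bullets above can be demonstrated in a few lines: as model complexity grows, training error keeps falling while held-out error eventually rises again. The following is a minimal sketch using only numpy; the noisy quadratic data and the chosen polynomial degrees are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 40)
y = 1.5 * x**2 + rng.normal(scale=0.1, size=x.size)  # noisy quadratic ground truth

x_train, y_train = x[::2], y[::2]   # even indices used for fitting
x_test, y_test = x[1::2], y[1::2]   # odd indices held out

def mse(coeffs, xs, ys):
    """Mean squared error of a fitted polynomial on a dataset."""
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

for degree in (1, 2, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    print(f"degree {degree}: train MSE {mse(coeffs, x_train, y_train):.4f}, "
          f"test MSE {mse(coeffs, x_test, y_test):.4f}")
# Degree 1 underfits (high error on both sets); degree 9 overfits (training
# error shrinks while test error typically grows); degree 2 generalises best.
```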
Generative AI can create the illusion of intelligence. While generative AI models can sometimes produce outputs that appear human-like, their statistical patterns determine word sequences without understanding the meaning or context in the real world. They often make errors in reasoning and facts. Researchers in generative AI often describe the output generated by LLMs as "hallucination", suggesting that it can be nonsensical, inaccurate, divergent from the original content, deceptive, or partially or completely incorrect.
A major concern is the biases found in internet data used to train generative AI models related
to race/ethnicity, gender, and disability status. Even though human feedback is employed for
scoring responses and enhancing the sensitivity and safety of generative AI models, biases
persist. There is a concern that generative AI models might produce manipulative language
because of the prevalence of manipulative content in internet data.
Incorrect output from generative AI models can often appear plausible to many individuals,
particularly those who are not familiar with the subject. A major issue with generative AI is
that individuals who are unaware of the correct answer to a question may not be able to
recognise if an answer is incorrect, which could result in over-reliance and too much confidence in AI-supported responses. Over-reliance on AI may lead to harm in instances
when human compassion, human touch, or human interpretation of data context is necessary
(Duffourc and Giovanniello, 2024). Human supervision is required to assess the precision of
generative AI output. Although generative AI products are improving, the ability to create
outputs that sound convincing but are incorrect is also increasing. Many people do not
realise how often generative AI models are incorrect.
Generative AI models can produce several types of errors, including factual inaccuracies,
inappropriate or harmful suggestions, nonsensical output, fabricated references, and
mathematical mistakes. Other issues include outdated responses reflecting the year when
LLM training took place and varying answers to different versions of the same question
(Marcus, 2022). A case of inappropriate or harmful suggestion occurred when a chatbot gave advice on calorie restriction to a patient with an eating disorder37.
37 https://www.bbc.com/news/world-us-canada-65771872
Generative AI, including LLMs, can produce incorrect or biased outputs for the following main reasons (Naveed et al., 2023):
• Training data bias: LLMs learn from extensive text data, which may include biases that exist in society. These biases may inadvertently influence the model's output, leading to incorrect or unfair results.
• Ambiguity and context: LLMs struggle with context and ambiguity. Sometimes they produce responses that sound reasonable but are incorrect in the context. For instance, they might misinterpret pronouns or misunderstand nuanced prompts.
• Out-of-distribution inputs: LLMs perform well within the distribution of data they were trained on. However, when faced with inputs outside that distribution, their accuracy drops. Unexpected or new prompts can result in incorrect responses.
• Lack of common sense: LLMs lack true understanding of common sense. They generate text based on statistical patterns, not genuine comprehension. As a result, they may provide nonsensical or factually incorrect answers.
• Fine-tuning challenges: Adjusting LLMs for specific tasks can be challenging. If the fine-tuning data is limited or noisy, the model's performance may suffer.
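One partial, practical safeguard against the inconsistency described above is to ask a model the same question several times and route disagreement to a human reviewer. The sketch below illustrates the idea; `query_llm` is a hypothetical placeholder for a real model call and the agreement threshold is an arbitrary example value, so this should be read as a sketch of the principle rather than a safety mechanism.

```python
import random
from collections import Counter

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call; it returns canned,
    slightly varying answers so that the sketch runs without any service."""
    return random.choice(["120/80 mmHg", "120/80 mmHg", "130/85 mmHg"])

def self_consistency(prompt: str, n: int = 5, min_agreement: float = 0.8):
    """Ask the same question n times; flag the answer for human review
    when the most common response falls below the agreement threshold."""
    answers = [query_llm(prompt) for _ in range(n)]
    top_answer, count = Counter(answers).most_common(1)[0]
    return top_answer, (count / n) < min_agreement

answer, needs_review = self_consistency("What is a normal adult blood pressure?")
print(answer, "(flag for human review)" if needs_review else "(consistent)")
```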
In addition to accuracy, reliability and bias, there are numerous unresolved ethical and legal
concerns associated with generative AI. There are privacy issues related to the collection
and use of personal and proprietary data for training models without permission and
compensation. Legal issues related to plagiarism, copyright infringement and accountability
for errors and false accusations in generative AI output are major concerns (Monteith et al.,
2024).
Another problem is the invisibility or lack of transparency of inputs to the AI models,
leading to biased outputs. This is particularly a problem when applications have been
developed commercially and inputs may not be disclosed for commercial reasons
(Whitehead et al., 2023). A lack of transparency in Artificial Intelligence research can make it difficult to apply in the real world, rendering the work "worthless" when the results, no matter how positive, are not reproducible38.
To mitigate these limitations, it is important to ensure that AI-systems are designed and developed with careful consideration of the underlying data and algorithms, and that they are subject to rigorous testing and validation. Transparency around the inputs to AI modelling must be improved. Dataset curators, developers and regulators should require a transparent description of the datasets used in the development, testing and monitoring of AI-assisted medical devices39. It is also important to incorporate ethical principles into the design and deployment of AI-systems to ensure that they are used responsibly and ethically. Understanding, addressing, and managing unintended bias is key to enabling a trustworthy AI system40.
3.3.1 THE IMPORTANCE OF DATA QUALITY
Besides the ambiguity of adaptive algorithms, the data which is used to develop and train algorithms is often of questionable quality.
High-quality, reliable and statistically sound data is a fundamental requirement for algorithms
(Batini and Scannapieca, 2006). Artificial Intelligence models are usually trained with 'clean'
relatively homogeneous data derived from carefully designed studies and trials in a
controlled/standardised environment to reduce the error or noise in the data. The typical trial or clinical study data is not representative of the daily life circumstances of individuals but is designed to assess a single phenomenon among groups of people (intervention versus control) under the same controlled conditions (Portney, 2020).
38 https://healthimaging.com/topics/artificial-intelligence/lack-transparency-ai-research
39 Standing Together. Draft recommendations for healthcare dataset standards supporting diversity, inclusivity, and generalisability. Green Paper. 2023. https://www.datadiversity.org/draft-standards
40 https://etech.iec.ch/issue/2021-06/standards-help-address-bias-in-artificial-intelligence-technologies
The heterogeneous nature of healthcare data
Even when data from clinical studies are collected in a structured and controlled way, bias is often present in terms of gender, age, social class, and ethnicity. Social bias is defined as systematic dispositions which unfairly advantage or disadvantage individuals or groups, and which ultimately cause harm when policies and care decisions are shaped by data that underrepresent those groups. Social bias leads to disparities in access, service provision and treatment outcomes, ultimately leading to health inequities among social groups (Celi et al., 2022). Accordingly, data is mostly biased and thereby not representative of an individual's characteristics, including their daily life context.
When biased data is used to train an AI system and it is not representative of the purpose
for which it is intended, it can lead to inaccurate results. For example, if an AI system is
trained to detect skin cancer using images of mostly fair-skinned individuals, it may not
perform as well on images of individuals with darker skin tones, leading to misdiagnosis or
missed diagnoses. Similarly, if an AI system is trained to identify fraudulent financial
transactions using data from a particular region, it may not perform well on data from other
regions. This is why it is important to ensure that the data used to train AI-systems is diverse
and representative of the population or context in which the system will be used.
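A simple first check for this kind of bias is to report performance per subgroup instead of as a single aggregate figure. The sketch below computes sensitivity (true-positive rate) per subgroup on made-up evaluation records; the group labels and numbers are purely illustrative.

```python
from collections import defaultdict

# Hypothetical evaluation records: (subgroup, true label, predicted label),
# with 1 = disease present. All values are invented for the illustration.
records = [
    ("fair-skin", 1, 1), ("fair-skin", 1, 1), ("fair-skin", 0, 0), ("fair-skin", 1, 1),
    ("dark-skin", 1, 0), ("dark-skin", 1, 1), ("dark-skin", 1, 0), ("dark-skin", 0, 0),
]

def sensitivity_by_group(records):
    """True-positive rate per subgroup: the figure that collapses when a
    model was trained mostly on images from one group."""
    tp, positives = defaultdict(int), defaultdict(int)
    for group, truth, predicted in records:
        if truth == 1:
            positives[group] += 1
            tp[group] += int(predicted == 1)
    return {group: tp[group] / positives[group] for group in positives}

print(sensitivity_by_group(records))
# {'fair-skin': 1.0, 'dark-skin': 0.33...}: a single aggregate accuracy
# figure would hide the disparity that matters clinically.
```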
In general, healthcare data is inherently heterogeneous in nature and not collected in a consistent manner. There are numerous reasons why data are inconsistent and unreliable. The presence of errors and/or noise in data is a common problem that produces various negative consequences in classification problems, e.g. a high rate of false positives and/or negatives. These false scores significantly limit the use of AI models, e.g. in diagnostics (Chen et al., 2022).
Data governance and management are essential for ensuring accurate data. Proper data
governance and management practices help ensure that data is of high quality, accurate,
consistent, and reliable. Data governance involves establishing policies, procedures, and
standards for managing data, while data management involves implementing these policies,
procedures, and standards to ensure that data is managed effectively throughout its lifecycle,
from creation to deletion. Without proper data governance and management practices, data
can become of inadequate quality which can lead to incorrect decisions and actions. This is
especially important in the context of AI, where the accuracy and reliability of data are critical
for ensuring the effectiveness and safety of AI-systems.
3.4 PROVIDING (ECOLOGICAL) VALIDATION AND EVIDENCE FOR AI-SYSTEMS
Given the numerous sources of variation and numerous random errors, the process of testing and validating AI to provide sufficient evidence for personal benefit or health outcomes based on individual characteristics will require major resources to collect representative data for developing and training algorithms. Proprietary AI models make it difficult to find errors in algorithms (Whitehead et al., 2023).
Reproducibility is a significant issue for AI application in healthcare. The inconsistency of AI performance is often illustrated by the fact that an algorithm that performs well on one dataset may not perform as well on another. For example, in a comparative study, various algorithms for automated lung cancer diagnosis were tested on a 'fresh' dataset of patient cases and the accuracy dropped to 60-70%, with some performing no better than random guesses (Sohn, 2023).
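Such external validation can be scripted as a routine gate before an algorithm is reused at a new site. The following is a minimal sketch, assuming a frozen model and labelled datasets from two sites; the 10% tolerance is an arbitrary example, not a regulatory threshold.

```python
def accuracy(model, dataset):
    """Fraction of correct predictions; dataset is a list of (features, label)."""
    return sum(model(x) == y for x, y in dataset) / len(dataset)

def external_validation(model, internal_test, external_test, max_gap=0.10):
    """Compare held-out performance at the development site with performance
    on data from another site; flag deployment if the drop exceeds max_gap."""
    internal = accuracy(model, internal_test)
    external = accuracy(model, external_test)
    return {"internal": internal, "external": external,
            "deployable": internal - external <= max_gap}

# Toy model and toy datasets standing in for two hospitals' case mixes.
model = lambda x: int(x[0] > 0.5)
site_a = [([0.2], 0), ([0.7], 1), ([0.9], 1), ([0.3], 0)]
site_b = [([0.55], 0), ([0.52], 0), ([0.8], 1), ([0.1], 0)]  # shifted case mix
print(external_validation(model, site_a, site_b))
# {'internal': 1.0, 'external': 0.5, 'deployable': False}
```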
Taking into consideration the number of variables, indicators and individuals, as well as the diversity, quality and granularity of the data required for representativeness, figure 10 illustrates the number of individuals needed for the different levels of validity and evidence (McCue and McCoy, 2017).
Figure 10 The validation process to provide clinical proof and resources needed
Note: Ecological validity of AI refers to the extent to which an AI model or system can
generalise its performance to new and real-world situations beyond the training data and
environment. It is a measure of how well an AI model can perform in the real world, in a
context that is different from the one it was trained on. A model with high ecological validity
can perform well on new and unseen data, while a model with low ecological validity may
not be able to generalise to new situations and may be prone to errors and biases. Ensuring
high ecological validity is important in developing robust and reliable AI models that can be
applied in various real-world applications.
The requirements to establish ecologically valid AI demand serious investment in digital infrastructures, implementation of standards for data collection, interoperability, semantics, computational power, cybersecurity, research capacity, alignment of policies and regulations, involvement of citizens and communities as relevant sample settings41, etc.
3.4.1 MORE QUALITY DATA AND COMPUTATIONAL POWER WILL NOT SOLVE PROBLEMS OF UNCERTAINTY AND UNPREDICTABILITY
In contrast to popular arguments, the mathematical and statistical limitations are not necessarily solved by the availability of more quality data and computational power. For example, over the last 15 years major genomics datasets were created across the globe. Despite the huge volume of well-structured, high-quality data and all the computational power as well as bioinformatics expertise, the accuracy of prediction models for multi-gene interaction remains low and of limited clinical use.
A recent retrospective review study revealed that innovative precision medicine, i.e. DNA techniques (Luo et al., 2009), for metastatic cancer and enabled by AI analytical methods, did not meet the expected overall improvement as compared to traditional cancer drugs (Luyendijk et al., 2023). Hence, the majority of proposed precision medicine anti-cancer treatments do not reach clinical use because of problems with efficacy or toxicity, often for unclear reasons or because the proposed mechanism of action was incorrect (Lin et al., 2019).
With each AI application, clear information will always have to be added about the
population on which the AI is trained, which populations are missing and for which
application the AI is suitable. Further research is also needed after implementation to
determine whether the AI is being applied correctly and whether problems arise due to, for
example, bias in the training data. An algorithm that has been trained in one hospital can therefore not be automatically applied in another hospital. First, it must be carefully investigated whether it can be used in a context different from the one in which it was trained.
Prediction models in healthcare and life-science use predictors to estimate for an individual
the probability that a condition, disease or complication is already present (diagnostic model),
or will occur in the future (prognostic model). Healthcare providers, guideline developers and
policymakers are often unsure which prediction model to use or recommend for what kind of
persons and in what settings. Hence, systematic reviews of these studies are increasingly
demanded and required (Wolff et al., 2019, de Hond et al., 2022).
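In their simplest form, such diagnostic or prognostic models are logistic regressions: a weighted sum of predictors mapped through a logistic function to a probability. The sketch below uses made-up coefficients purely to show the mechanics; it is not a validated clinical model.

```python
import math

def predicted_risk(age: float, systolic_bp: float, smoker: bool) -> float:
    """Toy prognostic model: probability of an event from three predictors.
    The intercept and weights below are illustrative, not clinically derived."""
    z = -7.0 + 0.05 * age + 0.02 * systolic_bp + 0.7 * int(smoker)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link maps z to a probability

print(f"{predicted_risk(age=62, systolic_bp=150, smoker=True):.1%}")  # 45.0%
```

The systematic reviews mentioned above essentially ask whether such coefficients, and the predictors behind them, hold up for the persons and settings in which the model is to be used.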
3.5 AI-BASED DECISION-MAKING
Various concerns have been raised about the use of AI and algorithms and their potential adverse effects on clinicians' decision-making process (Sujan et al., 2019).
3.5.1 THE CLINICAL DECISION-MAKING PROCESS
Throughout examination, clinical reasoning, diagnosis, treatment, re-examination and follow-up, healthcare professionals must make numerous decisions in the process of care.
41 https://living-in.eu/
Evidence-based guidelines and inter-collegial consultations are usually part of this process, helping professionals to make effective decisions in the context of the patients' preferences and values (Robinson et al., 2021).
The decision-making process starts with the collection of data during the patient intake and
recording of their medical history, examination, and/or re-evaluation. Obtaining the
appropriate, unbiased and reliable data is essential for the quality of the clinical reasoning
process (Joplin-Gonzales and Rounds, 2022). As clarified in previous paragraphs, data collection for clinical reasoning and decision-making through the use of algorithms should meet the highest standards of validity, reliability and accuracy (Ponathil et al., 2020, Soni et al., 2022).
3.5.2 CLINICAL DECISION SUPPORT
Clinical decision support refers to the use of digital information technology tools and
systems to provide healthcare providers with timely, relevant, and evidence-based
information at the point of care to aid in making informed clinical decisions. Clinical
decision support systems are designed to assist healthcare professionals in making
decisions about patient care, taking into consideration a patient's individual characteristics,
medical history and best practices in healthcare (Bezemer et al., 2019). Note: in the conventional decision-making process there is a causal relationship between information and decision, based on a hypothesis.
In AI-based decision-making, by contrast, decisions often rely on statistical correlations that cannot be understood by humans, i.e. the ambiguity of algorithms as black boxes. AI-models, especially complex ones like deep learning networks, can create associations and make predictions based on statistical correlations in the data they are trained on. However, these correlations do not necessarily imply causation, and the reasoning behind these decisions can be hard to decipher: the black box (Brożek et al., 2024).
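The gap between correlation and causation is easy to reproduce synthetically: in the sketch below a hidden confounder drives both variables, which then correlate strongly even though neither causes the other. All data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
severity = rng.normal(size=5000)                  # hidden confounder: disease severity
drug_dose = 0.9 * severity + rng.normal(scale=0.3, size=5000)
mortality = 0.9 * severity + rng.normal(scale=0.3, size=5000)

r = np.corrcoef(drug_dose, mortality)[0, 1]
print(f"correlation(dose, mortality) = {r:.2f}")
# Strong positive correlation, yet in this simulation the dose does not cause
# mortality: both are driven by severity. A purely correlational model could
# 'learn' that higher doses predict death and mislead a clinician.
```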
3.5.3 VIRTUAL AGENTS WITH AUTONOMOUS DECISION-MAKING CAPABILITIES
Virtual multi-agent systems are systems in which multiple autonomous agents interact with each other to achieve a common goal. In a multi-agent system, the agents represent and act on behalf of users and owners in complex tasks. The agents cooperate, coordinate and negotiate with each other in the same way that patients and professionals cooperate, coordinate and negotiate in day-to-day care.
Advanced adaptive algorithms can function as independent intelligent virtual agents or
software robots. These agents are entities with their own decision-making capabilities and
can act independently, making decisions based on their own information sources and goals
(Fan and Liu, 2022). These are AI-systems that can work fully autonomously i.e., humans
are not involved in the decision loop and do not oversee the decisions taken by the system.
Individual virtual agents interact with other virtual agents through communication, coordination and negotiation in a distributed data and communication network (e.g. clouds), while their actions can affect the state of the system and the behaviour of other agents (Balaji and Srinivasan, 2010). Multi-agent systems are considered the most appropriate technology to fulfil the needs of the healthcare system (Thakur and Gupta, 2022).
These virtual multi-agent systems refer to systems where multiple autonomous agents,
each with its own data sets, capabilities, knowledge, and goals, interact with each other to
achieve individual and/or collective objectives. They are used to model and simulate complex
systems where the behaviour of multiple agents interacting with each other and with their
environment can lead to emergent properties and adaptive behaviour of a whole
information system (Shakshuki and Reid, 2015).
The behaviour of interacting agents may not be predictable from the behaviour of a single algorithm or AI system. Given the black-box nature and the potential risks of autonomous virtual multi-agent systems, monitoring and control are required to ensure their proper functioning. Rules for human oversight and standards for monitoring, feedback and control mechanisms for multi-agent systems should be enacted42.
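At its core, the coordination and negotiation described above is message passing between autonomous software entities. The sketch below is a deliberately tiny illustration of two agents agreeing on an appointment slot; real multi-agent platforms add communication protocols, ontologies and, as argued above, oversight and logging hooks. The class and scenario are invented for the example.

```python
class Agent:
    """A minimal autonomous agent holding its owner's private constraints."""
    def __init__(self, name: str, available: set[int]):
        self.name = name
        self.available = available  # hours of the day the owner is free

    def negotiate(self, other: "Agent") -> int | None:
        """Pick the earliest slot acceptable to both parties, or None."""
        common = self.available & other.available
        return min(common) if common else None

patient_agent = Agent("patient", {9, 11, 14})
clinic_agent = Agent("clinic", {10, 11, 15})
slot = patient_agent.negotiate(clinic_agent)
print(f"agreed slot: {slot}:00" if slot is not None else "escalate to humans")
# Even here the outcome (11:00) emerges from the interaction of the agents'
# private constraints -- the emergent behaviour the text refers to.
```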
3.5.4 HUMAN AGENCY OVERSIGHT OF AI
Human judgement is required to assess the precision of generative AI output. Human
oversight of AI is important to prevent AI from compromising human autonomy and to prevent
adversarial effects arising from hidden bias, software anomalies or undesired behaviour of
algorithms.
The design of the software determines the level of autonomy and human oversight in the AI system's operations through the built-in control mechanisms. The level of autonomy and of operational control performed by a person should be part of a benefit-risk evaluation of AI medical device software (Haselager et al., 2023).
Too much confidence in AI-supported decision-making could result in over-reliance and erroneous decisions. Many people do not realise how often generative AI models are incorrect. People are largely unaware that, unless they are experts in the field, they must carefully check the results of AI-based solutions. Although generative AI models are improving, the ability to create outputs that sound convincing but are incorrect is also increasing.
Over-reliance in clinical practice could be induced by a high workload in which a clinician does not have sufficient time to critically appraise AI-generated recommendations. In contrast, mistrust or under-reliance on AI-systems can equally lead to patient safety threats. There are multiple reasons for over- or under-reliance (Goddard et al., 2011). Bias due to over- or under-reliance, and oversight measures, can be addressed in the risk management process43, as well as in software development planning, implementation, and validation44.
42 The assessment list for trustworthy artificial intelligence (ALTAI) for self-assessment, Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission, 978-92-76-20008-6, European Commission, B-1049 Brussels: 2020.
3.6 KEY CONCLUSIONS ABOUT ARTIFICIAL INTELLIGENCE
• In health and medicine, AI has already been used for many years.
• Artificial Intelligence-based solutions are often stand-alone and not adequately integrated in care processes to fully realise their potential.
• Adaptive algorithms 'learn' and adjust their behaviour and effect.
• There are significant mathematical, statistical, methodological and practical limitations in processing health data through AI-based analytics.
• Algorithms can be so complex that they are beyond human understanding and therefore unexplainable, i.e. black-box algorithms.
• The quality of data is essential for the development of algorithms and machine learning strategies, and this should be ensured by a methodical approach to data sampling, handling, and analysis as well as data governance.
• Existing datasets are often not adequate for developing AI-based solutions which meet the needs in healthcare.
• Building new interoperable data infrastructures and validation procedures requires considerable financial and human resources.
• There are specific terms and definitions to characterise adaptive algorithms, and there are guidelines with requirements for trustworthy AI-systems.
• A multi-agent system is a network of adaptive algorithms with autonomous capabilities, typically suitable for complex tasks as in healthcare, although human oversight and control is required.
• There are numerous applications in medicine, health and social care which could benefit from AI, including the design, simulation, maintenance, and personalisation of medical devices.
• While AI holds great promise for improving healthcare delivery and medicine worldwide, it is crucial to put ethics and human rights at the heart of its design, deployment, and use.
43 MDR, Article 10 & GSPRs, I.I.3: Manufacturers shall establish, implement, document and maintain a risk management system. GSPRs, I.III.23.
44 https://www.who.int/news/item/28-06-2021-who-issues-first-global-report-on-ai-in-health-and-six-guiding-principles-for-its-design-and-use
4. MEDICAL TECHNOLOGY
This chapter describes medical devices in the broader context of medical technology or MedTech, offering an overview and explanation of the typical digital technologies and data, products, and services, as well as important developments and market dynamics. This chapter also integrates the healthcare aspects, Artificial Intelligence and the relevant regulations and legislation.
Following the introductory section presenting the role of medical technology (including
medical devices), the distinct categories and emerging technologies, the market
characteristics are discussed. In the section ‘intelligent medical devices’, the connection
is made with algorithms and AI as well as their application in the hospital settings and use
across care facilities. Special attention is given to smart assistive devices and implantable devices, and the role of algorithms and AI is discussed.
Because data and information exchange are the basis for the working of AI-systems, the key components and functions of connected medical devices are discussed, including the role of sensors, interfaces, and data storage facilities, e.g. the European Health Data Space. This is followed by a further elaboration on the inter-connectivity of devices, cybersecurity, and cloud services, as well as the implications for medical devices.
Within the following section, EU regulation and legislation relevant for AI-systems used in
medical devices in the context of healthcare are discussed. Subsequently, the Medical
Device Regulation and In-Vitro Diagnostic Medical Device Regulation are reviewed,
especially in relation to medical device software. The EU Artificial Intelligence Act and
European Health Data Space regulations are presented as well as the principles of data
protection and patient privacy with its implications for AI driven medical devices.
In the next section, the development, validation, and implementation of AI for medical devices
are discussed in the context of the medical device regulatory life-cycle paradigm. It is
explained why use-cases are an essential part of the methodology for the development of
integrated person-centred solutions.
The role of Living Labs and innovation eco-systems as a relevant setting with relevant
actors for the development, validation, and application of AI-based solutions are highlighted
as well as the validation requirements per MDR classification category and AI Act are
explained.
The procedures for market access and the role of notified bodies and responsibilities for developers and manufacturers are discussed, together with procurement approaches such as public procurement of innovations, i.e. AI-based solutions, and value-based procurement. Special attention is given to pre-commercial procurement, emphasised as a promising approach for co-creation between developers and buyers of AI-systems.
This chapter concludes with an elaboration on the use of AI-systems and post-market surveillance, as well as how monitoring the functioning of AI-systems, incorporating user experiences, might direct continuous improvement and re-development of solutions.
4.1 INTRODUCTION
Medical technology, MedTech or health technology are broad umbrella terms, defined as the application of structured knowledge and skills in the form of tools, procedures and systems developed to solve a health(-care) problem and improve quality of life45. The term encompasses a heterogeneous group of healthcare products and equipment intended for use in a broad range of preventive, diagnostic, interventional, and rehabilitation services. Medical technologies are diverse in nature, applications, and user categories46.
There are an estimated 2 million different types of medical devices on the global market, categorised into more than 7,000 generic device groups. Medical devices are used in
many diverse settings, for example, by laypersons at home, by paramedical staff and
clinicians in remote clinics, by opticians and dentists and by health-care professionals in
advanced medical facilities, or for prevention and screening and in palliative care. Such
health technologies are used to diagnose illness, to monitor treatments, to assist disabled
people, and to intervene and treat illnesses, both acute and chronic47.
Medical technology products are often integrated into complex care processes within
healthcare facilities. These products can range from simple tools to sophisticated devices
and systems. Here is how they typically fit into the healthcare process:
• Diagnostic Tools - These include imaging devices like MRI machines, CT scanners and X-ray machines, as well as laboratory equipment for conducting tests. These tools help in diagnosing diseases and conditions.
• Monitoring Equipment - Devices like ECG machines, blood glucose monitors and pulse oximeters fall into this category. They are used to monitor a patient's condition continuously or at regular intervals.
• Rehabilitation and Assistive Devices - These include prosthetics, orthotic devices, and mobility aids like wheelchairs. They assist patients in regaining or improving their physical capabilities.
• Therapeutic Devices - These include infusion pumps for delivering medication, surgical instruments, and implantable devices like pacemakers. They are used to treat patients and manage their conditions.
45 60th World Health Assembly. World Health Assembly resolution WHA60.29. 2007; 2–3.
46 Organisation for Economic Co-operation and Development (OECD) (2017), New Health Technologies: Managing Access, Value and Sustainability, OECD Publishing, Paris.
47 https://www.who.int/health-topics/medical-devices#tab=tab_1
• Health Information Systems - These include electronic health record systems, telemedicine platforms and decision support systems. They facilitate the management of patient information and support clinical decision-making.
Each of these products plays a specific role in patient care and they often need to interact
with each other to provide coordinated and effective care. For instance, data from a
monitoring device might be fed into a health information system to track a patient’s progress
over time. Similarly, a diagnostic tool might be used in conjunction with a therapeutic device
to guide treatment.
The integration of these products into healthcare processes requires careful planning and
coordination. It also raises important considerations around safety, data privacy, and
interoperability. As such, the development and use of medical technology products is a
complex process that involves a wide range of stakeholders, including healthcare providers,
patients, regulators, and technology developers.
4.1.1 EMERGING TECHNOLOGIES
Technology can have a considerable influence on the organisation, quality, effectiveness,
and costs of healthcare delivery. The medical technology landscape is characterised by
frequent and rapid changes derived from innovative technologies from the medical device
industry, information and communications technology (ICT), pharmaceutical industry
and adjacent industries. The average development cycle of a medical device is estimated
to be around 2 years (Van Norman, 2016), while the development of pharmaceuticals takes
12 years48. Technological developments and innovation in this sector are resulting in an
increasingly rapid availability of health intervention alternatives. In the last decade, the
number of patent applications related to medical devices in the world has tripled and the
technology cycle times are reportedly half as long as just five years ago (Alexander et al.,
2019).
Changes and developments are also stemming from the increased connectivity and vast
amounts of digital data that are being generated by the healthcare system and individuals.
These data collectively hold enormous potential for fostering improvements in various
healthcare activities, services, and products. However, they also pose unprecedented challenges such as data ownership and data privacy issues.
Along with big data and the Internet-of-Things, innovation trends and emerging technologies that are perceived to have potential implications for the European healthcare system include robotics, 3D printing, Artificial Intelligence, digital health, tissue engineering and nanotechnology.
In the case of emerging in-vitro diagnostic technologies (IVD), the identified impactful
innovations include liquid biopsy, next-generation sequencing, point-of-care
diagnostics and synthetic biology (van der Maaden et al., 2018). Another change driver
48 NHS. Office for Life Sciences: How To guide. A guide to navigating the innovation pathway in England. https://www.gov.uk/government/publications/innovation-pathway-for-nhs-products
that is significantly influencing the healthcare landscape is the rise of various combined intervention technologies that blur the lines between pharmaceuticals, medical devices, ICT and software, and healthcare services49.
The use of AI, often related to or through medical devices, has the potential to facilitate the
delivery of health and social care services by supporting integrated care, improving patient
outcomes, and reducing costs. Artificial Intelligence can help to personalise care for
individual patients based on their specific needs and medical history through more efficient
and effective use of data. Possible AI applications are (Davenport and Kalakota, 2019,
Cingolani et al., 2022):
• Artificial Intelligence-based chatbots or conversational agents, similar to ChatGPT as discussed in the previous section (Thirunavukarasu et al., 2023), could facilitate understanding and accessibility of health services for patients while helping professionals with symptom inventory, history taking and triage (Li et al., 2023).
• Predictive analytics, such as establishing complication risk-profiles or discharge planning, through analysis of enormous amounts of historical data and pattern recognition on patients with similar health issues. This can help healthcare providers to predict and prevent health problems before they occur, allowing for earlier and more effective interventions (de Hond et al., 2022).
• Artificial Intelligence can help to coordinate care based on real-time data between different healthcare providers to ensure that patients receive the right care at the right time. This can be particularly beneficial for patients with complex care needs who require input from multiple professionals (Lebcir et al., 2021).
• Decision support based on AI could help health professionals to make more accurate and informed decisions about patient care, such as identifying potential drug interactions or suggesting treatment options based on a patient's medical history (Walker et al., 2022, Bajgain et al., 2023).
• Artificial Intelligence can be used to monitor patients remotely, providing healthcare providers with real-time data about a patient's health status. This can help to identify potential problems early on, allowing for timely interventions (Dubey and Tiwari, 2023).
• Reducing administrative burden through automation of time-demanding tasks, often related to the management of patients in multi-disciplinary teams, as well as detection of (administrative) errors (Iqbal et al., 2022).
4.2 MEDICAL TECHNOLOGY SECTOR AND MARKET
The global medical technology or medtech market has been steadily growing over the
years (annually 5% on average) and it is projected to reach a value of over 500 billion Euros
by 2025. The cost range of medical devices varies considerably: from 0.25 cent for e.g., a
syringe to 3 million euros for an MRI scanner. The European sector is responsible for about
49 Drug Device Combination Products Market | Industry Report, 2018-2024. https://www.grandviewresearch.com/industry-analysis/drug-device-combination-market
50% of the global medical technology market. The medical technology sector is highly
globalised, with many countries engaged in the import and export of medical devices and
technologies. This international trade creates business opportunities and fosters
economic growth, as countries trade medical technologies to meet their healthcare needs.
These opportunities and growth are driven by expectations of data and AI for medical
technology services and products.
There are more than 34,000 medical technology companies directly employing more than 800,000 people in Europe. Siemens, Philips, Dräger and Roche are major players, while small and medium-sized enterprises (SMEs) make up around 95% of the European medical technology industry. Germany has the highest absolute number of people employed in the medical technology sector, while the number of medical technology employees per capita is highest in Ireland and Switzerland50.
The medical technology sector is a complex and dynamic industry that encompasses a
wide range of organisations, including private companies, academic and research
institutions, government agencies, regulatory bodies, trade associations, and healthcare
providers. The sector is diverse, depending on the specific context and purpose of the
organisation. Common types of organisations within the medical technology sector are:
• Medical technology companies: These are private or publicly traded companies that develop, manufacture, and sell medical devices, equipment, software, and other medical technologies. These companies can range from small startups to large multinational corporations, and they may specialise in various areas, such as medical imaging, surgical instruments, diagnostic tests, or digital health technologies.
• Academic and research institutions: Academic and research institutions, such as universities, research hospitals and academic medical centres, play a crucial role in advancing medical technology through research and development. These institutions conduct cutting-edge research, develop new technologies and train healthcare professionals and scientists in the field of medical technology.
• Government agencies and regulatory bodies: Government agencies and regulatory bodies at the local, national, and international levels oversee the regulation, approval, and safety of medical technologies. Examples include the U.S. Food and Drug Administration (FDA), the European Medicines Agency (EMA), and the World Health Organisation (WHO)51, which establish regulations, standards, and guidelines for medical technology development, manufacturing, and use.
• Trade associations and professional organisations: There are numerous trade associations and professional organisations in the medical technology sector that represent the interests of different stakeholders, including medical technology companies, healthcare providers, researchers, and other industry professionals, e.g. MedtechEurope52, COCIR53, EHTEL54. These organisations provide networking opportunities, advocacy, and resources to promote the advancement and adoption of medical technologies.
• Healthcare providers: Hospitals, clinics and other healthcare providers are significant users of medical technology. They employ various medical technologies in their daily operations, such as medical imaging equipment, surgical instruments, and electronic health record systems, to diagnose, treat and manage patients.
• Start-ups and incubators: The medical technology sector has a vibrant start-up ecosystem, with many start-ups and incubators focused on developing innovative medical technologies. These start-ups often collaborate with academic institutions, research organisations and industry partners to develop and commercialise new medical technologies.
• Supply chain and distribution companies: The medical technology sector relies on a complex supply chain and distribution network to manufacture, transport and distribute medical devices and equipment. These companies play a critical role in ensuring that medical technologies are available to healthcare providers and patients when needed.
50 https://www.medtecheurope.org/datahub/employment-companies/
51 https://www.who.int/groups/strategic-and-technical-advisory-group-of-experts-on-medical-devices-(stag-medev)
The organisation and structure of the medical technology sector can vary depending on
geographic location, regulatory environment, and specific subsectors within the industry.
Collaboration and coordination among different stakeholders, including medical
technology companies, research institutions, regulatory bodies, and healthcare providers,
are crucial for the advancement and successful implementation of medical technologies to
improve patient care and outcomes. Together with user representative organisations, e.g. patient organisations, the actors and stakeholders mentioned above are the typical audience that contributes to the development, establishment, implementation and maintenance of standards and standardisation.
4.3 CATEGORIES OF INTELLIGENT MEDICAL DEVICES
Considering the diversity of Medical Devices and In-Vitro Diagnostic Medical Devices, AI is not necessarily relevant for every medical device. Many medical devices do not contain electronics for processing data: these are non-active medical devices, such as a syringe or an implantable hip prosthesis. However, sensors like RFID chips are increasingly in use in passive devices, for example for real-time traceability, identification, communication, and monitoring of temperature and other processes55. Accordingly, these passive devices could be within a
52 European trade association representing the medical technology industries. https://www.medtecheurope.org/
53 European Trade Association representing the medical imaging, radiotherapy, health ICT and electromedical industries. https://www.cocir.org
54 European eHealth Multidisciplinary Stakeholder Platform. https://ehtel.eu/
55 https://www.himss.org/resources/benefits-and-barriers-rfid-technology-healthcare
network of information, i.e. Internet-of-Things (IoT), which can be used for AI purposes such
as device management56, logistics and maintenance (Hijazi and Subhan, 2020).
4.3.1 MEDICAL DEVICES USED IN HOSPITALS
Medical devices are used across the health care system, from primary health care level
e.g., point-of-care diagnostics to specialised care such as in hospitals for a wide range of
purposes. For example, medical imaging machines for radiology specialists (ultrasound and
MRI machines, PET and CT scanners and x-ray machines), are used to aid in diagnosis.
Treatment equipment includes surgical equipment (including robots), infusion pumps, medical lasers, etc.; life support equipment is used to maintain a patient's bodily functions, e.g. analysers to measure blood gas, pH, electrolytes and metabolites, as well as medical monitors. Most of these devices generate and process data which is likely to be used for AI applications (Yang et al., 2022).
Radiology is the healthcare specialty where AI is used most. Other emerging hospital application areas for AI are intervention or therapeutic decision support, prognosis modelling, resource management, logistical planning and scheduling, as well as finance. In a Dutch survey, Chief Information Officers (CIOs) in 42 hospitals were asked which topics they find important in AI policies for their organisations: strategy, vision and ambition; governance; practical laws and regulations; data governance; security and privacy; knowledge sharing (internal and external); user acceptance and purchase/procurement; and application lifecycle management. Validation, i.e. proof in practice, was mentioned by the CIOs as the most important topic57 (please also see Section 4.9 below on 'Development, validation and implementation'). Explainable AI was also mentioned, as it should create better understanding, trust and adoption among employees58.
In this context, the concept of a smart or intelligent hospital refers to interconnected infrastructure (including equipment and devices), services and people which provide real-time data to create centralised operational insights to forecast and manage processes (Hu et al., 2022), such as workflows, patient logistics across departments, resource utilisation, Intensive Care Unit alarm management (Kwon et al., 2018), and monitoring of the location and maintenance of medical equipment.
4.3.2 HOSPITAL-TO-HOME CONCEPTS
The concept of Hospital-To-Home is an innovative model that brings acute hospital care to a patient's home, providing a more comfortable and convenient alternative to traditional inpatient hospitalisation (Gaillard and Russinoff, 2023). It is an illustration of person-centred, digitally enabled integrated care provided at community level, supported by AI-driven medical devices (Guldemond, 2024). The number of medical technologies used in home settings has increased substantially over the last 10–15 years (ten Haken et al., 2018).
56 https://www.who.int/teams/health-product-policy-and-standards/assistive-and-medical-technology/medical-devices/management-use
57 'AI Hospital Monitor 2023', M&I/Partners. https://mxi.nl/kennis/613/ai-monitor-ziekenhuizen-2023
58 Vasseur, PJ. (2020). Towards a better understanding of the explanation of AI-based clinical decision support for medical specialists. [Master's thesis, UvA]. https://scripties.uba.uva.nl/search?id=record_27599
Considering the inherent interconnectivity and data/information exchange as part of Hospital
at Home models, AI is increasingly being used to enhance service delivery, personalise care
and meet the growing demands of home care59.
Hospital at home is a service that can provide patients with short-term hospital-level care at home60, for example recovery at home after surgery, or outpatient parenteral antibiotic therapy and chemotherapy (hospital-at-home, 2021). It is also suitable for chronic disease management, monitoring and management of patients with chronic obstructive pulmonary disease or heart failure, and coagulation monitoring and management61, as well as home-based end-of-life care (Shepperd et al., 2016).
Hospital at Home care is delivered by a multidisciplinary team, including specialist doctors,
nurses and other health and social care professionals. Services should be available 24 hours
a day, 7 days a week, ensuring patients receive care when they need it despite residing at
their home (Liao et al., 2018).
Hospital at Home programs often rely on a variety of medical devices and technologies to
provide comprehensive care services across organisations:
• Remote Monitoring Devices - These devices allow healthcare providers to monitor patients' vital signs and health status remotely. Examples include blood pressure monitors, pulse oximeters and blood glucose monitors62.
• Therapeutic Devices - These devices are used to manage and treat medical conditions. Examples include ventilators for respiratory support, systems for haemo- or peritoneal dialysis, and infusion pumps to provide nutrition or medication (Rajkomar et al., 2014).
• Diagnostic Devices - These devices are used to diagnose diseases and monitor treatment progress. They can range from simple tools like thermometers to more complex devices like ECG machines63. This category includes Point-Of-Care (POC) devices, which provide immediate diagnostic results and enable rapid clinical decision-making64.
• Digital Health Solutions - These include electronic health records, telehealth platforms and mobile health apps. They facilitate communication between patients and healthcare providers, manage patient data and support clinical decision-making (Dandoy and Copin, 2016, Whitehead and Conley, 2022).
59 https://www.homecareassociation.org.uk/resource/how-technology-and-a-i-are-shaping-home-care-services.html
60 https://www.hospitalathome.org.uk/whatis
61 https://www.nhsinform.scot/care-support-and-rights/hospital-at-home/
62 Guidance on managing medical equipment within virtual .... https://www.england.nhs.uk/long-read/guidance-on-managing-medical-equipment-within-virtual-wards-including-hospital-at-home/
63 Medical devices and digital tools. https://www.england.nhs.uk/long-read/medical-devices-and-digital-tools/
64 https://www.s3connectedhealth.com/blog/hospital-at-home-medical-devices-to-provide-healthcare-anytime-anywhere
Procedures and practices differ between and within hospitals, depending on the technology and care process used in hospital-to-home services. There are already strict guidelines and regulations for the use of medical technology in hospital settings to ensure safe care services. However, the policies, regulations and guidelines for hospital-to-home based care, which refers to specialist medical care provided in people's homes (including AI), are still under development (Baartmans, 2024).
4.3.3 ASSISTED TECHNOLOGY AND DEVICES
There are numerous medical devices which are used outside the (physical) context of primary and hospital care. Assisted technology or assistive devices entail tools, equipment and/or software designed to support individuals with challenges (disabilities or limitations) in performing tasks, activities, or functions that they may have difficulty with due to physical, sensory, cognitive, or other impairments. Assistive devices can come in many forms and can be used in different settings, such as at home, in school, at work or in public spaces. Various assistive devices and related applications already use, or are likely to use, AI-based technology for their function (Yang et al., 2022, Colnar et al., 2020, Ma et al., 2023).
Assistive devices are typically prescribed or recommended by healthcare professionals
based on the individual's specific needs and abilities. They are often customised or
adapted to the individual's requirements and may require training or support to ensure their
safe and effective use. Assistive devices can play a crucial role in promoting independence, inclusion, and accessibility for individuals with disabilities and senior people, empowering them to live more independently, participate optimally in their communities and lead more fulfilling lives. Common types of assistive devices include:
• Mobility aids include wheelchairs, walkers, canes, crutches, and scooters, which help individuals with mobility impairments move around and perform activities such as walking, standing, or transferring. Smart mobility systems can help users navigate their environment more easily. These systems can use sensors, cameras, and other technologies to detect obstacles and adjust a wheelchair's speed and direction accordingly. They can also provide voice-activated controls and other features to improve user convenience. AI-based systems can also be used to detect when a user is at risk of falling and provide alerts or assistance to help prevent falls.
• Hearing aids are devices that amplify sound for individuals with hearing loss or deafness, helping them to hear speech and other sounds more clearly. Algorithms and AI can be used to enhance speech in noisy environments, helping users better understand conversations and improving communication with others, while machine learning algorithms analyse sound to identify and reduce background noise. This can help users hear more clearly in noisy environments such as restaurants or public spaces.
• Vision aids include magnifiers, screen readers, Braille displays and other devices that assist individuals with visual impairments in reading, writing, or accessing information. Algorithms and AI are helping to improve the functionality and usability of vision aids, making them more effective at helping people with visual impairments to live independently and participate fully in society: through AI-enabled object recognition and description to users, helping them navigate their environment more safely and easily; recognition and reading out of text; and personalisation of aid settings, e.g. adjusting brightness or contrast based on the user's individual needs.
• Communication aids are augmentative and alternative communication devices, speech generating devices or specialised software that help individuals with speech or language difficulties to communicate effectively. Communication aids can use AI-enabled voice recognition technology to translate spoken words into written text, and AI can be used to predict what the user is trying to say, based on their previous patterns of communication, allowing individuals with speech impairments to communicate more effectively with others.
• Environmental control devices allow individuals with impairments or senior people to control electronic devices, appliances, or environmental features in their home, such as lights, thermostats, or door openers, i.e. smart home technology systems: please see the next paragraph for more information.
• Prosthetics and orthotics can replace or augment missing or impaired body parts, such as artificial limbs (prosthetics) or braces (orthotics), to restore or enhance physical function. AI can also be used to improve prosthetic limbs by providing more natural and intuitive control. Algorithms and AI can analyse sensor data from the limb and interpret the user's movements, allowing for more responsive and accurate control.
• Cognitive aids are devices or software that assist individuals with cognitive impairments in memory, mood (Morrow et al., 2022), organisation, time management or other cognitive tasks. Artificial Intelligence-based social robots could be considered cognitive aids (Verbeek, 2009).
Considering that users of assistive devices typically consist of vulnerable groups, e.g. elderly people with dementia or children, special attention should be given to the development, implementation, use and monitoring of AI in assistive devices (Velazquez, 2021).
Note: The Ambient Assisted Living (AAL) initiative is an EU-funded research and
innovation programme65 aimed at developing AI-based solutions to improve the quality of life
of older adults and people with disabilities. The programme has supported the development
of various AI-based solutions, including:
• Smart home technology systems, including devices that enable older adults to live independently in their own homes by helping with daily tasks, such as cooking, cleaning and medication management. Examples of smart home technologies developed under the AAL Programme include the iStoppFalls system for fall prevention and the CAREGIVERSPRO-MMD system for dementia care.
• Health monitoring systems that enable remote monitoring of the health status of older adults and people with disabilities, allowing for early detection of health issues and timely intervention. Examples of health monitoring systems developed under the AAL Programme include the ACTIVE system for monitoring physical activity and the PANDORA system for monitoring chronic obstructive pulmonary disease.
• Social inclusion technologies that enable older adults and people with disabilities to stay connected with their social networks and participate in social activities. Examples of social inclusion technologies developed under the AAL Programme include the SOFOOT system for promoting physical activity through football and the AMPLIFY system for enhancing communication skills in people with autism.
• Mobility assistance technologies that help with mobility and navigation, enabling older adults and people with disabilities to move around safely and independently. Examples of mobility assistance technologies developed under the AAL Programme include the GOAL system for guiding visually impaired people and the DEWI system for supporting mobility in people with dementia.
• Cognitive robots for elderly care. One such project is CARESSES (Cultivating Acceptance and Responding to Emotions of Sociable Robots for the Elderly), which developed a socially assistive robot that can interact with elderly users using natural language and gestures. Another project is ENRICHME (ENabling Robot and assisted living environment for Independent Care and Health Monitoring of Elders), which developed a robot that can assist elderly users with various daily tasks and provide health monitoring services.
65 http://www.aal-europe.eu/projects-main/
4.3.4 INTERCONNECTED IMPLANTABLE MEDICAL DEVICES
A special group of intelligent devices are the interconnected implantable medical
devices. This group of medical devices include pacemakers, insulin pumps, cochlear
implants and brain/neurostimulators. They all feature wireless communication (figure 11).
Figure 11 Interconnected implantable medical devices by MIT
A typical example is the implantable cardioverter-defibrillator (ICD), which is composed of electronics that monitor the electrical activity of the heart and generate pulses to normalise the cardiac rhythm. Such an implant can also have additional features, such as the ability to process data through monitoring of the heart rate, store data on heart rhythms and transmit data to a remote (external) monitoring system for evaluation by health professionals.
Another example is implants for the neural system. Neural implants for the brain might be an even more intrusive technology, as they create a Brain-Computer Interface (BCI) with possible AI-based applications (Chen et al., 2023). A BCI is a device that directly interfaces with the brain to record, stimulate or modulate its electrical activity, to enable communication or interaction between the brain and external devices such as an app and/or prosthetic
devices. They can be used for various applications (Alharbi, 2023), including:
• To monitor electrical signals from the brain, which provides insights into brain function, facilitates research in neuroscience and helps diagnose and monitor neurological disorders.
• To deliver electrical stimulation to the brain, which can modulate neural activity and potentially treat conditions such as epilepsy, depression or Parkinson's disease. Electrical stimulation can also be used to induce sensory perceptions or control motor functions.
• To decode neural signals from the brain and translate them into commands to control external devices, such as prosthetic limbs or assistive technologies. This can enable individuals with paralysis or limb loss to regain some level of functional control.
• To enhance cognition, for example improving memory, attention or learning. These applications are still largely experimental and not widely available for clinical use.
It is important to note that BCIs raise ethical, social and privacy concerns, as they involve direct manipulation of the brain and can have far-reaching implications (Prakash et al., 2022, Velazquez, 2021). The development and use of brain chips are typically subject to regulatory oversight and ethical considerations to ensure safety, efficacy and the protection of individual rights and privacy.
4.4 COMPONENTS AND FUNCTIONS OF INTERCONNECTED INTELLIGENT MEDICAL DEVICES
To understand the essential aspects of medical devices in relation to their functionality, data, Artificial Intelligence and interconnectedness in the context of health and social care, key components and functional concepts will be explained.
In figure 12, the different components and functions of interconnected intelligent medical
devices are depicted.
Figure 12 Components and functions of interconnected intelligent medical devices
4.4.1 INTELLIGENT DEVICE (1 IN FIG. 12)
Typically, intelligent medical devices contain three core elements: a sensor, a processor and an actuator. A sensor responds to a physical stimulus (such as heat, light, sound, pressure, magnetism or a particular motion) and transmits the resulting impulse as input to a processor. The processor processes the data either in an elementary form (e.g. for transmission to an external computer) or in a more advanced procedure (e.g. through embedded software). The processed data are sent as output to an actuator that performs an action with a chemical, electrical, mechanical and/or other physical effect. Usually, this effect is in turn monitored by the sensor, forming a feedback system.
In short, sensors perform detection, microprocessors process data and actuators perform the resulting actions. Note: the actuator is not always present, since many medical devices are used only for obtaining information for diagnosis or monitoring purposes.
Algorithms for use in intelligent medical devices often use data generated by sensors. Therefore, sensor integrity is critical in medical devices, as it directly affects the accuracy and reliability of the device's measurements and readings. Medical devices rely on sensors to detect and measure a wide range of physiological parameters, such as heart rate, blood pressure, blood glucose levels and oxygen saturation. If a sensor is not functioning properly, it can lead to inaccurate or unreliable readings, which in turn affect the development and functioning of algorithms and accordingly AI, and can have grave consequences for patient health and safety. For example, if a blood glucose monitor's sensor is not calibrated correctly, it could give inaccurate readings that lead to incorrect dosing of insulin, potentially causing a patient to experience hypoglycaemia or hyperglycaemia.
Similarly, if a heart rate monitor's sensor is not functioning properly, it could fail to detect a
serious arrhythmia or other cardiac event, which could delay or prevent timely medical
intervention. The same applies to actuators: when an actuator is malfunctioning, the
erroneous output or effect could have harmful consequences and/or negatively influence
the functioning of the device and software. Accordingly, integrity of the sensor input and
actuator output should be ensured by requirements and criteria in relevant technical
standards (Badnjevic et al., 2023).
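As an illustration of how such sensor-integrity requirements can translate into device software, the sketch below applies a simple plausibility check to readings before they are acted upon. The ranges and function names are hypothetical and not drawn from any standard.

```python
# Illustrative plausibility check on sensor input before further processing.
# Ranges and names are hypothetical, not taken from any standard.

PLAUSIBLE_GLUCOSE_MG_DL = (20.0, 600.0)   # physiologically plausible window

def validate_glucose(reading: float) -> float:
    """Reject readings outside the plausible window instead of acting on them."""
    low, high = PLAUSIBLE_GLUCOSE_MG_DL
    if not (low <= reading <= high):
        # Fail safe: surface the fault rather than dosing on bad data.
        raise ValueError(f"implausible glucose reading: {reading} mg/dL")
    return reading

print(validate_glucose(110.0))   # passes
try:
    validate_glucose(-5.0)       # a failed or miscalibrated sensor
except ValueError as err:
    print("sensor fault:", err)
```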
Depending on the medical devices’ function, data is processed in a more advanced
procedure for which dedicated software is needed. Medical device software could be an
(embedded) application that is intended to be used independently or in combination with
other software applications, for the purpose as specified in the definition of a medical device
in the Medical Device Regulation66. The embedded software, and accordingly the possible application of algorithms and Artificial Intelligence, in medical devices is responsible for controlling the device's functions, processing data from sensors and communicating with other devices or systems. It can also manage user interfaces (please also see the next section on user interfaces), display data and provide feedback to the user, such as patients and/or a health professional (Fraser et al., 2023).

66 MDCG 2019-11 Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 – MDR and Regulation (EU) 2017/746 – IVDR
Medical devices that rely on (embedded) software include anything from pacemakers and
insulin pumps to imaging systems and robotic surgical equipment. The software is a critical
component of these devices and must be carefully designed and tested as required in
standards to ensure their safety and efficacy (Giordano et al., 2022).
As a critical component, medical device software malfunction is particularly problematic for implantable medical devices because it can lead to serious harm. In the last eight years, about 1.5 million software-based medical devices were recalled. Between 1999 and 2005, the number of recalls of software-based medical devices more than doubled: more than 11% of all medical-device recalls during this period were attributed to software failures. Such recalls are costly and could require surgery if the model is already implanted (Fu and Blum, 2014). According to a provider of technology-related risk-benefit market intelligence, the combination of operational challenges and increasingly assertive and strict safety regulators is leading to increased device recalls and enforcement actions67.
In summary, an intelligent medical device consists of elementary functional components which can range from relatively simple to more advanced. The software, as the basis for its intelligence, is sensitive to malfunction, potentially also caused by faulty sensor input and/or actuator output.
4.4.2 (GRAPHIC) USER INTERFACE (2 IN FIG. 12)
User interfaces
User interface software for medical devices refers to the software that enables users to
interact with the device. This could be a manual, tactile, acoustic, and/or graphical/visual
interface that allows users to control the device's operation. User interface software is critical
for medical devices as it allows users to access and control the device's features and
functions: e.g., view data and results and provide input or feedback. User interface software
for medical devices can take many different forms, from simple text-based interfaces to
complex interactive graphical user interfaces (GUIs).
The choice of interface will depend on the specific needs of the device and its intended users, as well as other factors such as cost, complexity and regulatory requirements, including standards. Medical device usability, including GUIs, is addressed by the standard IEC 62366 (Usability Engineering), which entails the general requirements of usability engineering: preparing the use specification, establishing the user interface specification and evaluation plan, designing the user interface, and performing formative and summative evaluations.
Besides being user-friendly, intuitive and easy to navigate, user interface software for medical devices should also be designed to be inclusive, i.e. accessible to a wide range of users, including those with disabilities, as well as safe and reliable, minimising the risk of errors or accidents that could potentially harm patients: inadequate user interfaces could adversely impact safety, especially in life-critical situations and environments such as ICUs (Nadj et al., 2020).

67 https://www.medtechintelligence.com/news_article/q3-medical-device-recalls-increase-36-software-issues-remain-top-reason/
Note: ISO/IEC JTC 1/SC 35 takes responsibility for standardisation in the field of user-system
interfaces in information and communication technology environments and support for these
interfaces to serve all users, including people having accessibility or other specific needs,
with a priority of meeting the requirements for cultural and linguistic adaptability68.
User Interfaces and human behaviour
Artificial Intelligence might be used to personalise interaction and facilitate accessibility: e.g. making complex information simple, understandable and actionable, learning from user preferences and anticipating users' behaviour. Note: persuasive technology, typically driven by AI, refers to interfaces built with the capability to influence the user's attitude and/or behaviour, motivating the user to do something she/he would not deliberately do otherwise, either positively or negatively (Orji and Moffatt, 2018, Spahn, 2012, Verbeek, 2009). AI-augmented visualisation could contribute to the explainability and interpretability of AI and thereby improve understanding and trustworthiness (Vellido, 2019, Walker et al., 2022). Accordingly, information that helps the user to interpret
the software’s output could contribute to better clinical use of AI. Hence, the medical device
should be accompanied by appropriate training and support materials to help users learn
how to use the device and the software/interface effectively.
Mobile phones as an interface
Special attention should be given to mobile phones as an interface connected to medical devices. Mobile phones can connect with medical devices through Bluetooth or other wireless protocols to transmit data from the medical device to a mobile app on the phone. Devices such as blood glucose monitors, heart rate monitors and hearing aids can all connect to a mobile app to provide real-time health data to the user. In addition to receiving data from medical devices, mobile phones can also be used to control or programme certain medical devices: insulin pumps and continuous glucose monitors, for instance, often come with accompanying mobile apps that allow users to adjust their insulin dosages or view their blood glucose levels in real time. While apps can be a convenient way to access and manage data from medical devices, they create potential risks of hacking, malware and malfunctioning AI, and proper measures should be in place to protect the privacy, security and safety of patients: please see also the following paragraphs.
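As a hedged illustration of a phone app receiving real-time data from a wireless medical device, the sketch below subscribes to heart-rate notifications over Bluetooth LE using the open-source Python library bleak. The device address is a placeholder and error handling is omitted; the characteristic UUID is the standard Bluetooth Heart Rate Measurement UUID.

```python
# Minimal sketch: an app subscribing to heart-rate notifications over
# Bluetooth LE using the bleak library (pip install bleak).
# The device address is a placeholder; error handling is omitted for brevity.
import asyncio
from bleak import BleakClient

DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"  # placeholder address of the monitor
HR_MEASUREMENT_UUID = "00002a37-0000-1000-8000-00805f9b34fb"  # standard GATT UUID

def on_heart_rate(_sender, data: bytearray) -> None:
    # Per the Bluetooth Heart Rate Service spec, byte 0 holds flags;
    # if bit 0 is clear, the heart rate is a uint8 in byte 1.
    hr = data[1] if not (data[0] & 0x01) else int.from_bytes(data[1:3], "little")
    print(f"heart rate: {hr} bpm")

async def main() -> None:
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(HR_MEASUREMENT_UUID, on_heart_rate)
        await asyncio.sleep(30.0)          # stream readings for 30 seconds
        await client.stop_notify(HR_MEASUREMENT_UUID)

asyncio.run(main())
```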
4.4.3 INTER-CONNECTIVITY AND CYBERSECURITY (3 IN FIG. 12)
Inter-connectivity
‘Interconnectedness’ refers to the ability of different devices or systems to connect and
communicate with each other, allowing for the exchange of data and information. In the
context of healthcare, interconnectedness and the flow of data can enable seamless communication and coordination between various devices, systems and people, e.g. healthcare professionals and patients (Chadwick, 2007).

68 https://www.iso.org/committee/45382.html
Inter-connected medical devices refer to medical equipment or devices that are connected
to a dedicated network allowing them to collect and transmit data to other devices or
systems. Hence, the Internet of Things IoT refers to a wider network of interconnected
devices, objects and sensors that can communicate and exchange data with each other over
the internet. In this context, the term Big Data applies to the large collections of both
structured and unstructured data generated by all these devices, objects, and sensors and
which might be used for analysis to discover trends, insights and patterns, typically by AI
technology, which enable users to make decisions69.
Interconnected medical devices
Connected medical devices can take many forms, from wearable health monitors,
implantable devices, and fitness trackers to sophisticated medical imaging systems.
They can also include home healthcare devices, such as blood glucose monitors, blood
pressure cuffs and smart home devices that transmit data to relevant organisations. This
data can include vital signs, medication adherence, activity levels and other health-related
information (Phatak et al., 2021). These devices are equipped with wireless
communication capabilities.
Most devices have a two-way 'send-receive' functionality (two oppositely directed red arrows: fig. 5) through communication protocols like Wi-Fi, 5G and/or Bluetooth. One of the main benefits of connected medical devices is their ability to provide real-time data and remote monitoring capabilities. Software updates and maintenance, e.g. an improved algorithm, can be performed through this connection, but this porte d'entrée also makes the device vulnerable to hacking, malware and software viruses, which could compromise patient-related data and the functioning of the devices themselves.
Note: standard IEC 80001 deals with the application of risk management to IT-networks
incorporating medical devices and provides information on the roles, responsibilities, and
activities necessary for risk management70.
Mobile phones as connecting devices
A mobile phone could function as a connecting gateway between medical devices, e.g. a sensor, and the internet, while third-party apps on mobile phones can enable users to access and share data from a variety of sources. Many mobile apps also rely on cloud technologies (please see below) to store data in online databases and process it in the cloud. This allows one to remotely monitor patients who are using certain medical devices: e.g., a doctor or nurse could use a mobile app to remotely monitor a patient's heart rate, blood pressure or oxygen saturation levels. Note: third-party apps installed on a mobile phone might collect data and monitor user behaviour related to the medical device without the user being aware. Patients and professionals should be cautious about the installation of third-party apps, especially if they require access to sensors, actuators and health data.

69 https://www.ema.europa.eu/en/about-us/how-we-work/big-data
70 https://www.iso.org/obp/ui/en/#iso:std:iec:tr:80001:-2-2:ed-1:v1:en
Cybersecurity
As with any connected device, there are concerns about data privacy and security. Medical
device cybersecurity is the process of protecting medical devices and associated networks
from cyber threats such as hacking, malware and other types of cyber-attacks. It is therefore
essential that connected medical devices are designed with appropriate security measures
and protocols to protect patient privacy and prevent unauthorised access and use of sensitive
data (Fu and Blum, 2014). Key considerations for medical device cybersecurity are:
• Conducting a risk assessment to identify potential vulnerabilities in medical devices and associated networks, which can help to develop an effective cybersecurity plan.
• Ensuring that medical devices are designed and developed to prevent vulnerabilities and minimise the risk of cyber-attacks.
• Implementing appropriate access control measures, which can help to prevent unauthorised access to medical devices and associated networks.
• Encryption, which can help to protect data transmitted between medical devices and associated networks, making it more difficult for hackers to intercept or manipulate data (see the sketch after this list).
• Segmenting networks, which can help to prevent cyber-attacks from spreading throughout a healthcare organisation's network, limiting the potential impact of a cyber-attack.
• Having an incident response plan in place, which can help to minimise the damage caused by a cyber-attack and ensure that the affected medical devices and networks are restored to normal operation as quickly as possible.
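To make the encryption consideration above concrete, the following minimal example uses authenticated symmetric encryption from the widely used Python cryptography library. Key handling is deliberately simplified; a real device would need secure key provisioning and rotation.

```python
# Minimal sketch: encrypting device telemetry in transit/storage using
# authenticated symmetric encryption (pip install cryptography).
# Key handling is simplified; real devices need secure key provisioning.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice: provisioned, not generated ad hoc
cipher = Fernet(key)

telemetry = {"device_id": "pump-001", "glucose_mg_dl": 142, "ts": "2024-07-01T10:00:00Z"}
token = cipher.encrypt(json.dumps(telemetry).encode("utf-8"))

# Tampering with the token is detected: Fernet authenticates as well as encrypts.
restored = json.loads(cipher.decrypt(token).decode("utf-8"))
print(restored == telemetry)   # True
```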
Note: Ransomware attacks disrupt care delivery and jeopardise information integrity.
Healthcare ransomware attacks have at least doubled in the past five years. Since 2020, healthcare systems and services have been at the epicentre of malicious cyber activities: 60% of ransomware attacks throughout 2021 targeted healthcare facilities or healthcare industry services. At the same time, data recovery from backups has decreased, and it is now common for data to be stolen and publicly released following a successful attack. Current monitoring/reporting efforts provide limited information and could be expanded to potentially yield a more complete view of how this growing form of cybercrime affects the delivery of health care (Neprash et al., 2022).
Note: AI and algorithms, as part of medical device software, can be vulnerable to attacks
such as data poisoning, adversarial attacks, and backdoors, which can compromise their
performance and integrity.
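This vulnerability can be made concrete with a toy example: for a simple linear classifier, a small input perturbation aligned against the model's weights flips its decision. The numbers below are contrived for illustration and do not represent any real medical AI model.

```python
# Toy illustration of an adversarial perturbation against a linear classifier.
# Values are contrived; real attacks target far more complex models.
import numpy as np

w = np.array([1.5, -2.0, 0.5])       # classifier weights (score = w . x)
x = np.array([0.2, 0.1, 0.4])        # a legitimate input, classified positive

print("clean score:", w @ x)          # 0.30 -> class "positive"

eps = 0.3                             # small, bounded perturbation budget
x_adv = x - eps * np.sign(w)          # push each feature against its weight

print("adversarial score:", w @ x_adv)  # -0.90 -> decision flipped
```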
4.4.4 CLOUD-BASED TECHNOLOGY (4 IN FIG. 12)
Cloud computing allows devices to access and use IT resources, such as servers, storage and software, on demand via the internet, from any location with an internet connection. In the context of healthcare, cloud-based technology is being used to store and manage data from multiple sources, e.g. the Internet of Things (IoT); it can facilitate collaboration among healthcare providers and enable the delivery of remote healthcare services. A cloud technology can be based on a public, private or hybrid infrastructure. A medical device may be connected with the cloud directly over the public internet, or it can use a gateway (e.g. a Virtual Private Network, VPN) to enable connection/integration with the "cloud".
Note: As previously mentioned, the European Health Data Space (EHDS) is a facility for health data sharing. While the EHDS is not exclusively cloud-based, it does rely on cloud-based technologies to facilitate health data sharing among healthcare providers, researchers and other stakeholders.
Cloud-based technology can offer several benefits for medical devices, particularly in terms
of data management, analytics, and remote monitoring. Medical devices that are
connected to the cloud can transmit data to the cloud for storage, analysis, and processing,
allowing healthcare providers to access the data from any location with an internet
connection. Some examples of cloud-based technology applications for medical devices are:
• Cloud-based technology can provide a secure and scalable platform for storing, managing and sharing data, and for connecting with other data sources.
• Cloud-based medical devices can be updated/maintained remotely, ensuring that devices are always running the latest software and firmware. This can potentially help to improve device performance, fix bugs and address security vulnerabilities.
• Cloud-based AI-driven analytics tools can be used to analyse medical device data to identify trends and patterns.
Note: Software may be qualified as Medical Device Software regardless of its location (e.g.
operating in the cloud, on a computer, on a mobile phone, or as an additional functionality on
a hardware medical device).
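As a minimal illustration of a device or gateway transmitting a reading to a cloud endpoint, the sketch below posts data over HTTPS using Python's requests library. The endpoint URL and token are placeholders; a production device would add retries, stronger authentication and certificate pinning.

```python
# Minimal sketch: a gateway posting device data to a cloud API over HTTPS
# (pip install requests). URL and token are placeholders.
import requests

CLOUD_ENDPOINT = "https://example.com/api/v1/readings"   # placeholder endpoint
API_TOKEN = "REPLACE_ME"                                  # placeholder credential

reading = {"device_id": "bp-cuff-42", "systolic": 128, "diastolic": 82}

resp = requests.post(
    CLOUD_ENDPOINT,
    json=reading,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,      # never block the device indefinitely
    verify=True,     # enforce TLS certificate validation (the default)
)
resp.raise_for_status()   # surface transmission failures explicitly
```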
4.4.5 IMPLICATIONS FOR MEDICAL DEVICES
Better understanding of the reasoning mechanisms behind software through explainable and
interpretable AI could limit over- or under-reliance and improve the proper management and
trustworthiness of these technologies by health professionals and patients (Markus et al.,
2021, Shin, 2021). However, there is no specific requirement regarding explainability or
interpretability in the MDR. Nevertheless, manufacturers of software using algorithms
based on data learning optimisation may consider including interpretability and explainability
as part of their risk management process. Particular attention should be given to the
documentation of the risk management process related to the ‘state of the art’71 relevant to
the AI technology that is used. Analysis of risks and associated monitoring and control measures related to software autonomy, the context of use (e.g. the clinical environment), reasonably foreseeable misuse72 and use error73 needs to be documented in line with the type of AI technology and the intended use of the software, as required in the MDR.

71 GSPRs, I.I.4: Risk control measures adopted by manufacturers for the design and manufacture of the devices shall conform to safety principles, taking account of the generally acknowledged state of the art. To reduce risks, manufacturers shall manage risks so that the residual risk associated with each hazard as well as the overall residual risk is judged acceptable. […]
The safe and timely translation of AI research into clinically validated and appropriately
regulated AI-systems for medical devices that can benefit everyone is challenging. Robust
clinical evaluation, using metrics that are intuitive to clinicians and ideally go beyond
measures of technical accuracy to include quality of care and patient outcomes, is
essential. Further work is required to identify themes of algorithmic bias and unfairness, while developing mitigations to address these, in order to improve generalisability and to develop methods for improved interpretability of machine learning predictions (Kelly et al., 2019, Aung et al., 2021).
4.5 RELEVANT EU LEGISLATION
4.5.1 BACKGROUND
Regulation governs the development and commercialisation of products in the market.
Innovative technologies are also impacted by existing regulation. Regulatory changes are
also driven by the advancement of various technologies. Managing technology involves
adhering to technical guidelines, standardising processes, paying taxes, disclosing
information, and regulating the market. Regulation has the potential to encourage innovation
such as AI-systems while also potentially blocking new entrants (Sharma and Manchikanti,
2024).
More than 500,000 medical devices are available for sale on the EU market74. The variation in complexity, risk profile and applications of these devices has complicated efforts to create a harmonised regulatory process across EU member states.
Main regulatory categories are Medical Devices and In-Vitro Diagnostic Medical Devices:
• Medical devices - could be any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for the following medical purposes75: diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease, or compensation for an injury or disability; investigation, replacement or modification of bodily functions; as well as devices providing information by means of in vitro examination, devices for the control or support of conception, and products specifically intended for the cleaning, disinfection or sterilisation of devices.
• In-vitro diagnostic medical devices - could be any medical device which is a reagent, reagent product, calibrator, control material, kit, instrument, apparatus, piece of equipment, software or system, whether used alone or in combination, intended by the manufacturer to be used in vitro for the examination of specimens, including blood and tissue donations, derived from the human body, solely or principally for the purpose of providing information76.

72 GSPRs, I.I.3.c: (risk management system) […] estimate and evaluate the risks associated with and occurring during the intended use and during reasonably foreseeable misuse;
73 GSPRs, I.I.5
74 Recital 50 MDR and Recital 46 IVDR.
75 Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC with EEA relevance https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02017R0745-20200424
Note: in the context of this report the term Medical Devices will be used for both Medical
Devices and In-Vitro diagnostic Medical Devices unless stated differently.
4.5.2 MAIN CHANGES INTRODUCED BY THE NEW MDR AND IVD REGULATION
The former Medical Device Directive (MDD) (93/42/EEC) and the Directive on Active Implantable Medical Devices (AIMD) (90/385/EEC) were replaced in 2017 by the new Medical Device Regulation (MDR) (2017/745)77 78 79. The former Directive for In-Vitro Diagnostic Medical Devices (98/79/EC) was replaced by the new In-Vitro Diagnostic Medical Device Regulation (IVDR) (2017/746).
Currently, the EU has a comprehensive legislative framework for medical devices, consisting of three Directives and three Regulations:
• Directive 90/385/EEC on active implantable medical devices (AIMDD), applicable from 1 January 1993 until 25 May 2021;
• Directive 93/42/EEC on medical devices (MDD), applicable from 1 January 1995 until 25 May 2021;
• Directive 98/79/EC on in vitro diagnostic medical devices (IVDMDD), applicable from 7 June 2000 until 25 May 2022;
• Regulation (EU) 2017/745 on medical devices (MDR), fully applicable from 26 May 2021;
• Regulation (EU) 2017/746 on in vitro diagnostic medical devices (IVDR), fully applicable from 26 May 2022;
• Regulation (EU) 2017/1004 on certain aspects of medical devices for human use containing materials of animal origin.
The Medical Device Regulation MDR and In-Vitro Diagnostic Medical Device Regulation
replace the previous Medical Device Directives MDD and the In-Vitro Diagnostic Medical
Device Directive IVDD, respectively.
The MDR and IVDR are directly applicable in all EU member states and aim to ensure a high level of safety and performance of medical devices. They apply to all medical devices and in vitro diagnostic medical devices placed on the EU market, including those manufactured in the EU and those imported from outside the EU. The MDR is aimed at manufacturers, European agents, distributors and importers of medical devices, as well as healthcare organisations (such as university hospitals) if they develop 'in-house' medical devices and related software.

76 Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU Text with EEA relevance https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02017R0746-20220128&qid=1680535534075
77 Article 10 par. 13 MDR and Article 10 par. 12 IVDR.
78 Article 10 par. 16 MDR and Article 10 par. 15 IVDR.
79 Articles 31 and 30 MDR and Articles 28 and 27 IVDR.

The main changes introduced by the MDR and IVDR include:
• Scope and classification of IVD products80.
• Common Specifications.
• Technical files and Declarations of Conformity.
• Clinical evidence, vigilance and post-market surveillance.
• Obligations of economic actors and subsequent liabilities.
• Quality Management Systems.
• Requirements for a person responsible for Regulatory Compliance.
• Notified Body supervision and re-designation.
• Unannounced audits.
• Introduction of the Medical Device Coordination Group.
• UDI to improve traceability and transparency.
• EUDAMED database.
• Portfolio rationalisation/elimination of unnecessary products.
The regulations set out the requirements for the conformity assessment of medical devices,
including the involvement of notified bodies in the assessment process for certain devices.
They require manufacturers to have a quality management system in place and to provide
appropriate labelling and instructions for use. In addition, the regulations establish a
European database on medical devices EUDAMED to enhance transparency and
traceability of medical devices on the EU market. They also set out the obligations of
economic operators, such as importers and distributors and the requirements for clinical
evaluation and post-market surveillance.
4.5.3 MEDICAL DEVICE SOFTWARE
The MDR requirements for medical device software include the following:
• The first step in the MDR process is to classify the software according to its risk level. This is done based on the intended use of the software and the potential risks associated with its use.
• Medical device software must undergo a clinical evaluation to demonstrate its safety and effectiveness. This includes verifying the intended use, clinical performance and safety of the software81.
• Risk management is an essential part of the MDR requirements. The manufacturer must identify and evaluate all potential risks associated with the software and take measures to mitigate them.
• Once the software is on the market, the manufacturer must monitor its performance and report any adverse events to the regulatory body.
• The manufacturer must create and maintain technical documentation that demonstrates compliance with the MDR requirements. This includes documentation on the software design, testing and validation.
• The manufacturer must establish a quality management system that ensures the software is designed, developed and maintained in compliance with the MDR requirements.

80 With reference to new rule 11 of MDR for software devices: many software devices of class I according to the old Directive 93/42/EEC have become at least class IIa according to the MDR, which implies assessment by a notified body.
81 MDCG 2020-1 Guidance on clinical evaluation (MDR) / Performance evaluation (IVDR) of medical device software
Note:
• Not all software used in healthcare is a medical device and therefore subject to the MDR/IVDR. For example, a software-based system primarily intended to store and transfer patient information generated in association with the patient's intensive care treatment is not qualified as a medical device (see guideline MDCG 2019-11).
• Software driving or influencing the use of a (hardware) medical device may be qualified as an accessory for a (hardware) medical device82.
The MDR introduces more stringent requirements for medical device software. Any
software providing prediction or prognosis of a disease or medical condition falls under the
scope of the MDR. Medical device software is mainly classified as a medium-low- to high-risk device according to the new criteria83, clinical evaluation requirements are more explicit than in the previous directive84 and post-market surveillance obligations are more strictly defined. Manufacturers must address more explicit and stringent requirements before and after introducing their software on the market. In the Medical Device Regulation, software and related AI algorithms are qualified as a medical device. The MDR does not provide any specific requirement for AI software using Machine Learning.
The lack of explicit requirements and regulatory guidance is challenging for manufacturers
because AI-enabled medical devices have unique characteristics that are not adequately
addressed by current regulatory frameworks in the EU. For example, AI-enabled devices
often use complex algorithms to analyse data and make decisions, which makes it difficult to
validate and verify their safety and effectiveness.
The MDR does state the importance of a comprehensive evaluation of AI accordingly, i.e.
the same steps as described above for traditional medical device software: intended use,
potential risks, existing scientific and technical knowledge, and validation of the AI
performance:
• The intended use of AI should be clearly defined, including its clinical purpose and the patient population for which it is intended;
• A risk assessment should be conducted to identify and evaluate all potential risks associated with the AI-enabled medical device. This includes considering the complexity of the AI algorithm, the quality of the data used to train the algorithm and the potential for the algorithm to make incorrect or biased decisions;
• The developer/manufacturer should review the existing scientific and technical knowledge related to the use of AI. This includes considering the latest research and developments in AI and medical devices, as well as any relevant industry standards and best practices;
• Based on the results of the risk assessment and the review of scientific and technical knowledge, the state of the art of the AI-enabled medical device should be determined. This includes considering the current level of technological development, e.g. TRLs85, the potential for innovation and the level of competition in the market;
• Finally, the AI-enabled medical device should be validated to ensure that it meets the requirements of the MDR and is safe and effective for its intended use. This includes conducting clinical studies, assessing the device in simulated use environments or so-called living labs86 and verifying that the device meets all relevant regulatory requirements in a meaningful real-life context87.

82 Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 – MDR and Regulation (EU) 2017/746 – IVDR
83 MDR, Annex VIII, rule 11 and MDCG 2021-24
84 MDR, Article 61, Annex XIV and Annex XV
The Medical Device Coordination Group MDCG and the International Medical Device
Regulators Forum IMDRF (Artificial Intelligence Medical Devices Working Group) have
provided suggestions for guidance on definitions, requirements and criteria as formulated in
MDCG 2021-5 88 and IMDRF/AIMD WG/N67 on Machine Learning-enabled Medical
Devices 89 . Pending further guidance and the formulation of common criteria and
specifications, the development, conformity assessment and implementation of AI based
software in medical devices relies on contextual interpretation of the requirements and the
state-of-the-art analysis. Note: contextual interpretation of AI medical device software refers
to the process of analysing and understanding the context in which the software, i.e. the AI, is being used and how that context affects its performance and safety.
4.6 ARTIFICIAL INTELLIGENCE: THE AI ACT90
The AI Act is a flagship legislative proposal to regulate Artificial Intelligence based on its
potential to cause harm. With the objective to ensure safe use of Artificial Intelligence with
regards to the EU Charter of Fundamental Rights, Non-Discrimination and Gender Equality, the European Commission published the AI Act proposal on 21 April 202191. The EU lawmakers reached a political agreement on 27 April 2023. On Thursday 11 May 2023, the Internal Market Committee and the Civil Liberties Committee adopted a draft negotiating mandate on the first-ever rules for Artificial Intelligence in the European Parliament92. On 21 May 202493 the Council of the EU approved this ground-breaking law aiming to harmonise rules on Artificial Intelligence.

85 https://ec.europa.eu/research/participants/data/ref/h2020/wp/2014_2015/annexes/h2020-wp1415-annex-g-trl_en.pdf
86 https://futurium.ec.europa.eu/en/digital-compass/regions/best-practice/living-ineu-movement-needs-you
87 https://www.ema.europa.eu/en/about-us/how-we-work/big-data/data-analysis-real-world-interrogation-network-darwin-eu
88 Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 – MDR and Regulation (EU) 2017/746 – IVDR
89 International Medical Device Regulators Forum 2022 https://www.imdrf.org/documents/machine-learning-enabled-medical-devices-key-terms-and-definitions
90 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206&qid=1680543339434
It should be noted that the Council of Europe is currently preparing a treaty to ensure that
human rights, democracy and the rule of law are protected and promoted in the digital
environment94, while the AI Act is a regulation which considers AI as a product with concrete
requirements for AI development, application, transparency, etc.
4.6.1 A LAYERED FRAMEWORK FOR RISK ASSESSMENT AND CLASSIFICATION
This act aims to be a horizontal regulation like General Data Protection Regulation (see
Section 4.7 below)95. One of the key objectives of the EU AI Act is to ensure that AI-systems
are developed and used in a way that is safe and beneficial for individuals and society as a
whole: a significant risk is defined as "a risk that is significant as a result of the combination of its severity, intensity, probability of occurrence and duration of its effects and its ability to affect an individual, a plurality of persons or to affect a particular group of persons". To this
end, the Act outlines a regulatory framework for AI that is based on risk assessment and
classification which consists of four main layers:
• The 1st layer includes a list of AI practices that are considered prohibited ('prohibited AI practices') and are therefore not allowed in the EU. These include AI-systems that are used to manipulate people's behaviour, create social credit scores or violate human dignity, or other applications deemed to pose an unacceptable risk.
• The 2nd layer includes a list of AI-systems that are considered high-risk and are therefore subject to additional requirements and controls. This includes AI-systems used in critical infrastructure, transportation, healthcare and other areas where there is a high potential for harm or risk to human rights.
• The 3rd layer includes a list of AI-systems that are considered minimal risk and are therefore subject to less stringent requirements. This includes AI-systems used in consumer goods, such as chatbots and voice assistants.
• The 4th layer includes requirements for governance and oversight of AI-systems, including requirements for transparency, documentation and monitoring. This includes requirements for human oversight of high-risk AI-systems, such as medical devices, and the creation of national supervisory authorities to oversee compliance with the regulation.

91 AI Act (Proposal for a Regulation of The European Parliament and of The Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts), which was proposed by the European Commission in April 2021
92 https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence
93 https://www.consilium.europa.eu/en/press/press-releases/2024/05/21/artificial-intelligence-ai-act-council-gives-final-green-light-to-the-first-worldwide-rules-on-ai/
94 https://www.coe.int/en/web/artificial-intelligence
95 Regulation (EU) 2016/679 of the European Parliament and the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data and repealing Directive 95/46/EC (General Data Protection Regulation). 2016, Official Journal of the European Union, L 119, p. 1–88.
In addition, on 17 July 2020, the High-Level Expert Group on Artificial Intelligence (AI HLEG) presented its final Assessment List for Trustworthy Artificial Intelligence96, which is based on seven key requirements97. The seven requirements should be considered the ethical foundation of the AI Act and were translated into provisions of the Act. These seven requirements are:
• Human agency and oversight.
• Technical robustness and safety.
• Privacy and data governance.
• Diversity, non-discrimination, and fairness.
• Environmental and societal well-being.
• Accountability.
• Transparency.
Transparency should be established through a set of measures such as interpretability and
explainability, communication, auditability, traceability, information provision, record-keeping,
documentation as well as data management and data governance. Transparency measures
should always be contextualised with the accountabilities of involved actors, e.g. AI developers, manufacturers, notified bodies, healthcare professionals and patients (Kiseleva et al., 2022).
Note: the AI Act mainly concerns the use of AI-systems and less their development.
4.6.2 IMPLICATIONS FOR MEDICAL DEVICES
After three years of negotiations, the EC reached a landmark moment with the publication of the AI Act in the Official Journal of the EU98. The focus now formally shifts to its implementation. The AI Act will not be applied separately from other relevant regulation and legislation: the medical technology industry is required to interpret the new law in conjunction with existing frameworks, i.e. the MDR and IVDR. Currently, the scope of the AI Act mainly covers aspects of the application of AI-systems and, to a lesser extent, their development. Several key areas will need to be addressed over the course of time:
• Alignment of AI Act, MDR and IVDR processes and procedures, using the MDR and IVDR frameworks where possible. Standardisation will play a critical role in this, and as such, consideration is needed of existing international and vertical standards (please see Chapter 5 on Standards).
• A clear and streamlined designation process for notified bodies, leveraging existing MDR and IVDR software codes: this ensures a smooth designation process, avoiding unnecessary delays.
• A framework for the development of AI-systems should be further established and refined.
• The continuity of a clear pathway and criteria for clinical validity and performance studies for AI-enabled medical technologies, to ensure medical technologies, including those employing AI, are safe, perform as intended, and provide clinical benefit to patients and healthcare professionals (WHO, 2021).

96 https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
97 https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment
98 https://www.consilium.europa.eu/en/press/press-releases/2024/05/21/artificial-intelligence-ai-act-council-gives-final-green-light-to-the-first-worldwide-rules-on-ai/
Further, AI-based medical devices which are classified as high-risk applications will be subjected to safety requirements and ex-ante/pre-market conformity assessment. The AI Act also requires that developers of high-risk AI-systems in healthcare ensure that their products are transparent, explainable and auditable.
In addition, the AI Act stipulates that individuals must be provided with clear and accessible
information about the AI system and its potential impact on their health and well-being i.e.
the obligation to inform, explain, and educate. The EU AI Act strives for a balance between
promoting innovation in AI and protecting individuals' rights and interests, including
those related to health.
More specifically, in the context of healthcare the EU AI Act identifies certain types of AI-systems as high-risk, including those used for:
• medical diagnosis,
• treatment, and
• prediction of health outcomes.
The AI Act aims to address the safety risk of AI, while the overall integration of AI in the medical device will be assessed according to the MDR. Conformity assessment regarding the AI Act requirements will be integrated into the overall MDR conformity assessment to ensure better efficiency and clarity. It is expected that the related harmonisation of standards, publication of common specifications and specific guidance documents by the relevant actors, such as the standardisation bodies and the MDCG99, will be further intensified to address the current lack of common criteria and specifications for the development, conformity assessment and implementation of AI-based software in medical devices.
Considering that medical devices use and generate data, they are often part of a digital service, i.e. a product-service combination. The DSA could also create new obligations for medical device manufacturers who use online digital health service platforms to functionally integrate their products and services. The DSA could require the actors responsible for these platforms to provide more information about how they collect and use patient data, including data for the use of AI and the development of algorithms.
99 Ongoing guidance development and deliverables of MDCG Subgroups – October 2021
The DMA and DSA might require more transparency on cost and pricing models in relation to product-service combinations which are integrated and offered through digital platforms: e.g. to protect people from hidden costs. Under the DMA, gatekeeper platforms (such as
online marketplaces or app stores) are subject to certain obligations to promote
transparency and fairness, such as providing clear and accurate pricing information,
preventing unfair pricing practices, and ensuring that users are not locked into contracts
or subscriptions without their consent. These rules can help protect people from hidden costs
that may be associated with buying medical devices online, such as additional fees for
shipping or handling100.
The DSA also aims to protect people from hidden costs in the service and supply chain by
requiring digital service providers to provide clear and accessible information about the costs
of their services, including any additional fees or charges. Providers must also obtain users'
explicit consent before charging them for any additional services or features. This can
help protect people from unexpected costs associated with using digital health services or
products, such as hidden fees for accessing certain features or data101.
4.7 DATA PROTECTION AND PATIENT PRIVACY
The data protection framework refers to the legal and regulatory framework that governs
the processing of personal data within a particular jurisdiction. It consists of laws, regulations
and policies that are designed to protect individuals' privacy and personal data and to ensure
that their data is processed fairly and lawfully.
In the European Union, the data protection framework is primarily governed by the General
Data Protection Regulation GDPR and the Data Protection Law Enforcement Directive
DPLED.
The GDPR is a regulation that sets out rules for how personal data of individuals in the EU
must be collected, processed and stored. The GDPR applies to all EU member states and
regulates the processing of personal data by both public and private sector organisations,
regardless of their location, also outside the EU, which process the personal data of EU
residents. Note: The protection offered by the GDPR travels with the data, meaning that
the rules protecting personal data continue to apply regardless of where the data is ultimately
handled and processed.
This also applies when data is transferred to a country which is not a member of the EU102.
The GDPR requires organisations to obtain consent from individuals for data processing, implement measures to protect data and notify authorities of any data breaches. It also gives individuals the right to access and control their personal data. In addition to the GDPR, there are several other EU directives and regulations that apply to data protection and patient privacy in healthcare settings, including:
• The ePrivacy Directive, which regulates the processing of personal data in electronic communications, including email, messaging and internet telephony. It applies to healthcare providers that use electronic communications to interact with patients.
• The Clinical Trials Regulation (please see also the section Healthcare above), which describes the rules for the conduct of clinical trials of medical devices and other healthcare products. It includes specific provisions for the protection of patient privacy and the handling of personal data.
• The Medical Device Regulation (MDR), which includes provisions related to data protection and privacy, particularly in relation to the use of connected medical devices and the processing of patient data.

100 https://www.europarl.europa.eu/news/en/headlines/society/20211209STO19124/eu-digital-markets-act-and-digital-services-act-explained
101 https://www.europarl.europa.eu/news/en/press-room/20220701IPR34364/digital-services-landmark-rules-adopted-for-a-safer-open-online-environment
102 https://commission.europa.eu/law/law-topic/data-protection/reform/rules-business-and-organisations/obligations/what-rules-apply-if-my-organisation-transfers-data-outside-eu_en
The DPLED describes the rules for how personal data must be processed and stored by law
enforcement agencies in the EU. It establishes safeguards for data processing and storage
and gives individuals the right to access and correct their personal data held by law
enforcement agencies.
Further, in the context of the EU data protection framework, a data subject is an individual
who can be identified, directly or indirectly, by reference to personal data. This can
include information such as a person's name, identification number, location data, or other
factors specific to their physical, physiological, genetic, mental, economic, cultural, or social
identity. Under the EU data protection framework, data subjects have several rights
including:
• the right to access, rectify and erase their personal data;
• the right to object to the processing of their data;
• and the right to data portability103.
Data subjects also have the right to be informed about the collection and processing of their
personal data and must give explicit consent for its use in certain circumstances.
The access, use, and sharing of personal data by entities other than data subjects should
occur in full compliance with all data protection principles and rules. Moreover, products
should be designed in such a way that data subjects are offered the possibility to use devices
anonymously or in the least privacy-intrusive way possible.
The data protection framework aims to establish harmonised rules on the access to and use
of data generated from a broad range of products and services, including connected objects
Internet-of-Things, medical or health devices and virtual assistants.
103 https://gdpr-info.eu/art-20-gdpr/
Note: Laws such as the GDPR in Europe and HIPAA (Health Insurance Portability and
Accountability Act) in the United States apply equally to medical devices. Failure to comply
with these regulations can result in significant fines and reputational damage.
4.7.1 IMPLICATIONS FOR MEDICAL DEVICES
The GDPR and other EU data protection and patient privacy regulations require
manufacturers of medical devices and AI-systems to prioritise the privacy and security of
personal data. They must implement measures to protect personal data, be transparent
about their data practices and obtain explicit consent from patients for the collection and
processing of their personal data. The implications are:
• Medical devices and AI-systems must minimise the amount of personal data collected and processed. This means that manufacturers must limit the collection of personal data to what is necessary for the algorithm, device or system to function properly (see the sketch after this list).
• Medical devices and AI-systems must be designed with privacy in mind from the outset. This includes implementing privacy and security features that protect personal data, such as encryption and access controls. Note: when considering this in design, it is important not only to protect data through expected operation of the device, but also to protect it from security threats both known and unknown at the time of manufacture.
• Patients have the right to know what personal data is being collected by medical devices and AI-systems, how it is being used and with whom it is being shared. Manufacturers must be transparent about their data practices and provide patients with clear and concise information about their data processing activities.
• Patients must give explicit and informed consent for the collection and processing of their personal data by medical devices and AI-systems. Manufacturers must obtain this consent and must provide patients with the ability to withdraw their consent at any time.
• Patients have the right to access, rectify and delete their personal data collected by medical devices and AI-systems. Manufacturers must provide patients with a way to exercise these rights.
• Medical devices and AI-systems must be designed to ensure the security of personal data. This includes implementing technical and organisational measures to protect against unauthorised access, disclosure or destruction of personal data.
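As an illustration of the data-minimisation principle in the first point above, the sketch below strips a device record down to an explicit allow-list of fields before anything leaves the device. Field names are hypothetical.

```python
# Illustrative data minimisation: only an explicit allow-list of fields
# leaves the device. Field names are hypothetical.

ALLOWED_FIELDS = {"device_id", "glucose_mg_dl", "timestamp"}

def minimise(record: dict) -> dict:
    """Drop everything not strictly needed for the device's function."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "device_id": "cgm-7",
    "glucose_mg_dl": 101,
    "timestamp": "2024-07-01T10:00:00Z",
    "gps_position": (52.37, 4.90),   # not needed -> never transmitted
    "contacts": ["..."],             # not needed -> never transmitted
}
print(minimise(raw))
```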
In addition, the Cyber Resilience Act (CRA)104 aims to reinforce, through common cybersecurity rules, the security of connected digital products being introduced on the EU market. MedTech Europe, as sector representative, supports that the CRA105:
• Sets harmonised baseline cybersecurity requirements for connected digital products and services;
• Recognises the capabilities of existing sectoral legislation, specifically the MDR and IVDR, including the associated guidance of the Medical Devices Coordination Group (MDCG);
• Enshrines a sectoral approach to medical technology cybersecurity and a consistent regulatory interplay with existing and future EU law;
• Contributes to the security of digital product users and patients, while equally promoting innovation and the provision of state-of-the-art technologies, i.e. a balance of interests.

104 Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act), (2019) Official Journal of the European Union, L 151/1
105 https://www.medtecheurope.org/digital-health/cybersecurity/
In response to the growing demand for better and secure access to health data across the
European Union, the European Commission initiated the European Health Data Space.
4.8 THE EUROPEAN HEALTH DATA SPACE
The aim of the European Health Data Space EHDS is to create a secure and integrated
data infrastructure for health data, which can be used to support research, public health, and
the provision of healthcare services106. The EHDS regulation was proposed to create data
infrastructures for the use and re-use of health data107.
The EC has shown a strong commitment to the establishment of a general European Data Space, with a more specific focus on several strategic sectors. In spring 2024, the European Parliament and the Council of the EU reached a political agreement on the EHDS proposal, but the regulation had not yet been formally adopted at the time of publication of this report108.
The potential of the EHDS for better person-centred care lies in improving interoperability between information systems, optimising healthcare services and ultimately health systems, improving the management of diseases and personal health management, and achieving a higher quality of life for patients (Genovese et al., 2022, Guldemond, 2013).
A common EHDS will be built based on strong data governance, data quality, and
interoperability. It aims to promote greater exchange and access to diverse types of health
data. This will not only support healthcare delivery, the so-called primary use of data, but
will also enhance health research and health policy-making: so-called secondary use of
data (Hazarika, 2020).
The EHDS presents several opportunities and implications for medical devices109110:
• The EHDS aims to make diverse categories of health data accessible, which can support healthcare delivery, health research, innovation, policymaking, regulation and personalised medicine. This increased accessibility can benefit medical device manufacturers by providing them with more data to inform the design and improvement of their products.
• The EHDS is working towards improved interoperability, meaning that health professionals will be able to access a patient's medical history across borders. This has implications for medical devices, as they will need to be designed to comply with common standards and practices to ensure their data can be integrated into the EHDS.
• The EHDS could lower barriers for small and mid-sized enterprises (SMEs) to use and reuse high-quality health data sets for research and innovation purposes. This could lead to the development of more effective and personalised medical devices.
• The EHDS will empower individuals across the EU to fully exercise their rights over their health data. This could lead to increased demand for medical devices that allow patients to monitor and manage their own health data.
• Medical device manufacturers will need to ensure that their products comply with the rules and regulations set out by the EHDS. This could involve changes to the way data is collected, stored and shared by these devices111.

106 EHDS https://www.eesc.europa.eu/sites/default/files/2024-03/dorazil_ehds_presentation_2024-03.pdf
107 EUR-Lex - 52022PC0197 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52022PC0197
108 https://ec.europa.eu/commission/presscorner/detail/en/IP_24_2250
109 Questions and answers - EHDS https://ec.europa.eu/commission/presscorner/api/files/document/print/en/qanda_22_2712/QANDA_22_2712_EN.pdf
110 EHDS concerns and opportunities https://www.eucope.org/the-european-health-data-space-challenges-and-opportunities-with-respect-to-the-different-proposals-currently-subject-to-the-trilogue-negotiations/
4.8.1 PRIMARY AND SECONDARY USE OF DATA
The EHDS regulation has two main components:
• Primary use – this refers to the use of health data by the individuals (i.e. patients) to whom the data belongs and by healthcare providers. The EHDS aims to empower individuals to take control of their health data and facilitate the exchange of data for the delivery of healthcare across the EU.
• Secondary use – this refers to the re-use of health data for research, innovation, policy-making, and regulatory activities. Secondary use concerns data from health records, public registries, clinical studies, research questionnaires and social, administrative, genetic, genomic, or biomedical data such as biobanks.
Data for secondary use would only be shared in aggregated, anonymised and
pseudonymised forms. Accordingly, the EHDS should provide a consistent, trustworthy, and
efficient system for reusing health data.
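Note: as a minimal sketch of the difference between these forms of data, the fragment below pseudonymises patient identifiers with a salted hash and aggregates records so that no individual-level rows remain. All names and values are invented, and the technique shown falls well short of a full anonymisation method; it only illustrates the distinction.

```python
import hashlib
from collections import Counter

SECRET_SALT = b"held-only-by-the-data-controller"  # hypothetical secret

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a salted hash (pseudonymisation)."""
    return hashlib.sha256(SECRET_SALT + patient_id.encode()).hexdigest()[:16]

records = [("anna", "diabetes"), ("ben", "diabetes"), ("cara", "asthma")]

# Pseudonymised: individual rows remain, but identifiers are replaced.
pseudonymised = [(pseudonymise(pid), dx) for pid, dx in records]

# Aggregated: only counts per diagnosis remain, no individual rows.
aggregated = Counter(dx for _, dx in records)
print(pseudonymised, dict(aggregated))
```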
Health Data Access Bodies (HDABs) are entities proposed under the European Health Data Space Regulation112 and are responsible for implementing accountability and security measures in relation to the EHDS. This is important for maintaining trust in the system and protecting patient privacy. Health Data Access Bodies are tasked with issuing permits for accessing health data. Access will only be granted if the requested data is used for specific purposes, in closed, secure environments and without revealing the identity of the individual (Saelaert et al., 2023).
111 EHDS Challenges and Opportunities. https://www.sieps.se/globalassets/publikationer/2024/2024_2epa.pdf
112 Health Data Access Bodies https://hadea.ec.europa.eu/calls-proposals/support-health-data-access-bodies-foster-efficient-pathways-ai-healthcare_en
4.8.2 REGULATED CONTROL AND DATA ACCESS
Another aspect is who should have access to the data, apart from researchers. The availability of high-quality and integrated health data in Europe will stimulate cross-border collaboration and contribute to the development of new drugs and devices, the setting of health policy goals and measures, and the management of epidemics and outbreaks (Scardoni et al., 2020), as well as improving the quality of health and social care (Secinaro et al., 2021).
The EHDS will give citizens complete control over their own health data. They will be able to add information, rectify errors, restrict access, and find out which health professionals have accessed their data.
The EHDS includes opt-out rules for primary use, where Member States can offer a complete opt-out from the infrastructures to be built under the EHDS113. For secondary use, the text includes rules on opting out that strike a balance between respecting patients' wishes and ensuring that the right data is available to the right people in the public interest114.
The EHDS builds on existing regulations such as the previously discussed GDPR, the Data Governance Act and the Network and Information Systems Directive. However, considering the sensitivity of health data, the EHDS provides specific sectoral rules115.
National Health Data Access Bodies
Industry and researchers will require a permit from national Health Data Access Bodies, and even then, only the data needed for a specific project will be provided. These bodies play a key role in ensuring that health data is accessed and used in a manner that is secure, ethical, and compliant with relevant regulations. They are responsible for reviewing and approving requests for access to health data, ensuring that these requests are justified and that appropriate safeguards are in place to protect the privacy and confidentiality of the data116.
National Health Data Access Bodies are set up at the national level to review requests for access to data and issue data permits. The nature and extent of the powers of the Health Data Access Body, which will grant and control access to the data, have not yet been decided117.
113 https://eurohealthnet.eu/wp-content/uploads/publications/2022/2210_policybriefing_ehds.pdf
114 https://health.ec.europa.eu/ehealth-digital-health-and-care/european-health-data-space_en
115 https://data.europa.eu/en/news-events/news/european-health-data-space-what-you-need-know
116 https://www.european-health-data-space.com/
117 https://www.consilium.europa.eu/en/press/press-releases/2023/12/06/european-health-data-space-council-agrees-its-position/
4.9 DEVELOPMENT, VALIDATION, AND IMPLEMENTATION OF AI FOR MEDICAL DEVICES
The development, validation, and implementation of AI for medical devices will be discussed according to the life-cycle process, which is an important aspect of the MDR and IVDR framework (see figure 13).
Medical device companies that provide products for the EU market are now responsible for meeting new, comprehensive requirements and compliance expectations during the entire lifecycle of their products, including software and AI (figure 13). The foundation of the MDR legislation is based on historical product lifecycle issues and quality concerns. This new legislative framework is designed to increase both regulatory education and corporate accountability across the entire industry118. Economic operators in the supply chain become responsible for reporting complaints to the device manufacturer, which includes registering medical devices distributed across their supply chain to healthcare providers.
Figure 13 Lifecycle approach to medical devices
118 Reg. 2017/745, Articles 25-34:34-40
4.9.1 DEVELOPMENT
New devices as well as AI-systems are often developed by (academic) research institutes and venture-funded start-up companies, while larger companies tend to ‘innovate’ through iterations of existing devices and related technologies. In many cases, devices are developed through a cyclic process where each stage of development is aligned with the technological possibilities and requirements according to the market needs, regulations, and clinical evidence/proof119. There is no consensus in the literature regarding the number of life-cycle stages of medical technology development, and this linear or step-by-step process rarely occurs in practice. Nonetheless, there are several consecutive and interlinked stages from idea generation to the device becoming obsolete and being disposed of120.
New products are usually developed with a focus on an unmet clinical challenge supported by new ideas, evidence, and technological possibilities. In the first stages, the device concept will continually be tested and redesigned, and a team of engineers will typically collaborate closely with clinical experts to bring the device prototype towards a viable end-product121. These initial stages are also decisive for whether the device development and testing will continue, or the project will be stopped due to, e.g., an unlikely financial return on investment, technical non-viability, or subpar testing results. These preclinical development stages tend to take between a few months and a few years, depending on the device and the success of the realisation of the concept idea122. In this stage, the patenting process is initiated.
Use-cases
Use-cases play a key role in the development of AI-based solutions. A use-case is a methodology to specify the requirements of an AI system. A use-case illustrates the functionality of an AI system through a set of behaviours resulting from interaction with its inputs, which correspond to the intended goals: it is a practical framework for understanding how AI can be applied to solve real-world problems in healthcare (Suri, 2022, Tyrväinen et al., 2018).
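Note: a use-case specification of this kind can be captured in a simple structured form. The sketch below is a hypothetical illustration of the elements named above (intended goal, inputs, expected behaviours); the field names are invented and not drawn from any standard.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """Hypothetical structured description of an AI use-case."""
    name: str
    intended_goal: str
    inputs: list[str] = field(default_factory=list)
    expected_behaviours: list[str] = field(default_factory=list)

amr_monitoring = UseCase(
    name="AMR monitoring and prevention",
    intended_goal="Early detection of antimicrobial-resistant infections",
    inputs=["lab culture results", "admission records"],
    expected_behaviours=["flag suspected AMR cases", "support outbreak planning"],
)
print(amr_monitoring.intended_goal)
```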
119 European Commission. Manual on borderline and classification in the Community regulatory framework for medical devices, https://ec.europa.eu/docsroom/documents/29021 (2018).
120 Santos I.C., et al. Medical device specificities: Opportunities for a dedicated product development methodology. Expert Review of Medical Devices 2012; 9: 299–311.
121 Pietzsch J.B., et al. Stage-Gate Process for the Development of Medical Devices. J Med Device 2009; 3: 021004.
122 Kaplan A.V., Baim D.S., Smith J.J., et al. Medical Device Development. Circulation 2004; 109: 3068–3072.
Figure 14 Interrelation structure of AI application areas for AI in hospitals
Thematically formulated use-cases for the development of AI-systems in healthcare are illustrated in figure 14 (Klumpp et al., 2021). These use-case formulations are of a more general nature and serve to define a scope or domain of interest.
A more concrete example of a use-case is the monitoring and prevention of antimicrobial
resistance (AMR). This use-case originates from the pre-commercial procurement project
Dynamo - Modelling and dynamic assessment of integrated health and care pathways
enhancing response capacity of health systems123.
The city of Treviso in Italy has a significant elderly population, whose susceptibility to infections is increased, heightening the risks associated with AMR. Areas with high population density, like Treviso, facilitate a more rapid spread of resistant infections. This creates an additional burden on healthcare facilities, which are already struggling with the complexities of treating drug-resistant illnesses. The early detection and monitoring (medical devices) of infections with AMR pathogens, together with AI-enabled coordination and planning of anticipatory measures, would help to manage and control AMR infections; such an approach could also be used for other types of infectious disease outbreaks.
123 https://dynamo-pcp.eu/
Another example of a concrete use-case is cervical cancer screening formulated by WHO
in the report Generating Evidence for Artificial Intelligence Based Medical Devices: A
Framework for Training Validation and Evaluation (WHO, 2021):
WHO’s global strategy to accelerate the elimination of cervical cancer as a public health
problem makes cervical cancer screening a suitable and justifiable use-case illustration.
More than 85% of the 311 000 women who died of cervical cancer globally in 2018 lived in low- and middle-income countries (LMICs). The WHO states: ‘When diagnosed, cervical cancer is one of the most
successfully treatable forms of cancer, as long as it is detected early and managed
effectively. Cancers diagnosed in late stages can also be controlled with appropriate
treatment and palliative care. With a comprehensive approach to prevent, screen and treat,
cervical cancer can be eliminated as a public health problem within a generation’.
Other use-cases for AI-systems in the context of medical devices and IVDs are, for example, related to the application of medical imaging124, gastrointestinal endoscopy, colonoscopy, breast and prostate cancer screening, and others125, 126.
Methodologies for the development of algorithms and AI
Figure 15 shows a typical approach for performing a project for the development of algorithms and AI. Such a project usually starts with the unmet clinical challenge, which defines the use-case. Considering that the AI solution should be patient/person-centred, the needs assessment as well as the consecutive stages of development should follow an inclusive co-creation process.
User-centred co-creation
Co-creation and user involvement are important for the development of AI solutions for health and social care. They ensure that the resulting AI solution is designed with the end-users in mind and that it is more likely to meet their needs and preferences (Gao and Huang, 2019, Sanders and Stappers, 2008). Involving the end-users during the development process can
help to identify the most relevant clinical questions and ensure that the technology addresses
the most pressing healthcare challenges (Visram et al., 2023, Guldemond, 2011).
Furthermore, involving patients in the development process can also help to ensure that the
technology is designed in an ethical and responsible way, taking into account issues such
as privacy, data protection and the potential for unintended consequences (Zhu et al., 2022,
Habers and Overdiek, 2022).
124 https://www.onixnet.com/blog/how-ai-powered-medical-imaging-is-transforming-healthcare/
125 https://research.aimultiple.com/healthcare-ai-use-cases/
126 https://medwave.io/2024/01/how-ai-is-transforming-healthcare-12-real-world-use-cases/
Figure 15 Methodology for the development of algorithms and AI.
Studies have shown that involving patients and other stakeholders in the development process can lead to more effective and efficient development, as well as greater acceptance and adoption of the technology or AI solution and improved outcomes for patients (Wen et al., 2022). Accordingly, the co-creation approach facilitates not only user-centred development but also implementation.
As described in Chapter 2 on Healthcare, human-centric or person-centred care does not result from single solutions or discrete products (i.e. medical devices); such solutions are part of a whole care process or integrated service for a population with certain needs within a specific socio-economic context, e.g. a community with a certain infrastructure in a country with specific health system characteristics (Terry et al., 2022, Moore et al., 2023). Patient needs, individual values, the socio-economic context, and the place of living, including the national values, legislation and regulation, are inherently inter-related. Accordingly, AI solutions should ideally be developed in such a real-world context to be optimally person-centred with the end-users in mind (Wolf, 2016).
Living Labs and Innovation Eco-systems
So-called (community-based) living labs are useful for co-creating and involving users in
the development of AI solutions which reflect the real-world context (Ståhlbröst, 2008). Living
labs are real-life settings where researchers, industry and users collaborate to develop and
test new technologies, products and services in a user-centred way (Van der Walt et al.,
2009). The living lab, as an approach, methodology and environment, has been practised since the early 2000s (Kim et al., 2020), also for life-science and medical technology innovations (Guldemond, 2010, van Geenhuizen et al., 2018).
As such, living labs provide a valuable platform for involving end-users and stakeholders in the co-creation and testing of AI solutions, to ensure that they are effective, user-friendly and meet the needs of all stakeholders, i.e. process redesign rather than product design (Ben-Tovim et al., 2008). Living labs can also provide a valuable source of feedback and validation for AI solutions, helping to ensure that they are fit for purpose and meet the highest standards of safety, quality and usability (Bessariya, 2022). Note: co-creation in living lab settings or real-life contexts will facilitate the process of ecological validation.
The use-case-inspired goals and objectives should define the data requirements. As explained in the previous paragraphs, historical data is often available but potentially biased, while collecting new or more appropriate data is expensive and time-consuming, considering that representative data generally needs sufficient variables, indicators, and individuals, as well as diversity, quality, and granularity. During development, the initial AI solutions are tested through various stages of statistical inference.
4.9.2 VALIDATION
As mentioned, validation is an especially important aspect for healthcare providers and patients127. Validation of AI-driven medical devices is the process of verifying that the medical device and related AI-system meet their intended use and perform as expected in their intended environment (Roper et al., 2023).
The first step is to determine the identity and define the intended use of the AI-system and
its performance characteristics. This involves defining the user needs, intended use
environment, device specifications, and any regulatory requirements.
According to the MDR and the IVDR, medical devices (including software, algorithms, and AI) that qualify as standalone medical devices are divided into classes by rules based on their characteristics and risks. Classification of medical devices into a higher risk class often entails additional obligations for manufacturers and other economic operators. The classification rules are set out in annexes to the MDR, the IVDR and the AI Act:
Classification according to the MDR
The MDR uses a classification where medical devices are placed in one of the risk classes I, IIa, IIb or III (WHO, 2021). In total, the MDR uses 22 rules that can roughly be divided into rules concerning non-invasive devices, invasive devices, and active devices, complemented by a final category of special rules. When classifying a medical device, various criteria are used. The broad questions that manufacturers must answer in the classification process include:
• What is the purpose of the medical device? The purpose of the device is determined by the manufacturer and shall be included in the documentation regarding the device.
• What is the envisioned length of usage of the medical device? The duration of use is divided into three categories: temporary use, short-term use, and long-term use. Temporary use should not exceed 60 minutes, short-term use is limited to 30 days or less, and long-term use is use for more than 30 days. The duration of use is not to be confused with the time of application but is rather the time it takes for the device to achieve its intended effect (see the sketch after this list).
• Is the medical device invasive? A device is invasive if it penetrates all or part of the body, either through a body opening or body cavity or through the body surface. The part of the body on which the medical device has an effect should also be considered.
127 AI Hospital Monitor 2023, M&I/Partners https://mxi.nl/kennis/613/ai-monitor-ziekenhuizen-2023
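Note: to make the duration criterion concrete, the fragment below sketches how the three MDR duration-of-use categories described above could be expressed in code. The thresholds follow the rule as stated; the function and variable names are hypothetical, and the snippet is illustrative only, not a classification tool.

```python
from enum import Enum

class DurationClass(Enum):
    """MDR duration-of-use categories, as described above."""
    TEMPORARY = "temporary use (<= 60 minutes)"
    SHORT_TERM = "short-term use (<= 30 days)"
    LONG_TERM = "long-term use (> 30 days)"

def classify_duration(minutes_to_intended_effect: float) -> DurationClass:
    """Map the time a device needs to achieve its intended effect
    to an MDR duration category (illustrative, not legal advice)."""
    if minutes_to_intended_effect <= 60:
        return DurationClass.TEMPORARY
    if minutes_to_intended_effect <= 30 * 24 * 60:  # 30 days, in minutes
        return DurationClass.SHORT_TERM
    return DurationClass.LONG_TERM

# Example: a wearable sensor achieving its effect over 14 days is short-term use.
print(classify_duration(14 * 24 * 60).value)
```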
Classification according to the IVDR
The IVDR introduces a classification which uses risk classes A, B, C and D. Medical devices
belonging to the lowest class, class A, are considered to present low risk and do not require
the approval of a notified body before being admitted to the market. The IVDR contains fewer
rules than the MDR for classifying medical devices. The use of fewer rules for classification
has the consequence that medical devices for in-vitro diagnostics are more likely to be
classified in a higher risk class. It is expected that 85% of in-vitro diagnostic medical devices
will fall into a risk class where a conformity assessment by a notified body is required.
In summary, the classification of the device is based on:
1. the way it is being used;
2. the duration that the patient is exposed to the medical device;
3. the magnitude of risks to the patient when the device fails.
Classification of medical devices into a higher risk class often entails additional obligations
for manufacturers and other economic operators128.
Classification according to the AI Act
As described in the chapter ‘Medical technology’, the AI Act has a classification system based on the level of risk. The risk classification system includes four categories (a minimal mapping sketch follows this list):
1. Unacceptable risk, which includes AI-systems that pose a significant threat to the health and safety of individuals or that undermine fundamental rights.
2. High risk, which includes AI-systems that may pose a risk to health and safety or that may interfere with fundamental rights. Examples of high-risk AI-systems include those used in critical infrastructure, employment and education, law enforcement and migration, among others. Such high-risk AI-systems:
a. undergo a conformity assessment process to ensure they comply with the requirements of the AI Regulation;
b. must ensure data quality and representativeness for training and the decision-making process. They must also have appropriate human oversight to ensure that the AI system is functioning as intended and to intervene if necessary;
c. must be designed to provide transparency to the end-users. They must provide clear and meaningful information about how the AI system makes decisions and operates, including how it processes data and identifies errors. The AI system must also be able to trace its decision-making process, i.e. traceability;
d. must be accurate, reliable, and robust. They must be designed to operate within the intended environment and to perform their functions under various conditions and scenarios, i.e. adaptive algorithm or AI-system behaviour;
e. must maintain detailed documentation and record-keeping, including a description of the AI system's intended use, technical documentation, and risk management documentation;
f. should comply with notification and information obligations, i.e. manufacturers must notify the relevant authorities before and after placing the product on the market (post-market surveillance). They must also provide information to the authorities about the AI system's characteristics and how it meets the requirements of the AI Regulation.
3. Limited risk, which includes AI-systems that have a low level of risk and that are subject to certain transparency obligations:
a. AI system providers should provide clear and transparent information about the AI system's capabilities, limitations and intended use to its users;
b. if the AI system makes decisions that affect individuals, the provider should ensure that the decision-making process is explained in a clear and transparent manner;
c. providers should maintain records on the AI system's development, deployment, and operation, including any changes made to the system;
d. if requested by relevant authorities, the AI system provider should provide information, i.e. documentation, regarding the system's development, deployment, and operation;
e. providers should ensure that the AI system is subject to human oversight and intervention to ensure that it operates as intended and to address any errors or unintended consequences.
4. Minimal risk, which includes AI-systems that have a negligible risk and that are not subject to any additional requirements beyond existing laws and regulations.
128 Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32017R0745
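Purely as an illustration, the sketch below encodes the four risk tiers with a compressed summary of the obligations listed above, as one might represent them in an internal compliance checklist. The summaries are informal paraphrases, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four AI Act risk tiers described above."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Informal paraphrase of the obligations per tier (not legal text).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["placing on the market is prohibited"],
    RiskTier.HIGH: ["conformity assessment", "data quality and representativeness",
                    "human oversight", "transparency and traceability",
                    "accuracy, reliability and robustness",
                    "documentation and record-keeping",
                    "notification and post-market surveillance"],
    RiskTier.LIMITED: ["transparency information for users",
                       "explainable decision-making", "record-keeping",
                       "information to authorities on request", "human oversight"],
    RiskTier.MINIMAL: ["no additional requirements beyond existing law"],
}

for item in OBLIGATIONS[RiskTier.HIGH]:
    print("-", item)
```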
Any medical device software providing diagnosis, prediction, or prognosis of a disease or medical condition is subject to the scope of the MDR. Medical device software is generally classified as a medium-low to high-risk device according to the new rules129, clinical evaluation requirements are more explicit than under the previous directive130, and the post-market surveillance obligations have been sharpened. As a result, manufacturers must address more explicit and stringent requirements before and after placing their AI-based software on the market131.
129 MDR, Annex VIII, rule 11 and MDCG 2021-24
130 MDR, Article 61, Annex XIV (and Annex XV upon applicability)
131 https://www.medtecheurope.org/market-data/
However, the regulations do not provide specific requirements for all aspects related to AI-enabled medical devices. Further specification should lead to explicit requirements. The current combination of requalification and reclassification of AI medical device software is challenging for both manufacturers and notified bodies. New standards are being developed, but it will likely take some time before they are harmonised with the MDR, the IVDR and the AI Act. Awaiting further guidance from the MDCG and/or the issuing of common specifications, the conformity assessment of AI medical device software relies on contextual interpretation of the requirements and the state of the art (Courivaud and Suyuthi, 2022).
Once the classification and requirements are clear, a plan that outlines the validation
approach, acceptance criteria, testing methods, and documentation requirements should be
written. This plan should include the roles and responsibilities of those involved in the
validation process as well as timelines and resources needed.
The MDCG describes a four-step approach to performance and the establishment of (clinical) evidence132:
1. a valid clinical association (or scientific validity) between the software output and the intended use of the software;
2. technical performance;
3. clinical evidence, generated through clinical investigation such as trials, and/or additional information that may be used to establish the validity of the clinical association; and
4. performance updates, through post-market surveillance during the lifetime of the device.
Clinical testing and collection of evidence for safety and performance of AI
It is important to note that the specific requirements for clinical testing and evidence collection
may vary depending on the risk classification of the AI-based medical device and the
regulatory framework in the country or region where it will be marketed (WHO, 2021).
To obtain regulatory approval, it is necessary to have the appropriate evidence
demonstrating that the product is safe and performs as intended. The extent of the clinical
investigation requirements depends on the device characteristics. Generally, higher-class
devices require more intensive clinical investigations and evaluations while lower risk
devices might require only a literature review, or a review and a smaller trial133. Initial clinical
investigations are often more focused on technical safety and performance than on clinical
benefit.
The clinical trial requirements for the development of AI solutions depend on the type and intended use of the AI solution. If the AI solution is intended to be used as a medical device, then it will need to undergo clinical trials in accordance with the applicable regulatory requirements. There are important reasons why the traditional randomised clinical trial approach may not be suitable for the development and training of AI. The traditional randomised clinical trial relies on a pre-specified hypothesis, with strict data collection methods applied to homogeneous groups enrolled under strict inclusion criteria, focused on a single underlying mechanism with a limited number of outcome measures. This may not represent the population and context, and all the underlying mechanisms with interacting effects, that an AI system should incorporate to deliver meaningful effects for individual patients and healthcare systems. The quality of reporting of trials in AI validation is currently suboptimal (WHO, 2021). As reporting is variable in existing trials, caution should be exercised in interpreting the findings of some studies (Shahzad et al., 2022, Shelmerdine et al., 2021). Reporting guidelines for clinical trials evaluating AI interventions are needed (Liu et al., 2019).
132 MDCG 2020-01, Guidance on Clinical Evaluation (MDR) / Performance Evaluation (IVDR) of Medical Device Software
133 EC MEDDEV. 2.7.1 Rev.4: Clinical Evaluation: A Guide for Manufacturers and Notified Bodies Under Directives 93/42/EEC and 90/385/EEC. MEDDEV 271 Rev4 2016; 1–9
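Note: to illustrate the kind of statistical evidence typically reported when validating a diagnostic AI system, the sketch below computes basic performance metrics from a confusion matrix. The numbers are invented; real figures would come from a properly designed clinical investigation as discussed above.

```python
def diagnostic_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Basic performance metrics for a binary diagnostic AI system,
    computed from true/false positives and negatives on a test set."""
    return {
        "sensitivity": tp / (tp + fn),  # share of diseased cases detected
        "specificity": tn / (tn + fp),  # share of healthy cases correctly cleared
        "ppv": tp / (tp + fp),          # positive predictive value
    }

# Hypothetical validation set: 1,000 patients, 100 with the condition.
print(diagnostic_performance(tp=88, fp=45, fn=12, tn=855))
```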
Whilst from a regulatory perspective clinical investigations collect data in order to provide evidence of a medical device's compliance (e.g. safety, performance, benefit), for device developers the aim of clinical studies is to gain new, real-world data about safety and effectiveness (WHO, 2021).
Additionally, AI-systems may be designed to learn and adapt over time, making it difficult to accurately evaluate their performance using traditional clinical trial methods. Finally, AI-systems can be developed and deployed much faster than the typical duration of traditional clinical trials (e.g. 4-6 years); keeping up with the pace of innovation requires new and more flexible approaches to evaluation and evidence generation.
As described in the chapter Healthcare, there are obligations for the registration and use of data for AI development and clinical testing in the Good Clinical Practice (GCP)134 and Clinical Trials Regulation (CTR)135 guidelines. These guidelines ensure that clinical trials are conducted ethically and in accordance with established standards.
4.9.3 MARKET ACCESS
After the validation of an AI solution for medical devices is established, the manufacturer must go through a regulatory approval process before the product can be placed on the market and sold. In the EU, this involves a conformity assessment and obtaining a CE mark, which indicates that the product complies with relevant regulations and standards for safety, performance, and efficacy.
Regulatory challenges in approval
The emergence of AI-systems in medicine also creates challenges and questions: which regulators must the applicant pay attention to? Which AI-system in healthcare should be reviewed by which regulator? What evidence should be required to permit marketing of AI-based software as a medical device? How can we ensure the safety and effectiveness of AI-systems that may change over time as they are applied to new data and situations (Gerke et al., 2020, Muehlematter et al., 2021)?
134 https://www.ema.europa.eu/en/human-regulatory/research-development/compliance/good-clinical-practice
135 Regulation (EU) No 536/2014 https://health.ec.europa.eu/medicinal-products/clinical-trials/clinical-trials-regulation-eu-no-5362014_en
For the latter there exists the term bio-creep, which refers to the gradual expansion of the scope and purpose of a technology beyond its original intended use, often without the explicit consent or knowledge of the user. In the context of medical devices and AI, bio-creep may occur when an AI algorithm originally developed for one specific purpose is subsequently used for other purposes without appropriate validation or regulatory approval. This can lead to unexpected or unintended consequences, such as misdiagnosis or inappropriate treatment, and can undermine the trust and confidence of patients and healthcare providers in AI technology (Feng et al., 2021a, Pennello et al., 2021).
Conformity assessments and notified bodies
Conformity assessments can be performed by the manufacturers themselves in the case of class I and class A devices, and in other cases by notified bodies. A notified body according to the MDR and IVDR is a conformity assessment body designated in accordance with the regulations136. It is a private institution that is accredited by a Member State to assess whether medical devices comply with the provisions as set out in the MDR, IVDR or AI Act. The assessment is accepted in all other EU states. Typical examples of main bodies in Europe are: TÜV SÜD and DEKRA based in Germany, DNV based in Norway, SGS based in Switzerland, and BSI Group based in the UK137.
The main task that notified bodies perform is the conformity assessment of higher-classed AI-systems and medical devices, as mentioned above138. According to the regulations139, the proper functioning of notified bodies, as well as the proficiency of the notified body experts, is deemed crucial for ensuring a high level of health and safety protection and for citizen confidence in the system. As such, both the MDR and the IVDR aim to strengthen the position of the notified bodies in relation to manufacturers140.
Certification by a notified body usually takes from a minimum of twelve weeks up to several months, although with the implementation of the new MDR, IVDR and AI Act regulations, longer process times are foreseen due to the more rigorous requirements for clinical evidence and technical documentation141. Under the new regulations, medical device manufacturers must provide more extensive clinical data to demonstrate the safety and performance of their products, including those that utilise AI. This will most likely increase the time and resources required for clinical trials as well as other testing procedures and documentation requirements (Cekada and Steinlechner, 2021, Wellnhofer, 2022).
Additionally, the increased scrutiny and requirements for notified bodies under the new regulations may also result in longer assessment times for conformity assessment (Giordano et al., 2022).
136 Article 2 par. 42 MDR and Article 2 par. 34 IVDR.
137 NANDO. EUROPA - European Commission - Growth - Regulatory policy - NANDO. http://ec.europa.eu/growth/toolsdatabase
138 Articles 52-56 MDR and Articles 48-51 IVDR.
139 Recital 50 MDR and Recital 46 IVDR.
140 Recital 52 MDR and Recital 48 IVDR.
141 Commission Implementing Decision (EU) 2020/1695 of 17 November 2020 on the harmonised standards for medical devices drafted in support of Regulation (EU) 2017/745 of the European Parliament and of the Council. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2020.385.01.0012.01.ENG&toc=OJ%3AL%3A2020%3A385%3ATOC
Manufacturers
Before a medical device or an in-vitro diagnostic medical device may be placed on the market, the manufacturer must meet a set of requirements. The requirements concern regulatory compliance as stated under the section on development (ref. 4.9.1) and the way to demonstrate regulatory compliance for market access. Accordingly, the manufacturer needs to draw up a declaration of conformity for the medical device and related AI system142.
Furthermore, all medical devices need to be CE-marked143. This Conformité Européenne or CE-marking is done by the manufacturer and is subject to general provisions as set out in Regulation 2008/765/EC144. The manufacturers need to appoint a so-called authorised representative that will manage potential regulatory issues related to the device. The representative's information must also be included on the label of the device. Based on this assessment and with the approval of the notified body, manufacturers can certify the devices they produce. They obtain CE certification and can place the CE marking on their product, along with the declaration of conformity. The CE certificate is a declaration by a manufacturer that the product complies with the European health, safety, and environmental requirements145. Both the declaration of conformity and the CE-marking constitute the manufacturer's responsibility for complying with all applicable rules and regulations regarding the medical device. It is necessary for market access, and it addresses technical quality, safety, and registration. Note: it does not guarantee the effectiveness or clinical relevance of the product. Once a device has received a CE mark, it is possible to sell, lease, lend, or gift the product in Europe. Thereafter, a renewal audit takes place146. CE mark certificates have a period of validity not exceeding five years. Notified bodies can grant, suspend, or end the CE certification.
EUDAMED database
The medical device, and accordingly the related AI solution, also needs to be registered in the EUDAMED database, where it also obtains a Unique Device Identifier (UDI) number. The purposes of this database are to:
• function as an information database for the public regarding marketed medical devices as well as for clinical research and evaluation;
• constitute a means of unique identification to enhance traceability;
• enable manufacturers to comply with information obligations;
• facilitate competent authorities and notified bodies in fulfilling their duties.
142 Article 10 par. 6 and Article 19 MDR and Article 10 par. 5 and Article 17 IVDR.
143 Article 10 par. 6 and Article 20 MDR and Article 10 par. 5 and Article 18 IVDR.
144 Article 30 Regulation 2008/765/EC setting out the requirements for accreditation and market surveillance relating to the marketing of products and repealing Regulation (EEC) No 339/93.
145 Santos I.C., et al. Medical device specificities: Opportunities for a dedicated product development methodology. Expert Review of Medical Devices 2012; 9: 299–311.
146 Loh E, Boumans R. Understanding Europe's New Medical Devices Regulation. 2018.
Though the word database is used by the regulations, EUDAMED consists of several
databases, including the UDI database. Access to EUDAMED is determined by the
qualification of the user. Member states and the European Commission have full access,
while the public has limited access.
Commercialisation
Commercialisation concerns the moment of market placement. At this stage, import, export, and distribution come into play. The main stakeholders at the time of commercialisation are the manufacturers, authorised representatives, importers, and distributors.
During the commercialisation and marketing of medical devices with AI-systems, it is
essential to communicate information about the intended use, limitations, and potential
risks of the device to users e.g., patients and healthcare professionals. This includes
information about the device's performance, accuracy, and reliability as well as any known
limitations and potential adverse events147.
Furthermore, the European Commission's AI Act requires that certain information be
provided to users of high-risk AI-systems, including information on the system's purpose,
its intended users, its input and output data, its performance metrics and any
foreseeable risks or limitations. The AI Act also requires that users be provided with clear
and understandable information about how the AI system works and how to interact with it.
Finally, it is important to provide ongoing support and maintenance for AI-driven medical
devices to ensure their continued safety and effectiveness. This may include regular
updates to the device's software, periodic calibration and testing, user training and
support148.
4.9.4 PROCUREMENT
The procurement and commissioning of AI-based solutions in healthcare require careful
consideration to ensure that the solutions meet the needs of the healthcare setting and its
users. Procurement involves the process of identifying and selecting an AI solution from
potential suppliers, while commissioning involves the process of implementing the selected
solution into the healthcare system149.
To ensure successful procurement and commissioning, it is essential to involve key stakeholders in the decision-making process, including clinicians, patients, healthcare providers, and technical experts. The procurement process should also include a thorough assessment of the AI solution's safety and effectiveness. Furthermore, the commissioning process should involve the development of clear implementation plans, including training and support for users and the integration of the AI solution into existing healthcare processes: see the following section ‘Use’ (ref. 4.9.5). Evaluation and monitoring of the AI solution's performance should also be conducted to ensure it continues to meet the needs of the healthcare system and its users over time150.
147 European Commission. (2021). Proposal for a Regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52021PC0201&from=EN
148 International Medical Device Regulators Forum. (2019). Clinical Evaluation of Software as a Medical Device (SaMD): Key Principles and Concepts. https://www.imdrf.org/documents/documents.asp
149 European Commission. (2021). White Paper on Artificial Intelligence: A European approach to excellence and trust. Brussels. https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en
Procurement starts from the clinical needs to be met (i.e. the use-case definition) and is usually a complex process with multiple stakeholders and various potential AI solutions that fulfil the needs. Accordingly, a multi-disciplinary team composed of different stakeholders with appropriate skills should be in place from the start. Specific key performance indicators (KPIs) and metrics need to be defined as the basis on which market research is conducted. This involves researching the available AI solutions and vendors that could potentially meet the identified needs and requirements. It may also involve evaluating the potential risks and benefits of different options.
Note: Procurement of person-centred AI is different from procurement of purely technical or
technological solutions. The users’ needs should be met with person-centred solutions and
therefore the specifications should reflect the aim to improve outcomes for the patient, among
other service aspects, such as clinical efficiency and/or operational efficiency (Naqa et al.,
2020).
After the market research, procurement documentation needs to be prepared. This includes documents such as requests for information, requests for proposals, and invitations to tender, which outline the requirements for the AI solution and the evaluation criteria that will be used to assess proposals. After advertising the procurement request and inviting potential vendors to submit offers, the proposals should be reviewed using the evaluation criteria to determine which proposals best meet the identified needs and requirements of the use-case and provide the best value for money. After selecting the vendor and awarding the offer, the terms of the contract will be negotiated with the selected vendor, including the scope of work, deliverables, timelines, and pricing.
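Purely as a sketch of such criteria-based evaluation, the fragment below scores vendor proposals against weighted criteria. The criteria, weights, and scores are invented; in a real tender they would be defined in the procurement documentation and reflect the use-case KPIs.

```python
# Hypothetical weighted-criteria evaluation of vendor proposals.
WEIGHTS = {"patient_outcomes": 0.4, "clinical_efficiency": 0.2,
           "interoperability": 0.2, "total_cost_of_ownership": 0.2}

proposals = {  # scores per criterion on a 0-10 scale (invented)
    "Vendor A": {"patient_outcomes": 8, "clinical_efficiency": 6,
                 "interoperability": 9, "total_cost_of_ownership": 5},
    "Vendor B": {"patient_outcomes": 7, "clinical_efficiency": 8,
                 "interoperability": 6, "total_cost_of_ownership": 8},
}

def weighted_score(scores: dict) -> float:
    """Sum of criterion scores multiplied by their weights."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

for vendor, scores in sorted(proposals.items(),
                             key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(vendor, round(weighted_score(scores), 2))
```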
This is the moment at which acceptance testing should be performed to verify the implementation and operational use through stress testing in relevant (critical) situations. Commissioning should test the suitability of the AI tool for the intended use in the local institution. Desk research could be performed to explore the potential benefit for relevant populations, and local data sets can possibly be prepared to assess the KPIs, i.e. virtual clinical trials to simulate the functioning of the AI system of interest (Bosmans et al., 2021, Mahadevaiah et al., 2020).
Note: in case the testing procedures are not successful or satisfactory, solutions from alternative vendors could be explored (e.g. as part of the contract conditions) and the procurement process could be re-initiated, possibly with adaptation of the use-case specifications and request for information. This is also part of managing the ongoing vendor relationship, monitoring performance and ensuring compliance with the terms of the contract.
150 NHS Digital. (2020). Assessment of international approaches to the procurement and regulation of AI in healthcare. London: NHS Digital. https://digital.nhs.uk/data-and-information/publications/reports-and-studies/assessment-of-international-approaches-to-the-procurement-and-regulation-of-ai-in-healthcare
The relation between suppliers, providers, and procurers
Suppliers or vendors might deliver purely technical solutions with an AI system to healthcare
service providers. Often suppliers deliver comprehensive product-service solutions in which
an AI system forms a part of the whole and the procurement request has additional
specifications and criteria. Hence, the use-case definition usually goes beyond the
functionalities of the AI system and includes requirements regarding patient outcomes,
operational efficiency, and sustainability (Fleiszer et al., 2015, De Rosis and Nuti, 2018).
Consequently, procurement in healthcare often addresses a network or consortium of partners who deliver a package of product-service combinations, which defines the arrangement between the supplier network and the healthcare provider. Note: as explained in the chapter ‘Healthcare’, a provider of specialist care is often dependent on primary care, nursing homes, pharmacies, and other actors to provide integrated person-centred care services. Accordingly, a network of suppliers and providers is jointly responsible for delivering care services, including enabling technology, and for delivering better patient outcomes, i.e. value. Therefore, procurement of an AI system should be seen in the perspective of person-centred service provision, which is often interdependent on many other suppliers and providers in the network.
Procurers who buy a digitally enabled, integrated person-centred service are often different actors: e.g. a local or national authority, payers, and/or insurers. As described in the chapter ‘Healthcare’, there is a trend toward outcome-based financing, i.e. procurement and reimbursement of services. As a result, the network of suppliers and providers is paid for its joint effort to deliver better outcomes at lower costs, i.e. value. The model of service delivery, and how the network makes financial and operational arrangements to enable the service with activities, products, technology, etc., is the value model. The relation between the supplier, provider, and procurer roles is depicted in figure 16.
The consequence of the network interdependency of both suppliers and providers, as well as of the interconnected AI-systems and the digital infrastructure, is that the procurement process becomes more complex. However, there are developments in procurement methods which support the purchasing of more comprehensive and innovative product-service combinations with overlapping requirements: pre-commercial procurement, public procurement of innovations, and value-based procurement.
Figure 16 The relation between the service provider network and value-based procurement
Pre-Commercial Procurement
Pre-Commercial Procurement (PCP) allows contracting authorities to acquire R&D services to research, develop and test innovative products, services or works that are not already available on the market. The co-creation with suppliers, users and buyers is an opportunity to develop human- and person-centric AI-systems. The complexity of products, systems and services, as well as managing the PCP process itself, requires standardisation and the use of standards.
Procurers buy research and development from several competing suppliers to compare alternative solution approaches and identify the best value-for-money option to address their needs. PCP is split into phases: solution design, prototyping, original development, and validation/testing of a limited set of first products, with the number of competing suppliers being reduced after each phase151. PCP is not a procurement award procedure under the scope of the EU Public Procurement Directive152. The EU issues multiple co-financed calls for cross-border PCP procurements annually153.
There are various PCP projects in which AI-driven medical devices are part of the co-creation process. These projects are all aimed at improving healthcare delivery using innovative technologies and integrated care models:
• Procure4Health – This is an EU project that aims to overcome barriers to EU-wide adoption of innovation procurement by creating an open community of health and social care procurement stakeholders. Its 33 founding partners are actively promoting innovation procurement through knowledge sharing and capacity building, networking and matchmaking, identification of common needs and the launch of joint actions to address them154.
• CareMatrix – This is a European H2020 project for integrated care solutions designed to challenge the health market to develop innovative technology for People with Multimorbidity (PMM). It aims to improve the treatment of chronic diseases, rehabilitation in remote areas, predictive analysis for frailty prevention and integrated care solutions addressing multimorbidity challenges155.
• InCareHeart – This concerns a project that focuses on the pre-commercial procurement of innovative ICT-enabled integrated care solutions to advance multidisciplinary health and care for patients with chronic heart failure156.
• Dynamo PCP – This is a project that focuses on the modelling and dynamic assessment of integrated health and care pathways through AI-based systems, enhancing the response capacity of health systems. It aims for a lean and powerful solution enabling quick, data-driven, and platform-independent planning of care pathways for situations where health system functionality is threatened157.
151 https://innovation-procurement.org/why-buy-innovation/
152 The EU Public Procurement Directive sets out the legal framework for public procurement within the European Union https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:32014L0024
153 https://iptf.eu/the-growing-importance-of-public-procurement-of-innovation/
Public Procurement of Innovations
Public Procurement of Innovations (PPI) is the procurement of innovative solutions by the public sector and facilitates the wide diffusion of innovative solutions to the market. PPI typically involves three steps158:
• mobilising a critical mass of purchasing power on the demand side, i.e. various buyers;
• making an early announcement of innovation needs, i.e. tendering;
• actualising public procurement of innovative solutions through one of the buyers.
It allows public sector organisations to stimulate the development and deployment of
innovative solutions such as AI-systems by leveraging their purchasing power. The wider
diffusion of AI-systems in the market requires standardisation and harmonisation e.g. to
facilitate integration and interoperability. By acting as a ‘lead customer’, public sector
organisations can provide a significant reference for other potential customers. Each year,
the EU issues multiple co-financed calls for cross-border PPI procurements to foster the introduction of innovative solutions in health and social care159.
Along with public procurement, standardisation can serve to facilitate market entry or
facilitate the diffusion of AI-based solutions in the case of market failure (Blind et al., 2020).
154 https://procure4health.eu/
155 https://carematrix.eu
156 https://incareheart.eu
157 https://dynamo-pcp.eu
158 https://research-and-innovation.ec.europa.eu/strategy/support-policy-making/shaping-eu-research-and-innovation-policy/new-european-innovation-agenda/innovation-procurement/public-procurement-innovative-solutions_en
159 https://iptf.eu/the-growing-importance-of-public-procurement-of-innovation/
More procurement data will be published in a standard, open format, so suppliers will be able
to identify new opportunities to bid and collaborate (Chicot and Matt, 2018).
Value-based procurement
Value-based procurement (VBP) awards a supplier's contract based on what matters to patients and care providers and aims for an impact on the outcomes of healthcare delivery and the management of the total cost of care delivery. It is a multidisciplinary approach for collaboration between healthcare providers, procurers and medtech suppliers in all phases of the procurement process, with the aim to achieve better quality of care, outcomes from different perspectives, and cost-efficient care to optimise economically advantageous purchasing.
In response to the EU Public Procurement Directive, MedTech Europe and its members have embraced VBP because it supports patient-centric, high-quality, and affordable healthcare160. By shifting to value-based procurement, manufacturers and procurers can better respond to the mounting challenges facing health systems and accelerate the shift to value-based, high-quality healthcare. This approach is viewed by the medtech industry and the procurement community as a tool with the power to unlock value in healthcare.
Reimbursement
Reimbursement for AI-based healthcare services in Europe is a complex process that varies across countries and regions within countries (Zhou and Gattinger, 2024)161. The criteria for eligibility for reimbursement are typically based on necessity, effectiveness, cost-effectiveness, and feasibility162. Because publicly funded health systems are the norm in Europe, reimbursement from one source, the payers, tends to offer the greatest revenue potential and is the goal for most companies developing AI-based solutions. This is also the most difficult path to reimbursement, requiring standardisation and harmonisation regarding the quality-of-care criteria and outcome indicators on the basis of which payments could be granted. The pay-for-performance concept implies that there is no reimbursement available for single care procedures or technologies (i.e. devices), but only for product-service combinations with proven benefit for patient-related outcomes.
Artificial Intelligence adoption varies significantly by geography and socioeconomic
factors, as well as the type of hospital (academic hospitals versus general hospitals). Further
research is warranted to investigate the barriers to equitable access or wider adoption of
AI in healthcare.
It is important to note that the rate of AI adoption in the USA is not a reflection of the rest of the world, especially since the USA has both public and private payors as opposed to jurisdictions with a single-payor health system. Nevertheless, due to similarities in technology, the trends in adoption in the USA may also be applicable in other jurisdictions (Zhou and Gattinger, 2024).
160 https://www.medtecheurope.org/access-to-medical-technology/value-based-procurement/
161 https://www.mckinsey.com/industries/life-sciences/our-insights/the-european-path-to-reimbursement-for-digital-health-solutions
162 https://www.auntminnieeurope.com/imaging-informatics/artificial-intelligence/article/15661724/reimbursement-issues-give-impetus-to-ai-adoption
4.9.5 USE
The implementation of AI solutions in healthcare is dependent on multiple factors and a
collaborative effort between actors and stakeholders, including healthcare professionals,
industry, policymakers, and patients. Organisations can be overwhelmed by the many
clinical, ethical, economic, and technical questions to be answered prior to the clinical
implementation of an AI solution.
This is notwithstanding the previously indicated interdependency of suppliers and providers, as well as the integration of AI-systems into the existing digital infrastructure and processes. Key success factors for AI implementation are:
• A clear regulatory framework, where AI solutions must adhere to regulations and guidelines set by regulatory bodies to ensure safety, efficacy, and privacy.
• The availability of and access to data of sufficient quality to train and validate AI models, which typically requires data sharing agreements (i.e. a data management plan), interoperability standards and data security measures.
• As discussed in the previous chapter, AI solutions should undergo rigorous clinical validation to demonstrate safety and efficacy. This requires e.g. clinical trials, real-world evidence studies, and other forms of testing.
• AI solutions should be designed to integrate with existing clinical workflows and systems, such as electronic health records (EHRs), medical imaging systems, and patient care pathways (Ben-Tovim et al., 2008).
• As discussed in the section ‘Development’ of this chapter, involvement of end-users, such as healthcare professionals and patients, is critical to ensure that AI solutions meet their needs and are tailored to their use.
• A skilled multi-disciplinary workforce for AI development, implementation, and maintenance, including data scientists, engineers, clinicians, and others to monitor. This involves managing the implementation of the AI solution, including testing, training, maintenance, and ongoing monitoring with evaluation to ensure that the solution is meeting the identified needs and requirements.
• AI solutions should have a sustainable business model to ensure long-term viability and scalability.
Implementations of digital technologies are notoriously difficult due to a range of interrelated technical, social, and organisational factors that need to be considered (Cresswell and Sheikh, 2013). It should be noted that cost savings (Donovan et al., 2023, Sharma et al., 2016, Voets et al., 2022) and better use of human resources (Watcharasriroj and Tang, 2004) are not always evident with the implementation and use of digital solutions, even five years after adoption (Agha, 2014).
There is a great need for implementation science expertise in AI implementation projects,
to improve both the implementation process and the quality of scientific reporting (Chomutare
et al., 2022).
4.9.6 POST-MARKET SURVEILLANCE
Placing a medical device and related AI system on the market does not release
manufacturers from their responsibilities or liability. Manufacturers are responsible and
liable for a medical device during its entire life cycle. It is therefore imperative that
manufacturers can monitor their products and intervene when necessary.
To meet those requirements, manufacturers should implement and maintain a post-market
surveillance system163. The system must be proportionate to the risk class of the medical
device and appropriate for the type of device164. Any incidents that happen with their medical
devices or field safety corrective actions undertaken by manufacturers must be recorded and
reported to the competent authorities. Manufacturers shall therefore implement a system that enables them to comply with this obligation165 (please see the section on post-market surveillance below for further information). To fulfil monetary obligations for damages caused by medical devices placed on the market, manufacturers shall provide sufficient coverage in respect of their potential liability under Directive 85/374/EEC166.
Post-market surveillance, vigilance, and market surveillance are covered in Articles 83-100 of the MDR and cover the following167:
 Post-market surveillance system of the manufacturer,
 Post-market surveillance plan,
 Periodic safety update report,
 Reporting of serious incidents and field safety corrective actions,
 Trend reporting.
Post-market surveillance and vigilance activities are meant to facilitate awareness and initiate
corrective actions addressing medical device-related issues, including those resulting from
AI-systems. In addition, these activities help assure sufficient knowledge of an evolving
device technology landscape to assess the benefit-risk profile for a medical device. An
effective post-market surveillance program provides:
 Real-world experience using a broad spectrum of physicians and patients, outside the confines of pre- and post-market trial(s);
 Early warning signs of problems by continuously and systematically collecting and evaluating data;
 Incentives for early corrective action, such as initiating corrective and preventive actions or a device recall;
 Increased compliance with relevant legislation; and
 Additional value beyond compliance, e.g. usability.
163 Article 10 par. 10 MDR and Article 10 par. 9 IVDR.
164 Article 83 MDR and Article 78 IVDR.
165 Article 10 par. 13 MDR and Article 10 par. 12 IVDR.
166 Article 10 par. 16 MDR and Article 10 par. 15 IVDR.
167 Reg. 2017/745, Articles 83-100:71-82
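As a simple illustration of the trend-reporting element listed above, the minimal sketch below flags months in which the observed incident count for a device exceeds what a baseline incident rate would predict; the baseline rate, the monthly figures, and the two-sigma decision rule are all hypothetical, since in practice thresholds and reporting triggers are defined in the manufacturer's post-market surveillance plan under the MDR/IVDR.

    # Minimal sketch of a trend-reporting check for post-market surveillance,
    # assuming hypothetical monthly counts of incidents and devices in use.
    import math

    baseline_rate = 0.002          # assumed long-run incidents per device-month
    monthly = {                    # hypothetical field data: month -> (incidents, devices in use)
        "2024-01": (4, 2000),
        "2024-02": (5, 2100),
        "2024-03": (12, 2150),     # suspicious spike
    }

    for month, (incidents, devices) in monthly.items():
        expected = baseline_rate * devices              # expected incident count
        threshold = expected + 2 * math.sqrt(expected)  # ~2-sigma Poisson upper bound
        if incidents > threshold:
            print(f"{month}: {incidents} incidents vs expected {expected:.1f} -> investigate / report trend")
        else:
            print(f"{month}: {incidents} incidents within expected range")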
Accountability
In the scientific and public debate, the often-raised question is: who is responsible if the AI
application makes a mistake and therefore causes harm to patients? By law, the ultimate
responsibility lies with the healthcare professional who is using the AI system. While AI-systems can provide support and assistance to healthcare professionals, they are not
intended to replace the healthcare professional's judgment and decision-making abilities.
Therefore, the healthcare professional must exercise their professional judgment and
take responsibility for the decisions they make based on the information provided by the AI
system.
However, in the case of an adverse event caused by a failure of the AI system, the
manufacturer of the AI system may also be held responsible and may face legal
consequences. It is important to note that accountability and responsibility can be shared
among multiple parties, including the manufacturer of the AI system, healthcare providers,
and regulatory bodies, in cases where the cause of the adverse event is not clear.
Accountability for harm or adverse events due to failures of AI may depend on the specific
circumstances of the event and the jurisdiction in which it occurred.
Healthcare providers have a responsibility to use AI-systems in accordance with their
intended use and to monitor patients for any adverse events that may occur. They may also
be responsible for reporting adverse events to responsible authorities. The accountability
of medical professionals is typically regulated by professional organisations and licensing
bodies as well as by laws and regulations.
In most countries, medical professionals are required to be licensed and regulated by a
national or state licensing board, which sets standards for education, training, and practice.
These boards have the power to discipline or revoke the licenses of medical professionals
who engage in unethical or unsafe behaviour. In addition, medical professionals are typically
required to carry malpractice insurance, which provides financial protection for patients who
are harmed because of medical errors or negligence. Patients who are harmed by a medical
professional may also have the right to sue for damages in a civil court (please see also
paragraph relevant legislation and regulation in the Healthcare chapter).
Manufacturers are generally responsible for ensuring that their AI-systems are safe and
effective for their intended use and for providing appropriate warnings and instructions for
use. They may also be required to report adverse events to regulatory bodies.
Regulatory bodies are responsible for overseeing the safety and effectiveness of AI-systems and may require manufacturers to submit reports on adverse events. They may also
take enforcement action if an AI system is found to be unsafe or ineffective.
Artificial Intelligence-based systems cannot account for their own behaviour: an algorithm simply produces output based on calculations. The legal definition of responsibility for damage caused still needs to be clarified further, but it is useful to place this responsibility on the institutions in which these applications are used.
4.9.7 IMPROVEMENT AND INNOVATION
It is inherent to the life-cycle approach of the MDR that medical devices and related AI-systems are safe, effective, and of high quality, affording patients and healthcare professionals confidence in their use. It is therefore important that lessons learned translate into further improvement and innovation.
Manufacturers and suppliers will need to be agile enough to react to the results of post-market surveillance and to quickly address necessary corrective actions. Having a cross-functional triage process driven by risk management can help the regulatory team make appropriate risk-based decisions. Through post-market data analysis, the benefit is a deeper understanding of periodic safety, complaints, literature, and the overall performance of the device and related AI system (Khorashahi and Agostino, 2023).
4.10 KEY CONCLUSIONS ABOUT MEDICAL TECHNOLOGY
 The medical technology sector is dynamic, with a diverse range of products and related services, thousands of SMEs, and rapid innovation and development.
 The ongoing trend of increasing connectivity and data-driven solutions in medical devices is accelerated and leveraged by Artificial Intelligence and machine learning.
 These developments make digital medical devices inherently part of an adaptive and evolutionary functional process rather than the traditional static standalone device.
 The development, implementation, and maintenance of AI-based (smart) medical devices entail additional implications and responsibilities for developers, suppliers, authorities, and users, as well as a stronger need for collaboration and communication.
 The different components and functions of interconnected medical devices have implications for how supportive AI-systems should be developed, validated, implemented, and used: the overall performance (i.e. effect, quality, and safety) is dependent on its components, which often operate in a network of devices (e.g. sensors, mobile phone, cloud service).
 Various regulations and legislations apply to AI-driven interconnected medical devices, such as the Medical Device Regulation, the In-Vitro Diagnostic Medical Device Regulation, and medical device software rules, which require compliance with specific criteria, e.g. regarding safety, trustworthiness, explainability, and transparency, as well as data protection and patient privacy.
 In addition, different EU framework regulations apply to AI-driven interconnected medical devices, such as the European Health Data Space, the General Data Protection Regulation, the AI Act, and various healthcare, clinical practice, and research regulations: accordingly, the regulatory landscape, which is still in development, is difficult to oversee and navigate along the medical device life cycle.
 The medical device life cycle, forming the key foundation of the new MDR, describes the separate phases of development, validation, market access, procurement, use, post-market surveillance, innovation, and improvement: for each of these phases there are specific requirements regarding data handling, safety, evidence, conformity assessment, administration, and reporting.
 Use-cases are essential for the development of AI-driven medical devices as they serve as a methodology to define comprehensive requirements which represent real-world problems in healthcare.
 The involvement of end-users through co-creation with use-cases is essential for the development of human-centric AI-systems which are characterised by integrated person-centred product-service solutions.
 Identifying relevant use-cases and the co-creation of solutions should happen in Living Lab settings or innovation eco-systems which serve as real-life environments for the development and testing of human-centric AI-solutions.
 Validation of AI-driven medical devices is the process of verifying that the medical device and related AI-system meet their intended use and perform as expected in their intended environment: ecological validation should include not only the validation of the device but also the whole care process and context, which are part of the overall performance and its outcome.
 Depending on the MDR/IVDR classification criteria and the AI Act, the AI-driven medical device falls into a certain risk category, i.e. low to high, for which specific validation requirements apply and are mandatory for the conformity assessment.
 Any AI-system providing diagnosis, prediction, or prognosis of a disease or medical condition falls within the scope of the MDR: accordingly, the clinical evaluation/validation requirements (e.g. clinical trial data) as well as the post-market surveillance obligations are more explicit and stringent, i.e. according to the life-cycle approach, manufacturers and suppliers must meet further requirements, e.g. clinical evidence and technical documentation, before and after placing their device on the market.
 Setting standards and criteria in the procurement and commissioning of AI-driven medical devices could facilitate the co-creation, purchase, integration, and maintenance of high-quality, person-centred, and value-based solutions.
 Reimbursement models for AI-based solutions are still in development, usually forming part of a product-service payment. The current trend is towards outcome- or performance-based payment models: i.e., the AI-driven medical device is part of an integrated process of care with an intended outcome, and payment is made according to the results achieved.
 Post-market surveillance requires monitoring and administration of use and, if needed, actions addressing AI-system related issues: information on the use of AI-systems could also facilitate innovation and continuous improvement.
5. STANDARDS
This chapter describes standards and standardisation aspects relevant for medical device software and AI-systems in the context of healthcare. After an introduction to standardisation, an overview of published, ongoing, and in-development standardisation work follows. Activities related to AI and medical devices are presented per standardisation organisation, together with other national and regional initiatives. This inventory is the result of desk research.
5.1 INTRODUCTION
Standardisation is the process of creating and implementing documented agreements
(standards) containing specifications and precise criteria to be used consistently as rules,
guidelines, or definitions of characteristics to ensure that materials, products, processes,
and services are fit for their purpose.
Most standards are developed in response to a need in the market. They are developed by
a panel of experts in the field (Technical Committees) represented by industry, consumer
associations, academia, non-governmental organisations, and government. The standards
are periodically reviewed and updated to reflect the latest technological developments
(Folmer, 2012, Folmer et al., 2010).
In a regulatory context, standards can be used as a tool for demonstrating
compliance with legal or regulatory requirements. For example, in the EU, compliance with
certain harmonised standards can be used to demonstrate conformity with the essential
requirements of the MDR or IVDR. In addition, standards can provide guidance to
manufacturers on best practices for the design, development and evaluation of medical
devices and AI-systems168.
Standardisation of AI is needed to ensure that AI-systems are developed, tested,
implemented, and used consistently and safely across different applications and domains in
healthcare169. Standardisation is essential for making AI-systems reliable, interoperable,
and trustworthy while it can facilitate the harmonisation of regulatory requirements and
reduce barriers to the adoption of AI technology.
The harmonisation of regulatory requirements is important to establish transparency, accountability, and ethical considerations in the design and use of AI-systems and to ensure that AI-systems are aligned with societal values and expectations across the EU and beyond170.
168 European Commission. (2019). Standards for the Fourth Industrial Revolution. https://ec.europa.eu/growth/content/standards-fourth-industrial-revolution_en
169 European Commission. (2021). Artificial Intelligence. https://ec.europa.eu/digital-single-market/en/artificial-intelligence
It is necessary that relevant harmonised standards are updated or refined, while potential new standards are needed to assess the conformity of current and future medical devices used with or connected to AI-systems with relevant rules and legislation. Hence, standardisation can provide technical and qualitative specifications with which existing or future products, production processes, or services can comply (Sood et al., 2022).
Besides technical and safety standards, additional regulatory standards might be developed and implemented in order to help protect the interests and needs of local communities and to increase community-based research and engagement (Collins and Moons, 2019).
5.1.1 STANDARDS
There are distinct types of standards: for example, technical standards setting specific requirements and specifications for products; performance standards defining specific performance criteria for processes; safety and quality standards; management and governance standards; ethical standards for organisations and individuals; and standards for test methods and terminology.
Vertical and horizontal standards
Vertical and horizontal standards are terms used in the context of standardisation, including
in areas such as safety regulations, quality management, and product specifications171. Vertical standards apply to a specific industry or to operations, practices, conditions, processes, means, methods, equipment, or installations. They are sometimes also referred to as application standards. Horizontal standards are more general standards that apply across multiple industries. They contain fundamental principles, concepts, definitions, terminology, and similar general information applicable over a broad subject area. Note: when a vertical standard applies to a particular industry (or employer in that industry), that standard might take precedence over a horizontal standard172.
Harmonised standards to support the EU internal market
Standards are defined and developed at a national, European, and international level.
European standards are adopted by the European standardisation organisations, namely
CEN, CENELEC and ETSI. European standards play a key role in the internal market, for
example through harmonised standards: compliance with such standards provides a presumption of conformity of products placed on the market with the essential requirements for those products, as laid down in the relevant EU harmonisation legislation173.
170 International Organization for Standardization (ISO). (2019). The role of standards in artificial intelligence. https://www.iso.org/files/live/sites/isoorg/files/news/magazine/ISO-FOCUS+_AI_EN.pdf
171 https://blog.ansi.org/2017/05/vertical-and-horizontal-standards-lia-z136/
172 https://www.osha.gov/enforcement/directives/cpl-02-00-163/chapter-4
Medical device manufacturers and suppliers are required to meet several regulatory
requirements to bring their products to market. These regulations are focused on ensuring
that the devices are safe to operate in a patient setting e.g. at home or hospital. The
regulations may vary from country to country but are primarily aligned on the quality and
maintenance of the device. If a medical device represents a new type of device and/or
emerging technology, standards may not yet exist or be in draft form. If a standard is in draft
form, it may be worth communicating with the relevant regulatory authority to determine
the approach to conforming with the standard as it will not yet be known whether the
regulatory authority will recognise the standard.
Integrated solutions
If a medical device is classified as an accessory (intended to support, supplement, and/or
augment the performance of another device), the accessory and parent device may need to
be tested together to validate that, when used together, the whole system functions as
intended: this is according to the product-service combination and integrated care
principles discussed in the previous chapters.
5.2 INTERNATIONAL ORGANISATION FOR STANDARDISATION (ISO) & INTERNATIONAL ELECTROTECHNICAL COMMISSION (IEC)
Below follows a list of standards relevant for medical device software and AI-systems, organised by standardisation organisation, as well as other national and regional initiatives.
The International Organisation for Standardisation (ISO) and the International
Electrotechnical Commission (IEC) are both international standards organisations, but
they focus on different areas. The ISO is an independent, non-governmental international
organisation that brings together experts from around the world to develop international
standards. These standards cover almost all aspects of technology, management, and
manufacturing174. The IEC is the international organisation for the preparation and publication
of international standards for all electrical, electronic, and related technologies175.
ISO/IEC JTC 1 is a Joint Technical Committee of the ISO and the IEC. Its purpose is to
develop, maintain, and promote standards in the fields of information technology (IT) and
information and communications technology (ICT). Both ISO and IEC (and hence JTC1)
173 European Commission. (2020). European Standardisation for the Single Market. https://ec.europa.eu/growth/single-market/european-standards_en
174 https://www.iso.org/about
175 https://www.iec.ch/who-we-are
work on a national delegation principle; for example, in the Netherlands, NEN176 is organising and facilitating this work.
The international committee evolved from pre-existing work around big data and is under the leadership of the US. The first meeting of this international committee with an AI focus took place in April 2018. This ISO/IEC committee is known via the designation ISO/IEC JTC 1/SC 42, and the group has 29 countries actively involved (and 12 observing). Over 20 ISO, IEC or ISO/IEC committees have also been confirmed as liaisons to this committee. There are 44 standards and/or projects under the direct responsibility of the ISO/IEC JTC 1/SC 42 Artificial Intelligence secretariat.
5.2.1 PUBLISHED (ALL 35.020)177
1 ISO/IEC TS 4213:2022 - Assessment of machine learning classification performance
2 ISO/IEC 20546:2019 - Big data - Overview and vocabulary 01.040.35178
3 ISO/IEC TR 20547-1:2020 - Big data reference architecture - Part 1: Framework and application process
4 ISO/IEC TR 20547-2:2018 - Big data reference architecture - Part 2: Use cases and derived requirements
5 ISO/IEC 20547-3:2020 - Big data reference architecture - Part 3: Reference architecture
6 ISO/IEC TR 20547-5:2018 - Big data reference architecture - Part 5: Standards roadmap
7 ISO/IEC 22989:2022 - Artificial intelligence concepts and terminology 01.040.35
8 ISO/IEC 23053:2022 - Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
9 ISO/IEC 23894:2023 - Guidance on risk management
10 ISO/IEC TR 24027:2021 - Bias in AI-systems and AI aided decision making
11 ISO/IEC TR 24028:2020 - Overview of trustworthiness in artificial intelligence
12 ISO/IEC TR 24029-1:2021 - Assessment of the robustness of neural networks - Part 1: Overview
13 ISO/IEC TR 24030:2021 - Use cases
14 ISO/IEC TR 24368:2022 - Overview of ethical and societal concerns
15 ISO/IEC TR 24372:2021 - Overview of computational approaches for AI-systems
16 ISO/IEC 24668:2022 - Process management framework for big data analytics
17 ISO/IEC 38507:2022 - Governance of IT - Governance implications of the use of artificial intelligence by organisations
176 https://www.nen.nl/normcommissie-artificial-intelligence-en-big-data
177 35.020 = ISO Standards catalogue: Information technology (IT) in general, including general aspects of IT equipment
178 01.040.35 = ISO Standards catalogue: Information technology (Vocabularies)
5.2.2 IN DEVELOPMENT
1 ISO/IEC CD 42006 - Information technology - Requirements for bodies providing audit and certification of artificial intelligence management systems 35.020 03.120.20179
2 ISO/IEC AWI 42005 - AI system impact assessment
3 ISO/IEC DIS 42001 - Management system 35.020 03.100.70180
4 ISO/IEC AWI TS 29119-11 - Software and systems engineering - Software testing - Part 11: Testing of AI-systems
5 ISO/IEC PRF 25059 - Software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - Quality model for AI-systems 35.080181
6 ISO/IEC WD TS 25058 - Software and systems engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - Guidance for quality evaluation of AI-systems
7 ISO/IEC CD TR 24030 - Information technology - Artificial intelligence (AI) - Use cases 35.020
8 ISO/IEC FDIS 24029-2 - Assessment of the robustness of neural networks - Part 2: Methodology for the use of formal methods 35.020
9 ISO/IEC AWI TR 21221 - Information technology - Artificial intelligence - Beneficial AI-systems
10 ISO/IEC AWI TR 20226 - Environmental sustainability aspects of AI-systems
11 ISO/IEC AWI TR 17903 - Overview of machine learning computing devices
12 ISO/IEC AWI TS 17847 - Verification and validation analysis of AI-systems
13 ISO/IEC AWI 12792 - Transparency taxonomy of AI-systems
14 ISO/IEC CD TS 12791 - Treatment of unwanted bias in classification and regression machine learning tasks 35.020
15 ISO/IEC WD TS 8200 - Controllability of automated artificial intelligence systems
16 ISO/IEC FDIS 8183 - Data life cycle framework 35.020
17 ISO/IEC AWI TS 6254 - Objectives and approaches for explainability of ML models and AI-systems
18 ISO/IEC CD TR 5469 - Functional safety and AI-systems 35.020
19 ISO/IEC DIS 5392 - Reference architecture of knowledge engineering 35.020
20 ISO/IEC DIS 5339 - Guidance for AI applications 35.020
21 ISO/IEC DIS 5338 - AI system life cycle processes 35.020
22 ISO/IEC CD TR 5259-6 - Data quality for analytics and machine learning (ML) - Part 6: Visualisation framework for data quality
23 ISO/IEC CD 5259-5 - Data quality for analytics and machine learning (ML) - Part 5: Data quality governance
24 ISO/IEC CD 5259-4 - Data quality for analytics and machine learning (ML) - Part 4: Data quality process framework 35.020
25 ISO/IEC CD 5259-3 - Data quality for analytics and machine learning (ML) - Part 3: Data quality management requirements and guidelines 35.020
26 ISO/IEC CD 5259-2 - Data quality for analytics and machine learning (ML) - Part 2: Data quality measures 35.020
27 ISO/IEC CD 5259-1 - Data quality for analytics and machine learning (ML) - Part 1: Overview, terminology and examples 35.020 01.040.35
179 03.120.20 = Product and company certification. Conformity assessment: including laboratory accreditation and audit programmes and auditing
180 03.100.70 = Management systems: including environmental management systems (EMS), road traffic management systems, energy management systems, health care management systems, etc.
181 35.080 = Software: including software development, documentation and use
5.2.3 QUANTITATIVE AND QUALITATIVE ANALYSIS OF STANDARDS
In April 2021, AI Watch, the European Commission's knowledge service for monitoring the development, uptake, and impact of AI, published a high-level mapping of the significant AI standards onto the AI Act requirements. The study presents ongoing standardisation activities on AI performed by the European Standardisation Organisations (ESOs) and international Standards Development Organisations (SDOs). The study investigated the alignment between AI-related standards, published or in development, and the requirements proposed in the draft AI Act (April 2021); please see Table 1.
The aim was to identify possible gaps and underdeveloped areas in the current
standardisation activities and to provide a contribution to the definition of a European
standardisation roadmap for implementing the AI Act (Nativi and De Nigris, 2021). A
differentiation in suitability and operationalisation level as well as their relative importance to
implementing the AI Act was made. The report concluded that many relevant standards
already exist (published or in the pipeline).
However, the report also concluded that specific gaps existed e.g. regarding data
management and data governance practices, documentation, and age definition. Based on
the report conclusions, the following recommendations were made:
 the need for vertical standards in priority areas;
 the consideration of compliance management instruments based on specific risk and management system requirements;
 the need for extensive standardisation of activities regarding technical documentation, considering the level of detail of the related provisions in the AI Act and the experience in product legislation;
 the need for surveys and pre-standardisation activities, where existing sub-requirements gaps are recognised.
Table 1 Quantitative analysis of standards for presence of key words (standards mapped onto the draft AI Act requirements, per SDO)

Data and data governance
ISO and ISO/IEC JTC1: ISO/IEC 25024; ISO/IEC 5259; ISO/IEC 24668
IEEE: ECPAIS Bias; IEEE P7002; IEEE P7003; IEEE P7004; IEEE P7005; IEEE P7006; IEEE P7009; IEEE P2801; IEEE P2807; IEEE P2863
ETSI: DES/eHEALTH-008; GR CIM 007; GS CIM 009; ENI GS 001; GR NFV-IFA 041; DGR SAI 002; TR 103 674; TR 103 675; TS 103 327; TS 103 194; TS 103 195.2
ITU-T: ITU-T Y.3170; ITU-T Y.MecTa-ML; ITU-T Y.qos-ml-arc; ITU-T Y.4470

Risk management system
ISO and ISO/IEC JTC1: ISO/IEC 4213; ISO/IEC 25059; ISO/IEC 24029-2
IEEE: IEEE P7009; IEEE P2807; IEEE P2846
ETSI: GS ARF 003; GR CIM 007; ENI GS 005; GR NFV-IFA 041; DGS SAI 003; EG 203 341; TS 103 194; TS 103 195.2; TR 103 821

Technical documentation and record keeping
ISO and ISO/IEC JTC1: ISO/IEC 5338; ISO/IEC 5469; ISO/IEC 24368; ISO/IEC 24372; ISO/IEC 24668
IEEE: ECPAIS Transparency; IEEE P7000; IEEE P7001; IEEE P7006; IEEE P2801; IEEE P2802; IEEE P2807; IEEE P2863; IEEE P3333.1.3
ETSI: DES/eHEALTH-008; ENI GS 005; DGR SAI 002; SAREF Ontologies; GR CIM 007; GS CIM 009

Transparency and information to users
ISO and ISO/IEC JTC1: ISO/IEC 24027; ISO/IEC 24028; ISO/IEC 5338; ISO/IEC 24368; ISO/IEC 24372; ISO/IEC 24668; ISO/IEC 4213
IEEE: ECPAIS Bias; ECPAIS Transparency; ECPAIS Accountability; IEEE P7000; IEEE P7001; IEEE P7003; IEEE P7004; IEEE P7005; IEEE P7007; IEEE P7008; IEEE P7009; IEEE P7011; IEEE P7012; IEEE P7014; IEEE P2863; IEEE P3652.1
ETSI: DES/eHEALTH-008; GS CIM 009; DGR SAI 002; SAREF Ontology

Human oversight
IEEE: ECPAIS Accountability; ECPAIS Transparency; IEEE P7000; IEEE P7006; IEEE 7010; IEEE P7014; IEEE P2863
ETSI: DES/eHEALTH-008; DGR SAI 005

Accuracy, robustness, and cybersecurity
ISO and ISO/IEC JTC1: ISO/IEC 24027; ISO/IEC 24028; ISO/IEC 24029; ISO/IEC 5469
IEEE: ECPAIS Transparency; IEEE P7007; IEEE P7009; IEEE P7011; IEEE P7012; IEEE P2802; IEEE P2807; IEEE P2846; IEEE P2863; IEEE P3333.1.3
ETSI: GS ARF 003; GR CIM 007; ENI GS 001; ENI GR 007; DGR SAI 001; DGR SAI 002; DGS SAI 003; GR SAI 004; GS ZSM 002; TR 103 674; TR 103 675; TS 103 327; GS 102 181; GS 102 182
ITU-T: ITU-T Y.3170; ITU-T Y.qos-ml-arc; ITU-T Y.MecTa-ML; ITU-T Y.3531; ITU-T Y.3172; ITU-T H.CUAV-AIF; ITU-T F.VSAI-MC; ITU-T Y.4470

Quality management system
ISO and ISO/IEC JTC1: ISO/IEC 23894; ISO/IEC 38507; ISO/IEC 42001; ISO/IEC 25059
IEEE: IEEE 2801; IEEE P2863; IEEE P7000
ETSI: SAREF Ontologies
A similar methodology was used for the analysis of standards related to AI in the context of medical devices and healthcare. As far as possible, a quantitative analysis was performed on the standards documents for the presence of the following key words: Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight, and Cybersecurity, according to the criteria: present = Y, not explicit = NE, absent = A, unknown = ?; please see Table 2 for the results. Further, the analysis considered the analytical framework for a human-centric approach to AI, i.e. the Assessment List for Trustworthy AI (ALTAI)182 with its 7 requirements as defined by the High-Level Expert Group (HLEG) in the EU AI policy context, which includes not only legal but also ethical considerations183. In addition, the presence of environmental impact considerations in the standards was assessed.
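To make the screening step concrete, the minimal sketch below shows how such a key-word scan could be automated over locally stored plain-text copies of the standards; the folder name is hypothetical, and the simple present/absent shortcut is an assumption, since distinguishing "not explicit" from "absent" still requires expert reading of each document.

    # Minimal sketch of the key-word presence scan behind Table 2, assuming the
    # standards documents are available locally as plain text (they are not
    # freely redistributable); the folder "standards_texts" is hypothetical.
    from pathlib import Path

    KEYWORDS = [
        "human-centredness", "equity", "appropriateness", "transparency",
        "information to users", "documentation", "performance", "efficiency",
        "effectiveness", "quality", "robustness", "accuracy", "safety",
        "human oversight", "cybersecurity",
    ]

    def scan(path: Path) -> dict[str, str]:
        """Return Y if the key word occurs verbatim, else A; the NE and ?
        categories require human judgement and are assigned manually."""
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        return {kw: ("Y" if kw in text else "A") for kw in KEYWORDS}

    for doc in Path("standards_texts").glob("*.txt"):
        row = scan(doc)
        print(doc.stem, *(row[kw] for kw in KEYWORDS))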
Table 2 Quantitative analysis of standards for presence of key words: present = Y, not explicit = NE, absent = A, unknown = ?
Key-word columns, in order: Human-centredness (HC), Equity (Eq), Appropriateness (Ap), Transparency (Tr), Information to Users (IU), Documentation (Do), Performance (Pe), Efficiency (Ef), Effectiveness (Ev), Quality (Qu), Robustness (Ro), Accuracy (Ac), Safety (Sa), Human Oversight (HO), Cybersecurity (Cy).

Standard | HC | Eq | Ap | Tr | IU | Do | Pe | Ef | Ev | Qu | Ro | Ac | Sa | HO | Cy
IEC 62304 | Y | A | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | NE
ISO 13485 | Y | A | Y | NE | Y | Y | A | A | A | Y | A | A | A | NE | A
ISO/IEC 62366-1 | Y | A | Y | NE | Y | Y | A | A | A | Y | A | A | Y | NE | A
ISO/IEC TS 4213 | A | A | A | A | A | A | A | A | A | A | A | A | A | A | A
ISO/IEC 20546 | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE
ISO/IEC TR 20547-1 | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE
ISO/IEC TR 20547-2 | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE
ISO/IEC TR 20547-3 | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE
ISO/IEC TR 20547-5 | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE
ISO/IEC 22989 | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE
ISO/IEC 23053 | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE
ISO/IEC 23894 | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE
ISO/IEC TR 24027 | A | A | A | A | A | A | A | A | A | A | A | A | Y | A | A
ISO/IEC TR 24028 | A | A | A | A | A | A | A | A | A | A | A | A | Y | A | A
ISO/IEC TR 24029-1 | A | A | A | A | A | A | A | A | A | A | A | A | Y | A | A
ISO/IEC TR 24030 | A | A | A | A | A | A | A | A | A | A | A | A | Y | A | A
ISO/IEC TR 24368 | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE
ISO/IEC TR 24372 | A | A | A | NE | A | NE | NE | NE | NE | NE | NE | NE | A | A | NE
ISO/IEC 24668 | A | A | A | A | A | A | A | A | A | A | A | A | A | A | A
ISO/IEC 38507 | Y | A | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | NE
ISO/IEC CD 42006 | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE
ISO/IEC AWI 42005 | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE
ISO/IEC DIS 42001 | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE
ISO/IEC PRF 25059 | A | A | A | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | A
ISO/IEC AWI TR 21221 | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE
ISO/IEC AWI TR 20226 | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE
ISO/IEC AWI TR 17903 | ? | ? | ? | ? | ? | ? | ? | ? | ? | ? | ? | ? | ? | ? | ?
ISO/IEC AWI TS 17847 | ? | ? | ? | ? | ? | ? | ? | ? | ? | ? | ? | ? | ? | ? | ?
ISO/IEC AWI 12792 | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE | NE

182 https://digital-strategy.ec.europa.eu/en/policies/expert-group-ai
183 https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment
Qualitative analysis and elaboration
In the standard IEC 62304, which specifies the lifecycle requirements for medical device software, the concepts ‘Human-centredness’, ‘Equity’, ‘Appropriateness’, ‘Transparency’, ‘Information to Users’, ‘Documentation’, ‘Performance’, ‘Efficiency’, ‘Effectiveness’, ‘Quality’, ‘Robustness’, ‘Accuracy’, ‘Safety’, ‘Human Oversight’, and ‘Cybersecurity’, in relation to AI, are not explicitly addressed. The standard primarily focuses on software development processes, risk management, and software lifecycle activities for medical devices.
However, it is important to note that while the standard may not mention these specific terms or Artificial Intelligence, many of the principles and concepts underlying these terms are implicitly considered within the standard:
 While not explicitly mentioned, IEC 62304 emphasises the importance of considering the needs and characteristics of the users, i.e. ‘Human-centredness’ (including healthcare professionals and patients), during the software development process. It requires activities such as usability engineering, user interface design, and validation of user requirements.
 The standard does not directly address ‘Equity, Appropriateness, Transparency, Information to Users’. However, it does emphasise the importance of providing appropriate information to users through labelling, instructions, and documentation, ensuring transparency in the software development process and considering the specific context of use for the software.
 IEC 62304 places significant emphasis on ‘Documentation’ requirements throughout the software lifecycle. It specifies documentation related to software development plans, software requirements, architecture, design, implementation, verification, validation, and risk management.
 ‘Performance, Efficiency, Effectiveness, Quality, Robustness, and Accuracy’ are not explicitly mentioned in IEC 62304. However, the standard emphasises the need for software verification and validation activities to ensure that the software performs its intended functions correctly, reliably, and safely.
 Safety is a central concern in IEC 62304. The standard provides requirements for identifying and mitigating software-related hazards, conducting risk management activities, and ensuring the overall safety of the medical device software.
 Although ‘Human Oversight’ is not explicitly mentioned, IEC 62304 emphasises the need for appropriate processes, controls, and responsibilities to ensure the quality and safety of medical device software. These include activities such as management responsibility, organisational structure, and competent personnel.
 IEC 62304 does not explicitly address cybersecurity. However, it does recognise the importance of security considerations, including protecting the software from unauthorised access, ensuring data privacy, and addressing potential security risks as part of the overall risk management process.
While IEC 62304 provides a comprehensive framework for the development of medical
device software, it may be necessary to consider additional standards, guidelines, or
regulatory requirements specific to certain aspects like equity, cybersecurity, and human
oversight to ensure a holistic approach.
The standard ISO 13485 specifies the requirements for a quality management system for medical devices. It primarily focuses on quality management system requirements for the design, development, production, and servicing of medical devices, but it does not mention AI explicitly.
However, similar to the previous elaboration regarding IEC 62304, many of the principles and concepts underlying the terms ‘Human-centredness’, ‘Equity’, ‘Appropriateness’, ‘Transparency’, ‘Information to Users’, ‘Documentation’, ‘Performance’, ‘Efficiency’, ‘Effectiveness’, ‘Quality’, ‘Robustness’, ‘Accuracy’, ‘Safety’, ‘Human Oversight’, and ‘Cybersecurity’ can be considered within the context of ISO 13485:
 ISO 13485 recognises the importance of considering the needs and expectations of users, patients, and other stakeholders, i.e. ‘Human-centredness’. It emphasises customer focus, including understanding user requirements and feedback and maintaining effective communication with relevant parties.
 ‘Equity’, ‘Appropriateness’, ‘Transparency’, ‘Information to Users’: while not explicitly mentioned, ISO 13485 requires the establishment of processes to ensure that medical devices are appropriate for their intended purpose and meet regulatory requirements. It also emphasises the need for clear and effective communication with users and stakeholders, including providing appropriate information, instructions, and labelling.
 ‘Documentation’: ISO 13485 places significant emphasis on documentation requirements. The standard requires the establishment and maintenance of documented processes, procedures, and records related to quality management, including design and development, risk management, and production.
 ‘Performance’, ‘Efficiency’, ‘Effectiveness’, ‘Quality’, ‘Robustness’, and ‘Accuracy’ relating to AI are not explicitly mentioned in ISO 13485. However, the standard emphasises the need for effective planning, resource management, and process control to ensure the quality and reliability of medical devices.
 ‘Safety’ is a critical aspect of ISO 13485. The standard requires the identification and control of risks associated with the design, development, production, and use of medical devices. It highlights the need for risk management activities, including hazard identification, risk assessment, and implementation of appropriate mitigation measures.
 ‘Human Oversight’: ISO 13485 recognises the importance of management responsibility and the need for an effective organisational structure to ensure the quality and safety of medical devices. It requires the establishment of a quality management system that includes appropriate processes, responsibilities, and competent personnel.
 ISO 13485 does not explicitly address cybersecurity. However, the standard emphasises the need to identify and control external influences that could impact the quality and safety of medical devices. This can include considering potential security risks and implementing appropriate measures to protect the integrity and confidentiality of data.
While ISO 13485 provides a comprehensive framework for quality management systems in
the medical device industry, it may be necessary to consider additional standards,
guidelines, or regulatory requirements specific to certain aspects such as equity,
cybersecurity, and human oversight to ensure a holistic approach.
Note: regarding labelling, ISO/TS 82304-2 Health software - Part 2184 defines a set of questions and supporting evidence that can be used to clarify the quality and reliability of a health app. A health app quality label is in development to summarise this information in an inclusive, easily understandable, and visually appealing way. A related EU-funded project, Label2Enable185, promotes a health app quality and reliability label based on ISO/TS 82304-2.
IEC 62366-1 is a standard that specifies usability engineering requirements for the development of medical devices. It aims to design a device's user interface to minimise the risk of use errors and ensure safety186. Usability engineering is a process that involves the analysis, specification, development, and evaluation of usability. The standard was created in response to major incidents resulting from use error, and it cancels and replaces the previous edition, IEC 62366. In ISO/IEC 62366-1, some of the concepts are addressed more explicitly:
184 https://www.iso.org/obp/ui/#iso:std:iso:ts:82304:-2:ed-1:v1:en
185 https://label2enable.eu/
186 It is also known as human factors engineering in the US
 ISO/IEC 62366-1 is centred around human factors and usability engineering. It emphasises the importance of considering the characteristics, abilities, and limitations of users throughout the design and development of medical devices. The standard provides guidance on user research, user-interface design, and user feedback to ensure the device is safe, effective, and easy to use for its intended users. Note: Definition 3.25 ‘user group’ has been rewritten to emphasise that user groups are subsets of users who are differentiated by “factors that are likely to influence their interactions with the medical device”: this might have implications for (continuous) algorithm development and the use of machine learning. Although the standard does not provide any examples, it is obvious that factors such as professional status (e.g. lay user versus healthcare professional), age (e.g. child versus adult) and disease type (e.g. asthma versus COPD) are likely to define distinct user groups.
 ISO/IEC 62366-1 emphasises the importance of providing appropriate information to users regarding the safe and effective use of medical devices. It addresses the need for clear and unambiguous labelling (see above), instructions for use, and user interfaces that provide necessary information, warnings, and cautions. The standard also highlights the significance of transparency in design decisions, risk assessment, and usability evaluation.
 ISO/IEC 62366-1 requires the creation of documentation to support usability engineering activities. This includes documentation of user needs and requirements, usability engineering plans and reports, usability validation and verification activities, and documentation of human factors considerations throughout the device lifecycle.
 While ‘Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy’ in relation to AI are not explicitly mentioned, ISO/IEC 62366-1 focuses on ensuring the performance, effectiveness, and quality of medical devices through usability engineering. It emphasises the need to identify and mitigate use-related hazards, design intuitive user interfaces, and conduct usability testing and evaluation to optimise device performance and user experience. Note: it was updated with a reference to ISO 14971, meaning that manufacturers should follow the risk management procedures according to ISO 14971.
 ISO/IEC 62366-1 places a strong emphasis on safety as it relates to usability. The standard guides manufacturers in identifying potential use errors, hazards, and risks associated with the use of medical devices and provides methodologies for mitigating those risks through appropriate design and labelling.
 ISO/IEC 62366-1 does not explicitly mention human oversight. However, the standard does require involving appropriate human factors and usability engineering expertise throughout the design and development process to ensure the device meets the needs and expectations of the users.
 ISO/IEC 62366-1 does not explicitly address cybersecurity. However, the standard encourages manufacturers to consider potential cybersecurity risks during the usability engineering process. It emphasises the need to assess the impact of cybersecurity on device usability and to ensure that appropriate security measures are implemented to protect patient safety and device functionality.
Note: The summative evaluation (section 5.7.3) introduces several new requirements:
 Explicitly state how the participants in the summative evaluation are representative of the intended user profiles, which could be used in (continuous) algorithm development. This underlines the importance of involving representative users in the evaluation.
 Describe how the test environment and conditions of use are adequately representative of the intended use environment.
 Define correct use for each hazard-related use scenario: this is an important addition because it should be used to define success and failure for each task that is evaluated.
 Describe how data will be collected during the test.
In addition, the purpose of the summative evaluation is to gather objective evidence that
the residual use-related risk is acceptable. With the added requirement for defining correct
use for each use scenario evaluated, this means that the success or failure of summative
evaluation is directly measured by the extent to which use-related risk is avoided187.
While ISO/IEC 62366-1 focuses primarily on usability engineering, it does address several
aspects related to human-centred design, safety, information to users and documentation.
For comprehensive coverage of other aspects such as equity, efficiency, accuracy, and
human oversight, it may be necessary to consider additional standards, guidelines, or
regulations specific to those areas188.
There might be two other relevant ISO standards for human-centredness (Lachman et al., 2020). The first is ISO 27500, which explains the seven principles that characterise a human-centred organisation189. The second is ISO 27501, The human-centred organisation — Guidance for managers190, which outlines managers' responsibilities ranging from organisational strategy to the development of procedures and processes enabling human-centredness. In addition, ISO 13485 promotes the adoption of a process approach when developing, implementing, and improving the effectiveness of a quality management system, with the objective of providing medical devices that consistently meet customer and regulatory requirements.
ISO/IEC TS 4213 is a standard which describes approaches and methods to ensure the relevance, legitimacy, and extensibility of machine learning classification performance assertions. It provides methodological controls for assessing machine learning performance to ensure that results are mathematically and statistically representative. Although the introduction refers to ‘fair’ in relation to performance, no requirements or criteria in terms of ethics or any of the key words are provided. Accordingly, ‘fair’ must be understood in accordance with the mathematical and statistical rules of ISO/IEC TS 4213 for machine learning classification performance.
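As an illustration of the kind of methodological control ISO/IEC TS 4213 concerns, the minimal sketch below reports classification accuracy together with a 95% Wilson confidence interval rather than as a bare point estimate; the labels are hypothetical, and the choice of the Wilson interval is our assumption, not a requirement taken from the standard.

    # Minimal sketch: classification accuracy with a 95% Wilson confidence
    # interval, so the performance assertion reflects statistical uncertainty.
    import math

    def wilson_interval(correct: int, n: int, z: float = 1.96) -> tuple[float, float]:
        """95% Wilson score interval for a binomial proportion (here: accuracy)."""
        p = correct / n
        centre = (p + z * z / (2 * n)) / (1 + z * z / n)
        half = (z / (1 + z * z / n)) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
        return centre - half, centre + half

    y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # hypothetical ground truth
    y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]   # hypothetical classifier output
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    lo, hi = wilson_interval(correct, len(y_true))
    print(f"accuracy = {correct / len(y_true):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")

On a small test set such as this one, the wide interval makes clear why a bare accuracy figure would be a weak performance assertion.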
187 https://www.emergobyul.com/news/2020-amendments-iec-62366-implications-medical-device-usability-engineering
188 https://www.iso.org/obp/ui/es/#iso:std:iso:tr:14969:ed-1:v1:en
189 https://www.iso.org/obp/ui/#iso:std:iso:27500:ed-1:v1:en
190 https://www.iso.org/obp/ui/#iso:std:iso:27501:ed-1:v1:en
ISO/IEC 20546 is a standard that provides a conceptual overview of the field of big data and
its relationship to other technical areas and big data-related standards. It consists of a set of
terms and definitions which could be used to improve communication and understanding
about big data related aspects. According to ISO/IEC 20546 standard, the term Big Data
implies datasets that are so extensive in volume, velocity, variety, and/or variability that they
can no longer be handled using existing data processing systems.
There is no single analysis and interpretation of Big Data for healthcare as it covers many
domains and a wide range of categories. Machine learning is central to the processing and
interpretation of all these datasets to develop AI tools appropriate for each category and,
within these, applicable to individual cases. However, the standard does not mention
‘Human-centredness‘, ‘Equity‘, ‘Appropriateness‘, ‘Transparency‘, ‘Information to Users‘,
‘Documentation‘, ‘Performance‘, ‘Efficiency‘, ‘Effectiveness‘, ‘Quality‘, ‘Robustness‘,
‘Accuracy‘, ‘Performance‘, ‘Safety‘, or HLEG ALTAI requirements explicitly.
Note: the area of ‘big data’ and related concepts is rapidly evolving in terms of technologies and applications. This creates two challenges for developers, implementers, and users of big data related technology: 1) there is a lack of standard definitions, and 2) there is no consistent approach to describing a big data architecture and implementation. The first aspect is addressed by ISO/IEC 20546 (above); ISO/IEC TR 20547 addresses the second.
ISO/IEC TR 20547 is a five-part series that provides a big data reference architecture and framework which organisations can use to effectively and consistently describe their architecture and its implementations, i.e. it establishes an AI and ML framework for a generic AI system using ML technology and describes the big data reference architecture and the process by which a user can apply the framework to their domain and solution of interest.
Various healthcare-related use-cases that provide guidance on how to apply big data technologies are presented:
 5.5.1 Use case 16: Electronic Medical Record Data
 5.5.2 Use case 17: Pathology Imaging/Digital Pathology
 5.5.3 Use case 18: Computational Bioimaging
 5.5.4 Use case 19: Genomic Measurements
 5.5.5 Use case 20: Comparative Analysis for Metagenomes and Genomes
 5.5.6 Use case 21: Individualised Diabetes Management
 5.5.7 Use case 22: Statistical Relational Artificial Intelligence for Health Care
 5.5.8 Use case 23: World Population-Scale Epidemiological Study
 5.5.9 Use case 24: Social Contagion Modelling for Planning, Public Health and Disaster Management
 5.5.10 Use case 25: Biodiversity and LifeWatch
For example, use-case 21 ‘Individualised Diabetes Management’ describes how big data can be used to provide personalised diabetes management by analysing data from various sources such as electronic health records (EHRs), medical devices, and wearables. The use-cases illustrate how data could be merged, processed, and presented. While cybersecurity in relation to security and privacy is addressed, the standard does not explicitly mention aspects of Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, and Human Oversight, or the HLEG ALTAI requirements.
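As a stylised illustration of describing such a use-case against a reference architecture, the minimal sketch below maps hypothetical components of use case 21 onto generic data provider, processing, and consumer roles; the role names and components are our assumptions for illustration, not taken from ISO/IEC TR 20547.

    # Illustrative sketch (not from ISO/IEC TR 20547): describing use case 21,
    # 'Individualised Diabetes Management', against a generic big data reference
    # architecture by mapping each component to a provider/processing/consumer role.
    use_case_21 = {
        "data_providers": ["EHR system", "continuous glucose monitor", "wearable activity tracker"],
        "big_data_processing": ["ingestion & harmonisation", "ML model for individual glucose prediction"],
        "data_consumers": ["clinician dashboard", "patient self-management app"],
    }
    for role, components in use_case_21.items():
        print(f"{role}: {', '.join(components)}")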
ISO/IEC 22989 provides definitions of concepts and terminology to help AI technology be better understood and used by various stakeholders, including experts and non-practitioners. The following sections address aspects related to Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight, and Cybersecurity, as well as the HLEG ALTAI requirements: 5.15 Trustworthiness (5.15.1 General, 5.15.2 AI robustness, 5.15.3 AI reliability, 5.15.4 AI resilience, 5.15.5 AI controllability, 5.15.6 AI explainability, 5.15.7 AI predictability, 5.15.8 AI transparency, 5.15.9 AI bias and fairness), 5.16 AI verification and validation, 5.17 Jurisdictional issues, and 5.18 Societal impact. Documentation requirements are not explicitly mentioned or referenced.
ISO/IEC 23053 is a standard that provides a framework for the description of AI-systems that use Machine Learning, and it establishes common terminology and a common set of concepts and considerations for the application of ML. It covers AI model development and the use of ML, tools, different ML methods, and data handling191. While ISO/IEC 23053 does not explicitly specify aspects of Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight, and Cybersecurity, or the HLEG ALTAI requirements, the standard is essential for the development of AI models through ML in medical device related software.
ISO/IEC 23894 is a standard that provides guidance on how organisations that develop, produce, deploy, or use products, systems, and services utilising AI can manage risk specifically related to AI. The guidance also aims to assist organisations in integrating risk management into their AI-related activities and functions: it offers concrete examples of effective risk management implementation and integration throughout the AI development lifecycle and provides detailed information on AI-specific risk sources. In addition, ISO/IEC 23894 references ISO Guide 73:2009 Risk Management Vocabulary and ISO/IEC 22989 (Information Technology – Artificial Intelligence – Concepts & Technology)192. The standard covers the main human-factor-related aspects, including fairness, but addresses the main technical topics such as Robustness, Accuracy, Performance, and Cybersecurity only in little detail or not at all.
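As an illustration of how such guidance could be operationalised, the minimal sketch below defines a simple AI risk register ordered by a severity-times-likelihood rating; the fields, the scoring scale, and the example entries are hypothetical and are not taken from ISO/IEC 23894.

    # Minimal sketch of an AI risk register in the spirit of AI risk management
    # guidance; the structure and scoring scheme are hypothetical assumptions.
    from dataclasses import dataclass

    @dataclass
    class AIRisk:
        source: str        # AI-specific risk source, e.g. data drift
        phase: str         # lifecycle phase where it arises
        severity: int      # 1 (negligible) .. 5 (catastrophic), assumed scale
        likelihood: int    # 1 (rare) .. 5 (frequent), assumed scale

        @property
        def rating(self) -> int:
            return self.severity * self.likelihood   # simple risk-matrix product

    register = [
        AIRisk("training data not representative of patient population", "development", 4, 3),
        AIRisk("performance drift after deployment", "post-market", 4, 2),
        AIRisk("automation bias in clinical users", "use", 3, 3),
    ]
    for r in sorted(register, key=lambda r: r.rating, reverse=True):
        print(f"[{r.rating:>2}] {r.phase}: {r.source}")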
ISO/IEC TR 24027 is a technical report that addresses bias in relation to AI-systems, especially with regard to AI-aided decision-making. The document provides insight into the different types of bias and the measurement techniques and methods for assessing bias, with the aim of addressing and treating bias-related vulnerabilities: it describes terminology and language related to bias, the different sources of bias, and the methods and techniques that can be used to mitigate bias-related issues. All AI system lifecycle phases are covered, including but not limited to data collection, training, continual learning, design, testing, evaluation, and use.
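As a concrete example of the kind of bias measurement technique such a report surveys, the minimal sketch below computes a demographic parity difference, i.e. the gap in positive-prediction rates between two patient groups; the group labels and predictions are hypothetical, and this is only one of many possible bias metrics.

    # Minimal sketch of one bias measurement: demographic parity difference,
    # i.e. the gap in positive-prediction rates between two hypothetical groups.
    group =      ["A", "A", "A", "B", "B", "B", "B"]   # hypothetical protected attribute
    prediction = [ 1,   0,   1,   0,   0,   1,   0 ]   # hypothetical model outputs

    def positive_rate(g: str) -> float:
        """Share of positive predictions within group g."""
        idx = [i for i, x in enumerate(group) if x == g]
        return sum(prediction[i] for i in idx) / len(idx)

    dpd = positive_rate("A") - positive_rate("B")
    print(f"demographic parity difference (A vs B): {dpd:+.2f}")  # 0 would be parity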
ISO/IEC TR 24029-1 is a technical report that, like ISO/IEC TR 20547 before it, serves a more technical purpose. It provides an overview of the existing methods to assess the robustness of neural networks: an artificial neural network can be used to make predictions or classifications on new, unseen data and can manage complex, nonlinear relationships in data, which makes it an essential functional component of machine learning. Although TR 20547 addresses aspects of Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight, Cybersecurity, and the HLEG ALTAI requirements, neither of these technical reports elaborates on their meaning and possible implications beyond the technical context.
191 https://www.iso.org/obp/ui/en/#iso:std:iso-iec:23053:ed-1:v1:en
192 https://www.iso.org/obp/ui/en/#iso:std:iso-iec:23894:ed-1:v1:en
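As an illustration of the statistical flavour of such robustness assessment, the minimal sketch below probes whether a toy classifier's decision flips under small random input perturbations; the model, input, and perturbation radius are hypothetical, and formal verification methods (as addressed in ISO/IEC 24029-2) would instead prove such properties exhaustively.

    # Minimal sketch of an empirical robustness check: test whether a toy
    # classifier's decision is stable under small random input perturbations.
    import random

    weights = [0.8, -0.5, 0.3]            # hypothetical trained linear classifier
    def predict(x: list[float]) -> int:
        return 1 if sum(w * v for w, v in zip(weights, x)) > 0 else 0

    x0 = [1.0, 0.2, -0.4]                 # hypothetical input under assessment
    base = predict(x0)
    eps = 0.05                            # perturbation radius to probe
    random.seed(0)
    trials = 1000
    flips = 0
    for _ in range(trials):
        x = [v + random.uniform(-eps, eps) for v in x0]
        flips += predict(x) != base
    print(f"decision changed in {flips}/{trials} random perturbations within ±{eps}")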
ISO/IEC TR 24030 is a technical report that provides a collection of representative use cases of AI applications in a variety of domains, including healthcare. Sections 7.7 and 7.8 contain 28 Healthcare and 3 Home Robotics use-cases respectively:
7.7 Healthcare:
 7.7.1 Explainable artificial intelligence for genomic medicine (use case 1)
 7.7.2 Improve clinical decision-making and risk assessment in mental healthcare (use case 2)
 7.7.3 Computer-aided diagnosis in medical imaging based on machine learning (use case 6)
 7.7.4 AI solution to predict post-operative visual acuity for LASIK surgeries (use case 24)
 7.7.5 Chromosome segmentation and deep classification (use case 44)
 7.7.6 AI solution for quality control of electronic medical records (EMR) in real time (use case 50)
 7.7.7 Dialogue-based social care services for people with mental illness, dementia and the elderly living alone (use case 63)
 7.7.8 Pre-screening of cavity and oral diseases based on 2D digital images (use case 67)
 7.7.9 Real-time patient support and medical information service applying spoken dialogue system (use case 68)
 7.7.10 Integrated recommendation solution for prosthodontic treatments (use case 69)
 7.7.11 Sudden infant death syndrome (SIDS) (use case 74)
 7.7.12 Discharge summary classifier (use case 79)
 7.7.13 Generation of clinical pathways (use case 80)
 7.7.14 Hospital management tools (use case 81)
 7.7.15 Predicting relapse of a dialysis patient during treatment (use case 87)
 7.7.16 Instant triaging of wounds (use case 89)
 7.7.17 Accelerated acquisition of magnetic resonance images (use case 101)
 7.7.18 AI based text to speech services with personal voices for people with speech impairments (use case 103)
 7.7.19 AI platform for chest CT-scan analysis (early stage lung cancer detection) (use case 105)
 7.7.20 AI-based design of pharmacologically relevant targets with target properties (use case 107)
 7.7.21 AI-based mapping of optical to multi-electrode catheter recordings for atrial fibrillation treatment (use case 108)
 7.7.22 AI solution for end-to-end processing of cell microscopy images (use case 115)
 7.7.23 Generation of computer tomography scans from magnetic resonance images (use case 116)
 7.7.24 Improving the knowledge base of prescriptions for drug and non-drug therapy and its use as a tool in support of medical professionals (use case 117)
 7.7.25 Neural network formation of 3D-model orthopaedic insoles (use case 121)
 7.7.26 Search for undiagnosed patients (use case 127)
 7.7.27 Support system for optimization and personalization of drug therapy (use case 129)
 7.7.28 Syntelly - computer aided organic synthesis (use case 130)
 7.7.29 WebioMed clinical decision support system (use case 131)
7.8 Home/service robotics:
 7.8.1 Robot consciousness (use case 61)
 7.8.2 Social humanoid technology capable of multi-modal context recognition and expression (use case 65)
 7.8.3 Application of strong artificial intelligence (use case 111)
Most use cases provide information on challenges and issues as well as societal concerns, although aspects of Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight and Cybersecurity as well as HLEG ALTAI requirements are not consistently and coherently addressed in the way that ISO/IEC PRF TR 24368 describes193.
ISO/IEC TR 24368 presents a high-level overview of AI ethical and societal concerns, provides information on related principles, processes, and methods to various audiences, and offers guidance on how to address ethical issues in AI. In the current study, no references to ISO/IEC TR 24368 were found in documentation related to medical devices. Most aspects of Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight, and Cybersecurity are covered in the standard.
ISO/IEC TR 24372 is a technical report that describes the main computational characteristics of AI-systems and the main algorithms and approaches used in AI-systems. Section 6.2 addresses Explainability (6.2.5) as part of Transparency, while Performance, Appropriateness, Efficiency, Effectiveness, Quality and Accuracy are only mentioned in a computational, technical sense.
ISO/IEC 24668 specifies the management and assessment of processes for big data analytics, such as organisation stakeholder processes, competency development processes, data management processes, analytics development processes, and technology integration processes, as well as the processes to acquire, describe, store, and process data within organisations that provide big data analytics services (Cybersecurity). While this standard is not specifically applicable to healthcare or medical devices, it might be helpful for suppliers and providers working with AI-systems to align their organisational processes with the implications of regulations related to the AI Act and GDPR. The standard refers only minimally to aspects of Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, and Human Oversight or HLEG ALTAI requirements.
ISO/IEC 38507 is a standard that provides governance guidance for organisations (public
and private companies, government entities and not-for-profit organisations) to enable and
govern the use of AI, to ensure its effective, efficient, and acceptable use. The standard aims
at reaching a wide audience such as executive managers, technical specialists, legal and
accounting specialists, associations, professional bodies, public authorities, policymakers,
internal and external service providers, assessors, and auditors194.
193 https://www.iso.org/obp/ui/en/#iso:std:iso-iec:tr:24368:ed-1:v1:en
194 https://www.iso.org/obp/ui/en/#iso:std:iso-iec:38507:ed-1:v1:en
The standard addresses the nature and mechanisms of AI necessary to understand the governance implications of their use: e.g. maintaining governance and accountability when introducing AI is addressed in 4.2 and 4.3 respectively. The emphasis is on governance of the organisation’s use of AI and not on the technology underlying AI-systems: e.g. policies to address the use of AI in 6.2 Governance oversight of AI, 6.3 Governance of decision-making, 6.4 Governance of data use, 6.5 Culture and values, 6.6 Compliance, and 6.7 Risk. Cybersecurity aspects are addressed in A.3 Governance of data use and A.2 Governing body guidance over management decisions.
In general, the standard addresses Human-centredness, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight, and HLEG ALTAI requirements. Note: Equity is not mentioned, while in the context of healthcare further requirements for governance in relation to AI-systems might be needed. For example:

 Patient-centredness: governance should prioritise patient-centred care, ensuring that the needs, preferences, and rights of patients are central to decision-making processes. The focus is on delivering quality care, promoting patient safety, and involving patients in their own care through informed consent, shared decision-making, and respect for their autonomy.
 Healthcare governance must adhere to ethical principles, such as beneficence, non-maleficence, justice, and respect for autonomy. Ethical considerations involve balancing the interests of patients and healthcare providers. This includes protecting patient privacy and confidentiality, respecting cultural diversity, and ensuring equitable access to healthcare.
 Healthcare governance operates within a complex regulatory framework due to the potential risks associated with healthcare delivery. There are specific regulations and standards to ensure patient safety, quality of care and professional standards of healthcare providers. Regulatory bodies oversee licensing, accreditation, and monitoring of healthcare facilities and professionals.
 Healthcare governance requires a strong emphasis on evidence-based decision-making. Policies, practices, and interventions should be grounded in scientific evidence and best practices to ensure that healthcare services are effective, safe, and efficient. Research, evaluation and continuous quality improvement are integral to healthcare governance.
 Healthcare governance necessitates clear mechanisms for accountability and transparency. This includes financial accountability, ensuring responsible use of healthcare resources and transparent reporting of outcomes. Governance structures should foster transparency in decision-making, information sharing, and engaging stakeholders in the healthcare system.
 Healthcare governance considers the continuum of care, encompassing primary, secondary, and tertiary levels of healthcare. It focuses on coordination and integration across different healthcare providers, ensuring seamless transitions and continuity of care for patients.
 Healthcare governance deals with the complexity of health systems, which involve multiple stakeholders, including government bodies, healthcare providers, insurers, and patient organisations. Effective governance requires collaboration, partnerships, and coordination among these diverse stakeholders to achieve common goals.
ISO/IEC CD 42006 is a standard that specifies the requirements for bodies providing audit and certification of Artificial Intelligence management systems: it contains requirements for the assessment of conformity and auditability. It will also provide guidance for establishing, implementing, maintaining, and continually improving an Artificial Intelligence management system within the context of an organisation195. The standard does not explicitly mention Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight, and Cybersecurity or HLEG ALTAI requirements.
ISO/IEC AWI 42005 is a standard that provides guidance for organisations performing AI system impact assessments for individuals and societies that can be affected by an AI system and its intended and foreseeable applications. The standard does not explicitly mention Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight, and Cybersecurity or HLEG ALTAI requirements.
ISO/IEC DIS 42001 specifies the requirements and provides guidance for establishing, implementing, maintaining, and continually improving an AI management system within the context of organisations (i.e. public or private organisations providing or using products or services that utilise AI-systems). It is a management system standard that sets out the processes an organisation needs to follow to meet its objectives and provides a framework of good practice: it helps organisations develop or use AI-systems responsibly in pursuing their objectives and meeting applicable regulatory requirements, obligations towards interested parties, and the expectations placed on them196. The standard does not explicitly mention Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight, and Cybersecurity or HLEG ALTAI requirements.
ISO/IEC AWI TS 29119-11 is a standard in development for software and systems engineering. It is part of the ISO/IEC 29119 series of standards for software testing and describes testing methodologies (including those described in ISO/IEC/IEEE 29119-4) applicable to AI-systems in the context of the AI system life cycle model stages defined in ISO/IEC 22989. Hence, it describes how AI and ML assessment criteria and metrics can be used in the context of those testing methodologies. It also maps testing processes, including those described in ISO/IEC/IEEE 29119-2, to the verification and validation stages in the AI system life cycle197. The standard does not explicitly mention Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight, and Cybersecurity or HLEG ALTAI requirements.

195 https://www.iso.org/standard/44546.html
196 https://www.iso.org/standard/81230.html
197 https://www.iso.org/standard/84127.html
ISO/IEC PRF 25059 is a quality model for AI-systems and, like the previous standard, is part of a wider series: it is a specific extension of the standards for Software product Quality Requirements and Evaluation (SQuaRE)198 within the ISO/IEC 25000 series. The ISO/IEC 25000 series provides guidelines and specifications for software product quality requirements and evaluation. The characteristics detailed in the model provide a consistent terminology for specifying, measuring and evaluating AI system quality. Aspects of Transparency, Performance, Efficiency, Effectiveness, Robustness, Accuracy, Safety, and Human Oversight are addressed in section 5 Product quality model: 5.2 User controllability, 5.3 Functional adaptability, 5.4 Functional correctness, 5.5 Robustness, 5.6 Transparency and 5.7 Intervenability. Quality and HLEG ALTAI requirements are addressed in section 6 Quality in use model: 6.1 General, 6.2 Societal and ethical risk mitigation, 6.3 Transparency, as well as in annex A (SQuaRE divisions) and annex B (How a risk-based approach relates to a quality-based approach and quality models). Human-centredness, Equity, Appropriateness, and Cybersecurity are not mentioned, or not mentioned explicitly.
ISO/IEC AWI TR 21221 aims to describe a conceptual framework to articulate the benefits of AI-systems as perceived by a variety of stakeholders based on value and impact199. The dimensions of benefits of AI-systems include, but are not limited to, functional, economic, environmental, social, societal, and cultural. The stakeholders include participants in the development of AI International Standards, users of International Standards, and users and subjects of AI-systems. Illustrations include use cases of applications of AI-systems from various industry sectors. While aspects of Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Quality, Safety, Human Oversight and HLEG ALTAI requirements are addressed, this is not (yet) according to the principles and concepts of how value and impact are perceived in the healthcare context. Aspects of Performance, Efficiency, Effectiveness, Robustness, Accuracy, and Cybersecurity are not mentioned explicitly, or only in a limited sense.
ISO/IEC AWI TR 20226 is a technical report, in development at the time of writing, that provides guidelines for environmental sustainability aspects of AI-systems200. Although aspects of Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight, and Cybersecurity as well as HLEG ALTAI requirements are intended to be addressed in this standard, no specific information is currently available.
ISO/IEC CD TR 17903 is a technical report that provides an overview of machine learning computing devices; it is under development and has not yet been published201. Currently there is no specific information available on aspects of Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight, and Cybersecurity or HLEG ALTAI requirements.

198 SQuaRE is a series of standards that provides a framework for evaluating software quality.
199 https://www.iso.org/standard/86690.html
200 https://www.iso.org/standard/86177.html
201 https://www.iso.org/standard/85078.html
ISO/IEC AWI TS 17847 is a technical specification that describes approaches and provides guidance on processes for the verification and validation analysis of AI-systems (comprising AI system components and the interaction of non-AI components with the AI system components), including formal methods, simulation, and evaluation. It is currently under development and has not yet been published202. Currently there is no specific information available on aspects of Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight, and Cybersecurity or HLEG ALTAI requirements.
ISO/IEC AWI 12792 is a standard in development for a transparency taxonomy of AI-systems. It defines a taxonomy of information elements to assist AI stakeholders with identifying and addressing the needs for transparency of AI-systems. The document describes the semantics of the information elements and their relevance to the various objectives of different AI stakeholders203. Although aspects of Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight, and Cybersecurity as well as HLEG ALTAI requirements are intended to be addressed in this standard, no specific information is currently available.
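Since ISO/IEC AWI 12792 is unpublished, its actual information elements are unknown; the sketch below merely illustrates how a set of transparency information elements for an AI system could be captured in a structured, machine-readable record. All field names and example values are hypothetical.

```python
# Hedged sketch of recording transparency information elements for an AI system,
# in the spirit of the taxonomy ISO/IEC AWI 12792 is expected to define. The
# standard is unpublished; every field name and value here is hypothetical.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TransparencyRecord:
    system_name: str
    intended_purpose: str                 # what the system is meant to do
    stakeholders: list[str]               # who needs the information
    data_provenance: str                  # origin of training data
    known_limitations: list[str] = field(default_factory=list)
    human_oversight: str = "unspecified"  # how a human can intervene

if __name__ == "__main__":
    record = TransparencyRecord(
        system_name="TriageAssist",       # hypothetical example system
        intended_purpose="Prioritise incoming radiology studies",
        stakeholders=["radiologist", "patient", "notified body"],
        data_provenance="Retrospective scans from two EU hospitals, 2015-2020",
        known_limitations=["Not validated for paediatric patients"],
        human_oversight="Radiologist confirms every prioritisation",
    )
    print(json.dumps(asdict(record), indent=2))
```

A structured record of this kind is one plausible way the semantics of information elements could be made usable by the different stakeholder groups the draft standard names.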
The ISO/IEC JTC 1/SC 42 subcommittee is responsible for standardisation in Artificial Intelligence. The committee has started with foundational standards covering AI concepts and terminology, anticipating the need for a common vocabulary, taxonomies, and definitions. Most of the published standards are of a technical and general nature, in which aspects of Human-centredness, Equity, Appropriateness, Transparency, Information to Users, Documentation, Performance, Efficiency, Effectiveness, Quality, Robustness, Accuracy, Safety, Human Oversight, and Cybersecurity as well as HLEG ALTAI requirements are only partially addressed, i.e. to a greater or lesser degree.
5.2.4 RESULTS OF QUANTITATIVE AND QUALITATIVE ANALYSIS OF STANDARDS SO FAR
Regarding medical devices and healthcare, AI Act developments and the human-centric context of this study, additional standardisation activities might be needed. This applies both to specific aspects, such as preventing bias in the development, implementation and use of algorithms for specific disease areas and/or healthcare echelons such as primary care, rehabilitation or child care, and to approaches to AI technologies at the health system level: e.g. person-centred services, health and social care, data and technology interoperability, cross-organisational governance and management, as well as financial aspects such as sustainable investment, procurement, and reimbursement.

202 https://www.iso.org/standard/85072.html
203 https://www.iso.org/standard/84111.html
Environmental aspects are not yet part of the existing AI-oriented standards, but a specific standard is currently in development204.
The European Committee for Standardisation and European Committee for Electrotechnical
Standardisation (CEN-CENELEC) Joint Technical Committee 21 ‘Artificial Intelligence’205
CEN and CENELEC have established the new CEN-CENELEC Joint Technical
Committee 21 ‘Artificial Intelligence’, based on the recommendations presented in the
CEN-CENELEC response to the EC White Paper on AI206, the CEN-CENELEC Focus Group
Road Map on Artificial Intelligence207 and the German Standardisation Roadmap for Artificial
Intelligence208. Currently, there are several AI-related standards under development at CEN-CENELEC JTC 21, including:
1 EN 50600-6-10:2021 - Data centres - Part 6-10: Security management for data centres using artificial intelligence and machine learning.
2 CLC/TS 50634:2021 - Electromagnetic fields and wireless communication technologies - Human exposure assessment in relation to electromagnetic fields from wireless communication devices - Application of the finite element method to the assessment of specific absorption rate (SAR) in human tissues exposed to radiofrequency fields from wireless communication devices using a generic model of the human head and body - Part 2: Models for exposure to radiofrequency fields from wireless communication devices using artificial intelligence.
3 CLC/TS 50643:2021 - Electromagnetic fields and wireless communication technologies - Human exposure assessment in relation to electromagnetic fields from wireless communication devices - Application of the finite element method to the assessment of specific absorption rate (SAR) in human tissues exposed to radiofrequency fields from wireless communication devices using a generic model of the human head and body - Part 3: Artificial intelligence-based parameterization of the generic model of the human head and body.
4 CLC/TS 50701:2021 - Risk management for IT networks incorporating artificial intelligence and machine learning.
5 CLC/TS 50702:2021 - Artificial intelligence and data analytics - Vocabulary and reference architecture.
6 CLC/TS 50703:2021 - Artificial intelligence - Trustworthiness and ethically aligned design - Part 1: Principles and guidelines.
7 CLC/TS 50704:2021 - Artificial intelligence - Trustworthiness and ethically aligned design - Part 2: The implementation of principles and guidelines.

204 https://www.iso.org/standard/86177.html
205 https://www.cencenelec.eu/areas-of-work/cen-cenelec-topics/artificial-intelligence/
206 https://www.cencenelec.eu/media/CEN-CENELEC/Areas%20of%20Work/Position%20Paper/cen-clc_ai_fg_whitepaper-response_final-version_june-2020.pdf
207 https://www.cencenelec.eu/media/CEN-CENELEC/AreasOfWork/CENCENELEC_Topics/Artificial%20Intelligence/Quicklinks%20General/Documentation%20and%20Materials/cenclc_fgreport_roadmap_ai.pdf
208 https://www.din.de/en/innovation-and-research/artificial-intelligence
5.3 INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS (IEEE)
The Institute of Electrical and Electronics Engineers (IEEE) has been developing
standards around AI for a few years, primarily referring to them as autonomous system
standards. IEEE has several working groups and committees dedicated to developing
standards related to AI209. Some of the AI-related standards currently under development at
IEEE include:
1 P2841 - Framework and Process for Deep Learning Evaluation.
2 P7000 - Standard Model Process for Addressing Ethical Concerns during System Design.
3 P7001 - D4 Draft Standard for Transparency of Autonomous Systems.
4 P7003 - Standard for Algorithmic Bias Considerations. P7003 provides a framework for addressing ethical concerns in the design and development of systems, including those involving AI. While it does not specifically address medical devices, it offers general principles that can be applicable in the context of AI-enabled medical devices.
5 P7004 - Standard for Child and Student Data Governance. P7004 provides guidance on the development and deployment of AI agents that handle personal data. While it does not specifically address medical devices, it offers general principles that can be applicable in the context of AI-enabled medical devices.
6 P7005 - Standard for Transparent and Explainable AI-systems. P7005 focuses on the process of managing data privacy throughout the lifecycle of a system or product. While it does not specifically address medical devices, it provides general principles that can be applicable in the context of AI-enabled medical devices.
7 P7006 - Standard for Personal Data Artificial Intelligence (AI) Agent. This is not a standard, but a set of principles based on the ethical and privacy considerations for the design and development of personal data AI agents. While it does not specifically address medical devices, it provides general principles that can be applicable in the context of AI-enabled medical devices.
8 P7007 - Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems. P7007 provides guidance on ethical considerations in the design and development of robotics and automation systems. While it does not specifically focus on medical devices and AI, it establishes general principles that can be applicable in the context of AI-enabled medical devices.
9 P7008 - Standard for Ethical Considerations in Emulated Empathy and Compassion in Artificial Intelligence and Robotic Systems. P7008 focuses on personalised health informatics and the modelling and management of population health. While this standard does not directly address aspects related to medical devices and AI, it provides guidance and requirements for managing health data and population health informatics. Therefore, the aspects typically associated with medical devices and AI, such as accuracy, safety, transparency, efficiency, human-centredness, and cybersecurity, may not be explicitly covered in the context of P7008.
10 P7009 - Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems. P7009 provides guidance on ethical considerations in the design and development of autonomous systems. While it does not specifically focus on medical devices and AI, it establishes general principles that can be applicable in the context of medical devices powered by AI.
11 P7010 - Well-being Metrics Standard for Artificial Intelligence and Autonomous Systems.
12 P7011 - Standard for the Process of Identifying and Rating the Trustworthiness of News Sources.
13 P7012 - Standard for Machine Readable Privacy Terms.
14 P7013 - Standard for Organisational Governance of AI Ethics.
15 P7014 - Standard for Adoption of a General Principle of Artificial Intelligence Ethics.

209 https://standards.ieee.org/initiatives/autonomous-intelligence-systems/
A recent report ‘AI Watch: Artificial Intelligence Standardisation Landscape Update - Analysis of IEEE standards in the context of the European AI Regulation’ provided an in-depth examination of the content of IEEE standards, offering a comparison with existing ISO/IEC work and identifying areas that may require adaptation to European needs, to facilitate their potential integration within European standardisation work for the AI Act210. This analysis was performed by a group of experts in the field of trustworthy AI from the European Commission's Joint Research Centre, with the objective of assessing the degree to which these specifications cover European standardisation needs in the context of the AI Act.
The in-depth analysis covers several AI standards and certification criteria originating from the IEEE Standards Association. The documents reviewed have been found to provide relevant
technical detail that could support providers of high-risk AI-systems in complying with the
requirements defined in the legal text. Some of the reviewed specifications cover technical areas of AI-systems that have been indicated as standardisation gaps by previous analyses, making them potentially valuable sources for European standardisation actions.
Building on existing international work on AI is expected to be an efficient way to develop the
standards in alignment with the AI Act, avoiding duplication of efforts, and facilitating their
broad adoption by AI providers.
210 Soler Garrido, J., Tolan, S., Hupont Torres, I., Fernandez Llorca, D., Charisi, V., Gomez Gutierrez, E., Junklewitz, H., Hamon, R., Fano Yela, D. and Panigutti, C., AI Watch: Artificial Intelligence Standardisation Landscape Update, Publications Office of the European Union, Luxembourg, 2023, doi:10.2760/131984, JRC131155.
5.3.1 STANDARDS APPLICABLE TO MEDICAL DEVICES

1 IEC 62304 - Medical device software - Software life-cycle processes: This standard specifies life cycle requirements for the development of medical device software and includes requirements for the development of software that incorporates AI, ensuring its safety and performance.
2 IEC 62366 - Medical devices - Application of usability engineering to medical devices: This standard provides a framework for applying human factors and usability engineering to medical devices, including those that incorporate AI.
3 IEC 80001-1 - Application of risk management for IT-networks incorporating medical devices - Part 1: Roles, responsibilities, and activities: This standard provides guidance for managing risks associated with the integration of medical devices with IT networks, including those that use AI.
4 IEC 60601-1-12: This standard provides guidance on the safety and effectiveness of medical electrical equipment that incorporates software. It includes requirements for software development and validation, as well as risk management.
5 ISO/IEC 27001 - Information technology - Security techniques - Information security management systems: This standard outlines best practices for information security management systems and is relevant for medical devices that incorporate AI.
6 ISO 13485 - Medical devices - Quality management systems - Requirements for regulatory purposes: This standard specifies requirements for quality management systems used by medical device manufacturers and includes requirements for the development of software, including software that incorporates AI.
7 ISO 14971 - Medical devices - Application of risk management to medical devices: This standard provides a process for managing risks associated with medical devices, including those that incorporate AI.
5.4 OTHER RELEVANT STANDARDISATION INITIATIVES

5.4.1 WORLD HEALTH ORGANISATION
The World Health Organisation (WHO) was the first organisation to issue a global report on
AI in health, which provides a framework through which regulators, AI developers, and health
institutions would engage with AI-based medical technologies. The WHO sets global
governance guidelines for AI in healthcare. These guidelines contribute to the various
dimensions of an evolving regulatory paradigm (Zhou and Gattinger, 2024).
5.4.2 INTERNATIONAL MEDICAL DEVICE REGULATORS FORUM
The International Medical Device Regulators Forum (IMDRF) is actively working on the management of AI-based medical devices. The project of the working group covers machine learning-based medical devices and adaptive algorithms representing AI technology applied to medical devices, as well as further standardised terminology for machine learning-based medical devices among member jurisdictions211. This group seeks to prioritise consensus in the AI/ML sector, where rapid technological advancements and an influx of manufacturers from sectors beyond medical devices are seen. Regulatory consensus for AI/ML has a close interplay with Software as a Medical Device (SaMD) for many jurisdictions; it is therefore also a priority to maintain alignment with broader software guidance. The goal of the AI/ML WG is to develop new documentation on the topic of Good Machine Learning Practice (GMLP) to provide internationally harmonised principles that help promote the development of safe and effective AI/ML-enabled medical devices212.
1 Software as a Medical Device - Clinical Evidence (N55) - Guidance to all those involved in the generation, compilation, and review of clinical evidence sufficient to support the marketing of medical devices.
2 Software as a Medical Device - Clinical Investigation (N57) - Guidance focusing on the activities needed to clinically evaluate Software as a Medical Device.
3 Software as a Medical Device - Clinical Evaluation (N56) - Guidance outlining general principles of clinical evaluation; how to identify relevant clinical data to be used in a clinical evaluation; how to appraise and integrate clinical data into a summary; and how to document a clinical evaluation in a clinical evaluation report.

211 https://www.imdrf.org/working-groups/artificial-intelligence-medical-devices
212 https://www.imdrf.org/working-groups/artificial-intelligencemachine-learning-enabled
5.4.3 EUROPEAN COORDINATION COMMITTEE OF THE RADIOLOGICAL, ELECTROMEDICAL AND HEALTHCARE IT INDUSTRY
The European Coordination Committee of the Radiological, Electromedical and Healthcare IT Industry (COCIR) has made recommendations on the AI Act's alignment with the MDR. It asks for effective alignment mechanisms between the AI Board, the Medical Device Coordination Group, and stakeholders in the upcoming implementation of the AI Act, to ensure the safety, performance, and effectiveness of AI-enabled medical devices213.

213 https://www.cocir.org/latest-news/position-papers/article/cocir-recommendations-on-the-artificial-intelligence-act-aias-alignment-with-the-medical-devices-regulation-mdr
5.4.4 STANDING TOGETHER
The STANDING Together initiative aims to ensure that inclusivity and diversity are considered when developing health datasets214. The STANDING Together consortium was established in 2021 as part of the NHS AI Lab's AI Ethics initiative; it is a partnership between over 30 academic, regulatory, policy, industry, and charitable organisations worldwide. STANDING Together is funded by the NHS AI Lab at the NHS Transformation Directorate and The Health Foundation and managed by the National Institute for Health and Care Research (AI_HI200014). The consortium has formulated recommendations, through an international consensus process, which provide guidance on transparency around 'who' is represented in the data, 'how' people are represented, and how data is used when developing AI technologies for healthcare.

214 Standing Together. Draft recommendations for healthcare dataset standards supporting diversity, inclusivity, and generalisability. Green Paper. 2023. https://www.datadiversity.org/draft-standards
5.4.5 EQUATOR
The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network
is an international initiative that seeks to improve the reliability and value of published health
research literature by promoting transparent and accurate reporting.
 DECIDE-AI - Guidelines for developmental and exploratory clinical investigations for decision support systems driven by AI (human factors and early clinical evaluation).
 STARD-AI - Reporting guidelines for diagnostic accuracy studies assessing AI interventions.
 TRIPOD-ML - Reporting standards for ML-based predictive models.
 CONSORT-AI - Reporting standards for studies incorporating AI-based interventions.
 SPIRIT-AI - Study protocol standards for AI-based interventions.
5.5 OVERVIEW OF REGIONAL AND NATIONAL STANDARDISATION INITIATIVES
The following section provides a (non-exhaustive) overview of standardisation and related initiatives at national and regional level which might have implications for future standards development. These standardisation initiatives could be related to existing standards development mentioned in the previous section, such as CEN or ISO initiatives.
5.5.1 UNITED KINGDOM
The UK government has launched an initiative to shape global technical standards for AI. The Alan Turing Institute, supported by the British Standards Institution and the National Physical Laboratory, will pilot this initiative. The new AI Standards Hub will create practical tools for businesses, bring the UK's AI community together through a new online platform, and develop educational materials to help organisations develop and benefit from global standards215.

215 New UK initiative to shape global standards for Artificial Intelligence. https://www.gov.uk/government/news/new-uk-initiative-to-shape-global-standards-for-artificial-intelligence
AI Standards Hub
The AI Standards Hub has been pursuing a programme of work focused on fostering international stakeholder networks dedicated to knowledge sharing, capacity and community building, and collaborative research in relation to AI standardisation216.

216 https://aistandardshub.org/events/international-collaboration-on-ai-standardisation-key-lessons-and-opportunities
British Standards Institution (BSI)

 Medicines and Medical Devices Act 2021217;
 BSI White Paper – Overview of Standardisation landscape in Artificial Intelligence;
 Position paper - The emergence of Artificial Intelligence and machine learning algorithms in healthcare: Recommendations to support governance and regulation218;
 BS 8611:2016 on ethical design of robots and robotic devices, which includes the principle of ‘human in command’.

217 https://www.legislation.gov.uk/ukpga/2021/3/contents
218 https://www.bsigroup.com/globalassets/localfiles/en-gb/about-bsi/nsb/innovation/mhra-ai-paper-2019.pdf
Medicines and Healthcare products Regulatory Agency (MHRA)
 Guiding principles that can inform the development of Good Machine Learning Practice (GMLP)219;
 Human factors and usability engineering guidance for medical devices - Standards for usability evaluation, post-market surveillance, and monitoring summative testing. Adapted from the FDA's Applying human factors and usability engineering to medical devices 2016.

219 https://www.gov.uk/government/publications/good-machine-learning-practice-for-medical-device-development-guiding-principles
Other
 Interim guidance for those wishing to incorporate AI into the National Breast Screening Programme - Draft guidance to start discussions on evidence requirements for AI in the Breast Cancer Screening Programme; includes guidance on incorporation, piloting and research governance submissions - National Screening Committee (NSC, UK)220.

220 https://www.gov.uk/government/publications/artificial-intelligence-in-the-nhs-breast-screening-programme/interim-guidance-on-incorporating-artificial-intelligence-into-the-nhs-breast-screening-programme
5.5.2 UNITED STATES OF AMERICA
Food and Drug Administration (FDA) and American National Standards Institute
Within the US, the National Institute of Standards and Technology (NIST), part of the US government, regularly works to test out new areas of technological standardisation alongside research. In July 2019, NIST released a Draft Plan for Federal Engagement in AI Standards Development221, which states that ‘America’s success and prospects as the global AI leader demands that the Federal government play an active role in developing AI standards’.

The FDA has issued several guidance documents on the use of AI in medical devices. These guidance documents are important for medical device manufacturers that are developing products that incorporate AI algorithms, as they provide guidance on the regulatory requirements and expectations for these devices. The most important guidance documents are:
 "Artificial Intelligence/Machine Learning-based Software as a Medical Device (SaMD) Action Plan": This action plan, released in 2019, outlines the FDA's approach to regulating AI/ML-based software used in medical devices. The plan emphasises the need for transparency, explainability, and robustness in these algorithms and provides guidance on the regulatory pathways for bringing AI/ML-based medical devices to market.
 "Changes to Existing Medical Software Policies Resulting from Section 3060 of the 21st Century Cures Act": This guidance document, released in 2017, provides information on how the FDA will regulate software used in medical devices, including those that incorporate AI algorithms. The guidance document emphasises the importance of a risk-based approach to regulation and outlines the criteria for determining whether software is a medical device.
 "Clinical Decision Support Software": This guidance document, released in 2017, provides guidance on the regulation of clinical decision support (CDS) software. CDS software often incorporates AI algorithms, and the guidance document provides information on how such software will be regulated by the FDA.
 "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices": This guidance document, released in 2014, provides guidance on how medical device manufacturers should address cybersecurity risks in their products. The guidance document is relevant for medical devices that use AI algorithms, which may process sensitive patient data.
 ASTM F3109 is a standard developed by ASTM International which provides guidance on the validation of AI algorithms used in medical devices. The standard was first published in 2018 and is titled "Standard Practice for Verification and Validation of AI and Machine Learning Based Software for Medical Device Applications". The purpose of ASTM F3109 is to provide a framework for the validation of AI algorithms used in medical devices, with the goal of ensuring that these algorithms are safe and effective for their intended use. The standard outlines a set of procedures and requirements for the development and validation of AI algorithms, including requirements for data quality, algorithm development and clinical validation. ASTM F3109 is particularly relevant for medical devices that use AI algorithms, which are becoming increasingly common in the healthcare industry. These algorithms may be used for a variety of applications, such as image analysis, diagnosis, and treatment planning. By following the guidelines set forth in ASTM F3109, medical device manufacturers can ensure that their products are validated according to a recognised standard, which can help to improve patient safety and promote regulatory compliance.
 Researchers at Stanford University proposed a new set of standards for reporting AI solutions in healthcare, entitled MINMAR (MINimum Information for Medical AI Reporting). The MINMAR standards describe the minimum information necessary to understand intended predictions, target populations, model architecture, evaluation processes and hidden biases (Hernandez-Boussard et al., 2020); a sketch of what such a minimum reporting record could look like follows after this list.

221 https://www.nist.gov/news-events/news/2019/07/nist-releases-draft-plan-federal-engagement-ai-standards-development
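As a hedged illustration of MINMAR-style minimum reporting, the sketch below records the five categories of information named above in a simple structure; the field names and example values are illustrative and are not taken from the MINMAR specification itself.

```python
# Hedged sketch of a MINMAR-style minimum reporting record, limited to the
# categories named above (intended predictions, target population, model
# architecture, evaluation process, hidden biases). Every field name and
# example value is illustrative, not drawn from the MINMAR specification.
minmar_report = {
    "intended_predictions": "30-day readmission risk after cardiac surgery",
    "target_population": "Adults (>=18) in tertiary-care hospitals",
    "model_architecture": "Gradient-boosted trees, 400 estimators",
    "evaluation_process": {
        "split": "Temporal hold-out (train 2016-2019, test 2020)",
        "metrics": {"AUROC": 0.81, "calibration_slope": 0.95},  # illustrative
    },
    "hidden_biases": [
        "Under-representation of rural patients in training data",
        "Insurance status correlated with missing lab values",
    ],
}

# Print the record section by section, as it might appear in a publication.
for section, content in minmar_report.items():
    print(f"{section}: {content}")
```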
5.5.3 CHINA
Since 2009, China’s AI policy has undergone five development stages. China’s AI policy mainly focuses on “made in China”, innovation-driven development, IoT, next-generation Internet, big data, and scientific and technological R&D.

China has a strategic focus on AI in healthcare, primarily to solve its domestic problems related to the unbalanced distribution of healthcare resources and the rise in non-communicable diseases. The municipal government in Shanghai plans to invest $15 billion, more than many national governments, demonstrating a strong drive for innovation. China has also been actively involved in developing standards related to AI and healthcare. Key aspects related to healthcare and medical devices are:
 To standardise the digital services provided by healthcare organisations, China implemented in March 2019 a smart medical service grading system for the development of “smart hospitals”. A typical “smart hospital” features information-based service systems including intelligent equipment and devices, medical records, a hospital navigation system, and a logistics management system.
 Work on Artificial Intelligence in Medical Imaging and Medical Devices222 is undertaken by the National Health Commission (NHC) of China, which has issued the “Technical Guidelines for the Clinical Application of Medical Imaging Artificial Intelligence” that provide guidance on the development, validation and clinical application of AI-based medical imaging technologies. These guidelines outline the requirements for data quality, algorithm validation, clinical evaluation, and ethical considerations, with the aim of ensuring the accuracy, reliability, and safety of AI-driven medical systems.
 China has regulations in place to protect data privacy and security in healthcare, including AI applications. The Cybersecurity Law of China and the General Data Protection Regulation (GDPR) for Personal Health Information (PHI) govern the collection, use, and disclosure of personal health information, including when used in AI applications. These regulations outline the requirements for obtaining patient consent, protecting personal information, and ensuring secure data handling practices when using AI in healthcare settings.
 China has also been actively discussing and developing ethical guidelines for the use of AI in healthcare. The Chinese Medical Association (CMA) has issued the “Ethical Principles for Medical Artificial Intelligence” which provide guidance on topics such as fairness, transparency, explainability, and accountability in the development and use of AI in healthcare. These guidelines aim to promote the ethical and responsible use of AI technologies in healthcare settings.
 China has been promoting interoperability and interchangeability of medical data to facilitate the use of AI in healthcare. The National Health Commission (NHC) has launched initiatives such as the “National Health Information Standards Framework” and the “National Health Information Sharing Platform” which aim to standardise medical data formats, terminologies, and interfaces to enable seamless integration of AI technologies into healthcare workflows.
 China has also been actively participating in international efforts to develop standards for AI in healthcare. For example, China has been involved in the activities of the ISO and the IEC in developing standards related to health informatics and AI applications in healthcare.

222 https://chinameddevice.com/guideline-on-artificial-intelligence-medical-devices/
In October 2021, the FDA, Health Canada and the UK's Medicines and Healthcare products Regulatory Agency (MHRA) jointly identified 10 guiding principles that can inform the development of Good Machine Learning Practice (GMLP). These guiding principles aim to promote safe, effective, and high-quality medical devices that use AI and ML223.

223 https://www.fda.gov/news-events/press-announcements/fda-brief-fda-collaborates-health-canada-and-uks-mhra-foster-good-machine-learning-practice
5.5.4 JAPAN
Japan has been proactive in developing standards and guidelines related to AI in healthcare224 and medical devices225, focusing on ensuring patient safety, data privacy, ethical considerations, and interoperability. The Japanese government has called for greater use of AI and robotics as part of the government's economic growth strategy, urging businesses to invest more in researching new technologies:
 The Ministry of Health, Labour and Welfare (MHLW) in Japan has established the “Guidelines on Clinical Evaluation of Computer-Aided Medical Devices” which provide standards and procedures for evaluating the clinical performance of AI-based medical devices. These guidelines outline the requirements for the development, validation, and use of AI algorithms in medical devices, including the need for appropriate clinical trials and evaluations to ensure their safety and efficacy.
 Japan has strict regulations for data privacy and security in healthcare, including AI applications. The Act on the Protection of Personal Information (APPI) and the Act on the Anonymity of Health Information for the Promotion of the Use of Health Information (Anonymity Act) govern the collection, use and disclosure of personal health information, including AI-driven healthcare data. These regulations outline the requirements for obtaining patient consent, protecting personal information and ensuring secure data handling practices when using AI in healthcare settings.
 Japan has also been actively discussing and developing ethical guidelines for the use of AI in healthcare. The Cabinet Office of Japan has established the “AI R&D Principles” which emphasise ethical considerations in AI development, including healthcare applications. Additionally, the Japan Society for Artificial Intelligence (JSAI) has developed the “Ethical Guidelines for AI in Healthcare” which provide recommendations on topics such as transparency, fairness, accountability, and human oversight in the use of AI in healthcare.
 Interoperability and interchangeability: Japan has been promoting interoperability and interchangeability of medical data to facilitate the use of AI in healthcare. The Ministry of Economy, Trade, and Industry (METI) has established the Japan Medical Data Center (JMDC) initiative, which aims to create a platform for data sharing among medical institutions, including the development of common data standards and APIs to enable seamless integration of AI technologies into healthcare workflows (see the sketch after this list).
 International standards: Japan has also been actively involved in international efforts to develop standards for AI in healthcare. For example, ISO has established the Technical Committee (TC) 215 on Health Informatics, which includes representatives from Japan, to develop standards related to health informatics, including AI applications in healthcare.

224 https://www.mhlw.go.jp/english/
225 https://www.pmda.go.jp/english/index.html
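As a hedged illustration of the standards-based exchange that such interoperability initiatives promote, the sketch below retrieves a patient record from a hypothetical HL7 FHIR REST endpoint. The base URL and patient identifier are placeholders, and nothing here reflects JMDC's actual interfaces.

```python
# Hedged sketch of standards-based health data exchange: fetching a Patient
# resource from an HL7 FHIR REST endpoint. The server URL and patient ID are
# placeholders; this is illustrative only, not any initiative's real API.
import json
import urllib.request

FHIR_BASE = "https://example.org/fhir"   # placeholder endpoint
PATIENT_ID = "12345"                     # placeholder identifier

def fetch_patient(base: str, patient_id: str) -> dict:
    """GET a FHIR Patient resource and parse the JSON body."""
    req = urllib.request.Request(
        f"{base}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    patient = fetch_patient(FHIR_BASE, PATIENT_ID)
    # FHIR encodes names as a list of HumanName structures.
    name = patient.get("name", [{}])[0]
    print(name.get("family"), name.get("given"))
```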
5.5.5 INDIA
India has been actively working on developing standards related to AI and healthcare to
ensure the safe and effective use of AI technologies in the healthcare industry. NITI Aayog
(the National Institution for Transforming India) has developed a document to propose a
National Strategy for AI with a sector focus on Healthcare, Agriculture, Education, Smart
Cities and Infrastructure, and Smart Mobility and Transport. The organisation is focused on
existing international standards to meet challenges identified in the strategy. India has
launched a Living Lab for international collaboration for experimentation to address societal
challenges around the contribution of AI to the Future of Work (Bessariya, 2022).
 The Central Drugs Standard Control Organisation (CDSCO) in India has issued the "Guidance for Regulatory Approval of Medical Devices: Conformity Assessment of AI-based Software as a Medical Device (SaMD)"226, developed in close collaboration with Japan, which provides guidance on the regulatory approval process for AI-based medical devices. These guidelines outline the requirements for data quality, algorithm validation, clinical evaluation, and risk management for AI-driven medical devices, with the aim of ensuring their safety, efficacy, and performance.

226 https://cdsco.gov.in/opencms/opencms/en/Medical-Device-Diagnostics/Medical-Device-Diagnostics/
5.5.6 BRAZIL
To date, there are no specific national standards related to AI and healthcare in Brazil.
However, Brazil has been actively discussing and exploring the use of AI technologies in
healthcare and efforts are underway to develop guidelines and regulations to ensure their
safe and effective use.
 The Brazilian government has initiated discussions and consultations to establish a regulatory framework for the use of AI in healthcare. The Brazilian National Agency of Health Surveillance (ANVISA) has been engaging in public consultations and pilot projects to assess the impact of AI technologies in healthcare and gather input from stakeholders. These efforts aim to lay the foundation for future regulations related to AI in healthcare in Brazil.
 Brazil has also been focusing on ethical considerations in the use of AI in healthcare. The Brazilian Association of Health Informatics (SBIS) has issued the "Manifesto of Health Informatics Ethics" which highlights ethical principles and guidelines for the development and use of digital health technologies, including AI, in healthcare settings. These guidelines aim to promote responsible and ethical use of AI technologies in healthcare in Brazil.
 Brazil has been working on interoperability and interchangeability of medical data to enable the use of AI in healthcare. The Brazilian government has launched the "Digital Health Brazil Strategy" which aims to create a national digital health ecosystem, including the development of common data standards, health information exchanges and digital health records to facilitate the integration of AI technologies into healthcare workflows.
 Brazil has also been participating in international efforts to develop standards for AI in healthcare. Brazil is a member of ISO and has been involved in the activities related to health informatics and AI applications in healthcare, contributing to the development of international standards in this field.
While Brazil does not currently have specific national standards related to AI and healthcare,
efforts are underway to establish guidelines and regulations to ensure the responsible and
effective use of AI technologies in healthcare settings. These initiatives aim to provide a
framework for the development and use of AI in healthcare in Brazil, taking into consideration
ethical considerations, interoperability, and international standards.
5.5.7 THE NETHERLANDS
NEN (the Dutch national standards body) is participating in the European Union-funded H2020 project SHERPA. The project will investigate, analyse, and synthesise our understanding of the ways in which smart information systems (SIS; the combination of AI and big data analytics) impact ethics and human rights issues. In addition, several universities have well-established AI programmes at both bachelor and postgraduate levels.
 NEN 7510 is a Dutch standard for information security in healthcare organisations. While not specifically focused on AI, it provides guidelines for protecting sensitive patient data, which is relevant in the context of AI applications in healthcare that involve data collection, storage, and processing.
 The AVG is the Dutch implementation of the General Data Protection Regulation (GDPR) of the European Union, which includes provisions related to the processing of personal data in the context of AI in healthcare. It sets requirements for obtaining consent, ensuring transparency, and protecting the rights of individuals whose data is processed by AI-systems.
 Code of Conduct for Responsible Data Sharing: The Netherlands Federation of University Medical Centres (NFU) has developed a Code of Conduct for Responsible Data Sharing, which provides guidelines for the responsible use and sharing of health data, including those used in AI applications.
 The Dutch Ministry of Health, Welfare and Sport has developed ethical guidelines for the use of AI in healthcare, which provide principles for responsible and ethical use of AI in areas such as data collection, data privacy, transparency, and accountability.
5.6 KEY CONCLUSIONS ABOUT STANDARDISATION

In general,
 In the design and use of AI-enabled medical devices, it should be ensured that their functions and performance are aligned with societal values and principles across the EU through the application of standards and regulation.
 Harmonised standards will support the EU internal market in the development, implementation, and upscaling of safe, transparent, accountable, and responsible AI-enabled medical devices, while strengthening its international competitiveness.
 Many relevant standards already exist, but a variety of new standards and enhancements of existing standards are needed to cover the identified gaps: i.e. technical standards, performance standards, safety and quality standards, management and governance standards as well as ethical standards.
 Gaps and shortcomings were identified regarding aspects of human-centredness, equity, appropriateness, transparency, information to users, documentation, performance, efficiency, effectiveness, quality, robustness, accuracy, safety, human oversight, and cybersecurity: existing or published standards predominantly cover foundational aspects such as vocabularies, taxonomies, and definitions related to AI-systems.
 There is a need for new standards and the adoption of existing standards for medical devices to ensure alignment with the provisions articulated in the AI Act and to address the aforementioned aspects as well as HLEG ALTAI requirements.
 Available standards are general and not well suited for health and social care applications: they usually cover specific technical and quality management aspects.
 Dedicated standards related to AI-driven medical devices, health, and social care are limited.
 It is difficult to oversee the landscape of AI standardisation, particularly for application in healthcare.
 The fragmented standardisation landscape related to AI points to the need for a more embracing and guiding standards framework for medical devices.
 There is a need for an efficient compliance management instrument tailored to the specific risks and management system requirements of AI-driven medical device solutions.
 A dedicated overarching standardisation approach is warranted, considering the comprehensiveness and complexity of the development, testing, validation, and application of AI-driven medical device solutions.
While international collaborations and standardisation initiatives are essential for the
successful integration and regulation of AI-based medical devices and IVDs, they also
present several challenges. Manufacturers of devices with machine learning face the
challenge of having to demonstrate compliance of their devices with the regulations. Even if
they know the relevant regulation and legislation, they must consider the standards and best
practices to provide the evidence and speak to authorities and notified bodies on the
139
same level 227 . AI models are seldomly programmed ‘line by line’. Instead, many AI
applications, particularly in the field of machine learning, are trained and assessed using
large data sets. This approach makes them difficult to validate and verify using existing
standards228. Different stakeholders such as manufacturers, software developers, clinicians,
regulators and global standards organisations are facing several challenges pertaining to
patient safety, effectiveness, transparency, accountability, and explainability of
software and AI-based medical devices. The increasing cybersecurity breaches and limited
sectoral data governance frameworks indicate the necessity to ensure the safety, quality
and integrity of medical services and ultimately patient trust (Mkwashi and Brass, 2022). The
'black box' challenge, where the workings of AI algorithms are not easily understandable,
poses challenges for evaluating real-world safety, effectiveness, and equitable performance
post-deployment. More work is needed to develop standards that address this issue229.
European and international collaborations related to standards for AI in healthcare have
made significant progress in recent years. While stakeholders in the EU have pioneered
proactive oversight initiatives, regulatory regimes remain fragmented across most nations
(Dolatkhah Laein, 2024).
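As noted above, the behaviour of a machine-learning model is induced from training data rather than written as reviewable rules. The minimal sketch below (an illustration only, assuming Python with scikit-learn and synthetic data, not a method from this report) shows why line-by-line code inspection cannot verify such a system: its properties can only be characterised by empirical evaluation on held-out data.

```python
# Minimal sketch: the model's decision logic is learned from data, so code
# review alone cannot establish its clinical behaviour.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a clinical dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The only way to characterise the model is empirical evaluation on held-out data.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```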
227 Regulatory requirements for medical devices with machine learning. https://www.johner-institute.com/articles/regulatory-affairs/and-more/regulatory-requirements-for-medical-devices-with-machine-learning/
228 AI in Medical Devices: Key Challenges and Global Responses. https://www.mhc.ie/latest/insights/ai-in-medical-devices-key-challenges-and-global-responses
229 The Impact of Artificial Intelligence on Health Outcomes. https://health.ec.europa.eu/system/files/2023-04/policy_20230419_co04-2_en.pdf
6. STRATEGY
This chapter describes the potential implications for (future) standard development. First, the
strategic context is described with the state-of-affairs and a standpoint from an EU
perspective. Next, essential aspects for the development of a strategy are presented as
content for the formulation of a shared vision on standardisation which should translate into
goals, objectives and a related action plan. The importance of a framework for
implementation, monitoring, and evaluation of a strategy as well as the establishment of a
coordination and governance model are highlighted.
A European strategic infrastructure for AI-based solutions is proposed for better alignment
and a more effective development, testing, validation, and application of AI-based solutions in
healthcare and related standardisation activities. Various existing initiatives and
infrastructures across Europe are described that could be integrated in a coherent network
of innovation-ecosystems and Living Labs. These will serve as real-world settings for the
development, testing, validation, and application of AI-systems with relevant end-users and
other stakeholders.
Through integration of existing innovation-ecosystems and Living Labs and the
establishment of new ones, a European-wide network could be created. Synergy is created
through enhanced coordination and alignment from local to centralised European level.
Support actions for these infrastructures and the role of horizon scanning are discussed
as well as the importance of capacity building.
The healthcare-specific standardisation gaps are presented as actions of interest for this
European-wide network for the development, testing, validation, and application of AI-systems.
This chapter ends with conclusions and recommendations.
6.1 STRATEGIC CONTEXT
Harnessing the capabilities of person-centred, digitally enabled (and AI-enabled) integrated
health and social care necessitates careful governance that prioritises ethical standards over
political and economic obstacles. This will improve access to health and social care for all
EU citizens, improving the quality and efficiency of services while enhancing health system
sustainability.
Adoption of AI-systems in health and social care can be responsibly facilitated through
pragmatic policies informed by citizens, both public and private organisations, as well as
multidisciplinary research from academia.
While AI has the potential to revolutionise healthcare, it is essential that its implementation
is accompanied by transparency, accountability, and fairness, together with enforceable
measures to uphold these principles. Healthcare providers, patients,
and regulators need to understand how AI systems make their decisions. This includes
understanding the data used to train these systems and the logic behind their
recommendations. Manufacturers, researchers, and suppliers who develop and supply AI
systems should be held accountable. This means that there should be mechanisms in place
to check the accuracy of AI predictions and recommendations and to hold them accountable
when they develop malfunctioning algorithms. Systems and related services should be
designed and used in a way that is fair and does not discriminate. This includes ensuring that
AI does not reinforce existing biases in healthcare delivery.
Recommendations for transparency, accountability, and fairness are not enough on their
own. There needs to be strict enforcement of these principles, including regulation and
oversight.
The European Commission’s commitment to standardisation is a key part of its strategy to
ensure the safety, sustainability, and competitiveness of Europe230. This commitment
extends to the use of AI in healthcare and medical devices. Standardisation can assist
Europe in competing on the global stage by ensuring that European institutions and industry
lead in AI technology and innovation. It can also help ensure that Europe's values and
principles, such as respect for privacy and data protection, are upheld, and that AI
technologies are inclusive, safe, effective, and reliable (ECA, 2024).
The European Commission’s commitment to standardisation aligns with the broader vision
of leveraging AI to revolutionise healthcare, improve patient care, and ensure system
sustainability by promoting a uniform model for standard development, implementation,
and maintenance throughout the European Single Market. This is a key step towards
realising the full potential of AI in healthcare and taking a progressive, leading role in the
global standardisation community231.
The specific strategic context of AI and medical devices with important trends and sector
dynamics are briefly addressed in the sections Healthcare, Medical Technology, Algorithms,
and Artificial Intelligence, and Standardisation. This should translate into the development of
a standardisation strategy which considers the needs, challenges, and opportunities raised
in this document. Accordingly, this document provides input for all relevant stakeholders to
discuss scenarios for standards development in relation to AI-driven medical devices.
6.2 STANDARDISATION STRATEGY DEVELOPMENT
Developing a strategy for standards development, particularly for the use of AI in medical
devices, requires an integrated and multifaceted approach that considers aspects related
to services, technology, management, governance, legislation, regulation, ethics, and
socioeconomics.
230 https://digital-strategy.ec.europa.eu/en/library/excellence-and-trust-ai-brochure
231 https://www.cencenelec.eu/media/CEN-CENELEC/Publications/cen-clc_strategy2030.pdf
The first step in developing a strategy is to establish a dedicated vision for the future. This
vision acts as a guiding principle, providing a clear and inspiring goal for the future. It
articulates the standardisation aspirations and directs the strategic planning process
(Thornton et al., 2024). The European Commission could establish such a joint vision on
AI-based solutions with relevant representatives from patient organisations, healthcare,
public health, academia, industry, NGOs, civil society organisations, payers, and investors as well as
National Standards Bodies and committees of 34 countries232.
In the context of developing a strategy for standards in AI for use in medical devices and
healthcare, a tentative vision statement could read as follows: “To leverage AI-driven
medical devices (including IVDs) to significantly improve patient outcomes, enhance
service delivery, ensure patient safety, and reduce healthcare costs.” This would be
consistent with the needs, challenges, and opportunities as described in the section
Healthcare, for example:
• Integrated person-centred digitally enabled services - AI can assist in developing
personalised treatment plans based on a patient's unique health status, medical history,
and social context.
• Community-based health and social care - AI can support hospital-to-home and
point-of-care services and streamline processes across providers.
• Supporting the healthcare workforce - AI can help by improving the efficiency and
productivity of care delivery through reducing administrative burden.
• Pro-active healthcare - AI can enable remote monitoring for population health
management and preventive strategies.
• Reduced healthcare costs - AI can assist in optimising resource allocation, automating
repetitive tasks, and predictive maintenance.
• Improving patient safety - AI can help reduce the likelihood of errors and improve
the accuracy of diagnoses by analysing complex medical data and identifying patterns
that might be missed by humans.
The previous sections provide a situational analysis of the four key domains: healthcare,
medical technology, AI, and standardisation. Understanding the current state and the context
of these four domains provides a direction for setting strategic goals and related objectives
in line with an overall vision. This should translate into concrete actions with related planning
to achieve the objectives, combined with an implementation plan, a governance structure to
coordinate the actions, and a monitoring and evaluation process.
Given that development of AI-systems typically happens at a local level, the governance
and facilitation of the standard development process could be organised along the existing
levelled structure: CEN/CENELEC, National Standards Bodies and relevant international
organisations should connect with stakeholders to work according to the action plans.
232 https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
As discussed in Chapter 4.9 Development, Validation, and Implementation of AI for
Medical Devices: person-centred and trustworthy development, validation, and use of AI-systems require an appropriate setting with a suitable infrastructure.
6.2.1 A EUROPEAN STRATEGIC INFRASTRUCTURE FOR AI-BASED SOLUTIONS
The EU and its legal-regulatory and economic landscape provide a unique context for the
development, use, and improvement of AI for health and social care. The development and
use of AI-based solutions should be performed with dedicated use-cases with involvement
of primary, secondary, and tertiary end-users233 within a relevant context of appropriate
infrastructure for data collection and facilities such as home care, primary care, and hospitals,
i.e. typically a local or regional health system.
The EU has a comprehensive network of regions and local settings in which end-users and
stakeholders work together on the development and implementation of integrated person-centred, digitally enabled services and related innovations such as AI and medical devices.
These settings provide a suitable environment for public-private collaboration and co-creation of standards that tackle the challenges related to the development and deployment
of AI-systems in health and social care234.
These settings are commonly known as innovation-ecosystems (ECA, 2024) or Living
Labs235: real-life open innovation systems (Rosa et al., 2024) that use iterative feedback
from users throughout the lifecycle of an innovation to create sustainable impact. Living
Labs are used in the development of standards236. They provide a real-world context for
the development, testing, and validation of solutions such as AI-based medical devices, as
well as their evaluation for market access, procurement, and monitoring of use by patients and
healthcare professionals (Béjean et al., 2021).
In the context of AI in healthcare, Living Labs can play a crucial role in shaping regulations
and legislation237. Living Labs, also referred to as “sandboxes” in the context of shaping
regulations and legislation, including development of standards, can effectively address the
unique challenges and opportunities presented by AI, while also ensuring patient safety and
data security238.
Living Labs provide valuable insights into how regulations and legislation work in practice
with specific use-cases (see also Chapter 4.9 Development, Validation, and
Implementation of AI for Medical Devices and figure 14) and how they impact healthcare
233 https://www.aal-europe.eu/ageing-well-universe/i-am-a-user-2/
234 https://transform.england.nhs.uk/ai-lab/ai-lab-programmes/
235 https://enoll.org/about-us/what-are-living-labs/
236 Living lab to test and validate healthcare technologies. https://een.ec.europa.eu/partnering-opportunities/living-lab-test-and-validate-healthcare-technologies
237 Regulatory sandboxes and experimentation clauses as tools for better regulation: Council adopts conclusions. https://www.consilium.europa.eu/en/press/press-releases/2020/11/16/regulatory-sandboxes-and-experimentation-clauses-as-tools-for-better-regulation-council-adopts-conclusions/pdf
238 https://transform.england.nhs.uk/ai-lab/ai-lab-programmes/
providers, patients, and other stakeholders such as industry. Living Labs promote
collaboration between different stakeholders, including healthcare providers, technology
developers, regulators, and patients. This collaborative approach can lead to more effective
and acceptable AI solutions239.
Additionally, the EC announced the launch of AI Factories together with an AI Innovation
Package bringing together all necessary components (including funding) for the
development and use of AI: computing power, data, algorithms, and talent. These AI
Factories will serve as a one-stop shop for Europe’s AI start-ups, enabling them to develop
the most advanced AI models and industrial applications, with the stated ambition of making
Europe the best place in the world for trustworthy AI240.
Accordingly, these innovation-ecosystems, Living Labs and AI Factories are very suitable for
the development of solutions with end-users, which keeps European industry engaged and
motivated to participate (Zipfel et al., 2022).
Actions to support innovation-ecosystems, Living Labs, and AI Factories
To capitalise on the European network of innovation-ecosystems and Living Labs for the
development, validation, and implementation of AI-driven solutions, a strategy for standards
development should strengthen these initiatives via:
• Establishing a harmonised infrastructure with procedures for data collection and data
exchange, with the EHDS as a connecting platform.
• Training and capacity-building for managing public-private partnerships for joint
innovation projects for the development and evaluation of AI-based solutions (including
IVDs and medical devices).
• Supporting mutual collaboration and knowledge sharing in the network of Living Labs
through use-cases and best practices (centralised database).
• Allocating dedicated funds for the development and evaluation of AI-based solutions in
Living Lab settings.
Horizon scanning for proactive standardisation
Such a dedicated network of innovation-ecosystems or Living Labs could serve as a valuable
source of information for horizon scanning of AI-based solutions, provide insights into
the latest developments in AI, identify emerging trends, and highlight potential
opportunities and challenges, thereby facilitating proactive policymaking241.
Insight into the early research and development of AI solutions and medical devices could be
used by the Horizon Intellectual Property Scan service to support European academia,
239 https://research-and-innovation.ec.europa.eu/strategy/support-policy-making/shaping-eu-research-and-innovation-policy/new-european-innovation-agenda/new-european-innovation-agenda-roadmap/flagship-2-enabling-deep-tech-innovation-through-experimentation-spaces-and-public-procurement_en
240 https://techcrunch.com/2024/01/24/eu-supercomputers-for-ai-2
241 https://research-and-innovation.ec.europa.eu/news/all-research-and-innovation-news/horizon-ip-scan-successful-ip-advisory-service-european-smes-involved-collaborative-ri-projects-2024-02-21_en
start-ups, and SMEs to manage and valorise their IP242. The results of horizon scanning could
inform the High-Level Expert Group on Artificial Intelligence in further refining ethics
guidelines to proactively support the development of human-centric and trustworthy AI-systems243.
Better alignment and use of existing standards
The regulatory landscape for AI-systems in healthcare is still relatively new and evolving
(Durlach et al., 2024). Several existing standards related to AI-systems, IT, and medical
technology are already in place for various healthcare applications. However, the opportunity
lies in effectively combining and aligning these standards throughout the lifecycle of
AI-systems, particularly in the context of medical devices and In-vitro diagnostics244.
Better alignment of existing standards throughout the medical device lifecycle, as well as
the holistic integration of AI-systems in person-centred care processes, requires a
comprehensive and coordinated approach. This improvement calls for a
collaborative effort involving patient representatives, regulators, healthcare providers,
technology developers, and other stakeholders. These efforts should be organised operationally at
the level of innovation-ecosystems, Living Labs, and AI Factories, while the coordination,
knowledge sharing, and information exchange should be organised at the national and
European level.
Capacity building
Besides better alignment, education, training, and capacity building are needed to make proper
use of existing AI-systems, IT, and medical technology standards, as well as to support their
application in the different healthcare domains. Education and training in the application of standards
pertaining to AI-systems can help relevant actors understand the capabilities and
limitations of AI. This can enable them to more effectively develop and improve AI-systems
along the lifecycle and make informed decisions about their application. Training and
education in standards may also help professionals to navigate the complex regulatory
landscape of AI-systems in healthcare, including data privacy laws, ethical guidelines, and
technical standards.
Building capacity in terms of multidisciplinary collaboration as well as infrastructure,
resources, and personnel is essential for the effective development and use of AI in
healthcare. Considering the rapid pace of AI development, continuous learning is essential.
The regulatory landscape for AI in healthcare is still evolving. Organisations should stay
updated with the latest developments and continuously refine their practices accordingly.
Professionals need to stay updated with the latest advancements in the integrated use of AI-systems, medical technology, healthcare standards, and their applications in the different
domains. Fostering a culture of continuous learning through multidisciplinary
collaboration, knowledge sharing, and information exchange is required for the sectors involved.
242 https://cordis.europa.eu/article/id/429522-launch-of-new-horizon-ip-scan-support-service-to-help-smes-manage-and-valorise-intellectual-p
243 https://altai.insight-centre.org/
244 https://healthaipartnership.org/alignment-with-standards
6.3 HEALTHCARE-SPECIFIC STANDARDISATION ACTIONS
Following the analysis of existing standards in relation to AI-systems and medical
devices/IVDs, there are various gaps which need to be filled and existing aspects to be
improved to make them human-centric and trustworthy. Accordingly, standardisation
activities (ideally jointly coordinated with use-cases in the network of innovation-ecosystems, Living Labs, and AI Factories) could address the topics below.
Safety and Efficacy
As AI becomes more prevalent in medical devices, ensuring the safety and efficacy of these
devices becomes crucial. Standards will need to be established to evaluate and validate the
performance, reliability, and accuracy of AI algorithms used in medical devices. This includes
assessing the risks associated with AI-based decision-making and ensuring that devices
meet the new regulatory requirements. In particular, the combination and interconnectedness
of multiple AI-systems and devices, such as virtual agents, is a critical point.
Interoperability and Integration
Medical devices and AI-systems should seamlessly integrate with existing healthcare
infrastructure and workflows, especially to enable person-centred integrated care.
Standardisation is essential to ensure interoperability between different devices, AI-systems, and services, allowing them to communicate, exchange data, and work together
effectively. This involves developing common protocols, data formats, and interfaces that
enable seamless integration and information exchange across organisations and
stakeholders at least on a regional level. Hence:
Artificial Intelligence development can benefit from the sharing of data across institutions
and organisations. However, data sharing poses challenges due to privacy concerns, data
format variations, and interoperability issues. Standards should provide guidelines for secure
data sharing, harmonised data formats, and interoperability protocols, enabling seamless
exchange of data while maintaining privacy and security.
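By way of illustration, a harmonised data format such as HL7 FHIR represents clinical measurements as structured resources that any compliant system can parse. The sketch below follows the FHIR R4 Observation resource for a heart-rate measurement; the patient reference and values are illustrative placeholders, not taken from this report.

```python
import json

# Minimal sketch of an HL7 FHIR R4 Observation for a heart-rate measurement.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",        # LOINC code for heart rate
            "display": "Heart rate",
        }]
    },
    "subject": {"reference": "Patient/example-123"},  # illustrative identifier
    "effectiveDateTime": "2024-07-01T10:30:00Z",
    "valueQuantity": {
        "value": 72,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",       # UCUM units
        "code": "/min",
    },
}

print(json.dumps(observation, indent=2))
```

Because both the vocabulary (LOINC) and the units (UCUM) are standardised, a receiving system can interpret the measurement without bilateral agreements on field names or units.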
Validation Methods
Robust (ecological) validation and evaluation processes are essential to ensure the safety,
effectiveness, and performance of AI algorithms, ideally in a real-world context such as a
Living Lab setting. Standards should provide clear guidelines for conducting clinical studies,
defining appropriate evaluation metrics, and determining the evidence required for
regulatory approval. Harmonisation of these standards across regulatory bodies can
facilitate efficient and consistent evaluation processes.
Traditional validation methods used for medical devices may not fully capture the unique
characteristics of AI algorithms. AI-systems can continuously learn and evolve, making it
challenging to conduct traditional validation studies. There is a need for standards that
outline appropriate methodologies for the validation and assessment of AI-based medical
devices that take the dynamic and evolving nature of AI into consideration. This includes
defining evaluation metrics, study designs, and statistical approaches tailored to the nature
of AI algorithms.
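As one illustration of evaluation metrics and statistical approaches tailored to diagnostic AI, the sketch below (synthetic data; Python with NumPy; the metrics and bootstrap design are generic choices, not prescriptions from this report) estimates sensitivity and specificity together with a non-parametric bootstrap confidence interval, the kind of uncertainty quantification such standards would need to specify.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic labels and predictions for a binary diagnostic task.
y_true = rng.integers(0, 2, size=1000)
y_pred = np.where(rng.random(1000) < 0.85, y_true, 1 - y_true)  # ~85% agreement

def sensitivity(t, p):
    return np.mean(p[t == 1] == 1)  # true positive rate

def specificity(t, p):
    return np.mean(p[t == 0] == 0)  # true negative rate

# Non-parametric bootstrap: resample cases to quantify metric uncertainty.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))
    boot.append(sensitivity(y_true[idx], y_pred[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"sensitivity {sensitivity(y_true, y_pred):.3f} (95% CI {lo:.3f}-{hi:.3f})")
print(f"specificity {specificity(y_true, y_pred):.3f}")
```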
Ethical and Regulatory Considerations
AI in medical devices raises ethical and regulatory concerns related to data privacy, patient
safety, and algorithmic transparency.
Existing ethical and regulatory frameworks often lack specificity when it comes to
addressing AI in healthcare. There is a need to develop more specific standards and
tailored guidelines that account for the nuances of AI technology regarding explainability,
transparency and bias mitigation in healthcare settings.
Artificial Intelligence algorithms used in healthcare often operate as "black boxes," making it
challenging to understand how they arrive at their decisions or predictions. Ethical and
regulatory standards should focus on promoting explainability and transparency, enabling
healthcare professionals and patients to understand and trust the reasoning behind AI-generated recommendations. Current regulatory frameworks may not adequately address
the need for explainability and transparency in AI-based medical devices. Gaps exist in
defining the level of transparency required, documentation standards, and the extent to which
algorithmic reasoning should be made available to users and regulators.
Artificial Intelligence algorithms can inadvertently perpetuate biases present in the data used
to train them. There is further need for standards that address bias and fairness in AI-systems
to ensure that healthcare algorithms do not discriminate against certain populations or
perpetuate existing healthcare disparities. This includes guidelines for data collection, pre-processing, and algorithmic design to minimise bias and promote equitable outcomes.
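One concrete form such a guideline could take is a routine subgroup performance audit. The sketch below uses synthetic data; the groups, the choice of sensitivity as the audited metric, and the simulated performance gap are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative synthetic cohort with a sensitive attribute (group A vs B).
n = 5000
group = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])
y_true = rng.integers(0, 2, size=n)
# Simulate a model that performs worse for the under-represented group B.
agreement = np.where(group == "A", 0.90, 0.78)
y_pred = np.where(rng.random(n) < agreement, y_true, 1 - y_true)

# Per-group sensitivity: large gaps flag potentially inequitable performance.
for g in ("A", "B"):
    positives = (group == g) & (y_true == 1)
    sens = np.mean(y_pred[positives] == 1)
    print(f"group {g}: n_positives={positives.sum():4d}  sensitivity={sens:.3f}")
```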
Artificial Intelligence in healthcare relies on the collection and analysis of vast amounts of
sensitive patient data. Standards should provide comprehensive guidance on data
privacy, security, and consent to protect patient information throughout the AI and device
lifecycle. This includes ensuring compliance with relevant data protection regulations and
establishing best practices for data anonymisation, storage, sharing, and access controls.
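As one illustration of a privacy-preserving building block, the sketch below pseudonymises a patient identifier with keyed hashing (HMAC-SHA256, from Python's standard library). This is a sketch only: pseudonymisation is weaker than anonymisation, and a real deployment would additionally require key management, access controls, and a re-identification risk assessment.

```python
import hmac
import hashlib

# Keyed hashing replaces direct identifiers with stable pseudonyms; the
# secret key must be managed and stored separately from the data.
SECRET_KEY = b"replace-with-managed-secret"  # illustrative placeholder

def pseudonymise(patient_id: str) -> str:
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "NL-123456", "diagnosis": "I10"}  # illustrative record
record["patient_id"] = pseudonymise(record["patient_id"])
print(record)
```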
Artificial Intelligence algorithms may evolve and adapt over time, based on real-world
feedback. Standards should address the challenges of continuous monitoring, performance
assessment, and updates for AI-systems to ensure ongoing safety and effectiveness. This
includes defining mechanisms for post-market surveillance, monitoring for unexpected
errors, biases, or performance degradation, and establishing processes for timely updates
and notifications. Standards should likewise address the lifelong learning and adaptability of
AI-based medical devices, defining frameworks for continuous monitoring, performance
assessment, and updates while ensuring transparency and accountability.
Data Quality and Security
High-quality data is crucial for training and validating AI algorithms. Future standard
development should consider further specification of data quality, integrity, and security,
to ensure that medical devices collect, store, and process data in a standardised and secure
manner. This involves establishing more specific guidelines for data acquisition, pre-processing, anonymisation, and protection against cyber threats.
Hence, AI algorithms rely heavily on training data to make accurate predictions and
recommendations. However, healthcare datasets are often biased and not fully
representative of diverse populations. Data quality standards should address issues related
to bias, inclusivity, and representativeness, ensuring that training datasets are balanced,
diverse, and accurately reflect the patient populations they aim to serve.
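A simple representativeness check of this kind could compare the composition of a training set against a reference population, as in the sketch below; the strata, reference shares, and the 20% under-representation threshold are illustrative assumptions, not values from this report.

```python
import numpy as np

# Compare the training set's age distribution with a reference population.
strata = ["<40", "40-64", "65+"]
population_share = np.array([0.35, 0.40, 0.25])  # assumed reference distribution
dataset_counts = np.array([900, 800, 300])       # assumed training-set counts

dataset_share = dataset_counts / dataset_counts.sum()
for stratum, pop, data in zip(strata, population_share, dataset_share):
    # Flag strata whose share falls below 80% of the population share.
    flag = "UNDER-REPRESENTED" if data < 0.8 * pop else "ok"
    print(f"{stratum:6s} population={pop:.2f} dataset={data:.2f} {flag}")
```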
Maintaining data integrity is crucial for the reliability and trustworthiness of AI-systems.
Standards should focus on ensuring that healthcare data used for training and inference are
complete, accurate, and free from errors or inconsistencies. Data management guidelines
for various settings and actors should be further disseminated.
Artificial Intelligence systems related to medical devices are vulnerable to cybersecurity
threats, including unauthorised access, data breaches, or malicious manipulation of AI
algorithms. Data security standards should incorporate robust cybersecurity measures,
including secure software development practices, network security, and intrusion detection
systems. Continuous monitoring, threat assessment, and incident response plans should
also be part of the standards to mitigate risks effectively.
Regulatory Approval
To ensure patient safety and effectiveness, medical devices incorporating AI algorithms must
undergo rigorous clinical validation and regulatory approval processes. Future standards
should provide guidance on study design, endpoints, statistical methodologies, and evidence
requirements specific to AI-based medical devices. Standardisation can help streamline
regulatory pathways, fostering innovation while maintaining appropriate scrutiny.
Traditional validation and evidence requirements for medical devices may not fully capture
the complexities of AI algorithms. Current regulatory approval processes often rely on pre-market clinical trials and may not adequately address the dynamic nature of AI-systems.
There is a need to establish specific validation methods and evidence requirements for AI-based medical devices, including guidelines for study designs, endpoints, statistical
methodologies, and performance assessment in real-world settings.
Given the relatively new nature of AI-based medical devices, there may be a lack of
precedent and predictability in the regulatory approval process. Manufacturers may face
challenges in understanding the expectations of regulatory bodies, resulting in uncertainty
and delays. Regulatory agencies can play a crucial role in providing clearer guidance, sharing
best practices, and establishing predictable pathways for regulatory approval.
User Interface and Human Factors
Designing intuitive user interfaces and considering human factors is crucial for the effective
use of AI-based medical devices. Standards can provide guidance on design principles,
usability testing, and user-centred approaches to ensure that devices are user-friendly,
reduce cognitive load, and facilitate efficient decision-making. While progress has been
made, there are still gaps that need to be addressed:
Medical devices should be designed with a focus on usability and user experience to ensure
that professionals can effectively interact with the AI system. Gaps exist in terms of intuitive
user interfaces, efficient workflows, and clear presentation of information. User-centred
design principles should be better incorporated into the development process with iterative
feedback and usability testing to identify and address usability issues.
Artificial Intelligence algorithms can generate complex outputs and information, and it is
crucial for professionals to understand and interpret the results. Gaps exist in designing user
interfaces that provide (visualised) explanations and insights into the AI's decision-making
process. User interfaces should facilitate the transparent presentation of results, including
highlighting notable features, displaying confidence levels and likelihood ratios, and providing
context-specific and context-sensitive explanations to enhance the trust and interpretability
of AI-generated outputs.
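As an example of the quantitative context a user interface could display alongside an AI output, the sketch below derives the standard diagnostic likelihood ratios from a confusion matrix; the counts are illustrative. LR+ expresses how strongly a positive result raises the odds of disease, LR- how strongly a negative result lowers them.

```python
# Illustrative confusion-matrix counts from a validation study.
tp, fn, fp, tn = 90, 10, 40, 860

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

lr_positive = sensitivity / (1 - specificity)  # positive likelihood ratio
lr_negative = (1 - sensitivity) / specificity  # negative likelihood ratio

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
print(f"LR+={lr_positive:.1f}  LR-={lr_negative:.2f}")
```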
User interface design should aim to prevent errors and mitigate risks associated with the use
of AI-based medical devices. Gaps exist in identifying potential use errors and providing
appropriate error prevention mechanisms. User interfaces should include error messages,
warnings and alerts that effectively communicate critical information to users and enable
them to take appropriate actions to prevent or mitigate errors.
Effective feedback and communication between the AI system and the user are essential
for the successful use of AI-based medical devices. Gaps exist in providing timely and
accurate feedback to users regarding system status, progress, and potential issues. User
interfaces should facilitate clear communication channels, including appropriate
notifications, alerts, and status updates, to ensure that users are adequately informed and
can provide feedback to improve the system's performance.
Different healthcare settings and user preferences may require customisation and
adaptability in AI-based medical devices. Gaps exist in providing flexible user interfaces
that can be tailored to specific user needs, clinical contexts, and workflow requirements.
User interfaces should allow for customisation, including adjustable settings, interface
personalisation and adaptability to accommodate diverse user profiles and clinical
scenarios.
Adequate training and education are essential for healthcare professionals to understand
and effectively use AI-based medical devices. Gaps exist in providing comprehensive training
programs and educational resources specific to AI technology. User interface designs should
support user training and onboarding, including interactive tutorials, contextual help, and
user-friendly documentation that explains the underlying principles and limitations of the AI
system.
Artificial Intelligence-based medical devices should seamlessly integrate with existing
healthcare systems and workflows to facilitate adoption and maximise efficiency. Gaps exist
in interoperability and compatibility with electronic health record systems, medical imaging
platforms, or other healthcare information systems. User interfaces should be designed to
ensure smooth integration, data exchange, and interoperability to streamline the use of AI
technology within existing clinical workflows.
Post-market Surveillance
Post-market Surveillance is essential to monitor the long-term safety and efficacy of AI-based
medical devices. Standards should define clear requirements for ongoing monitoring,
including strategies for collecting and analysing real-world data, detecting potential issues,
and conducting necessary updates or recalls ensuring patient safety and device
effectiveness.
Artificial Intelligence algorithms often encounter new scenarios and varied patient
populations after deployment in real-world healthcare settings. Real-world performance
monitoring of AI algorithms is crucial to detect potential safety or efficacy issues. Standards
should outline approaches for continuous monitoring, data collection, and performance
assessment of AI-systems in real-world healthcare environments.
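A minimal sketch of such real-world performance monitoring is given below (synthetic data; the window size, alert threshold, and simulated data shift are illustrative assumptions): rolling-window accuracy is compared against a pre-specified threshold, and a sustained drop triggers an alert for investigation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative post-deployment stream: accuracy degrades after case 600,
# e.g. because the patient mix has shifted from the training population.
correct = np.concatenate([
    rng.random(600) < 0.92,   # performance as validated pre-market
    rng.random(400) < 0.80,   # degraded performance after a data shift
])

WINDOW, THRESHOLD = 100, 0.87  # assumed monitoring parameters

# Rolling-window accuracy check: alert when performance drops below threshold.
for start in range(0, len(correct) - WINDOW + 1, WINDOW):
    acc = correct[start:start + WINDOW].mean()
    status = "ALERT: investigate, consider update" if acc < THRESHOLD else "ok"
    print(f"cases {start:4d}-{start + WINDOW - 1:4d}: accuracy={acc:.2f}  {status}")
```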
International Harmonisation
The development of international standards is essential to promote EU-wide and global
interoperability to facilitate the adoption of AI-based medical devices across different regions.
Collaborative efforts among regulatory bodies, standards organisations, and industry
stakeholders can ensure harmonisation of standards and avoid duplicative requirements.
However, several gaps exist in current international harmonisation efforts. Different countries
and regions have varying regulatory frameworks and requirements for AI in healthcare.
These differences can create barriers to the EU-wide and global deployment of AI
technologies. Gaps in international harmonisation include the lack of alignment in
regulatory standards, approval processes, and post-market surveillance requirements,
making it challenging for manufacturers to navigate multiple regulatory systems.
Ethical considerations play a significant role in the development and deployment of AI in
healthcare. However, there is a lack of international consensus on ethical principles and
guidelines for AI applications. Harmonisation gaps exist in terms of addressing issues such
as transparency, explainability, fairness, privacy, and the responsible use of AI technology.
Developing internationally recognised ethical frameworks and guidelines is crucial to ensure
ethical practices across borders.
Artificial Intelligence in healthcare relies on vast amounts of patient data, and data
governance practices can vary across countries. Harmonisation gaps exist in terms of data
sharing, data protection, and patient privacy regulations. Establishing common data
governance principles and frameworks can facilitate secure and responsible data sharing,
fostering global collaboration and enabling multinational research and development efforts.
Harmonised technical standards are essential for interoperability, data exchange, and
compatibility of AI-systems across different regions. However, there are gaps in international
harmonisation of standards for AI in healthcare. These gaps include variations in data
formats, interoperability protocols, and performance evaluation methodologies.
Collaborative efforts are needed to develop and adopt common technical standards, ensuring
seamless integration and interoperability of AI technologies in healthcare.
Artificial Intelligence technologies must consider cultural and societal differences when
deployed in healthcare settings. There are gaps in understanding and addressing these
differences in international harmonisation efforts. AI algorithms that perform well in one
cultural or ethnic group may not generalise to other populations. Harmonisation should
consider the cultural, linguistic, and societal factors that impact the development,
deployment, and acceptance of AI.
Healthcare resource availability and infrastructure can vary across countries. Gaps in
international harmonisation include addressing the disparities in access to AI technologies,
expertise, and infrastructure. Efforts should be made to ensure that harmonisation efforts
consider the unique resource constraints and capabilities of different regions, fostering
equitable access to AI-driven healthcare solutions.
Intellectual property rights and patent laws vary across jurisdictions, posing challenges to
international harmonisation. These gaps can hinder the global deployment of AI
technologies. Harmonisation efforts should include addressing intellectual property
challenges, facilitating fair and transparent sharing of AI innovations, and promoting
collaboration while protecting intellectual property rights.
6.4 CONCLUSIONS AND RECOMMENDATIONS
A shared European vision on standardisation development for AI-systems in health and
social care is warranted. This vision should inspire a strategy based on the values and
guiding principles discussed in this report:
• AI-systems in health and social care should primarily be human and person-centred.
The primary focus should always be on improving patient outcomes and enhancing the
quality of care. AI-systems should be designed and implemented according to the
needs and preferences of patients.
• AI-systems should meet the highest standards of safety and efficacy. Rigorous testing
and validation processes, guided by clear and coherent standards, should be in place
to ensure that these systems perform as intended.
• It is essential that healthcare providers and patients understand how AI-systems make
decisions. Efforts should be made to make these systems as transparent and
explainable as possible with the support of standards and guidelines.
• AI-systems should be designed and used in a way that respects fundamental ethical
principles, including autonomy, beneficence, non-maleficence, and justice.
• Given the sensitive nature of health and social care data, robust measures should be in
place to protect data privacy and ensure security.
• Artificial Intelligence techniques and solutions are rapidly evolving. It is important to
foster a culture of continuous learning and improvement, where feedback is actively
sought and used to enhance AI-systems.
For the development strategy and a shared European vision on standardisation, the
engagement and involvement of public and private representatives across the technology,
medical, life sciences, health, and social care sector is needed.
This should include representatives from patient organisations, healthcare, public health
organisations, academia, industry, NGOs, civil society organisations, payers, investors,
regulators, and policymakers as well as National Standards Bodies and other relevant committees.
This community should work together on:
• The development and implementation of a concrete action plan with strategic goals and
related objectives in line with an overall vision.
• The integration of existing innovation-ecosystems and Living Labs and the
establishment of new ones to create a European-wide network for development,
testing, validation, and application of AI-based solutions in healthcare and related
standardisation activities.
• Strengthening the coordination and governance of such a network to better align
standardisation activities from local to central European level.
• Capacity building, knowledge sharing, and continuous learning, as well as financial
support.
• Implementation, consolidation, and up-scaling of best practices.
• Filling healthcare-specific standardisation gaps as actions of interest.
• The development of new standards and adoption of existing standards as well as
methods, tools, and frameworks that improve the process of standardisation and
harmonisation.
ANNEX: GLOSSARY
Accountability - this involves establishing mechanisms for accountability in the use of AI
algorithms, including defining responsibilities, roles, and processes for addressing potential
errors, biases, or adverse effects.
Accuracy - refers to the correctness and precision of the AI system's outputs compared to
ground truth or desired outcomes. It measures the system's ability to make correct
predictions, classifications, or decisions. Accuracy requirements define the desired level of
correctness and guide the evaluation, validation, and continuous improvement of the AI
system's performance.
Auditing - this includes establishing mechanisms to track and monitor access to patient data
and conducting regular privacy audits.
Bias and Fairness - this includes addressing potential biases in AI algorithms, ensuring
fairness in the representation and treatment of diverse patient populations, and avoiding
discrimination or unfair treatment based on sensitive attributes.
Consent and User Control - this involves obtaining informed consent from patients for the
collection and use of their data and providing them with options to control and manage their
data.
Data Access and Sharing - this involves defining access controls and mechanisms to
ensure that patient data is only accessed and shared with authorised individuals or entities.
Data Minimisation - this involves minimising the collection and storage of patient data to
limit privacy risks.
Data Privacy and Security - ensuring the protection of patient data, maintaining
confidentiality, and implementing appropriate security measures are essential considerations
when designing AI-enabled medical devices.
Data Quality - this includes ensuring the accuracy, completeness, and reliability of the data
used to train and operate the AI algorithms.
Data Retention and Disposal - this includes defining appropriate retention periods for
patient data and ensuring secure disposal methods when the data is no longer needed.
Data Security - this includes implementing appropriate cybersecurity measures to safeguard
patient data.
Documentation - refers to the process of creating and maintaining comprehensive and
accurate records of the AI system, including its design, development, and operational
aspects. This documentation should include information on the system's architecture,
algorithms used, data sources, data pre-processing techniques, model training processes,
and any other relevant details.
Effectiveness - refers to the ability of an AI system to achieve its intended goals and
objectives in real-world scenarios and under varying conditions. It encompasses the system's
ability to generalise well to unseen data, handle edge cases, adapt to changing environments
and produce reliable and meaningful results. Effectiveness requirements ensure that the AI
system performs well in practical applications and meets user expectations.
Efficiency - pertains to the optimal utilisation of resources, including computational power,
memory, and energy consumption, to accomplish the desired tasks. Efficient AI-systems aim
to minimise resource usage while maintaining or improving performance. Efficiency
requirements drive the development of algorithms and architectures that can deliver highquality results with minimal resource requirements.
Ethical Considerations - this includes considering ethical principles such as beneficence,
non-maleficence, autonomy and justice in the development and deployment of AI algorithms.
Ethical Decision-Making - the use of ethical frameworks and guidelines to guide decision-making in the design and use of robotics and automation systems. It encourages considering
the ethical implications of AI-enabled medical devices and aligning their design and operation
with ethical principles, professional standards, and legal requirements.
Human-Centredness - promotes designing devices that prioritise the well-being and safety
of patients and healthcare providers.
Human Oversight - this involves maintaining human control and involvement in critical
decision-making processes, particularly in situations where the device's outputs may have
significant implications for patient health and well-being.
Performance - Performance refers to the ability of an AI system to achieve its intended
objectives and tasks accurately and efficiently. It encompasses several factors, such as the
system's accuracy, response time, throughput, computational efficiency, and resource
utilisation. Performance requirements set benchmarks for the system's expected behaviour
and guide the evaluation and optimisation of the AI system.
Privacy - this includes ensuring the confidentiality and secure handling of patient data
collected and processed by AI algorithms.
Privacy by Design - this includes considering privacy requirements from the initial stages of
design and implementing measures to protect patient data privacy.
Quality - refers to the overall excellence and reliability of the system's performance and
outputs. It includes factors such as accuracy, reliability, robustness, interpretability, fairness,
and safety. AI quality requirements address the system's ability to produce trustworthy
results, mitigate biases, manage uncertainties, and adhere to ethical and legal
considerations.
Reproducibility - refers to the ability to recreate and verify the results of an AI system or
experiment. It involves providing sufficient information and resources to enable others to
independently reproduce the findings or outcomes.
Robustness - refers to the ability of an AI system to perform well and maintain its
performance in the presence of perturbations, noise, or variations in the input data or
operating conditions. Robust AI-systems are resilient to changes, uncertainties, and
adversarial attacks, ensuring consistent performance across different scenarios and data
distributions. Robustness requirements aim to enhance the system's stability,
generalisability, and resilience.
Safety - refers to the safety of robotics and automation systems, which is particularly relevant for AI-enabled medical devices. It addresses potential risks and encourages the implementation of
safety measures to mitigate harm to patients and healthcare providers.
Stakeholder Engagement - this involves engaging healthcare providers, patients, and other
relevant stakeholders to understand their perspectives, needs, and concerns regarding the
use of AI in medical settings.
Transparency and Explainability - this includes designing AI algorithms in a way that their
outputs and decision-making processes are understandable and explainable to healthcare
professionals and patients.
LITERATURE
ABRAMOFF, M. D., ROEHRENBECK, C., TRUJILLO, S., GOLDSTEIN, J., GRAVES, A. S., REPKA, M. X.
& SILVA III, E. Z. 2022. A reimbursement framework for artificial intelligence in healthcare. NPJ
Digit Med, 5, 72.
AGHA, L. 2014. The effects of health information technology on the costs and quality of medical care.
Journal of Health Economics, 34, 19-30.
AHMED, N., WAHED, M. & THOMPSON, N. C. 2023. The growing influence of industry in AI research.
Science, 379, 884-886.
ALEXANDER, A., MCGILL, M., TARASOVA, A., FERREIRA, C. & ZURKIYA, D. 2019. Scanning the
Future of Medical Imaging. J Am Coll Radiol, 16, 501-507.
ALHARBI, H. 2023. Identifying Thematics in a Brain-Computer Interface Research. Comput Intell Neurosci,
2023, 2793211.
AUNG, Y. Y. M., WONG, D. C. S. & TING, D. S. W. 2021. The promise of artificial intelligence: a review of
the opportunities and challenges of artificial intelligence in healthcare. Br Med Bull, 139, 4-15.
BAARTMANS, M. C. 2024. Patient Safety and Medical Devices - Interacting contributing factors leading to
unintended patient harm. Vrije Universiteit
BADNJEVIC, A., DEUMIC, A., SOFTIC, A. & POKVIC, L. G. 2023. A novel method for conformity
assessment testing of patient monitors for post-market surveillance purposes. Technol Health Care,
31, 327-337.
BAJGAIN, B., LORENZETTI, D., LEE, J. & SAURO, K. 2023. Determinants of implementing artificial
intelligence-based clinical decision support tools in healthcare: a scoping review protocol. BMJ
Open, 13, e068373.
BALAJI, P. G. & SRINIVASAN, D. 2010. An Introduction to Multi-Agent Systems. In: SRINIVASAN, D. &
JAIN, L. C. (eds.) Innovations in Multi-Agent Systems and Applications - 1. Berlin, Heidelberg:
Springer Berlin Heidelberg.
BATINI, C. & SCANNAPIECA, M. 2006. Data Quality: Concepts, Methodologies and Techniques.
BAXTER, S., JOHNSON, M., CHAMBERS, D., SUTTON, A., GOYDER, E. & BOOTH, A. 2018. The effects
of integrated care: a systematic review of UK and international evidence. BMC Health Serv Res, 18,
350.
BEAUCHEMIN, M., COHN, E. & SHELTON, R. C. 2019. Implementation of Clinical Practice Guidelines in
the Health Care Setting: A Concept Analysis. ANS Adv Nurs Sci.
BÉJEAN, M., PICARD, R. & BRÉDA, G. 2021. Living Labs, innovation collaborative et écosystèmes : le
cas de l’initiative « Concept Maturity Levels » dans les Medtech. Innovations, 65, 81-110.
BEN-TOVIM, D. I., DOUGHERTY, M. L., O'CONNELL, T. J. & MCGRATH, K. M. 2008. Patient journeys:
the process of clinical redesign. Med J Aust, 188, S14-7.
BENJAMENS, S., DHUNNOO, P. & MESKO, B. 2020. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med, 3, 118.
BESSARIYA, R. 2022. AI Living Laboratory to Experiment Use Cases at the Workplace: AI Living Lab
Report. GPAI 2022. Tokyo: Global Partnership on AI.
BEZEMER, T., DE GROOT, M. C., BLASSE, E., TEN BERG, M. J., KAPPEN, T. H., BREDENOORD, A.
L., VAN SOLINGE, W. W., HOEFER, I. E. & HAITJEMA, S. 2019. A Human(e) Factor in Clinical
Decision Support Systems. J Med Internet Res, 21, e11732.
BISHOP, C. M. 2016. Pattern Recognition and Machine Learning, New York, NY.
BLIND, K., POHLISCH, J. & RAINVILLE, A. 2020. Innovation and standardization as drivers of companies’
success in public procurement: an empirical analysis. The Journal of Technology Transfer, 45, 664-693.
BOHR, A. & MEMARZADEH, K. 2020. - The rise of artificial intelligence in healthcare applications. Artificial
Intelligence in Healthcare.
BOSMANS, H., ZANCA, F. & GELAUDE, F. 2021. Procurement, commissioning and QA of AI based
solutions: An MPE's perspective on introducing AI in clinical practice. Phys Med, 83, 257-263.
BROŻEK, B., FURMAN, M., JAKUBIEC, M. & KUCHARZYK, B. 2024. The black box problem revisited.
Real and imaginary challenges for automated legal decision making. Artificial Intelligence and Law,
32, 427-440.
CAROLAN, J. E., MCGONIGLE, J., DENNIS, A., LORGELLY, P. & BANERJEE, A. 2022. Technology-Enabled, Evidence-Driven, and Patient-Centered: The Way Forward for Regulating Software as a
Medical Device. JMIR Med Inform, 10, e34038.
CEKADA, T. & STEINLECHNER, P. 2021. Artificial intelligence and medical devices in the EU: how the
new regulations are changing the game. Journal of Medical Device Regulation, 18, 6-11.
CELI, L. A., CELLINI, J., CHARPIGNON, M. L., DEE, E. C., DERNONCOURT, F., EBER, R., MITCHELL,
W. G., MOUKHEIBER, L., SCHIRMER, J., SITU, J., PAGUIO, J., PARK, J., WAWIRA, J. G., YAO,
S. & FOR, M. I. T. C. D. 2022. Sources of bias in artificial intelligence that perpetuate healthcare
disparities-A global review. PLOS Digit Health, 1, e0000022.
CHADWICK, P. E. 2007. Regulations and Standards for Wireless applications in eHealth. 2007 29th
Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 6170-6173.
CHEN, D., LIU, K., GUO, J., BI, L. & XIANG, J. 2023. Editorial: Brain-computer interface and its
applications. Front Neurorobot, 17, 1140508.
CHEN, Y., ATTRI, P., BARAHONA, J., HERNANDEZ, M., CARPENTER, D., BOZKURT, A. & LOBATON,
E. 2022. Robust Cough Detection with Out-of-Distribution Detection.
CHICOT, J. & MATT, M. 2018. Public procurement of innovation: a review of rationales, designs, and
contributions to grand challenges. Science and Public Policy, 45, 480-492.
CHOMUTARE, T., TEJEDOR, M., SVENNING, T. O., MARCO-RUIZ, L., TAYEFI, M., LIND, K.,
GODTLIEBSEN, F., MOEN, A., ISMAIL, L., MAKHLYSHEVA, A. & NGO, P. D. 2022. Artificial
Intelligence Implementation in Healthcare: A Theory-Based Scoping Review of Barriers and
Facilitators. Int J Environ Res Public Health, 19.
CINGOLANI, M., SCENDONI, R., FEDELI, P. & CEMBRANI, F. 2022. Artificial intelligence and digital
medicine for integrated home care services in Italy: Opportunities and limits. Front Public Health,
10, 1095001.
COLLINS, G. S. & MOONS, K. G. M. 2019. Reporting of artificial intelligence prediction models. Lancet,
393, 1577-1579.
COLNAR, S., PENGER, S., GRAH, B. & DIMOVSKI, V. 2020. Digital transformation of integrated care:
Literature review and research agenda. IFAC-PapersOnLine, 53, 16890-16895.
CONRAD, D. A. 2015. The Theory of Value‐Based Payment Incentives and Their Application to Health
Care. Health Services Research, 50, 2057 - 2089.
COURIVAUD, F. & SUYUTHI, A. 2022. AI Medical Device Software: considerations regarding automation
bias and performance conformity assessment in a changing regulatory landscape. Hovik, Norway:
DNV Group Research and Development.
CRESSWELL, K. & SHEIKH, A. 2013. Organisational issues in the implementation and adoption of health
information technology innovations: An interpretative review. International Journal of Medical
Informatics, 82, e73-e86.
DAL MAS, F., MASSARO, M., RIPPA, P. & SECUNDO, G. 2023. The challenges of digital transformation
in healthcare: An interdisciplinary literature review, framework, and future research agenda.
Technovation, 123, 102716.
DANDOY, X. & COPIN, D. 2016. [Telemedicine in hospital at home care]. Soins; la revue de reference
infirmiere, 61 810, 48-50.
DAVENPORT, T. & KALAKOTA, R. 2019. The potential for artificial intelligence in healthcare. Future
Healthc J, 6, 94-98.
DE BRUIN, S. R., BAAN, C. A. & STRUIJS, J. N. 2011. Pay-for-performance in disease management: a
systematic review of the literature. BMC Health Services Research, 11, 272 - 272.
DE HOND, A. A. H., LEEUWENBERG, A. M., HOOFT, L., KANT, I. M. J., NIJMAN, S. W. J., VAN OS, H.
J. A., AARDOOM, J. J., DEBRAY, T. P. A., SCHUIT, E., VAN SMEDEN, M., REITSMA, J. B.,
STEYERBERG, E. W., CHAVANNES, N. H. & MOONS, K. G. M. 2022. Guidelines and quality
criteria for artificial intelligence-based prediction models in healthcare: a scoping review. NPJ Digit
Med, 5, 2.
DE ROSIS, S. & NUTI, S. 2018. Public strategies for improving eHealth integration and long-term
sustainability in public health care systems: Findings from an Italian case study. The International
Journal of Health Planning and Management, 33, e131-e152.
DESMEDT, M., VERTRIEST, S., HELLINGS, J., BERGS, J., DESSERS, E., VANKRUNKELSVEN, P.,
VRIJHOEF, H., ANNEMANS, L., VERHAEGHE, N., PETROVIC, M. & VANDIJCK, D. 2016.
Economic Impact of Integrated Care Models for Patients with Chronic Diseases: A Systematic
Review. Value Health, 19.
DOLATKHAH LAEIN, G. 2024. Global perspectives on governing healthcare AI: prioritising safety, equity
and collaboration. BMJ Leader, leader-2023-000904.
DONABEDIAN, A. 1988. The quality of care. How can it be assessed? Jama, 260, 1743-8.
DONOVAN, T., ABELL, B., FERNANDO, M., MCPHAIL, S. M. & CARTER, H. E. 2023. Implementation
costs of hospital-based computerised decision support systems: a systematic review.
Implementation Science, 18, 7.
DUBEY, A. & TIWARI, A. 2023. Artificial intelligence and remote patient monitoring in US healthcare
market: a literature review. J Mark Access Health Policy, 11, 2205618.
DUFFOURC, M. N. & GIOVANNIELLO, D. S. 2024. The Autonomous AI Physician: Medical Ethics and
Legal Liability. In: SOUSA ANTUNES, H., FREITAS, P. M., OLIVEIRA, A. L., MARTINS PEREIRA,
C., VAZ DE SEQUEIRA, E. & BARRETO XAVIER, L. (eds.) Multidisciplinary Perspectives on
Artificial Intelligence and the Law. Cham: Springer International Publishing.
DURAN, J. M. & JONGSMA, K. R. 2021. Who is afraid of black box algorithms? On the epistemological
and ethical basis of trust in medical AI. J Med Ethics.
DURLACH, P., FOURNIER, R., GOTTLICH, J., MARKWELL, T., MCMANUS, J., MERRILL, A. & RHEW,
D. 2024. The AI Maturity Roadmap: A Framework for Effective and Sustainable AI in Health Care.
NEJM AI.
DYE, C. 2022. One Health as a catalyst for sustainable development. Nature Microbiology, 7, 467 - 468.
ECA 2024. EU Artificial intelligence ambition - Stronger governance and increased, more focused
investment essential going forward. Luxemburg: EUROPEAN COURT OF AUDITORS
EIJKENAAR, F. 2011. Key issues in the design of pay for performance programs. The European Journal
of Health Economics, 14, 117 - 131.
FAN, Y. & LIU, X. 2022. Exploring the role of AI algorithmic agents: The impact of algorithmic decision
autonomy on consumer purchase decisions. Front Psychol, 13, 1009173.
FENG, J., EMERSON, S. & SIMON, N. 2021a. Approval policies for modifications to machine learning-based software as a medical device: A study of bio-creep. Biometrics, 77, 31-44.
FENG, J., EMERSON, S. & SIMON, N. 2021b. Rejoinder to Discussions on "Approval policies for
modifications to machine learning-based software as a medical device: A study of bio-creep".
Biometrics, 77, 52-53.
FLEISZER, A. R., SEMENIC, S. E., RITCHIE, J. A., RICHER, M.-C. & DENIS, J.-L. 2015. The
sustainability of healthcare innovations: a concept analysis. Journal of Advanced Nursing, 71, 1484-1498.
FOLMER, E. 2012. Quality of semantic Standards. PhD, Quality of semantic Standards.
FOLMER, E., KRUKKERT, D., OUDE LUTTIGHUIS, P. & VAN HILLEGERSBERG, J. 2010. Requirements
for a quality measurement instrument for semantic standards. 5th EURAS Annual Standardisation
Conference. Lausanne.
FRASER, A. G., BIASIN, E., BIJNENS, B., BRUINING, N., CAIANI, E. G., COBBAERT, K., DAVIES, R. H.,
GILBERT, S. H., HOVESTADT, L., KAMENJASEVIC, E., KWADE, Z., MCGAURAN, G.,
O'CONNOR, G., VASEY, B. & RADEMAKERS, F. E. 2023. Artificial intelligence in medical device
software and high-risk medical devices - a review of definitions, expert recommendations and
regulatory initiatives. Expert Rev Med Devices, 1-25.
FRISCH, N. C. & RABINOWITSCH, D. 2019. What's in a Definition? Holistic Nursing, Integrative Health
Care, and Integrative Nursing: Report of an Integrated Literature Review. J Holist Nurs, 37, 260-272.
FU, K. & BLUM, J. 2014. Controlling for cybersecurity risks of medical device software. Biomed Instrum
Technol, Suppl, 38-41.
GAILLARD, G. & RUSSINOFF, I. 2023. Hospital at home: A change in the course of care. Journal of the
American Association of Nurse Practitioners, 35, 179-182.
GAO, B. & HUANG, L. 2019. Understanding interactive user behavior in smart media content service: An
integration of TAM and smart service belief factors. Heliyon, 5, e02983.
GENOVESE, S., BENGOA, R., BOWIS, J., HARNEY, M., HAUCK, B., PINGET, M., LEERS, M.,
STENVALL, T. & GULDEMOND, N. 2022. The European Health Data Space: a step towards digital
and integrated care systems. Journal of Integrated Care, 30, 363-372.
GERKE, S., BABIC, B., EVGENIOU, T. & COHEN, I. G. 2020. The need for a system view to regulate
artificial intelligence/machine learning-based software as medical device. NPJ Digit Med, 3, 53.
GILBERT, S., FENECH, M., HIRSCH, M., UPADHYAY, S., BIASIUCCI, A. & STARLINGER, J. 2021.
Algorithm Change Protocols in the Regulation of Adaptive Machine Learning-Based Medical
Devices. J Med Internet Res, 23, e30545.
GIORDANO, N., ROSATI, S., KNAFLITZ, M. & BALESTRA, G. 2022. Key Aspects to Teach Medical
Device Software Certification. Stud Health Technol Inform, 298, 159-160.
GODDARD, K., ROUDSARI, A. & WYATT, J. C. 2011. Automation bias: a systematic review of frequency,
effect mediators, and mitigators. Journal of the American Medical Informatics Association, 19, 121-127.
GRAY, C. S., GAGNON, D., GULDEMOND, N. & KENEALY, T. 2021. Digital Health Enabling Integrated
Care. In: KAEHNE, A. & NIES, H. (eds.) How to Deliver Integrated Care. Emerald Publishing
Limited.
GULDEMOND, N. 2024. What is meant by ‘integrated personalized diabetes management’: A view into the
future and what success should look like. Diabetes, 26, 14-29.
GULDEMOND, N. A. 2010. Medical Field Lab [Online]. Maastricht: Maastricht University Medical Centre.
Available: www.medicalfieldlab.nl [Accessed 2012 2010].
GULDEMOND, N. A. 2011. Position paper TU Delft on Co-creation and e-Health for ‘Active and Healthy
Ageing’. In: DELFT UNIVERSITY OF TECHNOLOGY (ed.) www.tudelft.nl. Delft: Delft University of Technology.
GULDEMOND, N. A. 2013. Europe - European Commission putting patients in the driving seat: a digital
future for healthcare. International Journal of Health Care Quality Assurance, 26.
HABERS, M. & OVERDIEK, A. 2022. Towards a living lab for responsible applied AI. In: LOCKTON, D.,
LENZI, S., HEKKERT, P., OAK, A., SÁDABA, J. & LLOYD, P. (eds.) DRS2022. Bilbao: Design
Research Society.
HASELAGER, P., SCHRAFFENBERGER, H., THILL, S., FISCHER, S., LANILLOS, P., VAN DE GROES,
S. & VAN HOOFF, M. 2023. Reflection Machines: Supporting Effective Human Oversight Over
Medical Decision Support Systems. Camb Q Healthc Ethics, 1-10.
HASTIE, T., TIBSHIRANI, R. & FRIEDMAN, J. 2009. The Elements of Statistical Learning: Data Mining,
Inference, and Prediction, Springer.
HAZARIKA, I. 2020. Artificial intelligence: opportunities and implications for the health workforce. Int
Health, 12, 241-245.
HEATH, I. 2013. Overdiagnosis: when good intentions meet vested interests--an essay by Iona Heath.
BMJ, 347, f6361.
HERNANDEZ-BOUSSARD, T., BOZKURT, S., IOANNIDIS, J. P. A. & SHAH, N. H. 2020. MINIMAR
(MINimum Information for Medical AI Reporting): Developing reporting standards for artificial
intelligence in health care. J Am Med Inform Assoc, 27, 2011-2015.
HIJAZI, R. R. & SUBHAN, A. 2020. Chapter 35 - Maintenance and repair of medical devices. In: IADANZA,
E. (ed.) Clinical Engineering Handbook (Second Edition). Academic Press.
HIROSHIMA, G. G. H. T. F. E. A. S. J. U.-T. A. J. 2023. Promote global solidarity to advance health-system resilience: proposals for the G7 meetings in Japan. Lancet, 401, 1319-1321.
HOSPITAL-AT-HOME 2021. Hospital-at-home oncology care may save costs and resources in the USA. PharmacoEconomics & Outcomes News, 879, 16.
HU, H., SU, J. & MA, J. 2022. Editorial: Smart Hospital Innovation: Technology, Service, and Policy. Front
Public Health, 10, 845577.
IQBAL, M. S., ABD-ALRAZAQ, A. & HOUSEH, M. 2022. Artificial Intelligence Solutions to Detect Fraud in
Healthcare Settings: A Scoping Review. Stud Health Technol Inform, 295, 20-23.
IRMA, K., SHANNON, R., ANDREW, B., CAMILA MICAELA, E.-L., ISOLDE, S., GERALD, G., DECLAN,
D. & SIW, W. 2023. Rapid reviews methods series: Guidance on literature search. BMJ Evidence-Based Medicine, bmjebm-2022-112079.
JIANG, F., JIANG, Y., ZHI, H., DONG, Y., LI, H., MA, S., WANG, Y., DONG, Q., SHEN, H. & WANG, Y.
2017. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol, 2, 230-243.
JOPLIN-GONZALES, P. & ROUNDS, L. 2022. The Essential Elements of the Clinical Reasoning Process.
Nurse Educ, 47, E145-E149.
KELLY, C. J., KARTHIKESALINGAM, A., SULEYMAN, M., CORRADO, G. & KING, D. 2019. Key
challenges for delivering clinical impact with artificial intelligence. BMC Medicine, 17, 195.
KHORASHAHI, S. & AGOSTINO, M. 2023. Strategic lifecycle approach to medical device regulation.
Regulatory Affairs Professionals Society.
KIM, J., KIM, Y. L., JANG, H., CHO, M., LEE, M., KIM, J. & LEE, H. 2020. Living labs for health: an
integrative literature review. Eur J Public Health, 30, 55-63.
KISELEVA, A., KOTZINOS, D. & DE HERT, P. 2022. Transparency of AI in Healthcare as a Multilayered
System of Accountabilities: Between Legal Requirements and Technical Limitations. Frontiers in
Artificial Intelligence, 5.
KLEINBERG, J., LUDWIG, J., MULLAINATHAN, S. & RAMBACHAN, A. 2018. Algorithmic Fairness. AEA
Papers and Proceedings, 108, 22-27.
KLUMPP, M., HINTZE, M., IMMONEN, M., RÓDENAS-RIGLA, F., PILATI, F., APARICIO-MARTÍNEZ, F.,
ÇELEBI, D., LIEBIG, T., JIRSTRAND, M., URBANN, O., HEDMAN, M., LIPPONEN, J. A.,
BICCIATO, S., RADAN, A. P., VALDIVIESO, B., THRONICKE, W., GUNOPULOS, D. &
DELGADO-GONZALO, R. 2021. Artificial Intelligence for Hospital Health Care: Application Cases
and Answers to Challenges in European Hospitals. Healthcare, 9.
KWON, J. M., LEE, Y., LEE, Y., LEE, S. & PARK, J. 2018. An Algorithm Based on Deep Learning for
Predicting In-Hospital Cardiac Arrest. J Am Heart Assoc, 7.
LACHMAN, P., BATALDEN, P. & VANHAECHT, K. 2020. A multidimensional quality model: an opportunity
for patients, their kin, healthcare providers and professionals to coproduce health. F1000Res, 9,
1140.
LAWAL, A. K., ROTTER, T., KINSMAN, L., MACHOTTA, A., RONELLENFITSCH, U., SCOTT, S. D.,
GOODRIDGE, D., PLISHKA, C. & GROOT, G. 2016. What is a clinical pathway? Refinement of an
operational definition to identify clinical pathway studies for a Cochrane systematic review. BMC
Med, 14, 35.
LEBCIR, R., HILL, T., ATUN, R. & CUBRIC, M. 2021. Stakeholders' views on the organisational factors
affecting application of artificial intelligence in healthcare: a scoping review protocol. BMJ Open, 11,
e044074.
LI, Y., LIANG, S., ZHU, B., LIU, X., LI, J., CHEN, D., QIN, J. & BRESSINGTON, D. 2023. Feasibility and
effectiveness of artificial intelligence-driven conversational agents in healthcare interventions: A
systematic review of randomized controlled trials. Int J Nurs Stud, 143, 104494.
LIAO, J. M., NAVATHE, A. S. & PRESS, M. J. 2018. Hospital-at-Home Care Programs-Is the Hospital of
the Future at Home? JAMA Internal Medicine, 178, 1040-1041.
LIN, A., GIULIANO, C. J., PALLADINO, A., JOHN, K. M., ABRAMOWICZ, C., YUAN, M. L., SAUSVILLE,
E. L., LUKOW, D. A., LIU, L., CHAIT, A. R., GALLUZZO, Z. C., TUCKER, C. & SHELTZER, J. M.
2019. Off-target toxicity is a common mechanism of action of cancer drugs undergoing clinical
trials. Science Translational Medicine, 11, eaaw8412.
LIU, X., RIVERA, S. C., FAES, L., FERRANTE DI RUFFANO, L., YAU, C., KEANE, P. A., ASHRAFIAN,
H., DARZI, A., VOLLMER, S. J., DEEKS, J., BACHMANN, L., HOLMES, C., CHAN, A. W.,
MOHER, D., CALVERT, M. J., DENNISTON, A. K. & THE CONSORT-AI AND SPIRIT-AI STEERING GROUP 2019. Reporting
guidelines for clinical trials evaluating artificial intelligence interventions are needed. Nature
Medicine, 25, 1467-1468.
LOH, H. W., OOI, C. P., SEONI, S., BARUA, P. D., MOLINARI, F. & ACHARYA, U. R. 2022. Application of
explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022).
Comput Methods Programs Biomed, 226, 107161.
LUO, J., SOLIMINI, N. L. & ELLEDGE, S. J. 2009. Principles of cancer therapy: oncogene and non-oncogene addiction. Cell, 136, 823-37.
LUYENDIJK, M., VISSER, O., BLOMMESTEIN, H. M., DE HINGH, I., HOEBERS, F. J. P., JAGER, A.,
SONKE, G. S., DE VRIES, E. G. E., UYL-DE GROOT, C. A. & SIESLING, S. 2023. Changes in
survival in de novo metastatic cancer in an era of new medicines. J Natl Cancer Inst.
MA, B., YANG, J., WONG, F. K. Y., WONG, A. K. C., MA, T., MENG, J., ZHAO, Y., WANG, Y. & LU, Q.
2023. Artificial intelligence in elderly healthcare: A scoping review. Ageing Res Rev, 83, 101808.
MAHADEVAIAH, G., RV, P., BERMEJO, I., JAFFRAY, D., DEKKER, A. & WEE, L. 2020. Artificial
intelligence-based clinical decision support in modern medical physics: Selection, acceptance,
commissioning, and quality assurance. Medical Physics, 47, e228-e235.
MANTOVANI, A., LEOPALDI, C., NIGHSWANDER, C. M. & DI BIDINO, R. 2023. Access and
reimbursement pathways for digital health solutions and in vitro diagnostic devices: Current
scenario and challenges. Front Med Technol, 5, 1101476.
MARCUS, G. 2022. AI platforms like ChatGPT are easy to use but also potentially dangerous. Scientific
American, 19.
MARKUS, A. F., KORS, J. A. & RIJNBEEK, P. R. 2021. The role of explainability in creating trustworthy
artificial intelligence for health care: A comprehensive survey of the terminology, design choices,
and evaluation strategies. Journal of Biomedical Informatics, 113.
MCCUE, M. E. & MCCOY, A. M. 2017. The Scope of Big Data in One Medicine: Unprecedented
Opportunities and Challenges. Front Vet Sci, 4, 194.
MENSER, T. & MCALEARNEY, A. S. 2018. Value-Based Payment Models. In: DAALEMAN, T. P. &
HELTON, M. R. (eds.) Chronic Illness Care: Principles and Practice. Cham: Springer International
Publishing.
MESKÓ, B. & TOPOL, E. J. 2023. The imperative for regulatory oversight of large language models (or
generative AI) in healthcare. npj Digital Medicine, 6, 120.
MITTELMAN, M., MARKHAM, S. & TAYLOR, M. 2018. Patient commentary: Stop hyping artificial
intelligence-patients will always need human doctors. BMJ, 363, k4669.
MKWASHI, A. & BRASS, I. 2022. The Future of Medical Device Regulation and Standards: Dealing with
Critical Challenges for Connected, Intelligent Medical Devices. SSRN Electronic Journal.
MONTEITH, S., GLENN, T., GEDDES, J. R., WHYBROW, P. C., ACHTYES, E. & BAUER, M. 2024.
Artificial intelligence and increasing misinformation. The British Journal of Psychiatry, 224, 33-35.
MOORE, J., STUART, S., MCMEEKIN, P., WALKER, R., CELIK, Y., POINTON, M. & GODFREY, A. 2023.
Enhancing Free-Living Fall Risk Assessment: Contextualizing Mobility Based IMU Data. Sensors
(Basel), 23.
MORROW, E., ZIDARU, T., ROSS, F., MASON, C., PATEL, K. D., REAM, M. & STOCKLEY, R. 2022.
Artificial intelligence technologies and compassion in healthcare: A systematic scoping review.
Front Psychol, 13, 971044.
MOYNIHAN, R., GLASZIOU, P., WOLOSHIN, S., SCHWARTZ, L., SANTA, J. & GODLEE, F. 2013.
Winding back the harms of too much medicine. BMJ, 346, f1271.
MUEHLEMATTER, U. J., DANIORE, P. & VOKINGER, K. N. 2021. Approval of artificial intelligence and
machine learning-based medical devices in the USA and Europe (2015–20): a comparative
analysis. The Lancet Digital Health, 3, e195-e203.
NADJ, M., MAEDCHE, A. & SCHIEDER, C. 2020. The effect of interactive analytical dashboard features
on situation awareness and task performance. Decis Support Syst, 135, 113322.
NAQA, I. E., HAIDER, M. A., GIGER, M. L. & HAKEN, R. K. T. 2020. Artificial Intelligence: reshaping the
practice of radiological sciences in the 21st century. The British Journal of Radiology, 93,
20190855.
NATIVI, S. & DE NIGRIS, S. 2021. AI Standardisation Landscape: state of play and link to the EC proposal
for an AI regulatory framework. Luxembourg: Joint Research Centre.
NAVEED, H., KHAN, A. U., QIU, S., SAQIB, M., ANWAR, S., USMAN, M., BARNES, N. & MIAN, A. S.
2023. A Comprehensive Overview of Large Language Models. ArXiv, abs/2307.06435.
NEPRASH, H. T., MCGLAVE, C. C., CROSS, D. A., VIRNIG, B. A., PUSKARICH, M. A., HULING, J. D.,
ROZENSHTEIN, A. Z. & NIKPAY, S. S. 2022. Trends in Ransomware Attacks on US Hospitals,
Clinics, and Other Health Care Delivery Organizations, 2016-2021. JAMA Health Forum, 3,
e224873-e224873.
ONG, J., PARCHMENT, V. & ZHENG, X. 2018. Effective regulation of digital health technologies. Journal
of the Royal Society of Medicine, 111, 439-443.
ORJI, R. & MOFFATT, K. 2018. Persuasive technology for health and wellness: State-of-the-art and
emerging trends. Health Informatics J, 24, 66-91.
PENNELLO, G., SAHINER, B., GOSSMANN, A. & PETRICK, N. 2021. Discussion on "Approval policies
for modifications to machine learning-based software as a medical device: A study of bio-creep" by
Jean Feng, Scott Emerson, and Noah Simon. Biometrics, 77, 45-48.
PHATAK, A. A., WIELAND, F. G., VEMPALA, K., VOLKMAR, F. & MEMMERT, D. 2021. Artificial
Intelligence Based Body Sensor Network Framework-Narrative Review: Proposing an End-to-End
Framework using Wearable Sensors, Real-Time Location Systems and Artificial
Intelligence/Machine Learning Algorithms for Data Collection, Data Mining and Knowledge
Discovery in Sports and Healthcare. Sports Med Open, 7, 79.
PONATHIL, A., OZKAN, F., WELCH, B., BERTRAND, J. & CHALIL MADATHIL, K. 2020. Family health
history collected by virtual conversational agents: An empirical study to investigate the efficacy of
this approach. J Genet Couns, 29, 1081-1092.
PORTNEY, L. G. 2020. Foundations of Clinical Research: Applications to Evidence-Based Practice, F. A.
Davis.
PRAKASH, S., BALAJI, J. N., JOSHI, A. & SURAPANENI, K. M. 2022. Ethical Conundrums in the
Application of Artificial Intelligence (AI) in Healthcare-A Scoping Review of Reviews. J Pers Med,
12.
PRICE, W. N. 2018. Big data and black-box medical algorithms. Sci Transl Med, 10.
RAJKOMAR, A., FARRINGTON, K., MAYER, A., WALKER, D. & BLANDFORD, A. 2014. Patients' and
carers' experiences of interacting with home haemodialysis technology: implications for quality and
safety. BMC Nephrol, 15, 195.
RICHARDSON, J. P., CURTIS, S., SMITH, C., PACYNA, J., ZHU, X., BARRY, B. & SHARP, R. R. 2022. A
framework for examining patient attitudes regarding applications of artificial intelligence in
healthcare. Digit Health, 8, 20552076221089084.
ROBINSON, K. A., BRUNNHUBER, K., CILISKA, D., JUHL, C. B., CHRISTENSEN, R. & LUND, H. 2021.
Evidence-Based Research Series-Paper 1: What Evidence-Based Research is and why is it
important? J Clin Epidemiol, 129, 151-157.
ROPER, J., LIN, M. H. & RONG, Y. 2023. Extensive upfront validation and testing are needed prior to the
clinical implementation of AI-based auto-segmentation tools. J Appl Clin Med Phys, 24, e13873.
ROSA, N., LEITE, S., ALVES, J., CARVALHO, A., OLIVEIRA, D., SANTOS, F., MACEDO, B. &
PRAZERES, H. 2024. Knowledge Innovation Ecosystem for the Promotion of User-Centred Health
Innovations: Living Lab Methodology and Lessons Learned Through the Proposal of Standard
Good Practices. bioRxiv, 2024.01.17.573578.
SAELAERT, M., MATHIEU, L., VAN HOOF, W. & DEVLEESSCHAUWER, B. 2023. Expanding citizen
engagement in the secondary use of health data: an opportunity for national health data access
bodies to realise the intentions of the European Health Data Space. Archives of Public Health, 81,
168.
SANDERS, E. & STAPPERS, P. 2008. Co-creation and the new landscapes of design. CoDesign, 4, 5-18.
SCARDONI, A., BALZARINI, F., SIGNORELLI, C., CABITZA, F. & ODONE, A. 2020. Artificial intelligence-based tools to control healthcare associated infections: A systematic review of the literature. J
Infect Public Health, 13, 1061-1077.
SECHOPOULOS, I. & MANN, R. M. 2020. Stand-alone artificial intelligence - The future of breast cancer
screening? The Breast, 49, 254-260.
SECINARO, S., CALANDRA, D., SECINARO, A., MUTHURANGU, V. & BIANCONE, P. 2021. The role of
artificial intelligence in healthcare: a structured literature review. BMC Med Inform Decis Mak, 21,
125.
SHACHAR, C. 2022. Medical and Legal Oversight of Medical Devices: Introduction. In: SHACHAR, C.,
ROBERTSON, C., COHEN, I. G., MINSSEN, T. & PRICE II, W. N. (eds.) The Future of Medical
Device Regulation: Innovation and Protection. Cambridge: Cambridge University Press.
SHAHZAD, R., AYUB, B. & SIDDIQUI, M. A. R. 2022. Quality of reporting of randomised controlled trials of
artificial intelligence in healthcare: a systematic review. BMJ Open, 12, e061519.
SHAKSHUKI, E. M. & REID, M. 2015. Multi-Agent System Applications in Healthcare: Current Technology and Future Roadmap. ANT/SEIT.
SHAO, Z., ZHAO, R., YUAN, S., DING, M. & WANG, Y. 2022. Tracing the evolution of AI in the past
decade and forecasting the emerging trends. Expert Systems with Applications, 209, 118221.
SHARMA, K. & MANCHIKANTI, P. 2024. AI-Based Medical Devices and Regulations: A Cross-Country
Perspective. Artificial Intelligence in Drug Development: Patenting and Regulatory Aspects.
Singapore: Springer Nature Singapore.
SHARMA, L., CHANDRASEKARAN, A., BOYER, K. K. & MCDERMOTT, C. M. 2016. The impact of Health
Information Technology bundles on Hospital performance: An econometric study. Journal of
Operations Management, 41, 25-41.
SHELMERDINE, S. C., ARTHURS, O. J., DENNISTON, A. & SEBIRE, N. J. 2021. Review of study
reporting guidelines for clinical studies using artificial intelligence in healthcare. BMJ Health Care
Inform, 28.
SHEPPERD, S., GONÇALVES-BRADLEY, D. C., STRAUS, S. E. & WEE, B. 2016. Hospital at home:
home-based end-of-life care. The Cochrane database of systematic reviews, 2, CD009231.
SHIN, D. 2021. The effects of explainability and causability on perception, trust, and acceptance:
Implications for explainable AI. International Journal of Human Computer Studies, 146.
SIALA, H. & WANG, Y. 2022. SHIFTing artificial intelligence to be responsible in healthcare: A systematic
review. Soc Sci Med, 296, 114782.
SOHN, E. 2023. The reproducibility issues that haunt health-care AI. Nature, 613, 402-403.
SONI, H., IVANOVA, J., WILCZEWSKI, H., BAILEY, A., ONG, T., NARMA, A., BUNNELL, B. E. &
WELCH, B. M. 2022. Virtual conversational agents versus online forms: Patient experience and
preferences for health data collection. Front Digit Health, 4, 954069.
SOOD, S. K., RAWAT, K. S. & KUMAR, D. 2022. A visual review of artificial intelligence and Industry 4.0 in
healthcare. Comput Electr Eng, 101, 107948.
SPAHN, A. 2012. And lead us (not) into persuasion...? Persuasive technology and the ethics of
communication. Sci Eng Ethics, 18, 633-50.
STÅHL, T. & KOIVUSALO, M. 2020. Health in All Policies: Concept, Purpose, and Implementation. In:
HARING, R., KICKBUSCH, I., GANTEN, D. & MOETI, M. (eds.) Handbook of Global Health. Cham:
Springer International Publishing.
STÅHLBRÖST, A. 2008. Forming Future IT - The Living Lab Way of User Involvement. PhD, Luleå
University of Technology.
SUJAN, M., FURNISS, D., GRUNDY, K., GRUNDY, H., NELSON, D., ELLIOTT, M., WHITE, S., HABLI, I.
& REYNOLDS, N. 2019. Human factors challenges for the safe use of artificial intelligence in
patient care. BMJ Health Care Inform, 26.
SURI, A. 2022. Introduction to AI and Its Use Cases. Practical AI for Healthcare Professionals: Machine
Learning with Numpy, Scikit-learn, and TensorFlow. Berkeley, CA: Apress.
TARLOV, A. R. 1999. Public policy frameworks for improving population health. Socioeconomic status and
health in industrial nations: Social, psychological, and biological pathways. New York, NY, US: New
York Academy of Sciences.
TEKKESIN, A. I. 2019. Artificial Intelligence in Healthcare: Past, Present and Future. Anatol J Cardiol, 22,
8-9.
TEN HAKEN, I., BEN ALLOUCH, S. & VAN HARTEN, W. H. 2018. The use of advanced medical
technologies at home: a systematic review of the literature. BMC Public Health, 18, 284.
TERRY, A. L., KUEPER, J. K., BELENO, R., BROWN, J. B., CEJIC, S., DANG, J., LEGER, D., MCKAY,
S., MEREDITH, L., PINTO, A. D., RYAN, B. L., STEWART, M., ZWARENSTEIN, M. & LIZOTTE, D.
J. 2022. Is primary health care ready for artificial intelligence? What do primary health care
stakeholders say? BMC Med Inform Decis Mak, 22, 237.
THAKUR, C. & GUPTA, S. 2022. Multi-Agent System Applications in Health Care: A Survey.
THIRUNAVUKARASU, A. J., HASSAN, R., MAHMOOD, S., SANGHERA, R., BARZANGI, K., EL
MUKASHFI, M. & SHAH, S. 2023. Trialling a Large Language Model (ChatGPT) in General
Practice With the Applied Knowledge Test: Observational Study Demonstrating Opportunities and
Limitations in Primary Care. JMIR Med Educ, 9, e46599.
THORNTON, N., HARDIE, T., HORTON, T. & GERHOLD, M. 2024. Priorities for an AI in health care
strategy. The Health Foundation.
TYRVÄINEN, P., SILVENNOINEN, M., TALVITIE-LAMBERG, K., ALA-KITULA, A. & KUOREMÄKI, R. 2018. Identifying opportunities for AI applications in healthcare — Renewing the national healthcare and social services. 2018 IEEE 6th International Conference on Serious Games and Applications for Health (SeGAH), 16-18 May 2018. 1-7.
VAN DER MAADEN, T., DE BRUIJN, A. C. P., VONK, R., WEDA, M., KOOPMANSCHAP, M. A. &
GEERTSMA, R. E. 2018. Horizon scan of medical technologies: Technologies with an expected
impact on the organisation and expenditure of healthcare. National Institute for Public Health and
the Environment.
VAN DER WALT, J. S., BUITENDAG, A. A. K., ZAAIMAN, J. J. & JANSEN VAN VUUREN, J. C. 2009.
Community Living Lab as a Collaborative Innovation Environment. Issues in Informing Science and
Information Technology, 6, 421-436.
VAN GEENHUIZEN, M. & GULDEMOND, N. 2018. Living labs in healthcare innovation: critical factors and potential roles of city governments. In: VAN GEENHUIZEN, M., HOLBROOK, J. A. & TAHERI, M. (eds.) Cities and Sustainable Technology Transitions: Leadership, Innovation and Adoption. Edward Elgar Publishing.
VAN NORMAN, G. A. 2016. Drugs, Devices, and the FDA: Part 1: An Overview of Approval Processes for
Drugs. JACC Basic Transl Sci, 1, 170-179.
VELAZQUEZ, G. L. 2021. New Challenges for Ethics: The Social Impact of Posthumanism, Robots, and
Artificial Intelligence. J Healthc Eng, 2021, 5593467.
VELLIDO, A. 2019. The importance of interpretability and visualization in machine learning for applications
in medicine and health care. Neural Computing and Applications.
VERBEEK, P. P. 2009. Ambient Intelligence and Persuasive Technology: The Blurring Boundaries
Between Human and Technology. Nanoethics, 3, 231-242.
VISRAM, S., LEYDEN, D., ANNESLEY, O., BAPPA, D. & SEBIRE, N. J. 2023. Engaging children and
young people on the potential role of artificial intelligence in medicine. Pediatr Res, 93, 440-444.
VOETS, M. M., VELTMAN, J., SLUMP, C. H., SIESLING, S. & KOFFIJBERG, H. 2022. Systematic Review
of Health Economic Evaluations Focused on Artificial Intelligence in Healthcare: The Tortoise and
the Cheetah. Value Health, 25, 340-349.
WALKER, L. E., ABUZOUR, A. S., BOLLEGALA, D., CLEGG, A., GABBAY, M., GRIFFITHS, A., KULLU,
C., LEEMING, G., MAIR, F. S., MASKELL, S., RELTON, S., RUDDLE, R. A., SHANTSILA, E.,
SPERRIN, M., VAN STAA, T., WOODALL, A. & BUCHAN, I. 2022. The DynAIRx Project Protocol:
Artificial Intelligence for dynamic prescribing optimisation and care integration in multimorbidity. J
Multimorb Comorb, 12, 26335565221145493.
WATCHARASRIROJ, B. & TANG, J. C. S. 2004. The effects of size and information technology on
hospital efficiency. The Journal of High Technology Management Research, 15, 1-16.
WELLNHOFER, E. 2022. Real-World and Regulatory Perspectives of Artificial Intelligence in
Cardiovascular Imaging. Frontiers in Cardiovascular Medicine, 9.
WEN, H., ZHANG, L., SHENG, A., LI, M. & GUO, B. 2022. From "Human-to-Human" to "Human-to-Non-human" - Influence Factors of Artificial Intelligence-Enabled Consumer Value Co-creation Behavior.
Front Psychol, 13, 863313.
WHALEY, C. M., SCHNEIDER CHAFEN, J. J., PINKARD, S., KELLERMAN, G. R., BRAVATA, D. M.,
KOCHER, R. P. & SOOD, N. 2014. Association between availability of health service prices and
payments for these services. JAMA, 312, 1670-1676.
WHITEHEAD, D. C. & CONLEY, J. J. 2022. The Next Frontier of Remote Patient Monitoring: Hospital at
Home. Journal of Medical Internet Research, 25.
WHITEHEAD, M., CARROL, E., KEE, F. & HOLMES, C. 2023. Making the invisible visible: what can we do
about biased AI in medical devices? BMJ, 382, p1893.
WHO 2021. Generating Evidence for Artificial Intelligence-Based Medical Devices: A Framework for Training, Validation and Evaluation. Geneva: World Health Organization.
WOLF, J. A. 2016. All voices matter in experience design: A commitment to action in engaging patient and
family voice. Healthc Manage Forum, 29, 183-6.
WOLFF, R. F., MOONS, K. G. M., RILEY, R. D., WHITING, P. F., WESTWOOD, M., COLLINS, G. S.,
REITSMA, J. B., KLEIJNEN, J., MALLETT, S. & THE PROBAST GROUP 2019. PROBAST: A Tool to
Assess the Risk of Bias and Applicability of Prediction Model Studies. Ann Intern Med, 170, 51-58.
YANG, J., LUO, B., ZHAO, C. & ZHANG, H. 2022. Artificial intelligence healthcare service resources
adoption by medical institutions based on TOE framework. Digit Health, 8, 20552076221126034.
ZHOU, K. & GATTINGER, G. 2024. The Evolving Regulatory Paradigm of AI in MedTech: A Review of
Perspectives and Where We Are Today. Therapeutic Innovation & Regulatory Science, 58, 456-464.
ZHU, Y., WANG, P. & DUAN, W. 2022. Exploration on the Core Elements of Value Co-creation Driven by
AI-Measurement of Consumer Cognitive Attitude Based on Q-Methodology. Front Psychol, 13,
791167.
ZIPFEL, N., HORREH, B., HULSHOF, C. T. J., DE BOER, A. G. E. M. & VAN DER BURG-VERMEULEN,
S. J. 2022. The relationship between the living lab approach and successful implementation of
healthcare innovations: an integrative review. BMJ Open, 12, e058630.