Medical Devices with Artificial Intelligence
Medical Devices with an AI System
In recent years, artificial intelligence (AI) tools have undergone exponential development across various fields of application. Among these are medical devices with integrated functions based on AI in its many facets. However, while design and development are progressing particularly rapidly, from a regulatory perspective only recently has some clarity begun to emerge on how to ensure the safety of these products and maximize their potential.
Regulatory Context
A product that qualifies as a medical device must comply with Regulation (EU) 2017/745 (MDR). But what happens if the definition of “AI system” according to Regulation (EU) 2024/1689 – AI ACT also applies to a medical device?
An “AI system” is defined as: a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
If a medical device also falls within the definition of an “AI system”, then in addition to the requirements set by the MDR, the relevant requirements of the AI ACT must be applied. Indeed, Regulation (EU) 2024/1689 divides products into categories based on risk class, each of which triggers the application of certain requirements. Among the categories applicable to medical devices, the one that stands out is the “high-risk AI system”.
An AI system qualifies as high-risk if the following conditions are met (Chapter III – Section 1, Article 6, Paragraph 1):
Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:
a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;
b) the product, whose safety component under point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment for the purpose of placing on the market or putting into service that product pursuant to the Union harmonisation legislation listed in Annex I. […]

Following the prescribed flow, the first step (a) is to assess whether the product is a safety component of, or is itself, a product governed by one of the Union harmonisation legislative acts listed in Annex I. Annex I cites Regulation (EU) 2017/745 among those acts.
Point (a) is considered satisfied.
The second step (b) concerns the existence of a third-party conformity assessment for the purpose of placing on the market or putting into service (pursuant to the Union harmonisation legislation listed in Annex I). Under the MDR, third-party assessment means the involvement of a notified body, required for devices in class IIa and above; since software-based devices typically fall into class IIa or higher under classification Rule 11, point (b) is satisfied in most practical cases.
If a medical device satisfies both points (a) and (b), then it is considered a high-risk AI system within the AI ACT.
A medical device with an AI system will therefore need to comply with both legislative acts simultaneously. Only in this way is it possible to ensure the safety and risk management that characterize both worlds.
MDR and AI ACT
The AI ACT requires the implementation of a quality management system characterized by a series of procedures and instructions aimed at ensuring design control, quality control, testing and validation, data management, and surveillance activities.
At the same time, it requires the preparation of technical documentation for the medical device with artificial intelligence, to demonstrate compliance with the applicable requirements. Among the contents, it is expected to include: purposes, interactions, descriptions of the system and its development, the metrics used, risk analysis, monitoring, and continuous management of the entire life cycle.
At a macroscopic level, the requirements for conformity assessment of an artificial intelligence system are similar to those of a medical device: strict control of the management process from design onwards, and a high level of detail about product characteristics (from design to after-sales). Clearly, dedicated attention must be paid to the most critical issues peculiar to each of the two frameworks.
Literacy and Deployer
Regulation (EU) 2024/1689 emphasizes the importance of the concepts of “literacy” and “deployer” for high-risk AI systems.
“AI literacy” is defined as:
Skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.
The aim is to provide awareness to those involved in the operation and use of the AI system, conveying the necessary knowledge to make informed decisions regarding the system. Adequate literacy contributes to ensuring compliance and proper execution of the system.
The “deployer” is defined as:
A natural or legal person, public authority, agency, or other body that uses an AI system under its own authority, except when the AI system is used in the course of a personal non-professional activity.
The deployer assumes specific roles and responsibilities, among which several themes characterizing high-risk AI systems stand out: literacy, human oversight, ensuring the representativeness of input data, monitoring system operation, log retention, etc.
Requirements for High-Risk AI Systems
Regulation (EU) 2024/1689 dedicates Chapter III to High-Risk AI Systems, listing their requirements (Section 2):
- Compliance with requirements (Article 8)
- Risk management system (Article 9)
- Data and data governance (Article 10)
- Technical documentation (Article 11)
- Record-keeping (Article 12)
- Transparency and provision of information to deployers (Article 13)
- Human oversight (Article 14)
- Accuracy, robustness and cybersecurity (Article 15)
Compliance with Requirements
As noted above, a medical device with an AI system will need to comply with both legislative acts simultaneously; only in this way is it possible to ensure the safety and risk management that characterize both worlds. The two legal acts can be managed in a single conformity assessment, in this specific case by integrating the requirements of the MDR with those set out in Section 2 of Chapter III of the AI ACT.
Risk Management System
The risk management system must be a continuous, iterative process, planned, executed, and maintained throughout the entire life cycle of a high-risk AI system, with constant and systematic review and updating. The process starts with risk identification and analysis, followed by the identification of control measures, the assessment of any residual risks, and post-market monitoring activities. The approach defined by the AI ACT is analogous to that already required for medical device risk management under the MDR (cf. EN ISO 14971).
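As an illustration only, this iterative loop can be pictured as a living risk register that is revisited throughout the life cycle. The following Python sketch is a minimal, assumption-laden example: the Risk fields, the 1-5 scales, and the acceptability rule are invented for illustration and are not prescribed by the AI ACT or the MDR.

```python
# Hypothetical iterative risk register; fields, scales, and the acceptability
# rule are illustrative, not prescribed by the AI ACT or the MDR.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    severity: int                     # illustrative 1-5 scale
    probability: int                  # illustrative 1-5 scale
    controls: list[str] = field(default_factory=list)
    residual_acceptable: bool = False

register = [
    Risk("Model misclassifies rare presentations", severity=4, probability=2),
]

def review_cycle(register: list[Risk]) -> list[Risk]:
    """One iteration: add control measures, then reassess the residual risk."""
    for risk in register:
        if not risk.residual_acceptable:
            risk.controls.append("Add confirmatory human review step")
            risk.probability = max(1, risk.probability - 1)  # assumed effect
            risk.residual_acceptable = risk.severity * risk.probability <= 6
    return register

# Repeated at planned intervals and on post-market signals, over the life cycle.
review_cycle(register)
```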
Data and Data Governance
A high-risk AI system involves the training of AI models developed from training, validation, and test datasets. The selected data must be evaluated with regard to:
- Design choices;
- Data collection and origin;
- Data processing (pre-processing), to prepare them for the intended purpose (annotation, labeling, cleaning, etc.);
- Data availability, quantity, and adequacy;
- Assessment of possible biases with negative effects and related control measures;
- Identification of gaps and how to manage them.
Particular attention must be paid to the analysis of data selection, to the data's representativeness and statistical properties, and to their use in the training, validation, and verification phases, with a view to achieving the intended purpose.
Attention to data management is unavoidable given the characteristics of an AI system: the preliminary evaluations of data selection, the processing operations, and the final use must be appropriately described and tracked in the technical documentation.
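To make this concrete, the sketch below shows one possible, simplified way to document a train/validation/test split and flag subgroups whose distribution drifts from the full dataset. The synthetic dataset, the column names, and the 5% tolerance are illustrative assumptions, not requirements of the AI ACT.

```python
# Hypothetical sketch: stratified splitting plus a simple representativeness
# check. Column names and the tolerance are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic records standing in for a real, documented dataset.
df = pd.DataFrame({
    "age_group": ["<40", "40-60", ">60"] * 200,
    "sex": ["F", "M"] * 300,
    "label": [0, 1] * 300,
})

# Stratify on the outcome so all three sets keep the label distribution.
train, rest = train_test_split(df, test_size=0.4, stratify=df["label"], random_state=0)
val, test = train_test_split(rest, test_size=0.5, stratify=rest["label"], random_state=0)

# Representativeness check: flag subgroups that drift from the full dataset.
for col in ["age_group", "sex"]:
    reference = df[col].value_counts(normalize=True)
    for name, split in [("train", train), ("validation", val), ("test", test)]:
        drift = (split[col].value_counts(normalize=True) - reference).abs().max()
        if drift > 0.05:  # illustrative tolerance
            print(f"Review {col} balance in the {name} set: deviation {drift:.2%}")
```

Any flagged deviation would then be assessed as a possible bias (see the list above) and either corrected or justified in the technical documentation.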
Technical Documentation
The AI ACT requires the preparation of technical documentation for the high-risk AI system before placing it on the market or putting it into service. For a medical device with artificial intelligence to which both the MDR and the AI ACT apply, a single set of technical documentation can be drawn up containing the information required by both regulations.
At an operational level, in managing an MDR product with AI ACT application, particular emphasis should be placed on aspects of data management that lead to the expected performance: design choices, evaluations, and results.
Record-keeping
A high-risk AI system must technically allow for the automatic recording of events (“logs”) throughout the system's life cycle and in line with its intended use, with the following aims (a minimal logging sketch follows the list):
- Identifying situations that may lead to a risk to people’s health or safety or a substantial modification;
- Facilitating post-market monitoring;
- Allowing monitoring of the system’s operation itself.
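As a purely illustrative example of what such record-keeping could look like in practice, the sketch below appends one structured event per inference to a JSON-lines file. The field names, the hashing choice, and the file format are assumptions made for this sketch, not requirements of Article 12.

```python
# Hypothetical sketch of structured event logging for a high-risk AI system.
import datetime
import hashlib
import json

def log_inference(logfile, model_version, input_bytes, output, flags=None):
    """Append one traceable inference event for post-market monitoring."""
    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # A hash gives traceability without storing the raw (patient) data.
        "input_sha256": hashlib.sha256(input_bytes).hexdigest(),
        "output": output,
        "flags": flags or [],  # e.g. runtime risk situations detected
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

log_inference("inference_events.jsonl", "1.2.0", b"example-input", {"score": 0.87})
```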
Transparency and Provision of Information to Deployers
A high-risk AI system must ensure “transparent” operation from design and development. Among the useful tools for this purpose are the instructions for use, necessary to allow correct interpretation and understanding of the results provided by the system. The IFUs (digital or paper) must be: concise, complete, correct, and clear, as well as relevant, accessible, and understandable.
The basic information to be provided for both a medical device and a high-risk AI system includes: traceability information, technical characteristics, capabilities and limitations (accuracy, robustness, data specifications, etc.). Any changes foreseen in the conformity assessment, measures adopted for surveillance, and methods for installation and maintenance of the medical device with artificial intelligence must be tracked.
Particular emphasis is placed on the importance of information that reaches the user: any information on how to manage output data (collect, store, and correctly interpret logs) or circumstances that could lead to health or safety risks.
Transparency and literacy become two key concepts for correct interpretation and use of the result, making informed decisions regarding the system.
Human Oversight
A high-risk AI system must be designed and developed in such a way that natural persons can oversee its use, including through appropriate human-machine interface tools.
Human oversight aims to mitigate risks to health and safety during use.
Oversight is weighted according to the risk analysis, the level of autonomy, and the context of use of the high-risk AI system; such measures must be identified before placing on the market or putting into service.
Human oversight also brings the concept of literacy back into play. Adequate oversight is possible only if the person in charge is aware of the following aspects (a minimal oversight gate is sketched after the list):
- The system's capabilities;
- Its limitations;
- How the system operates;
- The risk of excessive and automatic reliance on the automated output (automation bias);
- How to correctly interpret the output with the available tools and methods;
- The possibility to ignore, override, or reverse the outcome of the AI system in particular situations;
- How to intervene in its operation, stopping the system in safe conditions.
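The sketch below illustrates, under stated assumptions, how such oversight might be wired into software: outputs below an arbitrary confidence threshold are not acted on automatically but routed to a person, who can confirm or overturn them. The threshold, field names, and flow are invented for illustration.

```python
# Hypothetical human-oversight gate: low-confidence outputs are flagged for
# review instead of acting automatically; the reviewer may override the result.
from dataclasses import dataclass

@dataclass
class Decision:
    prediction: str
    confidence: float
    needs_human_review: bool
    final: str | None = None

def gate(prediction: str, confidence: float, threshold: float = 0.90) -> Decision:
    """Flag low-confidence outputs instead of acting on them automatically."""
    return Decision(prediction, confidence, needs_human_review=confidence < threshold)

def human_override(decision: Decision, reviewer_choice: str) -> Decision:
    """The person in charge may ignore or overturn the AI system's outcome."""
    decision.final = reviewer_choice
    return decision

d = gate("suspicious lesion", confidence=0.72)
if d.needs_human_review:
    d = human_override(d, reviewer_choice="benign (clinician override)")
else:
    d.final = d.prediction
```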
Accuracy, Robustness and Cybersecurity
A high-risk AI system must ensure an adequate level of accuracy, robustness, and cybersecurity throughout its entire life cycle.
- Accuracy: the levels achieved and the metrics used must be included and described in the instructions for use.
- Robustness: adoption of redundant systems that allow for the mitigation of errors, failures, or inconsistencies.
- Cybersecurity: adoption of technical solutions appropriate to the circumstances and to the possible risks, to counter attempts by unauthorized third parties to exploit the system's vulnerabilities, with a consequent alteration of its use, output, or performance.
The technical solutions must provide for measures for the prevention, detection, response, resolution, and control of attacks such as the following (a minimal robustness probe is sketched after the list):
- Data poisoning (training data);
- Model poisoning;
- Adversarial examples or model evasion (manipulation of designed inputs to mislead the system);
- Confidentiality attacks;
- Model defects.
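As a first-pass illustration related to adversarial examples, the sketch below probes how much a model's output moves under small random input perturbations. The stand-in model, the perturbation size, and the tolerance are assumptions; a real assessment would rely on dedicated adversarial-testing methods.

```python
# Hypothetical robustness probe: measure output stability under small
# random input perturbations. Model and thresholds are illustrative.
import numpy as np

def model(x: np.ndarray) -> float:
    """Stand-in for the real AI model's scoring function."""
    return float(1.0 / (1.0 + np.exp(-x.sum())))

def perturbation_sensitivity(x, eps=0.01, trials=100, seed=0):
    """Largest output shift observed across random perturbations of size eps."""
    rng = np.random.default_rng(seed)
    base = model(x)
    return max(abs(model(x + rng.uniform(-eps, eps, size=x.shape)) - base)
               for _ in range(trials))

x = np.array([0.2, -0.1, 0.4])
if perturbation_sensitivity(x) > 0.05:  # illustrative tolerance
    print("Output unstable under small perturbations: investigate further.")
```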
All these aspects are already addressed in the design of a medical device. Particular emphasis is placed on data and models, specific to an AI system.
When Does the AI Regulation Apply to High-Risk AI Systems?
Regulation (EU) 2024/1689 entered into force on August 1, 2024, and applies from August 2, 2026 (with some exceptions).
For high-risk AI systems covered by Annex I, such as medical devices, it applies from August 2, 2027.
If a high-risk AI system was placed on the market or put into service before August 2, 2026, the Regulation applies only if, after that date, the system undergoes significant changes to its design or intended purpose. If the high-risk AI system is intended to be used by public authorities, providers and deployers must take the necessary steps to comply with the requirements and obligations by August 2, 2030.