Regulating Medical AI: Considerations for MedTech Innovators

The integration of AI into medical software is transforming healthcare globally. It enhances diagnosis, identifies high-risk patients, personalises treatments, and streamlines clinical workflows. This transformation relies on trust that AI outputs are correct, add clinical value, and keep patients safe. Regulation plays a critical role in building this trust, providing assurance to patients and clinicians and supporting long-term market viability. As AI capabilities rapidly advance, regulatory frameworks must evolve to keep pace, creating both challenges and opportunities for MedTech innovators.

The Evolving Landscape of Medical AI Regulation

In the UK, medical AI is still regulated under the broader framework for medical device software, as dedicated AI legislation has not yet been enacted. The Medicines and Healthcare products Regulatory Agency (MHRA) is developing a comprehensive approach through its Software and AI as a Medical Device Change Programme. The EU is further ahead: its AI Act was adopted in 2024 and enters into application on a phased timetable, with most obligations applying from 2026. This leaves UK manufacturers navigating a transitional space in which they must anticipate future legal requirements.

Emerging regulatory approaches in both the UK and EU are set to incorporate principles such as continuous performance monitoring, post-market evaluation, clinical oversight, and model explainability. In the UK, these concepts will first take shape through regulatory guidance before being formalised into enforceable legislation, ensuring that medical AI can develop safely, ethically, and at scale.

What Qualifies as Medical AI?

Medical AI sits within the broader category of medical device software, which encompasses both standalone digital tools and software embedded within hardware devices. The determining factor in whether software qualifies as a medical device is its intended purpose. Software used solely for administrative, operational, or communication purposes within healthcare settings is not regulated as a medical device; software that directly or indirectly informs clinical care is. Medical device regulations exist to ensure that such software is safe, effective, and operates as intended without causing harm to patients.

Medical AI, whether part of medical device software or an accessory to it, must meet the same standards. However, AI systems present additional challenges, including:

·       The algorithm may behave unexpectedly with new inputs

·       The algorithm’s performance may drift over time (see the monitoring sketch after this list)

·       The algorithm could be biased due to its training set

·       The ‘black box’ nature of some AI makes validation challenging

·       AI that continues to learn from user inputs can drift away from its validated behaviour and must be tightly controlled
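To make the drift concern concrete, the sketch below shows one possible shape of a rolling post-market performance check in Python. The DriftMonitor class, its thresholds, and the window size are illustrative assumptions rather than a prescribed mechanism; a real system would tie alerts to the metrics and procedures documented at validation.

```python
from collections import deque

class DriftMonitor:
    """Rolling check of live performance against a validated baseline
    (a minimal sketch; names and thresholds are illustrative)."""

    def __init__(self, baseline_accuracy: float, tolerance: float, window: int = 500):
        self.baseline = baseline_accuracy     # accuracy established at validation
        self.tolerance = tolerance            # allowed drop before an alert fires
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, ground_truth) -> None:
        """Log one post-market prediction against its confirmed outcome."""
        self.outcomes.append(1 if prediction == ground_truth else 0)

    def drift_detected(self) -> bool:
        """Return True once rolling accuracy falls below the alert floor."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough post-market data yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance

# Example: validated accuracy of 0.92; alert if the rolling figure drops below 0.87
monitor = DriftMonitor(baseline_accuracy=0.92, tolerance=0.05)
```

An alert from such a monitor would feed post-market surveillance and vigilance processes, rather than silently triggering a model change.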

The Role of Standards in Developing Medical AI

With dedicated UK legislation for medical AI still in development, wider medical device standards remain the most reliable compass for manufacturers seeking regulatory certainty. These frameworks form the backbone of compliance, ensuring that quality, safety, and accountability are embedded from the earliest stages of design.

The following standards should be observed when developing software: ISO 13485 for quality management systems, IEC 62304 for medical device software life cycle processes, and ISO 14971 for risk management. Working to these standards supports compliance with regional regulations such as the UK Medical Devices Regulations (UK MDR). Manufacturers must also register devices with the MHRA and meet post-market surveillance and vigilance requirements. Together, international standards and local regulations establish a structured approach to quality control, risk management, and continuous improvement throughout an AI system's lifecycle. Manufacturers who follow these frameworks from the outset are also more likely to avoid the extensive costs of retrofitting compliance later in development.

Data Quality, Bias and Explainability

Safety in medical AI begins with data. Poorly curated datasets can introduce bias, instability, and systemic risk, whilst well-curated datasets enable robust performance across diverse patient populations. A model trained predominantly on one demographic or clinical environment may underperform in others, leading to inequitable outcomes.

Regulators now expect manufacturers to demonstrate clinical justification for any intentional skew in representation and to maintain a transparent audit trail of data provenance, lineage, and governance decisions. Emerging standards such as the ISO/IEC 5259 series on data quality for analytics and machine learning give manufacturers a shared vocabulary here, defining key data quality characteristics including accuracy, completeness, and credibility.
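As a rough illustration of the kind of dataset audit this implies, the Python sketch below compares a training set's demographic mix against the population the device is intended for and flags under-represented groups. The function name, tolerance, and data layout are assumptions made for the example, not part of any standard.

```python
from collections import Counter

def representation_report(records, population_share, tolerance=0.10):
    """Flag groups whose share of the training data falls materially
    below their share of the intended patient population.

    records          -- iterable of dicts, each with a 'group' key
    population_share -- expected proportion of each group in the
                        population the device is intended for
    """
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    flags = {}
    for group, expected in population_share.items():
        observed = counts.get(group, 0) / total
        if observed < expected - tolerance:
            flags[group] = (observed, expected)  # skew needing clinical justification
    return flags

# Example: a dataset heavily skewed toward one age band
dataset = [{"group": "under_40"}] * 800 + [{"group": "over_40"}] * 200
print(representation_report(dataset, {"under_40": 0.5, "over_40": 0.5}))
# -> {'over_40': (0.2, 0.5)}: the skew must be justified and documented
```

Any flagged skew would then need the clinical justification and audit trail described above.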

Validation and Real-World Performance

AI systems must demonstrate that their performance holds up in the real-world settings for which they are intended. Regulators now expect manufacturers to evaluate how a model behaves when deployed in different clinical environments, used by diverse practitioner groups, and exposed to varying data quality and infrastructure constraints. For example, a model built using data from a large teaching hospital may behave differently, and prove less accurate, in a small regional clinic. Hence, regulatory guidance increasingly emphasises real-world testing, post-market surveillance, and safeguards against model drift.
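One simple way to surface such environment-specific differences is to break performance down per deployment site, as in the hypothetical Python sketch below; the data layout and site names are invented for illustration.

```python
def performance_by_site(results):
    """Summarise accuracy per deployment site so that environment-specific
    degradation is visible rather than averaged away.

    results -- iterable of (site, prediction, ground_truth) tuples
    """
    totals, correct = {}, {}
    for site, prediction, truth in results:
        totals[site] = totals.get(site, 0) + 1
        correct[site] = correct.get(site, 0) + (prediction == truth)
    return {site: correct[site] / totals[site] for site in totals}

# Example: strong at the teaching hospital, weaker at the regional clinic
results = ([("teaching_hospital", 1, 1)] * 90 + [("teaching_hospital", 1, 0)] * 10
           + [("regional_clinic", 1, 1)] * 70 + [("regional_clinic", 1, 0)] * 30)
print(performance_by_site(results))
# -> {'teaching_hospital': 0.9, 'regional_clinic': 0.7}
```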

To ensure models are validated at every stage of development, manufacturers must conduct risk assessments, put mitigation strategies in place to minimise potential risks, and capture the output in risk registers that trace each identified risk to the action taken. To avoid problems caused by differences between real-world data and the training set, great care should be taken in the early stages of engineering datasets, and the intended use should clearly state the circumstances in which the model should be used.
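In practice, a risk register entry can be as simple as a structured record that traces each hazard to its mitigation and verification evidence. The Python sketch below shows one possible shape; the field names and scales are illustrative and are not prescribed by ISO 14971.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row of a risk register (illustrative fields, in the spirit
    of ISO 14971 but not prescribed by it)."""
    risk_id: str           # stable identifier used for traceability
    description: str       # the hazard or failure mode
    severity: int          # e.g. 1 (negligible) to 5 (catastrophic)
    likelihood: int        # e.g. 1 (rare) to 5 (frequent)
    mitigation: str        # the action taken to reduce the risk
    verified_by: str = ""  # evidence that the mitigation works
    trace: list = field(default_factory=list)  # linked requirements and tests

entry = RiskEntry(
    risk_id="RISK-042",
    description="Model underperforms on images from older scanner models",
    severity=4,
    likelihood=3,
    mitigation="Augment training data; restrict intended use to supported scanners",
    verified_by="Validation report VR-017",
    trace=["REQ-12", "TEST-88"],
)
```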

A Hypothetical Example

Consider a predictive model designed to identify factors associated with better neonatal outcomes. During development, the model appeared to suggest that certain comorbidities during pregnancy, such as diabetes, lead to improved outcomes for the baby. Without clinical oversight, this correlation could be misinterpreted as evidence that diabetes in pregnancy benefits babies, when the opposite is true. The correlation in fact reflects differences in the clinical care patients receive according to their comorbidities: patients with diabetes are closely monitored and receive additional interventions to minimise risk and prevent potential complications.

This illustrates why collaboration with clinicians is essential during the validation of AI. AI systems identify patterns in data, but these patterns can be misleading at first glance. Clinical experts provide the contextual knowledge needed to ensure that model outputs are interpreted correctly, and that AI experiments are well designed in the first place given the data and the intent.
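A toy simulation makes the mechanism explicit. In the Python sketch below, diabetes genuinely raises baseline risk, but diabetic patients receive an intensive care pathway that more than offsets it, so a naive comparison of outcome rates makes diabetes look protective. All numbers are invented for illustration.

```python
import random

random.seed(0)

def simulate_patient():
    """Toy model: diabetes raises baseline risk, but diabetic patients
    receive extra monitoring that more than offsets it (assumed numbers)."""
    diabetic = random.random() < 0.2
    risk = 0.15 + (0.10 if diabetic else 0.0)  # diabetes is genuinely harmful
    if diabetic:
        risk -= 0.18                           # intensive monitoring and intervention
    adverse = random.random() < max(risk, 0.01)
    return diabetic, adverse

patients = [simulate_patient() for _ in range(100_000)]
for group in (True, False):
    outcomes = [adverse for diabetic, adverse in patients if diabetic == group]
    print(f"diabetic={group}: adverse outcome rate = {sum(outcomes) / len(outcomes):.3f}")
# Diabetic patients show a *lower* adverse rate purely because of the care
# pathway, not because diabetes is protective.
```

A model trained naively on such data would reproduce the misleading correlation, which is exactly the trap clinical review is there to catch.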

Potential Hurdles for MedTech Innovators

The greatest challenge for many innovators is not technical; it is maintaining compliance within the constraints of tight budgets and timelines. Early-stage companies are often under pressure to focus on delivering a functional prototype that can attract further investment or clinical interest, and may decide to defer regulatory compliance. Unfortunately, this creates significant issues.

Building a compliant medical AI system requires quality management processes, risk assessments, and design justifications to be in place from the outset. Once development is underway or near completion, it becomes extremely difficult and costly to retrofit the evidence that regulators expect from earlier stages. The resulting delays can create funding bottlenecks, as investors expect assurance that a viable pathway to medical device approval exists.

Looking Ahead

Medical AI regulation is moving toward higher assurance, greater transparency and international alignment. Global regulators are seeking harmonisation to make it easier for safe systems to scale internationally. Clinical safety, human oversight, post-market monitoring, and data quality assurance are all becoming core components of compliance.

The direction of travel is clear: medical software, including AI, is facing more stringent regulation. This shift will benefit both patients and clinicians by raising confidence in reliability, fairness, and clinical value. For MedTech innovators, the opportunity lies in recognising regulation not as a barrier to progress but as the foundation that will ultimately allow adoption at scale.