The EU AI Act and Medical Devices: What MedTech Companies Must Do Before August 2026
Understanding high-risk AI classification, data governance requirements and lifecycle compliance for AI-enabled medical devices in Europe.
Artificial intelligence is no longer a future-facing concept in medical technology — it is already embedded in diagnostic imaging, clinical decision support, digital pathology, wearable monitoring systems and predictive modelling tools.
However, as AI capability has accelerated, so too has regulatory scrutiny. The introduction of the EU AI Act marks a significant shift in how AI-enabled medical devices are governed across Europe.
For MedTech companies already navigating the complexities of the Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR), the AI Act introduces an additional compliance layer — particularly for systems classified as “high-risk.”
With core obligations applying from August 2026, now is the time to prepare.
Why Healthcare AI is Classified as “High Risk”
Under the EU AI Act, AI systems are categorised according to risk:
- Unacceptable risk (prohibited)
- High risk
- Limited risk
- Minimal risk
Most healthcare AI systems fall into the high-risk category. Under the Act, an AI system that is itself a medical device or in vitro diagnostic, or a safety component of one, and that requires third-party (Notified Body) conformity assessment under MDR or IVDR, is classified as high-risk.
Why high risk? Because AI in healthcare directly influences:
- Diagnosis
- Treatment decisions
- Prognosis and monitoring
- Patient safety outcomes
The potential for harm — whether through algorithmic bias, inaccurate outputs, or degraded performance — places healthcare AI firmly in the high-risk bracket.
For manufacturers, this means enhanced obligations around:
- Data governance
- Transparency
- Risk management
- Human oversight
- Post-market monitoring
The AI Act does not replace MDR/IVDR — it adds a complementary layer focused specifically on AI system integrity and governance.
Key Compliance Requirements for MedTech Companies
While MDR already imposes stringent obligations on medical device software, including Software as a Medical Device (SaMD), the AI Act introduces several additional areas of focus.
Robust Data Governance and Bias Control
AI systems must be trained, validated and tested using datasets that are:
- Relevant
- Representative
- Free from systematic bias (as far as reasonably possible)
- Appropriately governed
Manufacturers must demonstrate:
- How datasets were sourced
- Why they are representative of the intended patient population
- How demographic bias (age, sex, ethnicity, socioeconomic factors) was assessed
- How training and validation datasets were kept strictly separate
For rare disease or niche indications, this may require particularly strong statistical justification and documentation of limitations.
In practical terms, your data strategy is now a regulatory strategy.
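By way of illustration, the sketch below shows what strict training/validation separation and a demographic bias check can look like in code. It is a minimal example using pandas and scikit-learn, with hypothetical column names (patient_id, sex, label, pred) and synthetic stand-in data, not a validated procedure.

```python
# Minimal, illustrative sketch: column names (patient_id, sex, label, pred) are hypothetical.
# Demonstrates patient-level train/validation separation and a per-subgroup bias check.
import numpy as np
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit
from sklearn.metrics import recall_score

def split_by_patient(df: pd.DataFrame, seed: int = 42):
    """Keep every patient's records in exactly one set, so training and
    validation data stay strictly separated at the patient level."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    train_idx, val_idx = next(splitter.split(df, groups=df["patient_id"]))
    return df.iloc[train_idx], df.iloc[val_idx]

def sensitivity_by_subgroup(val_df: pd.DataFrame, group_col: str) -> pd.Series:
    """Report sensitivity (recall) per demographic subgroup so that systematic
    under-performance for any group is visible and documentable."""
    return val_df.groupby(group_col).apply(
        lambda g: recall_score(g["label"], g["pred"], zero_division=0)
    )

# Synthetic stand-in data; in practice `pred` would come from the model under evaluation.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "patient_id": rng.integers(0, 200, size=1000),
    "sex": rng.choice(["female", "male"], size=1000),
    "label": rng.integers(0, 2, size=1000),
    "pred": rng.integers(0, 2, size=1000),
})
train_df, val_df = split_by_patient(df)
assert set(train_df["patient_id"]).isdisjoint(val_df["patient_id"])  # strict separation holds
print(sensitivity_by_subgroup(val_df, "sex"))
```

Splitting at the patient level prevents records from the same patient leaking across sets, and reporting sensitivity per subgroup turns the bias assessment into evidence that can sit directly in the technical file.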
Technical Documentation Expansion
MDR already requires technical documentation under Annexes II and III.
The AI Act expands expectations to include:
- Description of the AI model architecture
- Intended purpose of the AI component
- Description of training and validation methodologies
- Performance metrics
- Risk assessment specific to AI behaviours
- Monitoring and update mechanisms
Manufacturers must be able to explain:
- How the model reaches outputs (to the extent technically possible)
- What its limitations are
- Under what conditions performance may degrade
Opaque “black box” systems with no performance transparency will face increasing regulatory challenge.
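One pragmatic way to keep this information consistent and audit-ready is a structured, model-card style record maintained alongside the MDR technical file. The sketch below is illustrative only: the fields mirror the list above, but the schema itself is an assumption, not a format prescribed by the AI Act.

```python
# Illustrative "model card" style record; the schema is an example, not a mandated AI Act format.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIModelRecord:
    intended_purpose: str                                     # clinical purpose of the AI component
    architecture: str                                         # model family, inputs and outputs
    training_methodology: str                                 # data sources, preprocessing, training regime
    validation_methodology: str                               # hold-out strategy, external validation
    performance_metrics: dict = field(default_factory=dict)   # e.g. {"sensitivity": 0.94}
    known_limitations: list = field(default_factory=list)     # conditions under which performance may degrade
    monitoring_and_updates: str = ""                          # link to post-market monitoring and change control
    version: str = "0.1.0"                                    # tied to the model version it describes

# All values below are placeholders for illustration.
record = AIModelRecord(
    intended_purpose="Triage support for chest X-ray review (example)",
    architecture="Convolutional neural network, single frontal view input (example)",
    training_methodology="Multi-site retrospective dataset, patient-level split (example)",
    validation_methodology="External hold-out site, pre-specified endpoints (example)",
    performance_metrics={"sensitivity": 0.94, "specificity": 0.88},
    known_limitations=["Not validated for paediatric patients (example)"],
)
print(json.dumps(asdict(record), indent=2))
```

Keeping this record as structured data, rather than free text scattered across documents, makes it straightforward to version it with the model and to show a reviewer exactly which claims apply to which model version.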
Human Oversight (Human-in-the-Loop)
The AI Act emphasises meaningful human oversight.
For medical AI, this means:
- Clinicians must be able to understand outputs
- Outputs must not override clinical judgement
- Interfaces must avoid automation bias
- Users must be informed of system limitations
For example, if an AI system triages radiology images, clinicians should:
- Be aware of how risk categories are generated
- Be able to review flagged and non-flagged images
- Understand that a low-risk classification does not remove the need for clinical review
Human oversight is not optional — it is a core compliance requirement.
Risk Management Alignment
The AI Act requires a risk management system that operates throughout the AI system lifecycle.
For MedTech companies, this must integrate with:
- ISO 14971 (Risk Management)
- IEC 62304 (Software Lifecycle)
- Post-market surveillance processes
Additional AI-specific risks to consider include:
- Algorithm drift
- Dataset drift
- Overfitting
- Cybersecurity vulnerabilities
- Model degradation over time
Risk files must be dynamic — particularly where AI models are updated or retrained.
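To show how one of these risks can be made measurable, the minimal sketch below flags dataset drift by comparing each input feature's recent production distribution against its training distribution using a two-sample Kolmogorov-Smirnov test from SciPy. The significance threshold is a placeholder; any real limit would need to be justified, feature by feature, in the risk file.

```python
# Illustrative dataset-drift check: compares each input feature's production distribution
# against its training distribution with a two-sample KS test.
# The p-value threshold is a placeholder, not a recommended value.
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(train: np.ndarray, production: np.ndarray,
                         p_threshold: float = 0.01) -> dict:
    """Return a per-feature drift report; rows are samples, columns are features."""
    report = {}
    for i in range(train.shape[1]):
        stat, p_value = ks_2samp(train[:, i], production[:, i])
        report[f"feature_{i}"] = {
            "ks_statistic": float(stat),
            "p_value": float(p_value),
            "drift_flagged": bool(p_value < p_threshold),
        }
    return report

# Synthetic example: production inputs with a shifted mean simulate drift.
rng = np.random.default_rng(0)
train_X = rng.normal(0.0, 1.0, size=(500, 3))
prod_X = rng.normal(0.3, 1.0, size=(200, 3))
print(detect_feature_drift(train_X, prod_X))
```

The same pattern extends to output distributions and subgroup-level performance, and the resulting reports feed naturally into a living ISO 14971 risk file.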
Post-Market Monitoring and Performance Tracking
Perhaps the most critical long-term requirement is ongoing performance monitoring.
High-risk AI systems must:
- Monitor real-world performance
- Detect deviations from expected accuracy
- Track adverse events linked to AI outputs
- Implement corrective actions where required
If models are retrained or updated, structured change control processes must be in place.
In practice, this means:
- Clearly defined update protocols
- Performance thresholds triggering review
- Validation prior to deployment of significant updates
AI is not static software — regulators now expect lifecycle governance.
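As a minimal illustration of performance thresholds triggering review, the sketch below compares observed real-world metrics against pre-specified acceptance limits and flags any breach for corrective action. The metric names and limits are placeholders; the actual values belong in the post-market surveillance plan and must be clinically justified.

```python
# Illustrative post-market check: compares observed rolling metrics against
# pre-specified acceptance thresholds and flags any breach for review.
# Metric names and limits are placeholders, not recommended values.
ACCEPTANCE_THRESHOLDS = {
    "sensitivity": 0.90,   # minimum acceptable sensitivity (placeholder)
    "specificity": 0.85,   # minimum acceptable specificity (placeholder)
}

def review_triggers(observed_metrics: dict) -> list:
    """Return the metrics that have fallen below their acceptance threshold."""
    return [
        name
        for name, minimum in ACCEPTANCE_THRESHOLDS.items()
        if observed_metrics.get(name, 0.0) < minimum
    ]

# Hypothetical figures from a quarterly monitoring report.
observed = {"sensitivity": 0.87, "specificity": 0.91}
breaches = review_triggers(observed)
if breaches:
    print(f"Corrective-action review required for: {', '.join(breaches)}")
else:
    print("All monitored metrics within acceptance thresholds.")
```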
Interaction with MDR and IVDR
One of the most important strategic considerations is how the AI Act interacts with existing medical device regulations.
In most cases:
- AI-enabled medical devices remain subject to MDR or IVDR conformity assessment
- Where the device already requires Notified Body review, the AI component will fall within the scope of that assessment
- Additional AI Act documentation may be assessed during conformity review
The two frameworks are intended to be complementary, but companies must avoid siloed compliance efforts.
Your Quality Management System (QMS) must integrate:
- AI risk governance
- Data governance
- Software lifecycle controls
- Clinical evaluation
- AI-specific documentation
Treating AI compliance separately from MDR processes will create duplication and audit risk.
Timeline: What Happens When?
Understanding the staged implementation timeline is critical.
Already in Effect
- The Act entered into force in August 2024; prohibitions on unacceptable-risk practices have applied since February 2025, and governance structures and foundational principles are being established
August 2026
- Most high-risk AI obligations become fully applicable
- AI-enabled medical devices must demonstrate compliance
August 2027
- Transitional arrangements for legacy systems end
- Pre-existing AI systems must comply
For companies currently developing AI-enabled products targeting CE marking in 2026–2027, AI Act requirements must now be embedded into development plans.
Waiting until late 2026 will be too late.
What MedTech Companies Should Be Doing Now
To prepare for August 2026, manufacturers should:
1. Conduct an AI Compliance Gap Analysis
Assess current processes against AI Act requirements:
- Data governance
- Bias assessment
- Documentation depth
- Monitoring processes
2. Strengthen Data Documentation
Document:
- Dataset provenance
- Representativeness
- Limitations
- Separation of training and validation datasets
3. Review Software Lifecycle Controls
Ensure IEC 62304 processes cover:
- AI model development
- Model updates
- Change control
- Verification and validation
4. Update Risk Management Files
Expand ISO 14971 risk analysis to include:
- Algorithm drift
- Automation bias
- Data integrity risks
5. Formalise Post-Market AI Monitoring
Define:
- Performance KPIs
- Monitoring intervals
- Update validation triggers
Final Thoughts: AI Compliance Must Be Proactive, Not Reactive
The EU AI Act represents a fundamental evolution in AI governance.
For MedTech companies, it signals a shift from viewing AI as simply “advanced software” to recognising it as a regulated high-risk system requiring structured lifecycle oversight.
The companies that will succeed are those that:
- Treat AI governance as part of their QMS
- Invest in robust data strategy
- Build multidisciplinary teams
- Plan for lifecycle monitoring from day one
AI offers extraordinary clinical opportunity.
But regulatory maturity must keep pace with technological innovation. August 2026 is closer than it seems.
Written by Educo Life Sciences Expert, Richard Young
Richard Young has over 25 years of experience in the medical device and IVD industry, working on products ranging from Class III implants to electromedical infusion systems. He has extensive experience in regulatory affairs, GMP (quality), GLP (laboratory testing) and clinical affairs, and has held roles including QA/RA Manager and Director of Quality Assurance and Regulatory Compliance at companies including Biomet, Plasma Surgical and Zimmer Limited.
This article was written using materials from the course, Creating AI in Your Medical Devices: A Regulatory & Development Overview