
Updated Medical Device Regulations in the European Union: Implications and Accountability for Intelligent Medical Equipment in the Global Arena

Global Regulatory Overhaul for Intelligent Medical Equipment in Harmony with International Standards

AI is playing a crucial role in the healthcare and life sciences industry, and smart medical devices are set to reshape how care is delivered. The European Union's legislative initiatives on AI and product liability acknowledge the rapid growth of AI across economic sectors.

In a significant shift for the European Union (EU), the regulatory landscape for AI-powered medical technologies is undergoing a transformation. This change is driven primarily by the updated Product Liability Directive (PLD) and the Artificial Intelligence Act (AI Act), which together create a tighter liability framework for manufacturers of intelligent medical devices.

The proposal to regulate liability for AI within a standalone tort regime, the AI Liability Directive, was withdrawn in February 2025. The revised PLD, which applies a technology-neutral product liability regime, has therefore taken centre stage. It includes expanded rules tailored to smart software and AI components in medical devices, reflecting special considerations for devices that can learn and be updated post-market.

Key features of the revised PLD relevant for smart medical devices include:

  • A lifecycle monitoring duty, requiring manufacturers to monitor the device throughout its entire lifecycle, which can extend up to 25 years.
  • Expanded compensable damages, now including medically recognised psychological harm and the destruction or corruption of data, both particularly relevant for users of smart devices.
  • The removal of liability thresholds and caps, and an extension of the claim period for latent personal injuries to 25 years.
  • Narrowed exculpation possibilities, meaning manufacturers cannot avoid liability if damage could have been prevented by software updates.
  • Procedural enhancements, with defendants required to disclose relevant evidence upon a plausible claim for compensation. Failure to do so leads to a presumption of defectiveness.

The AI Act, for its part, establishes compliance obligations for providers of AI models, including those embedded in smart medical devices. Providers must comply with AI Act obligations from August 2025, with enforcement and fines increasing from August 2026. The AI Act imposes systemic-risk notification duties and encourages adherence to codes of practice, but it leaves substantive liability primarily to the PLD and national laws.

Many smart medical devices fall into the "high-risk" category under the AI Act, requiring technical documentation, risk management, human oversight, transparency, and other obligations during development. This intersects with the Medical Device Regulation (MDR), creating a complex regulatory environment for manufacturers.

The revised PLD explicitly mentions software in its definitions of "products" and "components" of products, creating civil law liabilities for importers, distributors, authorized representatives, and anyone who substantially modifies the product outside the manufacturer's control.

The EU's regulatory challenge lies in balancing the potential of AI against its risks. The tighter liability framework for smart medical devices is part of this effort, aiming to ensure safety while fostering innovation. Compliance with EU standards may signal safety and help European smart medical devices earn a reputation for trustworthiness.

Meanwhile, the Food and Drug Administration (FDA) in the U.S. has authorized more than 850 AI/ML-enabled medical devices, primarily in radiology, and is developing a lifecycle-based oversight model through evolving guidance, real-world monitoring expectations, and quality systems enforcement.

In summary, the revised PLD and the AI Act establish a more stringent liability framework for manufacturers of smart medical devices in the EU. This framework addresses the challenges of AI and software-enabled devices by extending monitoring duties, broadening damage compensation, tightening exculpation, and enhancing procedural evidence rules. The AI Act complements this by setting AI-specific compliance and risk management obligations but does not itself impose separate liability provisions.

  1. Regulatory partners of the European Union (EU), such as corporate law firms, LLPs, and science and technology consultancies, are facing a shift in their practice as the international landscape for AI-powered medical technologies undergoes transformation.
  2. This change is driven by regulatory updates, specifically the Product Liability Directive (PLD) and the Artificial Intelligence Act (AI Act), which together create a tighter liability framework for manufacturers of intelligent medical devices.
  3. The updated PLD, in contrast to its original version, applies a technology-neutral product liability regime, with key features relevant for smart medical devices, including lifecycle monitoring duty, expanded compensable damages, and procedural enhancements.
  4. The AI Act establishes compliance obligations for providers of AI models, including those embedded in smart medical devices, with enforcement and fines increasing from August 2026.
  5. Many smart medical devices fall into the "high-risk" category under the AI Act, requiring technical documentation, risk management, human oversight, transparency, and other obligations during development.
  6. The revised PLD explicitly mentions software in its definitions, creating civil law liabilities for importers, distributors, authorized representatives, and anyone who substantially modifies the product outside the manufacturer's control.
  7. Engaging regulatory compliance services and legal counsel spanning health and wellness, finance, technology, and medical conditions can help European manufacturers navigate this complex regulatory environment and earn a reputation for trustworthiness.
  8. Meanwhile, the Food and Drug Administration (FDA) in the U.S. is authorizing AI/ML-enabled medical devices and developing a lifecycle-based oversight model, which intersects with the challenges of AI and software-enabled devices in both regions.
  9. By balancing the potential of AI against its risks, the EU's tighter liability framework for smart medical devices aims to ensure safety while fostering innovation, underscoring the importance of cross-border regulatory partnerships in the international arena.
