
11 June 2025
How is AI Transforming Healthcare Today?
On May 20, 2025, the Master in Regulatory Affairs for Health Industries (ARIS) of Paris-Saclay University organized a roundtable titled “AI Revolution – Transforming the Future of Healthcare”. The event brought together experts in regulatory science, digital health, and legal practice to explore how artificial intelligence (AI) is reshaping healthcare and why adaptive regulatory strategies are urgently needed.
Moderated by Deborah ESKENAZY, director of the ARIS Master’s program, the panel featured Gabriele PIATON, director of research and innovation at ProductLife Group (PLG); Lorraine MAISNIER-BOCHE and Anne-France MOREAU, legal counsels at McDermott Will & Emery, specializing respectively in digital health and data protection law and in regulatory strategy and transactions for R&D-driven companies; and Anca PETRE, doctor of pharmacy and co-founder of MedShake Studio.
Exploring generative AI applications in healthcare
The conversation opened with Anca, who examined how generative AI is transforming healthcare, from chatbots and virtual assistants in clinical workflows to models such as AlphaFold that accelerate drug discovery. While these tools enhance efficiency and support early-stage insights, they challenge traditional notions of accountability, transparency, and the human-machine relationship.
Are We Ready for the Regulatory Challenges of AI in Medicine?
Following Anca’s presentation of several AI applications in healthcare, Anne-France and Gabriele emphasized that the first key regulatory question is whether the application qualifies as a medical device. This initial determination is essential before addressing AI systems’ specific regulatory challenges under existing frameworks. Under the EU Medical Device Regulation (MDR), classification depends on the intended use. However, AI often serves multiple roles simultaneously (diagnostic, monitoring, and administrative), blurring traditional regulatory boundaries. This ambiguity hinders certification and complicates cross-border scaling.
Gabriele emphasized that the recently adopted Artificial Intelligence Act (AI Act) adds a further layer of requirements for high-risk systems, creating overlapping obligations that challenge both industry and regulators. Lorraine questioned whether layering multiple regulatory frameworks truly produces a safe and coherent system, or merely an accumulation of overlapping rules that leads to overregulation.
How Do Global Regulatory Approaches Differ When It Comes to AI?
Gabriele added an international perspective by comparing the European approach with U.S. regulatory tools. She pointed to the U.S. Food and Drug Administration’s (FDA) use of Predetermined Change Control Plans (PCCPs), which allow predefined algorithm updates post-market while preserving oversight, a degree of flexibility not yet available in the EU. She also referenced the FDA’s new AI-Assisted Review Program, announced in May 2025, which will deploy generative AI tools across all FDA centers by June 30, 2025. This initiative is intended to reduce administrative workload and enable staff to focus on scientific analysis while ensuring compliance, traceability, and data integrity.
What Are the Risks and Rewards of Data Governance in Healthcare? Is There Legal Consistency Across EU Regulations?
Lorraine extended this comparison to Asia, noting how jurisdictions such as China and South Korea are implementing centralized data governance, cybersecurity requirements, and data localization obligations. These strategies aim to strengthen national control and ensure data security, but may also hinder international collaboration and data sharing. She referenced France’s Health Data Hub as part of ongoing efforts to facilitate access to health data for research while ensuring compliance with the GDPR, under the oversight of the French Data Protection Authority (CNIL). However, she warned that excessive centralization could limit the creation of interoperable, high-quality datasets essential for developing trustworthy AI systems.
Lorraine continued with data governance challenges within the EU. She noted that despite national initiatives, the interplay between the GDPR, the Data Act, and the European Health Data Space (EHDS) remains inconsistent. Even standard terms like “validation” or “transparency” vary in interpretation, further complicating legal certainty across countries.
Anne-France then turned to the issue of explainability. She explained that, unlike rule-based software, generative AI models often produce non-reproducible outputs, challenging the regulatory need for traceability and accountability. She stressed the importance of data provenance and transparency, particularly when clinical datasets are fragmented or poorly documented.
What Strategies Can Help Us Navigate the Future of AI in Healthcare?
Gabriele then outlined how PLG helps companies navigate this evolving landscape. She emphasized the importance of integrating regulatory strategy early in the development process and leveraging regulatory intelligence across the product lifecycle. This includes automating pharmacovigilance and validating digital systems in line with Good Automated Manufacturing Practice (GAMP) and Good Manufacturing Practice (GMP). She also noted the absence of clear guidance for AI validation under Good Pharmacovigilance Practices (GVP), a gap PLG actively addresses through ongoing dialogue with the European Medicines Agency (EMA).
From a legal and operational standpoint, Lorraine advocated for a pragmatic, staged governance model. Rather than reinventing entire systems, she encouraged companies to build on existing quality and data protection structures, adapting them progressively to AI contexts. Drawing on her experience in digital health, she recommended starting with minimal viable procedures and iterating through documentation, training, and risk-based alignment. The ultimate goal, she noted, is not exhaustive compliance but risk mitigation: avoiding patient harm, regulatory failure, and data breaches.
Anca contributed a clinical and systems perspective. She referenced a recent Nature meta-analysis showing that while AI performs well in structured analytical tasks, it still struggles with communication, clinical judgment, and contextual sensitivity. She also cited OpenAI’s HealthBench evaluation, which found that large language models met quality standards in only 60% of physician-patient interactions, with particular weaknesses in emotionally nuanced cases. Anca stressed the need for task-specific integration, where AI complements but does not replace human decision-making. She also pointed to future trends, including domain-specific models, synthetic datasets for underserved populations, and the growing role of consumer devices in health monitoring.
The roundtable closed with a clear message: Artificial intelligence must be integrated through a responsible and agile regulatory framework that supports human-machine collaboration while safeguarding patient care. Legal uncertainties remain, particularly around generative AI, including ownership, consent, and the reuse of clinical data. Clarifying these aspects is essential to ensuring lawful and ethical deployment.
In parallel to the roundtable’s themes, the event also reflected an ongoing academic-industry collaboration between PLG and Paris-Saclay University through a CIFRE-funded PhD conducted by Bouchra DERRAZ, co-supervised by Gabriele and Deborah. The research addresses regulatory challenges related to innovative combination products and diagnostic and therapeutic solutions that integrate drug, biologic, and/or device components, often enhanced by AI. Positioned at the intersection of academic insight and regulatory practice, the project seeks to support the development of flexible, lifecycle-based frameworks adapted for emerging technologies.
At ProductLife Group, this research-based approach reinforces our commitment to bridging regulatory science and real-world application. By integrating early regulatory strategy, engaging with authorities, and contributing to global guidance, PLG supports medtech and biotech companies, from early-stage startups to large enterprises, in bringing AI-powered solutions to market safely, effectively, and sustainably.