The speed of AI innovation is outpacing the evolution of regulatory frameworks at agencies like the FDA and EMA.

“By the time we create the law, the technology has already surpassed us along the way. And it’s getting faster and faster,” said Thomas Carganico, vice president of marketing strategy at PQE Group, a global consulting firm specializing in life sciences.
Navigating the new era of FDA AI oversight
The FDA’s recent guidance requires device-level quality, validation and lifecycle controls whenever AI influences regulated decisions.
In January 2025, the FDA issued draft Level 1 guidance on the use of AI in decision-making for drug and biological products. The guidance introduces a risk-based credibility assessment framework for evaluating AI models based on their specific context of use, and it offers recommendations on using AI to produce information intended to support regulatory decisions about the safety, effectiveness or quality of drugs.
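As a concrete illustration, here is a minimal sketch of how a sponsor might encode such a risk-based assessment internally. The two factors loosely echo the draft's framing (how much the model output drives the decision, and how consequential the decision is), but the tier names, scoring rule and function are illustrative assumptions, not terms prescribed by the guidance.

```python
from enum import Enum

class Tier(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def model_risk(model_influence: Tier, decision_consequence: Tier) -> Tier:
    """Combine how much the AI output drives the decision with how
    serious a wrong decision would be; take the worse of the two
    factors as a conservative overall risk tier (illustrative rule)."""
    return Tier(max(model_influence.value, decision_consequence.value))

# Example: the model is one input among several (MEDIUM influence),
# but the decision affects product quality release (HIGH consequence).
risk = model_risk(Tier.MEDIUM, Tier.HIGH)
print(risk)  # Tier.HIGH -> warrants the most rigorous credibility evidence
```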
The FDA also finalized guidance on Predetermined Change Control Plans (PCCP) in August 2025. The guidance allows manufacturers to pre-authorize algorithm modifications without requiring a new marketing submission for each implementation. A PCCP must include a description of modifications, a device modification protocol and an impact assessment. This applies to AI-enabled devices reviewed through the 510(k), De Novo and PMA pathways.
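To make the three required elements concrete, here is a minimal sketch of a PCCP as a structured record; the field names and example values are hypothetical, chosen only to mirror the components the guidance lists.

```python
from dataclasses import dataclass

@dataclass
class PCCP:
    """Hypothetical record mirroring the three elements the final
    guidance says a PCCP must include."""
    description_of_modifications: list[str]  # what may change post-authorization
    modification_protocol: dict[str, str]    # how each change is verified and validated
    impact_assessment: str                   # benefits, risks and mitigations of the changes

plan = PCCP(
    description_of_modifications=["Quarterly retraining on newly labeled site data"],
    modification_protocol={
        "Quarterly retraining": "Re-run locked test set; accept only if AUC stays within 2% of baseline"
    },
    impact_assessment="Retraining may shift sensitivity; mitigated by fixed acceptance criteria and rollback.",
)
```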
Current guidelines enforce the Attributable, Legible, Contemporaneous, Original and Accurate (ALCOA+) principles. Sponsors must document training data sources, feature selection and the model’s decision logic, and software must run on self-hosted servers rather than open-source platforms to ensure cybersecurity and data privacy.
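Here is a sketch of how ALCOA+ expectations might translate into a tamper-evident audit-trail entry for a model-assisted result; the record layout, field names and hashing choice are assumptions, not a prescribed format.

```python
import hashlib
import json
from datetime import datetime, timezone

def alcoa_record(user_id: str, raw_output: dict, model_version: str) -> dict:
    """Illustrative audit-trail entry: each field maps to an ALCOA+ principle."""
    original = json.dumps(raw_output, sort_keys=True)
    return {
        "attributable_to": user_id,                             # Attributable
        "recorded_at": datetime.now(timezone.utc).isoformat(),  # Contemporaneous
        "original_sha256": hashlib.sha256(original.encode()).hexdigest(),  # Original, tamper-evident
        "payload": raw_output,                                  # Legible and Accurate as captured
        "model_version": model_version,                         # traceability of the decision logic
    }

entry = alcoa_record("analyst_042", {"assay": "potency", "prediction": 0.91}, "v2.3.1")
print(entry["original_sha256"][:12])
```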
The FDA also emphasizes meaningful human oversight, often called “human in the loop.”
Accountability in the data chain
When AI plays a part in decision-making, it is difficult to determine who bears responsibility for its mistakes. Many AI tools for science come from third-party vendors. If the AI makes an error, who is at fault?
In life sciences, it is the life science company that remains responsible, Carganico explained. “No matter how many tools and software are used, the responsibility is always with the life science company. They need to understand that when the FDA inspects their data, they will be the final people responsible.”
AI models are validated using a risk-based approach. Validation first considers whether the model’s application directly impacts the product; the higher the risk of the process the model supports, the more rigorous the validation, as sketched below.
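What “validated differently” could mean in practice is sketched here; the tiers echo the risk sketch earlier, and the listed activities are common industry practices offered as assumptions, not a regulatory checklist.

```python
# Hypothetical mapping from risk tier to validation rigor.
VALIDATION_BY_RISK = {
    "LOW": ["vendor documentation review", "smoke tests on representative data"],
    "MEDIUM": ["independent test set", "documented acceptance criteria", "periodic re-verification"],
    "HIGH": ["full qualification against predefined protocols",
             "prospective performance monitoring",
             "expert sign-off on every output"],
}

def validation_plan(risk_tier: str) -> list[str]:
    return VALIDATION_BY_RISK[risk_tier]

print(validation_plan("HIGH"))
```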
Life science companies must audit their vendors and suppliers and establish quality agreements with them, Carganico said.
“If a supplier wants to, for example, create a new technology for AI, they have to make sure that they are meeting the requirements of the life science industry, as if they were a pharma company, basically providing a secure and safe perimeter of working space within their AI platforms,” he said.
The future skill gap
AI guidance often emphasizes the necessity of a “human in the loop.” Carganico thinks this needs to go one step further.
“The final validation of every output that an AI platform is giving has to be validated by not only a human, but an expert in the loop. Someone who can read the data,” Carganico said. These experts often have decades’ worth of firsthand experience, gained before AI was present in the lab.
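A minimal sketch of such an “expert in the loop” gate, where an AI result is held until a named, qualified reviewer approves it; the Reviewer type, qualification flag and release_result function are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    is_qualified_expert: bool  # e.g., verified domain credentials, not just any user

def release_result(ai_output: dict, reviewer: Reviewer, approved: bool) -> dict:
    """Block release of an AI result unless a qualified expert has approved it."""
    if not reviewer.is_qualified_expert:
        raise PermissionError(f"{reviewer.name} is not a qualified expert reviewer")
    if not approved:
        raise ValueError("Expert rejected the AI output; result not released")
    return {**ai_output, "reviewed_by": reviewer.name, "status": "released"}

result = release_result({"impurity_flag": False}, Reviewer("Dr. Rossi", True), approved=True)
```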
This could become a problem in the future, however, as the tasks necessary to gain this expertise are taken over by AI, Carganico said. If machines do the entry-level work, the next generation may struggle to gain experience.
The human touch of teaching and mentoring is one thing AI will not be able to replace, Carganico said.
“Our work will become very hybrid in the future,” he said. While AI will take over some tasks, humans must remain at the center.