FDA Issues Warning for Inappropriate AI Use: What Pharma Manufacturers Need to Know
On April 2, 2026, the Food and Drug Administration (FDA or Agency) issued a warning letter to Purolea Cosmetic Lab identifying significant violations of Current Good Manufacturing Practice (cGMP) regulations for finished pharmaceuticals, as outlined in 21 CFR Parts 210 and 211. These findings indicate that the facility’s methods, controls, and manufacturing processes did not conform to federal standards. Among the violations cited in the warning letter was the firm’s use of artificial intelligence (AI) agents to create specifications, procedures, and records intended for compliance with FDA requirements.
Unpacking the FDA Warning Letter for Inappropriate AI Use
The FDA warning letter highlighted the risks of overreliance on AI within pharmaceutical manufacturing environments. The facility faced scrutiny for using automated agents to generate critical compliance documentation and production records without the necessary human oversight and review.
The AI tools failed to account for the legal mandates governing process validation, and as a result the company violated federal safety standards without realizing it. The Agency emphasizes that while AI can assist in operations, all AI-generated output must be critically reviewed by qualified personnel to ensure accuracy.
The FDA states: “We recognize that you have ceased drug production. If you plan to resume drug production, and use AI to help with cGMP activities, such as development of procedures and specifications, any output or recommendations from an AI agent must be reviewed and cleared by an authorized human representative of your firm’s QU in accordance with section 501(a)(2)(B) of the FD&C Act. See also 21 CFR 211.22; 21 CFR 211.100.”
FDA and AI Use in GMP Environments
The FDA views AI as an aid rather than a replacement for human judgment. Treating AI as the final authority on documentation within manufacturing operations can lead to gaps in regulatory compliance and product safety. Moving forward, the firm must implement rigorous human verification protocols before resuming any production activities to maintain quality control.
This warning serves as a stern reminder that automated tools do not replace the responsibilities of a firm’s quality unit.
Ensuring Safe AI Use in Regulatory Environments
AI can accelerate documentation; however, in a regulated environment, speed without control is a serious liability. You must design and incorporate controls that demonstrate AI outputs are rigorously reviewed, traceable, and governed within your quality system.
Some things to consider:
- Classify the GxP risk level of the proposed output: High, Medium, Low
- Check output for hallucinations and outdated references
- Have content reviewed by a second independent Subject Matter Expert (SME)
- Have the output reviewed by Quality Assurance/Regulatory Affairs (QA/RA) for compliance with applicable regulations
- Confirm Data Integrity (ALCOA+ Principles)
- Ensure the document generation process can be explained during inspection
- Perform Computer Software Assurance (CSA) validation of the AI generation tool
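To make the review gates above concrete, here is a minimal, hypothetical sketch of how a firm might record them in software. The class and field names (`AIOutputReview`, `gxp_risk`, `approved_for_use`, etc.) are illustrative assumptions, not an FDA-mandated schema or any particular quality-system product; the point is simply that each human gate is an explicit, auditable record rather than an informal step.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical, illustrative record for one AI-generated document moving
# through the review gates listed above. Names are assumptions for the
# sketch, not a regulatory schema.

RISK_LEVELS = {"High", "Medium", "Low"}  # GxP risk classification of the output

@dataclass
class AIOutputReview:
    document_id: str
    gxp_risk: str                       # High / Medium / Low
    hallucination_check: bool = False   # screened for fabricated or outdated references
    sme_review: bool = False            # second, independent SME review
    qa_ra_review: bool = False          # QA/RA review against applicable regulations
    data_integrity_check: bool = False  # ALCOA+ principles confirmed
    audit_trail: list = field(default_factory=list)

    def log(self, event: str) -> None:
        # Timestamped entries support traceability during an inspection.
        self.audit_trail.append((datetime.now(timezone.utc).isoformat(), event))

    def approved_for_use(self) -> bool:
        # Usable only with a valid risk classification and every gate passed.
        return (
            self.gxp_risk in RISK_LEVELS
            and self.hallucination_check
            and self.sme_review
            and self.qa_ra_review
            and self.data_integrity_check
        )

review = AIOutputReview(document_id="SOP-0042-draft", gxp_risk="High")
review.log("AI draft generated")
print(review.approved_for_use())   # False: no human gates completed yet

review.hallucination_check = True
review.sme_review = True
review.qa_ra_review = True
review.data_integrity_check = True
review.log("All review gates completed")
print(review.approved_for_use())   # True
```

The design choice worth noting is that approval is computed from the gates, never stored directly: there is no way to mark an AI output "approved" without each human check being recorded first, which mirrors the FDA's expectation that a qualified person, not the tool, clears the output.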
The overarching lesson from the Purolea warning letter is not that AI introduces new regulatory risks, but that it can amplify existing weaknesses if not properly governed. Compliance ultimately depends on disciplined operations, empowered quality oversight, and informed human judgment. AI may accelerate the work, but responsibility remains human.
By helping clients strengthen these foundational elements while implementing thoughtful AI governance, Clarkston Consulting can play a pivotal role in ensuring that innovation and compliance advance hand in hand. Reach out to us today.


