
Leveraging CSA Principles for AI Validation

Contributors: Courtney Woodson

As pharmaceutical companies explore the use of artificial intelligence (AI) across GxP-regulated processes, many find themselves stalled in pilot phases due to unclear validation pathways. The rapid evolution of AI technologies has outpaced traditional validation approaches. Without a defined framework to account for AI’s unique characteristics, organizations face uncertainty in how to validate these systems and ensure compliance.  

While Computer Software Assurance (CSA) has already shifted the industry's focus toward risk-based, critical-thinking approaches, applying CSA principles to AI introduces new considerations. Grounded in CSA principles, this piece outlines a practical framework for validating AI in regulated environments, starting with defining intended use and progressing to validation activities tailored to system complexity and process risk.

Computer Software Assurance (CSA) 

CSA emphasizes testing what matters most: the areas of highest process risk. It applies critical thinking and a risk-based methodology to focus validation efforts on features that pose higher risks to product quality or patient safety, while reducing emphasis on lower-risk functionality.

Identifying the Intended Use 

To effectively integrate AI in GxP environments, the first step is clearly defining its intended use. This includes understanding what process(es) the system supports, what decisions it influences, and how it impacts product quality or patient safety.  

Emerging frameworks, such as the AI Maturity Model developed by the ISPE D/A/CH AI Validation Group, offer helpful structure for this assessment. The model considers two key dimensions:  

  • Control Design: the degree to which the AI system governs GxP tasks  
  • Autonomy: how independently the system can update, learn, or evolve 

Mapping a system against these dimensions helps organizations tailor validation efforts to risk. For example, a locked decision-support tool may require minimal validation, while a self-learning system driving GxP-critical actions would demand more rigorous testing and oversight.
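
To make this concrete, the sketch below shows one way the two dimensions could be encoded to drive an indicative assurance tier. The control-design and autonomy levels, the scoring, and the thresholds are illustrative assumptions for this piece, not the scales defined by the ISPE model.

```python
from enum import IntEnum

class ControlDesign(IntEnum):
    """Degree to which the AI system governs GxP tasks (illustrative levels)."""
    HUMAN_IN_THE_LOOP = 1      # AI suggests, a person decides
    HUMAN_ON_THE_LOOP = 2      # AI acts, a person reviews
    AUTONOMOUS_CONTROL = 3     # AI governs the GxP task directly

class Autonomy(IntEnum):
    """How independently the system can update, learn, or evolve (illustrative levels)."""
    LOCKED = 1                 # static, frozen model
    PERIODIC_RETRAINING = 2    # updated only under change control
    CONTINUOUS_LEARNING = 3    # self-updating in production

def validation_rigor(control: ControlDesign, autonomy: Autonomy) -> str:
    """Map the two maturity dimensions to an indicative validation tier."""
    score = int(control) + int(autonomy)
    if score <= 3:
        return "minimal assurance (e.g., vendor assessment plus unscripted testing)"
    if score <= 4:
        return "moderate assurance (risk-based scripted testing of critical features)"
    return "high assurance (rigorous scripted testing and ongoing performance monitoring)"

# A locked decision-support tool vs. a self-learning system driving GxP-critical actions
print(validation_rigor(ControlDesign.HUMAN_IN_THE_LOOP, Autonomy.LOCKED))
print(validation_rigor(ControlDesign.AUTONOMOUS_CONTROL, Autonomy.CONTINUOUS_LEARNING))
```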

Starting with a clear view of intended use, aligned with system maturity, enables teams to apply CSA principles effectively, ensuring compliance while avoiding unnecessary burden.

Whether using a custom-built or off-the-shelf solution, organizations should engage qualified vendors early to ensure alignment with both technical requirements and regulatory expectations. Collaborative planning at this stage helps clarify intended use, support risk assessments, and streamline CSA-based validation activities.  

Determining the Risk-Based Approach 

Once the intended use has been defined, the next step is to assess the risk posed by potential AI system failure. Specifically, teams should consider whether a failure to perform as intended could compromise product safety, identity, strength, purity, quality (SISPQ), and/or data integrity. This includes evaluating factors such as system configuration, data storage, security controls, data transfer mechanisms, and the potential for operational error. The goal is to identify “reasonably foreseeable” failure modes and determine whether those failures would pose a high process risk.  
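
As a simple illustration of this screening step, the sketch below classifies reasonably foreseeable failure modes by their potential impact on SISPQ and data integrity. The fields and decision rule are assumptions for illustration; an actual assessment would follow the organization's own quality risk management procedures.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """A reasonably foreseeable way the AI system could fail to perform as intended."""
    description: str
    impacts_sispq: bool           # could it compromise safety, identity, strength, purity, quality?
    impacts_data_integrity: bool  # could it compromise GxP data integrity?
    detectable_before_harm: bool  # would existing controls catch it in time?

def process_risk(fm: FailureMode) -> str:
    """Classify a failure mode as high, medium, or low process risk (illustrative rule)."""
    if (fm.impacts_sispq or fm.impacts_data_integrity) and not fm.detectable_before_harm:
        return "high"
    if fm.impacts_sispq or fm.impacts_data_integrity:
        return "medium"
    return "low"

modes = [
    FailureMode("Model misclassifies an out-of-spec batch as passing", True, False, False),
    FailureMode("Interface drops an audit-trail entry during data transfer", False, True, True),
    FailureMode("Dashboard renders a report with the wrong layout", False, False, True),
]
for fm in modes:
    print(f"{process_risk(fm):>6}: {fm.description}")
```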

Determining the Appropriate Assurance Activities  

For AI implementations, the primary objective of validation is to ensure that data, information, and intellectual property remain secure, accurate, and protected. Validation activities should be right-sized in proportion to the potential risk to SISPQ. Within a CSA framework, this translates to a least burdensome approach: testing only what is necessary to mitigate identified risks. This enables more efficient use of resources while maintaining high standards of product quality and regulatory compliance.
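
One way to express this right-sizing is a simple matrix that ties each process-risk level to a least burdensome set of assurance activities, as sketched below; the activities listed per tier are illustrative assumptions, not a prescribed regulatory list.

```python
# Illustrative least-burdensome assurance matrix keyed by process-risk level.
ASSURANCE_ACTIVITIES = {
    "high":   ["scripted testing with objective evidence",
               "traceability to requirements",
               "independent review of results"],
    "medium": ["unscripted / exploratory testing",
               "summary documentation of findings"],
    "low":    ["leverage vendor testing and release records",
               "record of acceptance"],
}

def plan_assurance(risk_level: str) -> list[str]:
    """Return the indicative set of assurance activities for a given process risk."""
    return ASSURANCE_ACTIVITIES[risk_level]

print(plan_assurance("high"))
```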

Establishing the Appropriate Record 

When using AI in regulated environments, sufficient objective evidence must be provided to demonstrate that the AI systems have been independently assessed, meet applicable regulatory requirements, and consistently perform as intended. Documentation should be traceable, risk-justified, and aligned with the intended use of the system.  
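
As an illustration of what traceable, risk-justified evidence could look like in structured form, the sketch below links a requirement to its risk level, assurance activity, and objective evidence; the field names and identifiers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AssuranceRecord:
    """Illustrative traceability record for one assured requirement."""
    requirement_id: str      # intended-use requirement being assured
    risk_level: str          # outcome of the process-risk assessment
    assurance_activity: str  # e.g., scripted test, exploratory test, vendor record
    evidence_ref: str        # pointer to the objective evidence (report, log, ticket)
    result: str              # pass / fail / deviation reference

record = AssuranceRecord(
    requirement_id="URS-014: flag batches outside release specification",
    risk_level="high",
    assurance_activity="scripted test",
    evidence_ref="TST-014-01",
    result="pass",
)
print(record)
```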

Maintaining AI in a Validated State 

Many AI technologies, particularly those incorporating open-source components or dynamic models, require robust change management to remain in a validated state. This includes impact assessments, code reviews, version control, and regression testing. Any updates to models, algorithms, or underlying code should be evaluated for their effect on validated functionality to ensure the system continues to operate within the approved parameters.  
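
The sketch below illustrates one possible regression gate for model updates, assuming a held-out, GxP-representative test set and an approved performance baseline; the metrics, thresholds, and tolerance are illustrative assumptions rather than regulatory requirements.

```python
# Approved baseline from the original validation exercise (illustrative values).
APPROVED_BASELINE = {"accuracy": 0.95, "false_negative_rate": 0.02}
TOLERANCE = 0.005  # allowable drift before revalidation is triggered

def regression_gate(candidate_metrics: dict[str, float]) -> bool:
    """Return True if an updated model stays within the validated performance envelope."""
    if candidate_metrics["accuracy"] < APPROVED_BASELINE["accuracy"] - TOLERANCE:
        return False
    if candidate_metrics["false_negative_rate"] > APPROVED_BASELINE["false_negative_rate"] + TOLERANCE:
        return False
    return True

candidate = {"accuracy": 0.952, "false_negative_rate": 0.021}
print("release approved" if regression_gate(candidate) else "revalidation required")
```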

How Organizations Can Prepare  

To prepare an organization for the use of AI, it's critical to assess its current operational state and compare it with future business goals, compliance requirements, and system capabilities. Any AI tool is only as good as the data it relies on, so a thorough data assessment should also be carried out.

It may be necessary to adopt new procedures or revise existing ones to ensure data is accurate, consistent, and structured for AI use. Clarkston Consulting brings deep expertise in the life sciences industry and life science technologies, along with the experience needed to help guide your journey to AI.

 


Contributions by Ayan Mohamud and Loreen Nyangweso