Computer System Validation (CSV) Challenges in Life Sciences (Part 2)

As discussed in Part 1 of this blog series, pharmaceutical, biotech and medical device companies use computerized systems for various purposes, including R&D, clinical trials, manufacturing, facilities and distribution. To ensure the quality, safety, and efficacy of drugs, it is essential to validate these computerized systems and the software they run. Regulatory agencies require computer system validation (CSV) to verify and document that a computerized system does exactly what it is designed to do in a consistent and reproducible manner. CSV is required when implementing a new computerized system or when making a change to an existing validated system (upgrades, patches, extensions, etc.). Examples of computerized systems that must be validated include:

  • Laboratory data capture devices
  • Automated laboratory equipment
  • Manufacturing execution systems
  • New or modified manufacturing equipment and processes
  • Laboratory, clinical or manufacturing database systems

When done properly, CSV can be a smooth and efficient process. There are, however, several common mistakes that companies can make when undertaking CSV, which could result in a stalled or failed process. Let’s examine some of these pitfalls in more detail.

Varied Processes and Standards: Standards such as policies, procedures, work instructions, and templates exist across the organization, yet they can vary by business, department, or site. Overlapping SOPs and inconsistent standards create significant costs and make it difficult to share assets. Organizations should invest in developing and maintaining a centrally controlled set of harmonized standards, with integrated risk analysis from subject matter experts focused on the different business areas. This minimizes duplicated effort and raises quality by drawing on the top experts across the organization. These efforts are commonly supported by vendors with significant CSV expertise.

Organization and Governance: Many companies still have decentralized governance and uncontrolled execution. Ownership and management of validation activities vary from project to project and from one department to another, and projects are not handled consistently with clear roles and responsibilities: some are led by IT or QC labs, others by business users or quality. Centralized, uniform monitoring and benchmarking allow a company to identify deficiencies, improve compliance processes, and enhance quality. Compliance with quality standards then shifts from an overhead cost to an asset for the company.

Tools and Automation: System lifecycle assets such as templates, work instructions, forms, and guidance documents are often inconsistent across departments. These tools are put in place to keep project team members from taking shortcuts and skipping steps, but they tend to be inflexible and drive unnecessary effort with minimal value to quality or compliance. A centralized, consistent, flexible, shared, and continually enhanced set of automation tools (e.g., Kneat or ValGenesis software), supported by quality coaching, will drive value, help project teams produce consistent deliverables faster, and provide the right level of consistency. While added flexibility can open the door to shortcuts and negligence, quality coaching and quality reviews mitigate those risks.
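As a rough illustration of what a centrally maintained, tool-enforced standard might look like, independent of Kneat, ValGenesis, or any particular product, here is a short Python sketch that checks a draft validation deliverable against a shared template definition. The deliverable type and section names are assumptions made purely for illustration.

```python
# Illustrative sketch only: a centrally maintained template definition that
# automation tooling could use to check validation deliverables for
# completeness. Deliverable types and section names are hypothetical
# assumptions, not the schema of Kneat, ValGenesis, or any other product.

HARMONIZED_TEMPLATES = {
    "validation_plan": [
        "purpose_and_scope",
        "roles_and_responsibilities",
        "risk_assessment_summary",
        "acceptance_criteria",
        "approval_signatures",
    ],
}

def missing_sections(deliverable_type: str, draft_sections: list[str]) -> list[str]:
    """Return required sections that the draft deliverable is still missing."""
    required = HARMONIZED_TEMPLATES[deliverable_type]
    return [section for section in required if section not in draft_sections]

# Example: a draft validation plan that skipped the risk assessment summary.
draft = [
    "purpose_and_scope",
    "roles_and_responsibilities",
    "acceptance_criteria",
    "approval_signatures",
]
print(missing_sections("validation_plan", draft))
# -> ['risk_assessment_summary']
```

A shared definition like this lets every site draft against the same structure while the quality coaches, rather than the template itself, decide when a deviation is acceptable.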

Risk Profile: Many organizations fail to build a thorough risk profile early in the process, and wasted time and work result. Risk identification can draw on a variety of sources: system stakeholders, historical data, SMEs, vendor audits, and so on. Qualified personnel should carry out a thorough process of identifying all potential system risks (system failures) and document them. Once the potential risks of system failure have been identified, they should be classified by their likelihood of occurring. The likelihood of system failure can be thought of as a measure of system complexity and is classified in the GAMP 5 guidelines as low, medium, or high. A risk-based approach to CSV ensures that the computer system functionalities carrying the highest risk receive the most focused validation effort first.
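To make the risk-based prioritization concrete, here is a minimal Python sketch that ranks hypothetical system functions by a simple likelihood-times-impact score. The function names, ratings, and scoring scheme are assumptions for illustration only; they are not taken from GAMP 5 or any vendor tool.

```python
# Minimal, illustrative sketch of risk-based prioritization for CSV.
# The system functions, ratings, and scoring scheme are hypothetical
# assumptions, not taken from GAMP 5 or any specific vendor tool.

RATING = {"low": 1, "medium": 2, "high": 3}

# Each entry: (system function, likelihood of failure, impact of failure)
identified_risks = [
    ("electronic signature capture", "medium", "high"),
    ("batch record calculation",     "high",   "high"),
    ("report formatting",            "low",    "low"),
    ("audit trail logging",          "medium", "high"),
]

def risk_priority(likelihood: str, impact: str) -> int:
    """Simple risk score: a higher score means validate earlier and deeper."""
    return RATING[likelihood] * RATING[impact]

# Rank so the highest-risk functionality receives focused validation first.
ranked = sorted(
    identified_risks,
    key=lambda r: risk_priority(r[1], r[2]),
    reverse=True,
)

for function, likelihood, impact in ranked:
    print(f"{function}: likelihood={likelihood}, impact={impact}, "
          f"score={risk_priority(likelihood, impact)}")
```

However the scoring is defined, the point is the same: the ranked output, not the order in which risks happened to be identified, drives where validation effort goes first.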

Resources and Training: CSV processes are often deployed without appropriate training and coaching and without assurance of consistent interpretation. Central organizational control, with business, IT, and quality personnel deeply involved and integrated in the CSV effort and with quality coaches (CSV subject matter experts) acting as a center of excellence, enables a successfully executed risk-based approach to CSV. Evolving toward business-user ownership of CSV is also essential, enabling system owners and business areas to drive compliance based on business and regulatory risk.

CSV is an important part of confirming the accuracy and integrity of your data, along with ensuring product safety and effectiveness. Inefficient or ineffective CSV processes can prevent projects from being delivered on time and within budget and may also result in regulatory action. CSV today requires a paradigm shift from a project-specific engagement to a comprehensive, centralized compliance program.