
The Validation Specialists

Published by Premier Validation

An Easy to Understand Guide | Software Validation


Software Validation

First Edition

© Copyright 2010 Premier Validation

All rights reserved. No part of the content or the design of this book
may be reproduced or transmitted in any form or by any means without the
express written permission of Premier Validation.

The advice and guidelines in this book are based on the authors' experience
of more than a decade in the Life Science industry, and as such are either a
direct reflection of the "predicate rules" (the legislation governing the
industry) or best practices used within the industry. The authors take no
responsibility for how this advice is implemented.

Visit Premier Validation on the web at www.premiervalidation.com or


visit our forum at www.askaboutvalidation.com

ISBN 978-1-908084-02-6



So what's this book all about?

Hey there,
This book has drawn on years of the authors' experience in the field of
Software Validation within regulated environments, specifically Biotech and
Pharmaceutical. We have written and published this book as an aid to anyone
working in the software validation field, as well as anyone who is
interested in software testing.

Every software validation effort seems to be a combination of checking
procedures, updating procedures, and reviewing good practice guides and
industry trends in order to make the validation effort more robust and as
easy as possible to put in place.

This can often be a tiresome and long-winded process: the technical
wording of each piece of legislation, and of standards such as ASTM and
ISO, just seems to make us validation folks feel more like lawyers half of
the time.

The purpose of this book is to pull all of that together - to make a
Software Validation book written in easy to understand language, to give
help and guidance on the approach taken to validate software, and to lay
out an easy launchpad that allows readers to search for more detailed
information as and when it is required.



We hope that the information in this book will be as enjoyable for you to
use as it was for us to put together, and that your next software validation
project will be all the more welcome for it.

So I think it's pretty clear, you've just purchased the Software Validation
bible.

Enjoy!



The brains behind the operation!

Program Director: Graham O'Keeffe


Content Author: Orlando Lopez
Technical Editor: Mark Richardson
Editor: Anne-Marie Smith
Printing History: First Edition: February 2011
Cover and Graphic Design: Louis Je Tonno

Notice of Rights
All rights reserved. No part of this book may be reproduced, stored in a
retrieval system, or transmitted in any form or by any means, without the
prior written permission of the copyright holder, except in the case of brief
quotations embedded in critical articles or reviews.

Notice of Liability
The author and publisher have made every effort to ensure the accuracy of
the information herein. However, the information contained in this book is
sold without warranty, either express or implied. Neither the authors, nor
Premier Validation Ltd, nor its dealers or distributors will be held liable for
any damages caused either directly or indirectly by the instructions
contained in this book.

The Validation Specialists

Published by Premier Validation Ltd


Web: www.premiervalidation.com
Forum: www.askaboutvalidation.com
Email: query@premiervalidation.com

ISBN 978-1-908084-02-6

Printed and bound in the United Kingdom



Table of Contents
Purpose of this Document
What is Software Validation?
Why Validate?
Validation is a Journey, Not a Destination

Planning for Validation
1: Determine What Needs to be Validated
2: Establish a Framework
3: Create a Validation Plan for Each System
Software Development Life Cycle

Validation Protocols
Validation Protocol
Design Qualification (DQ)
Installation Qualification (IQ)
Operational Qualification (OQ)
Performance Qualification (PQ)
Other Test Considerations

Validation Execution
Preparing for a Test
Executing and Recording Results
Reporting
Managing the Results

Maintaining the Validated State
Assessing Change
Re-testing
Executing the Re-test
Reporting

Special Considerations
Commercial
Open Source Systems
Excel Spreadsheets
Retrospective Validation
Summary
Frequently Asked Questions
Appendix A: Handling Deviations
Appendix B: Handling Variances
Appendix C: Test Development Considerations
Appendix D: Capturing Tester Inputs and Results
References
Glossary
Quiz


Purpose of this Document
This document addresses software validation for support
systems—that is, systems used to develop, deliver, measure, maintain, or
assess products, such as Document Management Systems, Manufacturing
Execution Systems (MES), CAPA applications, and manufacturing and control
systems. The main purpose of this document is to help you establish a solid
validation process.

The validation procedures in this document address software that is
vendor supplied, Commercial Off-The-Shelf (COTS), internally developed,
or a hybrid (customized COTS software).

Throughout this document, “best practices,” which are not required
but have been proven invaluable in validating software, will be
noted by a hand symbol as shown here.

What is Software Validation?

Software validation is a comprehensive and methodical approach that
ensures a software program does what it is intended to do and works in
your environment as intended. Some software vendors verify that the
requirements of the software are fulfilled but do not validate the entire
system (network, hardware, software, processing, and so on).

Verification is a systematic approach to confirming that computerised
systems (including software), acting singly or in combination, are fit for
intended use, have been properly installed, and are operating correctly. It
is an umbrella term that encompasses all types of approaches to assuring
systems are fit for use, such as qualification, commissioning and
qualification, verification, and system validation.

Why Validate?

Regulations often drive the need to validate software, including those
required by:

- the United States Food and Drug Administration (FDA);
- the Sarbanes-Oxley (SOX) Act;
- the European Medicines Agency (EMEA) (see EudraLex).

All countries and regions around the world have their own set of rules
and regulations detailing validation requirements: EudraLex is the collection
of rules and regulations governing medicinal products in the European
Union; the FDA is the US equivalent; and in Japan it is the Japanese Ministry
of Health & Welfare.

Additionally, companies operating under standards such as ISO 9001
and ISO 13485 (for medical devices) also require software validation.

But over and above regulations, the most important reason for
software validation is to ensure the system will meet the purpose for which
you have purchased or developed it, especially if the software is “mission
critical” to your organization and you will rely on it to perform vital functions.
A robust software validation effort also:

- Utilises established incident management, change management
and release management procedures, both operationally and to
address any errors or system issues;
- Demonstrates that you have objective evidence to show that the
software meets its requirements;
- Verifies the software is operating in the appropriate secure
environment;
- Shows that any changes are being managed with change control
(including the managed roll-out of upgrades) and with roll-back
plans, where appropriate;
- Verifies the data used or produced by the software is being backed up
appropriately and can be restored;
- Ensures users are trained on the system and are using it within its
intended purpose in conjunction with approved operating
procedures (for commercially-procured software, this means in
accordance with the manufacturer's scope of operations);
- Ensures that a business continuity plan is in place if a serious
malfunction to the software or environment occurs.

Validation is a Journey,
Not a Destination

Being in a validated state, by the way, does not mean that the software
is bug-free or that once it's validated, you're done. Systems are not static.

Software patches must be applied to fix issues, new disk space may
need to be added as necessary, and additions and changes in users occur.
Being in a “validated state” is a journey, not a destination. It's an iterative
process to ensure the system is doing what it needs to do throughout its
lifetime.

Note: Any changes to the validated system must be performed in a


controlled fashion utilising change control procedures and performing
documentation updates as necessary. The documentation must be a
reflection of the actual system.

Planning for Validation
As with most efforts, planning is a vital component for success. It's the
same for validation. Before beginning the validation process, you must:

1. Determine what needs to be validated;


2. Establish a framework;
3. Create a validation plan.

1 Determine What Needs
to be Validated

The first step is to create an inventory of software systems and identify
which are candidates for validation. A good rule of thumb is to validate all
software:

· That is required to be validated based on regulatory requirements;
· Where there is a risk and where it can impact quality (directly or
indirectly).
This could include spreadsheets, desktop applications, manufacturing
systems software, and enterprise-level applications. If your organization is
small, this should be a relatively easy task. If you have a large organization,
consider breaking up the tasks by functional area and delegating them to
responsible individuals in those areas.

Risk Management
Validation efforts should be commensurate with the risks. If a human
life depends on the software always functioning correctly, you'll want to take
a more detailed approach to validation than for software that assesses color
shades of a plastic container (assuming the color shade is only a cosmetic
concern).

If quality may be affected or if the decision is made that the system
needs to be validated anyway, a risk assessment should be used to
determine the level of effort required to validate the system. There are
various ways to assess risk associated with a system. Three of the more
common methods are:

· Failure Modes and Effects Analysis (FMEA) – an approach that
considers how the system could fail, with analysis of the ultimate
effects;
· Hazard Analysis – a systematic approach that considers how
systems could contribute to risks;
· Fault Tree Analysis (FTA) – a top-down approach that starts from a
specific fault (failure) and identifies what must happen for the fault
to be realized.

For each system assessed, document the risk assessment findings, the
individuals involved in the process, and the conclusions from the process.
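
To illustrate how an FMEA-style assessment can be recorded and ranked, here is a minimal Python sketch that scores each failure mode for severity, occurrence and detectability and sorts by the resulting risk priority number (RPN = severity × occurrence × detection). The failure modes, ratings and ranking approach are invented for illustration, not taken from any regulation or standard.

# Minimal FMEA-style scoring sketch (ratings and failure modes are illustrative).
# Each failure mode is rated 1-10 for severity, occurrence and detection;
# RPN = severity * occurrence * detection - higher scores warrant deeper validation.

failure_modes = [
    {"mode": "Audit trail entry not written", "severity": 9, "occurrence": 3, "detection": 4},
    {"mode": "Report prints wrong units", "severity": 7, "occurrence": 2, "detection": 2},
    {"mode": "Login screen colour incorrect", "severity": 1, "occurrence": 5, "detection": 1},
]

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Rank so the highest-risk items drive the depth of the validation effort.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(f"{fm['mode']:<32} RPN = {fm['rpn']}")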

Note: ASTM E2500 Standard Guide for Specification, Design, and Verification of
Pharmaceutical and Biopharmaceutical Manufacturing Systems and Equipment
is a very useful tool for developing a risk based approach to validation and
achieving QbD (Quality by Design). Similarly, ISO 14971 is a good reference for
general risk management.

Formal risk assessments provide a documented repository for the
justification of approaches taken in determining the level of validation
required and can be used in the future for reference purposes.

During the risk assessment, items that may lead to problems with the
system, validation effort or both should be addressed; this is called risk
mitigation and involves the systematic reduction in the extent of exposure to
a risk and/or the likelihood of its occurrence (sometimes referred to as risk
reduction).

A system owner should be appointed with overall responsibility for each
system – this person will be knowledgeable about the system (or about the
business/system requirements for new systems). The person who purchased
the product, or the person responsible for development of the system, will
usually become the system owner – this person will be a key representative
at any risk assessment.

Document all decisions in a formal risk assessment document and make
sure it is approved (signed off) by all stakeholders, including the System
Owner and QA.

2 Establish a Framework

When building a house, the first thing you must do is create a blueprint.
A Validation Master Plan (VMP) is a blueprint for the validation of your
software. The VMP provides the framework for how validation is performed
and documented, how issues are managed, how to assess validation impact
for changes, and how to maintain systems in a validated state.

The VMP covers topics such as:

· The scope of the validation;
· The approach for determining what is validated (unless covered in a
separate corporate plan, for example, a quality plan);
· Elements to be validated (and how to maintain the list);
· Inter-relationships between systems;
· Validation responsibilities;
· Periodic reviews;
· Re-validation approaches;
· The rationale for not validating certain aspects of a system (so you
don't have to revisit your decision during audits);

· The approach to validating applications that have been in production
but have not been validated (retrospective vs prospective), including
systems that are already commissioned and live but not formally
validated;
· General timelines, milestones, deliverables and the roles and
responsibilities of resources assigned to the validation project.

The majority of the VMP is static. To avoid re-releasing the entire
document, maintain specific elements to be validated and the top-level
schedule in separate, controlled documents.

The VMP should include a section on training and the minimum level of
training/qualification. This can either be a statement referring to a training
plan or a high-level statement.

The VMP should also include, in general terms, the resources (including
minimum qualifications) necessary to support the validation. Again, you
don't need to take it to the level of, “Mary will validate the user interface
application <x>.” It should be more like, “Two validation engineers who have
completed at least three software validation projects and XYZ training.”
Resources required may include outside help (contractors) and any special
equipment needed.

Error handling
Finally, the VMP should describe how errors (both in the protocols and
those revealed in the software by the protocols) are handled. This should
include review boards, change control, and so on, as appropriate for the
company. For a summary of a typical deviation, see Appendix A. For a
summary of protocol error types and typical validation error-handling, see
Appendix B.

Well-explained, rational decisions (say what you do and do what you
say) in the VMP can avoid regulatory difficulties.

Note: For regulated industries, a VMP may also need to include non-software
elements, such as manufacturing processes. This is outside the scope of this
document.

3 Create a Validation Plan
for Each System

You must create a validation plan for each software system you need to
validate. Like the VMP, the validation plan specifies resources required and
timelines for validation activities, but in far greater detail. The plan lists all
activities involved in validation and includes a detailed schedule.

The validation plan's scope is one of the most difficult things to
determine due to multiple considerations, including servers, clients, and
stand-alone applications. Is it required that the software be validated on
every system on which the software runs? It depends on the risk the
software poses to your organization or to people.

Generally, all servers must be validated. If there are a limited number of


systems on which a stand-alone application runs, it may be better to qualify
each one (and justify your approach in the plan). Again, it depends on the
risk. For example, if human lives might be at stake if the system were to fail,
it's probably necessary to validate each one. You must assess the risks and
document them in the appropriate risk assessment.



The results of your risk analysis become activities in the validation plan
or requirements in your system specifications (e.g. the URS). For example, if
the risk analysis indicates that specific risk mitigations be verified, these
mitigations become new requirements which will be verified. For example, if
a web application is designed to run with fifty concurrent users, but during
stress testing it is identified that the system becomes unstable after 40
concurrent logins, then a requirement of the system must be that it has
adequate resources to accommodate all fifty users.

The validation plan must also identify required equipment and whether
calibration of that equipment is necessary. Generally, for software
applications, calibrated equipment is not necessary (but not out of the
question).

The validation plan should specify whether specialized training or


testers with specialized skills are needed. For example, if the source code is
written in Java, someone with Java experience would be needed.

Finally, if the software is to be installed at multiple sites or on multiple
machines, the validation plan should address the approach to take. For
example, if the computers are exactly the same (operating system, libraries,
and so on), then a defensible solution for validation across the installation
base could be to perform installation verification (or possibly even a subset
of it) at each location where the software is installed. To further mitigate
risk, the OQ or parts of the OQ could also be run; whatever the strategy, it
should be documented in the VMP, the Risk Assessment, or both (without
too much duplication of data).

For customised or highly-configurable software, your plan may need to


include an audit or assessment of the vendor. This would be necessary to
show that the vendor has appropriate quality systems in place, manages and
tracks problems, and has a controlled software development life cycle.

If safety risks exist, you will be in a more defensible position if you
isolate tests for those risks and perform them on each installation.

The VP/VMP are live documents that are part of the System
Development Lifecycle (SDLC) and can be updated as required. Typical
updates can include cross reference changes and any other detail that might
have changed due to existing business, process or predicate quality system
changes.

Software Development
Life Cycle

Both the development of software and the validation of that software
should be performed in accordance with a proven System Development
Lifecycle (SDLC) methodology.

A properly implemented SDLC allows the system and validation
documentation to be produced in a way such that a great level of
understanding about the system can be gained by the design,
implementation and validation teams, whilst putting down the foundations
for maintenance and for managing changes and configurations.

There are many SDLC models that are acceptable and there are benefits
and drawbacks with each. The important thing is that:

· There is objective evidence that a process was followed;
· There are defined outputs that help ensure the effort is controlled.

It's not always the case that a software product is developed under such
controlled conditions. Sometimes, a software application evolves from a
home-grown tool which, at the time, wasn't expected to become a system
that could impact the quality of production.

Similarly, a product may be purchased from a vendor that may or may
not have been developed using a proven process. In these cases, a set of
requirements must exist, at a minimum, in order to know what to verify, and
the support systems must be established to ensure the system can be
maintained.

Let's look at a typical “V” development model:

[Figure: the "V" model – requirements analysis pairs with system testing,
high-level design with integration testing, and detailed design with unit
testing, with implementation at the base of the "V".]

In this model, the defined outputs of one stage are inputs to a
subsequent stage. The outputs from each stage are verified before moving
to the next stage. Let's look at the requirements and support stages a bit
more closely.

The Requirements stage
Requirement specifications are critical components of any validation
process because you can verify only what you specify—and your verification
can be only as good as your specifications. If your specifications are vague,
your tests will be vague, and you run the risk of having a system that doesn't
do what you really want. Without requirements, no amount of testing will
get you to a validated state.

How to specify requirements


Because systems are ever evolving—whether to fix problems, add a
new capability, or support infrastructure changes—it's important to keep
the requirements specifications (User/Functional and Design Specs) up to
date. However, if requirements are continually changing, you run the risk of
delays, synchronization issues and cost overruns. To avoid this,
“baseline” your requirements specifications and updates so they are
controlled and released in a managed manner. Well-structured specification
documents facilitate upkeep and allow the verification protocols to be
structured so that ongoing validation activities can be isolated to only the
changed areas and regression testing.

Requirements must be specified in a manner in which they can be
measured and verified.

Use “shall” to specify requirements. Doing so makes verifiable
requirements easy to find. Each requirement should contain
only one “shall.” If there is more than one, consider breaking
up the requirement.
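
As a small, hedged illustration of the one-"shall"-per-requirement rule, the Python sketch below flags requirement statements containing more or fewer than one "shall". The requirement IDs and wording are invented; in practice the statements would come from your URS or functional specification.

import re

# Hypothetical requirement statements (normally pulled from the URS/FRS).
requirements = [
    "REQ-001: The system shall record an audit trail entry for every record change.",
    "REQ-002: The system shall respond within 15 seconds and shall log slow queries.",
    "REQ-003: The user interface is easy to use.",
]

for req in requirements:
    shall_count = len(re.findall(r"\bshall\b", req, flags=re.IGNORECASE))
    if shall_count != 1:
        print(f"Review: {req!r} contains {shall_count} 'shall' statement(s)")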

It's easy, by the way, to create “statements of fact” requirements, which


force the testing of something that adds little value. For example, “The user
interface module shall contain all the user interface code.” Yes, you could
verify this, but why? And what would you do if it failed? Completely
restructuring the code would greatly increase risk, so it's likely nothing
would be done. These types of requirements can still be stated in a
specification as design expectations or goals, just not as verifiable
requirements.

Requirements traceability
Part of the validation effort is to show that all requirements have been
fulfilled via verification testing. Thus, it's necessary to trace requirements to
the tests. For small systems, this can be done with simple spreadsheets. If
the system is large, traceability can quickly become complex, so investing in
a trace management tool is recommended. These tools provide the ability to
generate trace reports quickly, “sliced and diced” any way you want.

Additionally, some tools provide the ability to define attributes. The
attributes can then be used to refine trace criteria further. Attributes can
also be used to track test status. The tracing effort culminates in a Trace
Matrix (or Traceability Matrix). The purpose of a matrix is to map the design
elements of a validation project to the test cases that verified or validated
these requirements. The matrix becomes part of the validation evidence
showing that all requirements are fully verified.
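
For a small system, the trace matrix can be as simple as a spreadsheet or, as in the rough Python sketch below, a dictionary mapping each requirement to the tests that verify it; the requirement and test IDs are invented purely to show the shape of the data.

# Toy trace matrix: requirement ID -> test cases that verify it (IDs are invented).
trace_matrix = {
    "URS-001": ["OQ-010", "OQ-011"],
    "URS-002": ["OQ-020"],
    "URS-003": [],                      # not yet traced to any test
}

# Report requirements with no test coverage - these are gaps in the validation.
untraced = [req for req, tests in trace_matrix.items() if not tests]
print("Requirements without test coverage:", untraced or "none")

# Reverse view: which requirement(s) each test verifies.
tests_to_reqs = {}
for req, tests in trace_matrix.items():
    for test in tests:
        tests_to_reqs.setdefault(test, []).append(req)
print(tests_to_reqs)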

Caution: Tracing and trace reporting can easily become a project in itself. The tools
help reduce effort, but if allowed to grow unchecked they can become a
budget drain.

Requirements reviews
Requirements reviews are key to ensuring solid requirements. They are
conducted and documented to ensure that the appropriate stakeholders are
part of the overall validation effort. Reviewers should include:

· System architects, to confirm the requirements represent and can


support the architecture;
· Developers, to ensure requirements can be developed;
· Testers, to confirm the requirements can be verified;
· Quality assurance, to ensure that the requirements are complete;
· End users and system owners.

Validation Maintenance and Project Control
Support processes are an important component of all software
development and maintenance efforts. Even if the software was not
developed under a controlled process, at the time of validation the following
support processes must be defined and operational to ensure that the
software remains in a validated state and will be maintained in a validated
state beyond initial deployment:

· Reviews (change reviews, release reviews, and so on);


· Document control;
· Software configuration management (configuration control,
release management, configuration status accounting, and so on);
· Problem/incident management (such as fault reporting and
tracking to resolution);
· Change control.

Validation Protocol
Design Qualification (DQ)
Installation Qualification (IQ)
Operational Qualification (OQ)
Performance Qualification (PQ)
Other Considerations

Validation Protocols

So now that you've assigned your testers to develop protocols to
challenge the system and you have solid requirements, you must structure
the tests to fully address the system and to support ongoing validation.

The de facto standard for validating software is the IQ/OQ/PQ
approach, a solid methodology and one that auditors are familiar with (a
benefit). Not following this approach shouldn't get you written up for a
citation, but expect auditors to look a little more closely.

Note: An example V-Model Methodology Diagram is depicted above on page x of y

Appendix C describes several aspects of test development. These are general
guidelines and considerations that can help ensure the completeness and
viability of tests.

Design Qualification (DQ)
Design Qualification (DQ) is an often overlooked protocol, but can be a
valuable tool in your validation toolbox. You can use DQ to verify both the
design itself and specific design aspects. DQ is also a proven mechanism for
achieving Quality by Design (QbD).

If software is developed internally and design documents are produced,


DQ can be used to determine whether the design meets the design aspects
of the requirements. For example, if the requirement states that a system
must run on both 16-bit and 32-bit operating systems, does the design take
into account everything that the requirement implies?

This, too, is generally a traceability exercise and requires expertise in


software and architecture. Tracing can also show that the implementation
fully meets the design. In many cases, this level of tracing is not required. It's
often sufficient to show that the requirements are fully verified through test.

The DQ may also be used to address requirements that don't lend
themselves to operations-level testing. These are generally non-functional,
static tests. The concept comes mostly from system validation (for example,
the system shall use a specific camera, or a specific software library shall be
used), but can be applied to software as well.
Installation Qualification (IQ)
Installation Qualification (IQ) consists of a set of tests that confirm the
software is installed properly. IQ may verify stated requirements. Where this
is the case, the requirements should be traced to the specific test
objective(s).

There are three aspects to assessing IQ:

· Physical installation;
· Software lifecycle management;
· Personnel.

Where the system is client-server based, both physical installation and
software lifecycle management must be addressed. The client side is often
the most difficult due to the difficulty of maintaining the configuration.

Physical installation
There are many factors that can be considered for software Installation
Qualification, including:

· Amount of memory available on the processor where the software
will run;
· Type of processor (for example, 16-bit, 32-bit) on which the
software will run;
· Available disk space;
· Support applications (including revisions);
· Operating system patches;
· Peripherals.
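
Many of these physical checks can be scripted so the IQ record captures the values actually found on the target machine. The sketch below, using only the Python standard library, records the operating system, processor and free disk space against an assumed minimum; the threshold is a placeholder for whatever the specification actually requires.

import os
import platform
import shutil

MIN_FREE_DISK_GB = 10          # assumed requirement - replace with the specified minimum

usage = shutil.disk_usage(os.path.abspath(os.sep))   # root / installation drive
free_gb = usage.free / 1024 ** 3

print("Operating system :", platform.platform())
print("Processor        :", platform.machine())
print("CPU count        :", os.cpu_count())
print(f"Free disk space  : {free_gb:.1f} GB "
      f"({'PASS' if free_gb >= MIN_FREE_DISK_GB else 'FAIL'} against {MIN_FREE_DISK_GB} GB minimum)")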

A potential issue with some applications, especially those that are web-
based, is the browser. If you're a Firefox user, for example, you've probably
gone to some sites where the pages don't display correctly and you can see
the full site only if you use Internet Explorer. These cases may require that
every client configuration be confirmed during IQ.

Where commercial products or components are used, the


requirements for them need to be well understood and, if not documented
elsewhere, documented (and verified) in the IQ.

The “cloud” and other virtual concepts are tough because you have
relinquished control over where your application runs or where the
data is stored. Does that rule out the use of such environments in a regulated
environment? Not necessarily. Again, it comes back to risk.

If there's no risk to human life, then the solution may be viable. If the
system maintains records that are required by regulation, it will take
considerable justification and verification. And, as always, if the decision is
made to take this direction, document the rationale and how any risks are

mitigated. In certain cases the Risk Assessment may require re-visiting.

Software Lifecycle Management


The IQ should assess the software's lifecycle management aspects to be
sure that necessary support processes are in place to ensure the software
can continually deliver the expected performance throughout its life. These
include:

· General management processes (change management, software


configuration management, problem management);
· Maintenance processes (backup and recovery, database
maintenance, disk maintenance).
· Verification that appropriate Disaster Recovery procedures are in
place.
These areas are more abstract than the physical aspects. It's fairly easy
to determine if the specified amount of disk space is available. It's far more
difficult to determine if the software content management system is
sufficient to ensure the proper configuration is maintained across multiple
branches. Clearly, though, these aspects cannot be ignored if a full
assessment is to be made of the software's ability to consistently deliver
required functionality. Most large companies have specialists that can help
in these areas. If such specialists are not available, outside consulting can
provide invaluable feedback.

If the software is or has components that are purchased (commercially
or contracted), the efforts will need to be assessed both internally and
externally. For example, if your company purchases and distributes a
software package, you will need to accept problems from the field and
manage them. You will likely pass on these problems to the vendor for
correction. So, even if some processes are contracted out, it does not relieve
you of the responsibility to ensure their adequacy.

Audits can be carried out either internally or externally to verify that
software lifecycle components are being addressed and managed correctly.
The findings of such audits are usually written up in an audit report and may
be cross-referenced from a validation report as necessary.

Personnel
If the use of a system requires special training, an assessment needs to
be made to determine if the company has the plans in place to ensure that
users are properly trained before a system is deployed. Of course, if the
system has been deployed and operators are not adequately trained, this
would be a cause for concern.

Consider usage scenarios when assessing training needs. For example,


general users may not need any training. Individuals assigned to database
maintenance, however, may require substantial skills and, thus, training.

As part of any validation effort, training must be verified – an


appropriate training plan must be in place to ensure that all users are trained
and that any changes to systems or processes trigger additional training. All
training must be documented and auditable. Training is a GxP requirement.

Operational Qualification (OQ)
Operational Qualification (OQ) consists of a set of tests that verify that
requirements are implemented correctly. This is the most straight-forward
of the protocols: see requirement, verify requirement. While this is a gross
simplification—verifying requirements can be extremely challenging—the
concept is straightforward. OQ must:

· Confirm that error and alarm conditions are properly detected and
handled;
· Verify that start-ups and shutdowns perform as expected;
· Confirm all applicable user functions and operator controls;
· Examine maximum and minimum ranges of allowed values;
· Test the functional requirements of the system.

OQ can also be used to verify compliance to required external


standards. For example, if 21 CFR Part 11 is required, OQ is the place where
the system's ability to maintain an audit trail is confirmed.

Be sure to have clear, quantifiable expected results. If you have vague


requirements, verification is difficult to do. For example, if a requirement
was established for the system to “run fast,” it's not possible to verify this.

“Fast” is subjective. Verifiable requirements are quantifiable (for
example, “Response time to a query shall always be within 15 seconds.”). A
good expected result will give the tester a clear path to determine whether
or not the objective was met. If vague requirements do slip through,
however, at least define something quantifiable in the test via a textual
description of the test objectives.
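
A quantifiable requirement translates directly into a pass/fail check. The sketch below shows the 15-second response-time example written as a unittest case; run_query is an invented stand-in for the real system call being qualified.

import time
import unittest

def run_query():
    """Invented stand-in for the real query call being qualified."""
    time.sleep(0.1)
    return ["record-1"]

class ResponseTimeOQ(unittest.TestCase):
    def test_query_responds_within_15_seconds(self):
        start = time.monotonic()
        result = run_query()
        elapsed = time.monotonic() - start
        self.assertTrue(result, "Query returned no data")
        self.assertLessEqual(elapsed, 15.0, f"Response took {elapsed:.1f} seconds")

if __name__ == "__main__":
    unittest.main()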

Performance Qualification (PQ)
Performance Qualification (PQ) is where confirmation is made that a
system properly handles stress conditions applicable to the intended use of
the equipment. The origination of PQ was in manufacturing systems
validation, where PQ shows the ability of equipment to sustain operations
over an extended period, usually several shifts.

Those concepts don't translate well to software applications. There are,
however, good cases where PQ can be used to fully validate software. Web-
based applications, for example, may need to be evaluated for connectivity
issues, such as what happens when a large number of users hit the server at
once. Another example is a database application. Performance can be
shown for simultaneous access and for what happens when a database
begins to grow.
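
As a rough illustration of PQ-style stress testing, the sketch below drives a batch of concurrent sessions through a thread pool and checks that all of them complete; login_session is a placeholder for the real client call, and the user count and pass criterion are assumptions to be replaced by the documented requirement.

import random
import time
from concurrent.futures import ThreadPoolExecutor

CONCURRENT_USERS = 50                       # taken from the documented requirement

def login_session(user_id: int) -> bool:
    """Placeholder for a real login/transaction against the system under test."""
    time.sleep(random.uniform(0.05, 0.2))   # simulated network and processing delay
    return True                             # the real call would report success/failure

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(login_session, range(CONCURRENT_USERS)))

successes = sum(results)
print(f"{successes}/{CONCURRENT_USERS} concurrent sessions completed successfully")
assert successes == CONCURRENT_USERS, "System did not sustain the required concurrent load"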

Critical thinking about what could impact performance is key to


developing a PQ strategy. It may well be the case that a PQ is not applicable
for an application. The decision and rationale should, once again, be
documented in the validation plan.

Other Test Considerations

So if you do IQ, OQ, and PQ, do you have a bug-free system? No. Have
you met regulatory expectations? To some degree, yes. You are still,
however, expected to deliver software that is safe, effective, and, if you want
return business, as error free as possible.

Generally, most validation functional testing is “black box” testing.
That is, the system is treated as a black box: you know what goes in and
what's supposed to come out. (This is as opposed to white-box testing, where
the test design allows one to peek inside the "box" and focuses specifically
on using internal knowledge of the software to guide the selection of test
data.)

There are a number of other tools in the validation toolbox that can be
used to supplement regulatory-required validation. These are not required,
but agencies such as the US FDA have been pushing for more test-based
analysis to minimize the likelihood of software bugs escaping to the
customer. They include:

· Static analysis;
· Unit-level test;
· Dynamic analysis;
· Ad-hoc, or exploratory, testing;
· Misuse testing.
We'll briefly look at these tools. Generally, you want to choose the
methods that best suit the verification effort.

Static analysis
Static analysis provides a variety of information, from coding style
compliance to complexity measurements. Static testing is gaining more and
more attention by companies looking to improve their software, and by
regulatory agencies. Recent publications by the US FDA have encouraged
companies to use static analysis to supplement validation efforts.

Static analysis tools are becoming increasingly sophisticated, providing


more insight into code. Static analysis can't replace formal testing, but it can
provide invaluable feedback at very little cost.
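
Real projects would use a dedicated static analysis tool, but as a toy illustration of the idea, the sketch below uses Python's ast module to flag functions with many branch points - a very rough stand-in for a complexity warning. The sample source and threshold are invented.

import ast

SAMPLE_SOURCE = '''
def dispense(dose, patient):
    if dose <= 0:
        raise ValueError("dose must be positive")
    if patient.allergic:
        if dose > 5:
            return "hold"
    for check in patient.checks:
        if not check():
            return "hold"
    return "dispense"
'''

MAX_BRANCHES = 3   # illustrative threshold only

tree = ast.parse(SAMPLE_SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        branches = sum(isinstance(n, (ast.If, ast.For, ast.While)) for n in ast.walk(node))
        if branches > MAX_BRANCHES:
            print(f"warning: {node.name} has {branches} branch points (limit {MAX_BRANCHES})")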

Static analysis can generate many warnings, and each must be


addressed. This doesn't mean they need to be corrected, but you do need to
assess them and determine if the risk of making changes outweighs the
benefits. For “working” software, even a change to make variables comply
with case standards is probably too risky.

Static analysis is best done before formal testing. The results don't need
to be included in the formal test report. A static analysis report should,
however, be written, filed, controlled, and managed for subsequent
retrieval. The report should address all warnings and provide rationale for
not correcting them. The formal test report can reference the static analysis
report as supplemental information; doing so, however, will bring the report
under regulatory scrutiny, so take care in managing it.

Unit-level test
Some things shouldn't be tested from a user perspective only. Examples
of this approach include scenarios where a requirement is tested in a
stand-alone environment: for example, software run in debug mode with
breakpoints set, or results verified via inspection of the software code.

Another good use of unit testing is for requirements that are not
operational functionality but do need to be thoroughly tested. For example,
an ERP system with a requirement to allow an administrator to customise a
screen. A well-structured unit test or set of tests can greatly simplify matters.
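
A unit-level test for the screen-customisation example might exercise the configuration logic directly, below the user interface. The sketch below is built around customise_screen, a function invented purely to show the shape of such a test.

import unittest

def customise_screen(layout: dict, role: str) -> dict:
    """Invented stand-in for the ERP screen-customisation logic under test."""
    if role != "admin":
        raise PermissionError("only administrators may customise screens")
    customised = dict(layout)
    customised["customised"] = True
    return customised

class CustomiseScreenUnitTest(unittest.TestCase):
    def test_admin_can_customise(self):
        result = customise_screen({"fields": ["lot", "expiry"]}, role="admin")
        self.assertTrue(result["customised"])

    def test_non_admin_is_rejected(self):
        with self.assertRaises(PermissionError):
            customise_screen({"fields": []}, role="operator")

if __name__ == "__main__":
    unittest.main()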

Unit tests and test results are quality records and need to be managed
as such. There are several methods you can use to cite objective evidence to
verify requirements using unit-level testing. One way is to collect all original
data (executed unit tests) and retain in a unit test folder or binder (in
document control). The test report could then cite the results. Another way
is to reference the unit tests in formal protocols. Using this method, the unit
tests can either be kept in the folder or attached to the formal protocols.

Version identifiers for unit tests and for referencing units
tested greatly clarify matters when citing results. For example,
revision A of the unit test may be used on revision 1.5 of the
software for the first release of the system. The system then
changes, and a new validation effort is started. The unit test
needs to change, so you bump it to revision B and it cites
revision 2.1 of the software release. Citing the specific version
of everything involved in the test (including the unit test)
minimizes the risk of configuration questions or mistakes.

Dynamic analysis
Dynamic analysis provides feedback on the code covered in testing. It is
generally used for non-embedded applications, because the code has to be
instrumented (typically an automated process done by the tool;
instrumenting allows the tool to know the state of the software and which
lines of code were executed, as well as providing other useful information)
and generates data that has to be output as the software runs. The fact that
the software is instrumented adds a level of concern regarding the results,
but in a limited scope it gives an insight into testing not otherwise possible.
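
With Python code, for example, the optional coverage.py package illustrates the idea: it records which lines execute while the code under test runs. The snippet below sketches its programmatic use under the assumption that the package is installed; command names and report formats vary by tool and version.

import coverage                  # third-party package: pip install coverage

def classify(value):
    if value < 0:
        return "negative"
    return "non-negative"

cov = coverage.Coverage()
cov.start()

classify(5)                      # exercise the code under test (normally the full test suite)

cov.stop()
cov.save()
cov.report(show_missing=True)    # shows that the "negative" branch was never executed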

Dynamic analysis is also best done prior to and outside the scope of
formal testing. Results from a dynamic analysis are not likely needed in the
formal test report, but if there's a desire to show test coverage it can be

included. Again, referencing the report would bring it under regulatory
scrutiny, so be sure it's very clear if taking that route.

Ad-hoc or exploratory testing

Structured testing, by nature, cannot cover every aspect of the
application. Another area getting critical acclaim, from both internal testing
teams and regulatory agencies, is ad-hoc, or exploratory, testing. With a good
tester (critical thinking skills), exploratory testing can uncover issues,
concerns, and bugs that would otherwise go undetected until the product
hits the field.

We'll use an application's user interface as an example. A text box on a
screen is intended to accept a string of characters representing a ten-
character serial number. Acceptable formal testing might verify that the field
can't be left blank, the user can't type more than ten characters, and that a
valid string is accepted. What's missing? What if someone tries to copy and
paste a string with embedded carriage returns, or a string greater than ten
characters? What happens if special characters are entered? Again, the
possibilities are nearly endless.
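
To give a flavour of this kind of input-focused probing, the sketch below throws a handful of awkward strings at accept_serial, a validator invented for illustration (exactly ten alphanumeric characters), and simply reports what happens to each.

import re

def accept_serial(value: str) -> bool:
    """Invented validator: exactly ten alphanumeric characters."""
    return bool(re.fullmatch(r"[A-Za-z0-9]{10}", value))

awkward_inputs = [
    "",                     # blank entry
    "ABC123",               # too short
    "ABCDE12345EXTRA",      # pasted string longer than ten characters
    "ABCDE\r\n12345",       # embedded carriage return / line feed
    "ABCDE!2345",           # special character
    "ABCDE12345",           # the one well-formed case
]

for value in awkward_inputs:
    print(repr(value), "->", "accepted" if accept_serial(value) else "rejected")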

All scenarios can't be formally tested, so ad-hoc testing provides a


vehicle to expand testing. This is largely informal testing carried out by the
developer and SMEs to eliminate as many problems as possible before dry-
running and demonstrating to the user base.

The problem lies in how to manage exploratory testing and reporting
the results. All regulatory agencies expect protocols to be approved before
they are executed. This is not possible with exploratory testing since the
actual test results aren't discovered until the tester begins testing. This is
overcome by addressing the approach in the VMP and/or the product-
specific validation plan. Explain how formal testing will satisfy all regulatory
requirements, and then exploratory testing will be used to further analyze
the software in an unstructured manner.

Reporting results is more difficult. You can't expect a tester to jot down
every test attempted. This would be a waste of time and effort and would
not add any value. So, it's reasonable to have the tester summarize the
efforts undertaken. For example, using the user interface example, the
tester wouldn't report on every test attempted on the text box; instead, the
tester would report that exploratory testing was performed on the user
interface, focusing on challenging the text box data entry mechanism. Such
reporting would be more suited to a project reporting mechanism and
information sharing initiative rather than being formal validation testing.

Note: It is perfectly acceptable for the developers to test their work prior to formal
validation testing

All failures must be captured, so it's a good idea to include tests
that failed into formal protocols. Additionally, encourage the
tester to capture issues (things that didn't fail but didn't give
expected results) and observations (things that may just not
seem right). The change management process can control any
changes required.

Misuse testing is a hybrid of formal testing and exploratory testing. For


example, if software is running on a device that has a software-controlled
button, misuse testing would include actions such as holding the button
down, or rapidly pressing the button. Unlike exploratory testing, all tests
attempted should be documented and the results captured in a formal test
report.

People with general knowledge of the technology, but not of the
software, make good misuse testers. They aren't biased by any
implementation details. If they do something “wrong,” it may be an
indication of a potential problem with the software.

Validation Execution
Preparing for a test
Executing and recording results
Reporting
Managing the results

Validation Execution

Execution of validation protocols is pretty straightforward: follow the


protocols. Of course, few things go as planned, so in addition to discussing
the basics of execution, we'll also discuss how to handle anomalies.

Before jumping into protocol execution, conduct a Test


Readiness Review, before the start of each phase (before IQ,
before OQ, and so on) ideally. This review assesses the
organization's readiness to begin validation work. All
requirements are reviewed to ensure both human and
equipment resources are available and in the proper state (for
example, testers are adequately trained, or equipment is
available and within calibration).

Preparing for a test

The first step in preparing for a test is to establish the environment,


which should be as close to “production” as possible. This means that the
software is:

· Developed using a defined, controlled configuration management


and build process;
· Installed according to the defined installation procedure;
· Installed on production or production-equivalent equipment.

These elements would be part of the Test Readiness Review. If,


for example, production-equivalent equipment is not
available, the team could analyze what's available and
determine if some or all of testing can proceed. Such risk
assessments must be documented; the Test Readiness Review
minutes are a good place.

In parallel, you must prepare to identify and allocate test personnel.


Test personnel must be sufficiently trained, educated, and experienced to
properly execute the protocols. Personnel skill requirements should have
been detailed in the validation plan (or associated training plan) so that

personnel decisions are not arbitrary.

Furthermore, you must make sure test personnel are not put in conflict-
of-interest positions. Ideally, a separate QA team should be used. Clearly,
members of the development team should not be selected, but the lines get
fuzzy quickly after that. Just make sure that your selection is defensible. For
example, if a developer is needed to perform a unit test (typical, since QA
folks may not have the code-reading skills of a developer), then ensure the
developer is not associated with the development of that unit.

Note: It is absolutely forbidden for anyone to review AND approve their own work.

Executing and recording results


Record all test results using Good Documentation Practices (GDP)! A
summary of what to capture is provided in Appendix D.

Minimize annotations, but don't hesitate to make them if it helps clarify


results or test efforts. For example, if a test is stopped at the end of one day
and resumed the next day, an annotation should be made to show where
and when (the day) testing stopped and where and when testing resumed.

Similarly, if a test is performed by multiple testers, a sign-off should be
available for all testers. If this is not possible, then an annotation should be
made indicating which steps were executed by which tester.

Mistakes will invariably be made. Again, use GDP to correct the mistake
and provide an explanation.
If a screen shot is captured to provide evidence of compliance, the
screen shot becomes a part of the test record. It's important, therefore, to
properly annotate the screen shot. Annotations must include:

· A unique “attachment” reference (for example “Attachment 1 of


VPR-ENG-001”);
· The tester's initials (or signature, if required by company
procedures);
· The date the screenshot was taken;
· A reference to the test protocol and test step;
· Correct pagination in the form “page x of y” (even if a single page).
Annotations must ensure complete linkage to the originating test and
are in line with GDP.

Protocol variances are to be handled as defined in the VMP. Appendix B


provides a summary of standard techniques. Company policies and
procedures must be followed, as always.

Issues and observations should be noted by the tester. Support tools


(for example, problem reporting systems) should be provided to facilitate
such reporting, but do not need to be detailed in the test report since they
don't represent a test failure.

If you want to learn more about Good Documentation Practices why not
purchase a copy of our E-Book “An Easy to Understand Guide to Good
Documentation Practices” -> Go to www.askaboutvalidation.com

Reporting
At its core, the test report summarizes the results (from IQ, OQ, and PQ)
of the test efforts. Testers and reviewers involved in testing, along with test
dates, should be recorded. In general, a report follows the following outline:

I. System Description
II. Testing Summary (test dates, protocols used, testers involved)
III. Results
IV. Deviations, Variances, and Incidents
V. Observations and Recommendations
VI. Conclusions

In many cases, there will be variances (for example, the test protocol
steps or the expected results were incorrect) and/or deviations (expected
results not met), which should be handled in accordance with the VMP.
Generally, variances and deviations are summarized in the test report,
showing that they are recognized and being dealt with properly.

Failures, on the other hand, must be explicitly described and explained.


For each failure, provide a link to the formal problem report. It's typical to
summarize the results in the “Results” section of the report and then use an
appendix to provide additional details (for example, a problem report
tracking number).

It's also a good practice to allow the test team to provide


recommendations. For example, a tester could observe that in a particular

situation the system runs extremely slow. Perhaps the requirements were
met, but the customer or end user may not be happy with the application.
Allowing recommendations in the report can highlight areas that may need
further investigation before launch. The report should draw a conclusion
based on objective evidence that the product:

· Sufficiently meets the requirements


· Is safe (per verified risk mitigations)
· Can consistently fulfill its intended purpose (or not).

Observations and recommendations should be followed up by the


development team.

If there are test failures, a risk assessment should be performed to
determine whether the system can or cannot be used in a production
environment. For example, if a requirement specifies that a response to a
particular input is made within five seconds, but one response time comes
back after five seconds, an assessment can be performed. If the team agrees
that this is acceptable for production (and this is the only failure), the
rationale can be documented in the test report (or release report), and the
system can be accepted for production use. Of course, if the failure is in a
safety-critical area, there's probably no reasonable rationale for releasing
the system.

If the validation effort is contracted out, however, the test report may
not be available or appropriate to determine whether the system can be
released with test failures—especially if the impact of some of the failures
cannot be fully be assessed by the test team. In such cases, it's acceptable for
a report addendum or a separate “release report” to be produced with the
company's rationale for releasing the system (presuming it's justifiable).

Depending on the size of the system, multiple reports can be used as


required. For example, a unit testing report may be issued after a batch of
unit tests have completed; or on the other side of the scale there may be a
requirement for each test to be reported on in its own right.

Managing the results
Testing is done, the software is released, and you're making money.
What could go wrong? Audits and auditors—the reality that everyone in a
regulated industry faces. You can mitigate potential problems by preparing a
Validation Pack that contains:

· User requirements;
· Supporting documentation (internal and external – user manuals,
maintenance manuals, admin, and so on);
· Vendor data (functional specification, FAT, SAT, validation
protocols);
· Design documentation;
· Executed protocols;
· An archive copy of the software (including libraries, data, and so
on);
· Protocol execution results;
· The test report;
· A known bug list with impact assessment.

Then, when the auditor asks whether the software has been validated,
present the Validation Pack (or at least take them to the relevant library).
You'll make his or her day.

Maintaining the
Validated state
Assessing Change
Re-testing
Executing the re-test
Reporting

Maintaining
The Validated State

It's common knowledge that changes increase the risk of introducing


errors. The current validation is challenged by:

· Any change to the system's environment;


· Any change to requirements or implementation;
· Daily use, as databases grow, disks fill, and additional users are
added.

That's why it's critical to assess validated systems continually and take
action when the validation is challenged. As stated earlier, validation is a
journey, not a destination. Once you achieve the desired validated state,
you're not finished. In fact, it's possible the work gets harder.

Assessing Change

Change will happen, so you might as well prepare for it. Change comes
from just about everywhere, including:

· Specification changes (users want new features, different modes);
· Software changes (driven by specification changes and bug fixes);
· Infrastructure changes (additions of new equipment on the
network, changes to the network architecture, new peripherals
installed, change from local servers to the cloud);
· System upgrades (auto- and manually-installed upgrades to the
operating system, tools, and libraries);
· Interface changes (to front-end or back-end with which the system
communicates, browser changes);
· User changes (new users, change of user roles, modification of user
permissions);
· An aging system (databases grow, disks fill up, more users slow
down the system);
· Expansion (the system needs to be installed and used at another
site).

These are just a few examples. The answer is not to re-run all validation
tests on a weekly basis. That would be a waste of time and money. So, how
do you know what changes push you out of validation and require re-test?

Risk Assessment. The easiest changes to analyze are specification and software changes. They are, generally, controlled pretty well in terms of knowing when changes are made, the scope of the change, and the likely impact.

Assess each change in terms of risk to the existing validation results. Could the change affect the results of the test? If so, it's likely some re-testing is in order.

For software changes, it's important to understand the scope of the change and assess the potential impacts so you can define a rational regression test effort, in addition to the tests for the specific changes.

Timing releases
So, you can see that you should plan out updates and not push releases
out on a daily or weekly basis. Of course, if a customer demands a new
release, or there are critical bugs that need to be corrected, you don't want
to delay installing it. But delaying releases saves time because you can do the
re-test on a batch of changes where there are likely overlaps in testing.

Risk analysis and the Trace Matrix
Risk analysis is best facilitated through the Trace Matrix. Using the
matrix enables you to:

· See what requirements were affected;
· Identify related requirements;
· Establish the full regression test suite.

The applicable segments of the Trace Matrix should be used to
document the rationale for the test and regression suite.
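
As a concrete illustration (ours, not the book's prescribed tooling), the sketch below shows how a simple trace matrix, held as a Python dictionary, can be queried to work out the regression scope for a change; all requirement and test identifiers are hypothetical.

# Minimal sketch of trace-matrix-driven impact analysis (illustrative identifiers only).
trace_matrix = {
    "URS-001": ["OQ-010", "OQ-011"],   # e.g. login and user roles
    "URS-002": ["OQ-020"],             # e.g. batch record entry
    "URS-003": ["OQ-020", "PQ-005"],   # e.g. report generation
}

# Known relationships between requirements (report content depends on entered data).
related_requirements = {"URS-002": ["URS-003"]}

def regression_suite(changed):
    """Return the affected requirements and the tests to re-run for a set of changes."""
    affected = set(changed)
    for req in changed:
        affected.update(related_requirements.get(req, []))
    tests = set()
    for req in affected:
        tests.update(trace_matrix.get(req, []))
    return sorted(affected), sorted(tests)

# A change to URS-002 pulls in URS-003 and its tests as regression scope.
print(regression_suite(["URS-002"]))   # (['URS-002', 'URS-003'], ['OQ-020', 'PQ-005'])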

Indirect changes
How do you address indirect changes—that is, those changes that you
may not have direct control over, or potentially not even know about?

Understand that anything your IT folks do may well have an impact on the validated states of your systems. So, the first order of business is to get
friendly with your IT staff. Establish a working relationship with them so
you'll be able to monitor their plans. This is not to be obstructive, it's just
good planning. The earlier in the planning you can get involved, the less
chaotic things will be after the changes are made. Dealing with a non-
compliance audit because of an IT change will cost more than dealing with
any potential problems up front.

Again, assess all changes in terms of system impact. Most likely, the IQ
will be impacted and those tests may need to be re-developed before re-
executing.
For cases where the install base must be expanded (additional sites,
additional equipment, and so on), you'll need to assess the level of
validation required for each additional installation. If this has already been
addressed in the validation plan, the effort should be consistent with the
plan. If, however, it makes sense to expand the test effort based on
experience, you should update the validation plan and perform the extra
testing as required. If it's not addressed in the validation plan, perform a risk
assessment, update the validation plan (so you don't have to re-visit the
exercise should the need arise again in the future), and perform the
validation.

General Observations

General observations are another source of input to the risk analysis. Are customers or users complaining that the system is slower? Do some
transactions time out? These factors may signify a degrading system. Don't
ignore such complaints; monitor them closely. They could be leading
indicators that the performance (PQ) is no longer adequate.
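
A minimal sketch (an assumption on our part, not a method from the text) of how such complaints could be monitored is shown below; the baseline, samples, and threshold are invented and would come from your own PQ data.

# Flag performance drift by comparing recent transaction times to the PQ baseline.
from statistics import mean

baseline_seconds = 1.8                         # typical transaction time recorded at PQ
recent_samples = [2.1, 2.4, 2.2, 2.6, 2.5]     # hypothetical recent measurements

def degradation_flag(baseline, samples, allowed_increase=0.25):
    """True if the recent average exceeds the baseline by more than 25%."""
    return mean(samples) > baseline * (1 + allowed_increase)

if degradation_flag(baseline_seconds, recent_samples):
    print("Performance drifting: review the PQ and investigate disk space, database growth, user load.")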

Re-testing

Risk assessment gives you a good idea what to re-test. But is that
sufficient? Before launching the re-test effort, take a step back and re-read
the VMP. Make sure that you're staying compliant with what the master plan
says. Then, update the validation plan for the system. This will:

· Lay out what tests you plan to execute for the re-test;
· Provide the rationale from the risk analysis for the scope of the
retest.
In most cases, not all tests in the validation suite will need to be re-run.
For example, if the change is isolated to a user interface screen (a text box
went from 10 characters long to 20 characters), you likely don't need to re-
run IQ. Consider the back-end consequences, however, because it's possible
that such a change fills up a database faster, so be sure to look at the big
picture.

As mentioned previously, a well-structured test suite facilitates isolating the testing to a limited scope. For example, if you have a user interface (UI) protocol and only the UI changed, it may be possible to limit
testing to just the UI protocol. If, however, you scatter UI tests throughout the functional tests, it may not be possible to test the UI in isolation and, depending on the change, all tests may need to be executed.

Regression testing
You may run into a case where something changed that was considered
isolated to a particular function, but testing revealed it also impacted
another function. These “regressions” can be difficult to manage. Such
relationships are generally better understood over time.

The best advice is to look for any connections between any changes and
existing functionality and be sure to include regression testing (testing
related functions even though no direct impact from the known changes) in
the validation plan. Better to expand test scope and find them in testing than
let a customer or user find them.

Test Invalidation
Hopefully, you never run into a situation where something turns up that
invalidates the original testing, but it's been known to happen.

For example, if calibrated equipment used in the test was found to be out of calibration at its next calibration check, there's no fault or deception, but you must re-run the test. Or, in the worst-case scenario, a test failed originally but the tester didn't write it up as a failure (due to fear, or some other reason) and passed it. This is fraud, by the way. In this case, it's
possible that this is a systemic issue and auditors may not have any faith in
any of the results, so plenty of analysis and rationale will have to go into test
scope reduction. In fact, this may even require a CAPA to fully reveal the root
cause and fix the systemic issue. But better to find it, admit it, and fix it than
have an auditor find it.

Executing the re-test


No matter how well your tests are structured, there will likely be some
things that are simply out of scope. For example, if you structured your tests
to isolate the UI (User Interface), and then had only a minor change to the UI, it
probably doesn't make sense to re-test the entire UI. Instead of writing a
new protocol, one approach is to note which steps will be executed in the
updated validation plan and then strike through the unused steps and mark
as N/A (using GDP). The validation plan and mark-ups absolutely have to agree,
so check and double-check. Explanatory annotations on the executed test
(referring to the validation plan) also help.

Reporting
Report the re-test effort similar to the results from the original
execution. Show that the re-validation plans were met through testing and
report the results. Handle failures and deviations as before.

Special Considerations
Commercial
Open Source Systems
Excel Spreadsheets
Retrospective Validation

Commercial

Commercial applications have a supplier and, thus, the purchase falls under supplier management. This means the supplier must be approved, so
you must have criteria to approve a supplier.

Since commercial software falls under regulatory scrutiny, part of the supplier approval criteria must include confirmation that the software is
developed and maintained in a controlled manner.

Depending on the level of risk or how critical the application is, an assessment of the vendor's capabilities, up to and including an audit of the vendor, may be warranted. Any such assessment should be available in the Validation Pack. Supplier Assessments are common in the validation world.

One of the issues when validating purchased software is how to verify large, commercial packages. Many companies purchase packages for
enterprise-level systems—Enterprise Resource Planning (ERP),
Manufacturing Resource Planning (MRP), Electronic Document
Management System (EDMS), and so on. By nature, these are “do all”
applications. Most companies use only a fraction of the capabilities and,
typically, tailor the use of the system to their specific needs.
Some systems allow add-on applications to be built either by the
provider or by the client. If you don't use certain built-in capabilities of an
application, those capabilities need not be in the validation scope.

When you purchase software, you can:

· Use it out-of-the-box;
· Tailor it for use in your facility;
· Customize it.

Using out-of-the-box
Since validation is done on the system's intended use, for out-of-the-
box systems, the testing part of validation would be only on how it's used in
your facility (per your specifications on how you expect it to work).

Tailoring
Tailoring takes the complexity to the next level. In addition to testing for
your intended use, you should also add tests to verify that the tailored components function as required, consistently.

Frequently, systems allow you to define levels of security, or assign users to pre-defined levels of security (such as administrator, maintenance, engineer, operator, and so on). Part of the testing, in this case, would include verifying that users are defined with the appropriate security access levels and that the configured policies correctly enforce any restrictions imposed on the various user groups.
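
A minimal sketch of what such a check might look like follows; the role names, permissions, and the exact-match rule are illustrative assumptions, not a prescribed method.

# Verify that each configured role carries exactly the permissions approved in the specification.
expected_permissions = {
    "administrator": {"create_user", "edit_config", "run_batch", "view_reports"},
    "engineer":      {"edit_config", "run_batch", "view_reports"},
    "operator":      {"run_batch", "view_reports"},
}

def verify_role(role, configured):
    """Pass only if the configured permission set matches the specification exactly."""
    expected = expected_permissions[role]
    missing, extra = expected - configured, configured - expected
    return not missing and not extra, missing, extra

# An operator account that can also edit configuration should fail the check.
ok, missing, extra = verify_role("operator", {"run_batch", "view_reports", "edit_config"})
print(ok, missing, extra)   # False set() {'edit_config'}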

Customizing
Customized systems are the most complex in terms of test effort.
Customizations need to be thoroughly tested to ensure the custom
functions perform as specified.

In addition, substantial regression testing must be performed to ensure that related areas of the base (out-of-the-box) functionality are not
adversely impacted.

Due diligence
Regardless of how the software is incorporated into use, due diligence
should be performed to get problem reports related to commercial
software. These may be available online through the vendor site, or may
need to be requested from the vendor. Once obtained, be sure to assess
each issue against how the software will be used at your site to determine if
concessions or workarounds (or additional risk mitigations) need to be
made to ensure the software will consistently meet its intended use. This
analysis and any decisions become part of the validation file.

For systems that are customized by the vendor, there may be proprietary information involved in implementing the
customizations. Ensure that the contracts protect your
Intellectual Property (IP).

Protecting yourself
With commercial software, especially mission-critical software, you are
at risk if the vendor is sold or goes out of business. One way to protect your
company is to put the software into an “escrow” account. Should the
company fold or decide to no longer support the software, at least the
software source can be salvaged. This has inherent risks. For example, now
that you have the software, what do you do with it? Most complex
applications require a certain level of expertise to maintain. This all needs to
be considered when purchasing an application.

Documentation
A “How We Use the System Here” or "SOP (Standard Operating
Procedure)" specification facilitates the validation effort regardless of how
the system is incorporated into the environment. The validation plan
specifies how commercial systems will be validated.

SOPs are written to detail how the system should be operated; everyone must follow the SOPs, meaning that the system is used in a consistent fashion at all times.

Open Source Systems

Conventional wisdom says to stay away from open source systems. Many say that you can't expect quality when using an open source system.

From experience, however, a widely-used open source system is robust. So, you have to weigh the potential risks against the benefits. It's probably
not a good idea to use an open source system for a mission-critical
application.

The easy road is to avoid open source systems altogether. If, however, the benefits outweigh the risks, you can probably expect some very critical
scrutiny. Thus, your validation efforts will need to be extremely robust. Since
the system is open source, you should plan on capturing the source code and
all of the tools to build the system into the configuration management
system. Testing should be very detailed and should include tests such as
misuse, exploratory, and dynamic analysis.

Excel Spreadsheets

Spreadsheets are a sticky issue. If you do any research, you'll see assertions such as, “You can't validate a spreadsheet.” The truth is, you can't validate the spreadsheet software (e.g., Excel), but you can validate a
spreadsheet. It's done quite frequently. But you need to take a rigid stance
on usage. You must:

· Validate all formulas, macros, and data validation items;
· Lock down (protect) the spreadsheet to prevent changes (only data entry fields should be open for editing);
· Set up data entry fields to validate the data entered (numeric values should be bounded, and so on).

There are arguments that a spreadsheet can be used without validation if the final results are printed out (and signed). This would constitute the
“typewriter rule” for electronic records. However, when a spreadsheet uses
formulas, the only way for this approach to be valid would be to manually
verify each calculation. This would bypass the use of the spreadsheet, so
that doesn't seem to be a viable approach. Thus we recommend that even if
you print and manually sign it, you must validate any formulas and macros.
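
By way of illustration only, the sketch below shows the kind of spot checks that could support this, assuming the openpyxl library is available; the file name, sheet name, cell addresses, and formula are hypothetical, and the real checks would come from the approved protocol.

from openpyxl import load_workbook

formulas = load_workbook("batch_calc.xlsx")                 # formula text
values = load_workbook("batch_calc.xlsx", data_only=True)   # cached results (only present if last saved by a spreadsheet application)
ws_f, ws_v = formulas["Calculations"], values["Calculations"]

# 1. The sheet must be protected so only the intended input cells are editable.
assert ws_f.protection.sheet, "Sheet protection is not enabled"
assert ws_f["D10"].protection.locked, "Result cell D10 is not locked"
assert not ws_f["B2"].protection.locked, "Input cell B2 should be editable"

# 2. The formula and its cached result should match the approved calculation.
assert ws_f["D10"].value == "=B2*C2", "Formula in D10 differs from the specification"
expected = ws_v["B2"].value * ws_v["C2"].value
assert abs(ws_v["D10"].value - expected) < 1e-9, "Cached result does not match the independent calculation"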

It is a common myth that, according to 21 CFR Part 11 (the US FDA regulation on Electronic Records/Electronic Signatures), spreadsheets cannot be validated for e-signatures; add-on packages are required to do so. Whilst it is accepted that Microsoft Excel doesn't facilitate electronic signatures (yet!), other spreadsheet packages may be able to do so.

Retrospective
Validation

If a system has not been validated but is already in production use and validation is required, a “retrospective” validation exercise needs to be performed. This is no different from a normal validation exercise (except that the validation of the system wasn't incorporated into the initial project delivery). The concern is what to do if anomalies are revealed during validation.

This is a risk management exercise and is beyond the scope of this document. It is sufficient to say that any anomaly would need to be analyzed
to determine if subsequent actions are warranted. Depending on the
severity of the anomaly, actions such as customer notification or recall could
be necessary. So, it's always better to validate before production use to
minimize the likelihood of such drastic actions.

Summary

Software validation is a challenge with many aspects to consider—from process to technical to administrative. This document should help with the
process and administrative aspects. If you take a planned, responsible
approach and your positions are defensible, you should be fine.

Use validation efforts as a business asset. Doing validation for the sake
of checking a box shows no commitment and will likely result in problems in
the long run.

Consultants can be an effective means to either kick-start validation efforts, bring in validation expertise as needed, or manage the entire
validation effort. This book, hopefully, has provided you with sufficient
understanding of the validation process so you can assess consultants for
competency.

In addition, this book is intended to be an affordable and easy to understand guide – it is a guide. Also check your company procedures and the relevant regulations to ensure that you are always both current and correct.

Frequently
Asked Questions

Q: I have purchased Application XYZ but I only use capabilities 1, 3, and 7 of the ten included. Do I have to validate the entire application?

A: No, validate only what you use. Develop the requirements specification
to define exactly what you use, as you use them. This forms the basis
(and rationale) for what you actually validate. Should you begin using
additional capabilities later, you will need to update the requirements
specification and validate the new capabilities. You will also need to
perform a risk analysis to determine what additional testing needs to be
done (those validated capabilities that may interface or be influenced
by the new capabilities requiring regression or other testing).

Q: When do I start?

A: Now. Whether you have a system that's been in operation for years, or
whether you've started development of a new system, if your quality
system requires validated systems, they need to be validated.

Q: How do I start?
A: The Validation Master Plan is the foundation. It will also help you
determine (through risk analysis) what the next steps need to be.

Q: Do I have to validate Windows (or Linux, or…)?

A: The operating system on which the application runs could well introduce errors or cause the application to function in an unexpected
manner. But how would you validate an operating system? Generally, it
should be sufficient to validate the application on the target operating
system (remember the rule about testing in the intended environment).
If the application is to run on multiple operating systems, it's good
practice to qualify the installation on each operating system. Most
companies have server qualification procedures that ensure that each
build has been done consistently and is hosted on a specific (qualified)
hardware platform.

Q: Do I have to validate my network?

A: A qualified network is expected when that network is hosting a validated system. Generally, a network need not be validated to ensure
validation of a software application. There could be cases where
network validation is necessary, but that's outside the scope of
software validation.

Q: What about hosted software?

A: Hosted software—software resident on another company's equipment, with execution controlled via the internet or some other networking protocol—is a relatively new idea. If the software meets criteria to
require validation (affects quality, and so on), you are responsible for
validation. Typically, you as the client would establish contractual
requirements for the hosting company to meet basic standards for life
cycle management, change control, backup and recovery, and so on.
You might perform validation testing yourself or you might contract
with the hosting company to do so. In this situation, it's extremely
important that the contract be set up to require the hosting company to
notify you of any changes (to the software, the environment, and so on).
And should changes occur, an assessment would be required to
determine if re-validation is warranted. Many companies realize the
potential exposure for regulatory violations and tend to avoid using
hosted applications.

Q: What about distributed applications? Is validation required on every installation?

A: A distributed application is a system installed on multiple computers and networks in multiple locations. If you can prove that each install on
each system is equivalent, then you can test on one system and verify a
subset of the others. In general, you must prove that each computer meets the minimum hardware specification and that the software on each is the same version. If that's not possible or is impractical, you may
be able to show equivalence and minimize testing on other systems (for
example, it may not be necessary to test the user interface on every
system), however systems such as those that are safety-critical should
be tested on every installation. Of course, the non-test aspects of
validation apply to every installation. This applies to both clients and
servers.

Q: Does migrating data from one validated system to another require validation?

A: Generally, yes. The data is part of the system, so it constitutes a change in the system. The new system should be tested to ensure the migration
completed as expected. If there are tools involved in the migration, then
those tools would likely need to be validated as well.
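
As an illustrative sketch (not a prescribed method), one way to support such testing is to reconcile record counts and per-record fingerprints between the source and target systems; the record structure below is hypothetical.

import hashlib

def fingerprint(record):
    """Stable hash of a record's fields, insensitive to field order."""
    canonical = "|".join(f"{key}={record[key]}" for key in sorted(record))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def compare(source_records, target_records, key="id"):
    src = {r[key]: fingerprint(r) for r in source_records}
    tgt = {r[key]: fingerprint(r) for r in target_records}
    missing = sorted(set(src) - set(tgt))       # records lost in migration
    unexpected = sorted(set(tgt) - set(src))    # records that appeared from nowhere
    altered = sorted(k for k in set(src) & set(tgt) if src[k] != tgt[k])
    return missing, unexpected, altered

source = [{"id": 1, "lot": "A100", "result": 9.8}, {"id": 2, "lot": "A101", "result": 10.1}]
target = [{"id": 1, "lot": "A100", "result": 9.8}]
print(compare(source, target))   # ([2], [], []) -> record 2 was not migrated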

Q: What's the best way to organize the validation package?

A: Whatever works for your company! There is no right way. For regulatory
purposes, it should be readily retrievable in human readable format
(and complete). As long as those conditions are met, you've done it
right. This document outlines a generally accepted approach, but it's by
no means required.

Q: Are screen shots required to support test results?

A: No, but screen shots are a good way to show that the expected results
have been met. There's a tendency in practice to overdo it and have
screenshots for every action. This results in volumes of data and makes
it difficult to find specific data when needed. It also adds an overhead in
properly managing and maintaining the data. Judicious usage of
screenshots can greatly enhance justification for meeting expected
results. A small number of screenshots for very critical items is
sometimes helpful, but when screenshots are used for nearly every test
step, then it more or less undermines the actual tester signing and
dating each test step and is also cumbersome for reviewers, especially
those (such as QA) who aren't entirely familiar with the system in the
first place.

Q: I have a system that has software controlling the fill volume of a vial. If
I change the amount of volume for the fill process (a system
parameter), do I have to re-validate?

A: It depends. There's more to this question than appears on the surface. If you do a good job of validation, you have validated the system to support a range
of configurations, not just a static configuration. Take, for example, a
system that assesses a volume of fluid dispensed to a vial. Initially, the
system is only expected to support filling a 20mL vial. An astute
validation analyst, though, decided to validate the system to verify
functionality across a range of fill volumes, including no fill. Then, when
the company determined a need for a vial filled with only 10mL of fluid,
no additional validation was required. The system had already been
validated for this configuration. Had they initially validated for a filled
vial only, additional validation for the new configuration would be
required. What if the company decided to use a different fluid? If the
system hadn't been validated for a range of fluids (viscosity, specific
gravity, and so on), additional validation is likely needed.
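
To illustrate the point, the sketch below parameterises a test across a range of fill volumes rather than a single set point; fill_and_measure() and the tolerance are hypothetical stand-ins for the real system interface and specification.

import pytest

TOLERANCE_ML = 0.2   # hypothetical acceptance criterion from the specification

def fill_and_measure(set_point_ml):
    """Stand-in for the real system call; simulated here so the sketch runs."""
    return set_point_ml + 0.05

@pytest.mark.parametrize("set_point_ml", [0.0, 5.0, 10.0, 20.0])   # includes "no fill"
def test_fill_volume_across_range(set_point_ml):
    dispensed = fill_and_measure(set_point_ml)
    assert abs(dispensed - set_point_ml) <= TOLERANCE_ML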

Appendix A:
Handling Deviations

Deviations are when an actual result differs from an expected result; these are often test failures. Deviations could occur due to a technical
reason, or simply a typographical error on the part of the test script or
document author.

Deviations could cause testing to stop cold while the deviation is analyzed and a solution determined. All deviations should be immediately
raised to appropriate personnel (test co-ordinator, Quality Assurance,
management). Deviation handling is defined by company policy, but
deviations generally should:

· Stop testing until corrective actions are implemented; often in the case of typos, the tester can simply make a note in the comments
section and continue;
· Allow testing to continue, but the deviation must be corrected prior
to launch/release;
· Allow testing to continue, but the deviation will be acceptable for
release (and will be corrected in a later release);
· Be rejected if the deviation is a protocol error and the protocol
redlined (or updated and re-released) before testing continues.

Deviations are considered original data so they should not be “covered
up,” even if the protocol is updated and re-released.

Appendix B:
Handling Variances

It's often the case that, despite good intentions and careful reviews,
protocol errors slip through the cracks. There are several varieties of
protocol errors and all are considered variances.

· Obvious typographical errors;
· Procedural errors in a test step;
· Procedural errors in expected results.

Obvious typographical errors


The easiest type of variance to deal with is an obvious typographical
error. Generally, these can be marked up during execution by the tester with
a note indicating the error. Typographical errors need not be detailed in the
test report as individual variances. The report can make a general statement
that typographical errors were identified and changes redlined. Note that
once testing is completed, the protocols should be updated to incorporate
the changes. (Protocols that don't require continuous execution won't
benefit from being updated.)

Procedural errors in a test step
Procedural errors in a test step generally require a bit more effort to
correct. The tester should make the necessary changes using mark-ups and
annotations with explanations, and execute the protocol in accordance with
the changes. Once the test is completed, the reviewer should review the
change and record an annotation indicating concurrence. Depending on the
extent of the variance, the tester may wish to involve the reviewer early and
get concurrence that the change is appropriate before executing the
changes. Procedural errors are individually summarized in the test report.

Procedural errors in expected results


Procedural errors in expected results are the most challenging errors to
deal with. These changes generally raise red flags for auditors. Put yourself in their shoes: you are executing a protocol with pre-defined and approved
expected results. When executing the test, however, the tester changes the
expected result. Was this done just to pass the test (make the expected
results match the actual results) or was it an appropriate change? Ideally,
changes to expected results are pre-approved before execution either by
the reviewer or by a QA representative. The tester should not make a
unilateral decision to change the expected results and move on. The tester
should redline the changes and then approval can be shown as an additional
annotation.

Use a red pen to mark up changes to correct protocol
variances. This helps the changes stand out during review. It
also helps the individuals maintaining the protocols quickly
identify protocol changes.

Note: Some companies only allow one colour of ink to be used on all paperwork; where that is the case, the company policy takes priority.

Suspension of testing
Once execution begins, the software (and environment) will not
change, ideally. But this is not always the case. For example, if a fatal flaw is
exposed, especially one that has additional consequences (for example, will
cause other tests to fail); it's better to suspend testing, fix the problem, and
then resume testing. In some situations, it would not be appropriate to pick
up where testing was suspended.

An analysis must be made to determine whether the test results captured before the point of the change are still valid. Such analysis is
recorded in the test report. If analysis shows that results gathered before the
point of suspension are no longer valid, the results are still kept as “original data”, and the test report notes that the analysis indicated re-execution of the tests was necessary.

Appendix C:
Test Development
Considerations

Test case development considerations


Regardless of the type of testing, it's important to have good test
developers. Good test developers have strong critical thinking skills. They
think of ways to challenge the system that system developers likely never
considered. For example:

· Question: A system takes a date as input. What are the different tests that can challenge the system?

· Answer: Virtually limitless.
- Different formats: 1-jan-10; January 1, 2010; 1/1/10;
- Invalid values: 32-jan-10; 13/1/10; February 29, 2009;
- Missing values: 1 Jan, June, 3/10, 2010;
- Incorrect entries: nothing entered, January B, 2010;
- Other considerations: US format (m/d/y) versus EU
format (d/m/y), support for other languages.

This is how a good tester thinks. What are the possible ways that a user
could perform this operation? Is there something a system couldn't handle?
Will the system store or display the date differently depending on my
computer's regional settings?
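
For illustration, the brainstormed date challenges above translate directly into test data; the simple parser below is only a stand-in for the real input-handling code under test, and the accepted formats (US m/d/y for slashes) are assumptions.

from datetime import datetime

FORMATS = ["%d-%b-%y", "%B %d, %Y", "%m/%d/%y"]

def parse_date(text):
    """Stand-in for the system's date handling; raises ValueError on bad input."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(text, fmt).date()
        except ValueError:
            continue
    raise ValueError(f"unrecognized or invalid date: {text!r}")

valid = ["1-jan-10", "January 1, 2010", "1/1/10"]
invalid = ["32-jan-10", "13/1/10", "February 29, 2009", "", "January B, 2010"]

for text in valid:
    assert parse_date(text), f"should accept {text!r}"
for text in invalid:
    try:
        parse_date(text)
        raise AssertionError(f"should reject {text!r}")
    except ValueError:
        pass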

Test protocol layout considerations


At a high level, when developing protocols, try to group functionality. In
subsequent (re)validation efforts, such grouping facilitates re-test without
having to test the entire system. This is heavily dependent on how the
software is used and maintained and is only learned through experience.

Within a particular test, space must be provided for the tester to record
the name of the software and version under test. In addition to the software,
all equipment used should be recorded. All equipment should be uniquely
identified (serial or model number and/or revision number, for example).

As appropriate, space should be provided to record whether or not the test equipment is required to be calibrated and, if so, the last calibration date
and the next calibration due date. The list of equipment includes:

· Computers or devices on which the software under test was executed;
· Peripherals used in the test (barcode scanners, and so on);
· Test equipment (test stands, oscilloscopes, multimeters, and so
on);
· Support software (browsers, debuggers, and so on).

Within a protocol, there should be a place for the tester to record his or
her name and date when the protocol was executed. Some companies
require this only on the first page of the test (section) and the last page of the
test (section). Others require some “mark” (signature or initials) by the
tester on every page on which data was entered. In addition to the tester's
“mark”, blocks for a reviewer's “mark” should also be provided.

Often, things don't go exactly as planned, or a tester may notice something outside the prescribed steps. Providing
space for comments, issues, and observations from the tester
allows this important information to be captured and recorded
in an easily located place.

Acceptance criteria
For each test performed, clear and unambiguous acceptance criteria
must be defined. This can take multiple forms. If a quantitative value is
defined or if there is any tolerance, be sure to provide the range.

Ideally, this is defined in requirements, but if it's not, it must be defined in the protocol. (Remember that appropriate personnel review and approve the protocol, so such a definition is acceptable but, again, should be considered a 'last resort' for specifications.)

In some cases, a screen shot that the tester can compare against can be
included in the protocol. Be sure to specify any portions which are in or out
of scope in case there is, for example, a time or date displayed (which would
fail a bit-by-bit comparison).

Appendix D:
Capturing Tester
Inputs and Results

Regulatory bodies expect to see some “substance” with test protocols. Expected results should be as specific as possible without painting yourself
into a corner. Heeding the following considerations will help ensure a clean
protocol/report.

Recording inputs
When inputs are made, the inputs should be recorded. If the input is
specified (for example, enter “2” in the field), then it can be pre-defined in
the protocol. If the tester has freedom to enter something different, provide
space in the protocol for the tester to record exactly what was entered.

Outputs and quantification


When outputs are quantified, the units must be stated in the protocol.
For example, if an expected result is a time measurement, record what the
units were (for example, seconds, minutes, hours).

Precision
Precision can be a headache if not carefully handled. For example, suppose a requirement is stated as, “The software shall illuminate the LED for three seconds when <some condition> occurs.” On the surface, this seems fairly
straight-forward, but how do you accurately time this? You could take an
oscilloscope and hook it up to the hardware and measure the signal. This,
however, requires that you have a calibrated scope and extra equipment. It's
preferable to establish precision in the specification document. For
example, “The software shall illuminate the LED for approximately three
seconds (± 1 second).” As you can see, this greatly simplifies the test effort
and allows for manual timing.

So it's less than ideal to defer precision to the test. But if the
requirements do not establish a range, it can be established in the test but
should be thoroughly explained (for example, “There are no safety or efficacy concerns regarding the three-second timing for LED illumination and, thus,
the timing will be done manually, allowing a ± 1 second difference for
manual timing inaccuracies.”) This should be stated in the protocol before
approval and not left to the tester to document. Regulatory agencies tend to
frown upon allowing testers to make such assertions on their own accord.
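
A minimal sketch of such a pre-agreed tolerance check follows; the specification value, tolerance, and measured reading are hypothetical.

SPEC_SECONDS = 3.0
TOLERANCE_SECONDS = 1.0   # agreed in the specification or protocol, not decided by the tester

measured = 3.4            # hypothetical manually timed result

assert abs(measured - SPEC_SECONDS) <= TOLERANCE_SECONDS, (
    f"LED illumination of {measured}s is outside {SPEC_SECONDS} ± {TOLERANCE_SECONDS}s"
)
print("Within the specified tolerance; record the measured value in the protocol.")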

Expected results
Expected results must be specific and descriptive. They should never be
stated as, “Works as expected.” The expected results must have a very clear
definition of what constitutes a “pass” and the recorded results should
clearly substantiate the assertion.

In practice, some results just don't lend themselves to being quantified. For example, if a certain action causes a state change (a screen repaints with
a new user interface window), a screen shot may be a better way to capture
the results than having the tester record them.

If an expected result is a stream of data output to a human-readable file, it's probably better to print the file to show the results. In cases where
data is captured outside the protocol, the results must be appended to the
documented results. Use GDP to attach the addenda. Cite the use of an
attachment on the particular step or steps where the data is generated or
analyzed. For example, annotate the step(s) with, “See Attachment 1 of
<Document Number> for results.”

Each page of the attachment must refer to the protocol and step, and
should be paginated in the form “Page x of y.” An example annotation for an
appendix might then look like “Attachment 1 for <product>
<protocol_reference>, step 15. Page 1 of 5,” (where <product> and
<protocol_reference> clearly establish the document to which the
attachment is being made). This way, if a page or pages become separated,
it's easy to locate the results package to which the pages belong. Backward
and forward traceability must always exist between validation
documentation.
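
Purely as an illustration, a small helper like the one below could generate consistent attachment annotations and page labels; the product and protocol references shown are hypothetical.

def attachment_labels(product, protocol_ref, step, attachment_no, total_pages):
    """Build the 'Page x of y' labels tying each attachment page back to its protocol step."""
    header = f"Attachment {attachment_no} for {product} {protocol_ref}, step {step}."
    return [f"{header} Page {page} of {total_pages}" for page in range(1, total_pages + 1)]

for label in attachment_labels("WidgetFiller", "VAL-OQ-012", 15, 1, 3):
    print(label)   # e.g. "Attachment 1 for WidgetFiller VAL-OQ-012, step 15. Page 1 of 3"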

References

United States Code of Federal Regulations, Title 21 Part 820 (Quality System Regulation);
United States Code of Federal Regulations, Title 21 Part 11 (Electronic Records/Electronic Signatures);
International Standards Organization 13485:2003;
ANSI/AAMI/IEC 62304:2006 Medical device software — Software life cycle processes;
EudraLex Volume 4;
ISO 14971 - Risk Management;
ASTM E2500 - Standard Guide for Specification, Design, and Verification of Pharmaceutical and Biopharmaceutical Manufacturing Systems and Equipment.

Glossary

21 CFR Part 11 21 CFR Part 11 of the Code of Federal Regulations deals with the Food and Drug Administration (FDA) guidelines
on electronic records and electronic signatures in the
United States. Part 11, as it is commonly called, defines
the criteria under which electronic records and
electronic signatures are considered to be trustworthy,
reliable and equivalent to paper records.

ASTM American Society for Testing and Materials

CAPA Corrective Action, Preventative Action

CFR Code of Federal Regulations

Cloud (Computing) Computing in which services and storage are provided over the Internet (or "cloud")

COTS Commercial Off-The-Shelf

DQ Design Qualification

EDMS Electronic Document Management System

EMEA European Medicines Agency

ERP Enterprise Resource Planning

Eudralex European Union Regulations

FAT Factory Acceptance Testing

FDA Food and Drug Administration (US Regulatory Body)

FMEA Failure Modes and Effects Analysis

FTA Fault Tree Analysis

GDP Good Documentation Practice

GxP Good 'x' Practices

Where 'x' is a variable for either:

- Manufacturing

- Laboratory or

- Clinical

IP Intellectual Property

IQ Installation Qualification

ISO International Standards Organisation

IT Information Technology

LED Light Emitting Diode

MRP Manufacturing Resource Planning

N/A Not Applicable

OQ Operational Qualification

PQ Performance Qualification

QA Quality Assurance

QbD Quality by Design

SAT Site Acceptance Testing

SDLC Software Development Life Cycle

SOP Standard Operating Procedure

SOX Sarbanes-Oxley

UI User Interface

URS User Requirement Specification

US United States

VMP Validation Master Plan

VP Validation Plan

SOFTWARE VALIDATION
QUIZ

1. Give three reasons why software validation should be performed in regulated industries.

2. In terms of Risk Management, what does the acronym FMEA stand for?

3. There are several types of qualification activity that comprise the validation system lifecycle. At which point would the physical installation of a component or software item be verified? DQ, IQ, OQ, PQ

4. True or False:
Qualification protocols typically have an associated report (or
sometimes called summary report).

5. As part of defining the validation framework, which document would
be generated, approved and act as the blueprint for the validation
effort?

6. List 3 benefits of utilizing a trace matrix to support a risk analysis exercise.

7. Which US regulation should be closely followed and adhered to when
validating excel spreadsheets?

8. When utilizing commercial software products whose functionality is a superset of your requirements, what should be validated?

9. Traceability between test protocols and attachments/annotations must exist. List 3 particular items that support traceability in this context.

10. QbD is an acronym standing for what?

ANSWER

1. Any three of the following are correct (in any order):

· Ensures that processes are in place to address any errors;

· Demonstrates that you have objective evidence to show that the software meets its requirements;

· Verifies the software is operating in the appropriate secure environment;

· Shows that the software is being managed with change control (including the managed roll-out of upgrades) and with roll-back plans, where appropriate;

· Verifies the data used or produced by the software is being backed up appropriately and can be restored to a managed level of risk;

· Ensures users are trained on the system and are using it within its intended purpose (for commercially-procured software, this means in accordance with the manufacturers' scope of operations);

· Ensures that a business continuity plan is in place if a serious malfunction to the software or environment occurs.

2. Failure Modes and Effects Analysis (FMEA)

3. IQ – Installation Qualification

4. True

5. VMP – Validation Master Plan

6. · See what requirements were affected;

· Identify related requirements;

· Establish the full regression test suite.

7. 21 CFR Part 11

8. Only the items or functionality that are required; extra functionality that isn't utilized does not need to be validated.

9. Any 3 of the following items:

· A unique “attachment” reference (for example “Attachment 1”);

· The tester's initials (or signature, if required by company procedures);

· The date the screenshot was taken;

· A reference to the test protocol and test step;

· Correct pagination in the form “page x of y” (even if a single page).

10. Quality by Design

SCORE

True False
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.

Your score

The Validation Specialists
