Software Validation Book
First Edition
All rights reserved. No part of the content or the design of this book
may be reproduced or transmitted in any form or by any means without the
express written permission of Premier Validation.
The advice and guidelines in this book are based on the authors'
experience of more than a decade in the Life Science industry, and as
such are either a direct reflection of the "predicate rules" (the legislation
governing the industry) or best practices used within the industry. The
authors take no responsibility for how this advice is implemented.
ISBN 978-1-908084-02-6
Hey there,
This book has drawn on years of the authors' experience in the field of
Software Validation within regulated environments, specifically Biotech and
Pharmaceutical. We have written and published this book as an aid to anyone
working in the software validation field, as well as anyone interested in
software testing.
The purpose of this book is to pull all of that together - to make a
Software Validation book written in easy to understand language, to give
help and guidance on the approach taken to validate software, whilst laying
out an easy launchpad that allows readers to search for more detailed
information as and when it is required.
So I think it's pretty clear, you've just purchased the Software Validation
bible.
Enjoy!
Notes of Rights
All rights reserved. No part of this book may be reproduced, stored in a
retrieval system, or transmitted in any form or by any means, without the
prior written permission of the copyright holder, except in the case of brief
quotations embedded in critical articles or reviews.
Notes of Liability
The author and publisher have made every effort to ensure the accuracy of
the information herein. However, the information contained in this book is
sold without warranty, either express or implied. Neither the authors nor
Premier Validation Ltd, nor its dealers or distributors, will be held liable for
any damages caused either directly or indirectly by the instructions
contained in this book.
ISBN 978-1-908084-02-6
Validation Protocols
Validation Protocol 23
Design Qualification (DQ) 24
Installation Qualification (IQ) 25
Operational Qualification (OQ) 29
Performance Qualification (PQ) 31
Other Test Considerations 32
Validation Execution
Preparing for a Test 41
Executing and Recording Results 42
Reporting 44
Managing The Results 47
Special Considerations
Commercial 58
Open Source Systems 62
Excel Spreadsheets 63
Retrospective Validation 65
Summary 66
Frequently Asked Questions 67
Appendix A: Handling Deviations 72
Appendix B: Handling Variances 74
Appendix C: Test Development Considerations 77
Appendix D: Capturing Tester Inputs and Results 81
References 84
Glossary 85
Quiz 88
An Easy to Understand Guide | Software Validation
What is
Software Validation?
Why Validate?
Software validation is required by regulation: all countries and regions
around the world have their own set of rules and regulations detailing
validation requirements. EudraLex is the collection of rules and regulations
governing medicinal products in the European Union; the FDA regulations are
the US equivalent; and in Japan the Japanese Ministry of Health & Welfare
performs the same role.
But over and above regulations, the most important reason for
software validation is to ensure the system will meet the purpose for which
you have purchased or developed it, especially if the software is “mission
critical” to your organization and you will rely on it to perform vital functions.
A robust Software validation effort also:
Validation is a Journey,
Not a Destination
Being in a validated state, by the way, does not mean that the software
is bug-free or that once it's validated, you're done. Systems are not static.
Software patches must be applied to fix issues, new disk space may
need to be added, and additions and changes in users occur.
Being in a “validated state” is a journey, not a destination. It's an iterative
process to ensure the system is doing what it needs to do throughout its
lifetime.
Planning for Validation
As with most efforts, planning is a vital component for success. It's the
same for validation. Before beginning the validation process, you must:
1 Determine What Needs
to be Validated
Risk Management
Validation efforts should be commensurate with the risks. If a human
life depends on the software always functioning correctly, you'll want to take
a more detailed approach to validation than for software that assesses color
shades of a plastic container (assuming the color shade is only a cosmetic
concern).
If quality may be affected or if the decision is made that the system
needs to be validated anyway, a risk assessment should be used to
determine the level of effort required to validate the system. There are
various ways to assess risk associated with a system. Three of the more
common methods are:
For each system assessed, document the risk assessment findings, the
individuals involved in the process, and the conclusions from the process.
Note: ASTM E2500 Standard Guide for Specification, Design, and Verification of
Pharmaceutical and Biopharmaceutical Manufacturing Systems and Equipment
is a very useful tool for developing a risk-based approach to validation and
achieving QbD (Quality by Design). Similarly, ISO 14971 is a good reference for
general risk management.
Formal risk assessments provide a documented repository for the
justification of approaches taken in determining the level of validation
required and can be used in the future for reference purposes.
During the risk assessment, items that may lead to problems with the
system, validation effort or both should be addressed; this is called risk
mitigation and involves the systematic reduction in the extent of exposure to
a risk and/or the likelihood of its occurrence (sometimes referred to as risk
reduction).
2 Establish a Framework
When building a house, the first thing you must do is create a blueprint.
A Validation Master Plan (VMP) is a blueprint for the validation of your
software. The VMP provides the framework for how validation is performed
and documented, how issues are managed, how to assess validation impact
for changes, and how to maintain systems in a validated state.
· The approach (retrospective vs prospective) to validating applications
that have been in production but have not been validated - that is,
how to validate a system that is already commissioned and live but
not formally validated;
· General timelines, milestones, deliverables, and the roles and
responsibilities of resources assigned to the validation project.
The VMP should include a section on training and the minimum level of
training/qualification. This can either be a statement referring to a training
plan or a high-level statement.
The VMP should also include, in general terms, the resources (including
minimum qualifications) necessary to support the validation. Again, you
don't need to take it to the level of, “Mary will validate the user interface
application <x>.” It should be more like, “Two validation engineers who have
completed at least three software validation projects and XYZ training.”
Resources required may include outside help (contractors) and any special
equipment needed.
Error handling
Finally, the VMP should describe how errors (both in the protocols and
those revealed in the software by the protocols) are handled. This should
include review boards, change control, and so on, as appropriate for the
company. For a summary of a typical deviation, see Appendix A. For a
summary of protocol error types and typical validation error-handling, see
Appendix B.
Note: For regulated industries, a VMP may also need to include non-software
elements, such as manufacturing processes. This is outside the scope of this
document.
3 Create a Validation Plan
for Each System
You must create a validation plan for each software system you need to
validate. Like the VMP, the validation plan specifies resources required and
timelines for validation activities, but in far greater detail. The plan lists all
activities involved in validation and includes a detailed schedule.
The validation plan must also identify required equipment and whether
calibration of that equipment is necessary. Generally, for software
applications, calibrated equipment is not necessary (but not out of the
question).
documented in the VMP, Risk Assessment or both (without too much
duplication of data).
The VP/VMP are live documents that are part of the System
Development Lifecycle (SDLC) and can be updated as required. Typical
updates can include cross reference changes and any other detail that might
have changed due to existing business, process or predicate quality system
changes.
Software Development
Life Cycle
There are many SDLC models that are acceptable and there are benefits
and drawbacks with each. The important thing is that:
It's not always the case that a software product is developed under such
controlled conditions. Sometimes, a software application evolves from a
home-grown tool which, at the time, wasn't expected to become a system
that could impact the quality of production.
Similarly, a product may be purchased from a vendor that may or may
not have been developed using a proven process. In these cases, a set of
requirements must exist, at a minimum, in order to know what to verify, and
the support systems must be established to ensure the system can be
maintained.
[V-model diagram: requirements analysis → detailed design →
implementation → unit testing → system testing]
The Requirements stage
Requirement specifications are critical components of any validation
process because you can verify only what you specify—and your verification
can be only as good as your specifications. If your specifications are vague,
your tests will be vague, and you run the risk of having a system that doesn't
do what you really want. Without requirements, no amount of testing will
get you to a validated state.
Use “shall” to specify requirements. Doing so makes verifiable
requirements easy to find. Each requirement should contain
only one “shall.” If there is more than one, consider breaking
up the requirement.
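As a sketch of how this guideline can be checked automatically, the snippet below scans a set of requirement statements and flags any containing more than one "shall". The requirement IDs and wording are hypothetical, purely for illustration:

```python
import re

def shall_count(requirement: str) -> int:
    """Count occurrences of the keyword 'shall' (case-insensitive, whole word)."""
    return len(re.findall(r"\bshall\b", requirement, flags=re.IGNORECASE))

def flag_compound_requirements(requirements: dict) -> list:
    """Return IDs of requirements containing more than one 'shall' --
    candidates to be split into separately verifiable requirements."""
    return [rid for rid, text in requirements.items() if shall_count(text) > 1]

# Hypothetical requirements for illustration:
reqs = {
    "REQ-001": "The system shall log every user login attempt.",
    "REQ-002": "The system shall export reports and shall email them to QA.",
}
print(flag_compound_requirements(reqs))  # REQ-002 is a candidate for splitting
```

A check like this is a supplement to, not a substitute for, a proper requirements review.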
Requirements traceability
Part of the validation effort is to show that all requirements have been
fulfilled via verification testing. Thus, it's necessary to trace requirements to
the tests. For small systems, this can be done with simple spreadsheets. If
the system is large, traceability can quickly become complex, so investing in
a trace management tool is recommended. These tools provide the ability to
generate trace reports quickly, “sliced and diced” any way you want.
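For a small system, the spreadsheet-style trace described above can be sketched in a few lines. The requirement and test-case IDs below are hypothetical:

```python
import csv
import io

# Hypothetical requirement-to-test links for a small system.
links = [
    ("REQ-001", "OQ-TC-01"),
    ("REQ-001", "OQ-TC-02"),
    ("REQ-002", "OQ-TC-03"),
    ("REQ-003", ""),  # no verifying test yet -- a trace gap
]

def trace_matrix_csv(links):
    """Write a simple requirement/test trace matrix as CSV text --
    the kind of spreadsheet that suffices for a small system."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Requirement", "Test Case"])
    writer.writerows(links)
    return buf.getvalue()

def uncovered(links):
    """Requirements with no verifying test; these block a validated state."""
    return sorted({req for req, test in links if not test})

print(uncovered(links))  # ['REQ-003'] still needs a test
```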
Additionally, some tools provide the ability to define attributes. The
attributes can then be used to refine trace criteria further. Attributes can
also be used to track test status. The tracing effort culminates in a Trace
Matrix (or Traceability Matrix). The purpose of a matrix is to map the design
elements of a validation project to the test cases that verified or validated
these requirements. The matrix becomes part of the validation evidence
showing that all requirements are fully verified.
Caution: Tracing and trace reporting can easily become a project in itself. The tools
help reduce effort, but if allowed to grow unchecked they can become a
budget drain.
Requirements reviews
Requirements reviews are key to ensuring solid requirements. They are
conducted and documented to ensure that the appropriate stakeholders are
part of the overall validation effort. Reviewers should include:
Validation Maintenance and Project Control
Support processes are an important component of all software
development and maintenance efforts. Even if the software was not
developed under a controlled process, at the time of validation the following
support processes must be defined and operational to ensure that the
software remains in a validated state and will be maintained in a validated
state beyond initial deployment:
Validation Protocol
Design Qualification (DQ)
Installation Qualification (IQ)
Operational Qualification (OQ)
Performance Qualification (PQ)
Other Considerations
Validation Protocols
Design Qualification (DQ)
Design Qualification (DQ) is an often overlooked protocol, but can be a
valuable tool in your validation toolbox. You can use DQ to verify both the
design itself and specific design aspects. DQ is also a proven mechanism for
achieving Quality by Design (QbD).
Installation Qualification (IQ)
· Physical installation;
· Software lifecycle management;
· Personnel.
Physical installation
There are many factors that can be considered for software Installation
Qualification, including:
A potential issue with some applications, especially those that are web-
based, is the browser. If you're a Firefox user, for example, you've probably
gone to some sites where the pages don't display correctly and you can see
the full site only if you use Internet Explorer. These cases may require that
every client configuration be confirmed during IQ.
The “cloud” and other virtual concepts are tough because you have
relinquished control over where your application runs or where the
data is stored. Does that rule out the use of such environments in a regulated
industry? Not necessarily. Again, it comes back to risk.
If there's no risk to human life, then the solution may be viable. If the
system maintains records that are required by regulation, it will take
considerable justification and verification. And, as always, if the decision is
made to take this direction, document the rationale and how any risks are
mitigated. In certain cases the Risk Assessment may require re-visiting.
Personnel
If the use of a system requires special training, an assessment needs to
be made to determine if the company has the plans in place to ensure that
users are properly trained before a system is deployed. Of course, if the
system has been deployed and operators are not adequately trained, this
would be a cause for concern.
Operational Qualification (OQ)
Operational Qualification (OQ) consists of a set of tests that verify that
requirements are implemented correctly. This is the most straight-forward
of the protocols: see requirement, verify requirement. While this is a gross
simplification—verifying requirements can be extremely challenging—the
concept is straightforward. OQ must:
· Confirm that error and alarm conditions are properly detected and
handled;
· Verify that start-ups and shutdowns perform as expected;
· Confirm all applicable user functions and operator controls;
· Examine maximum and minimum ranges of allowed values;
· Test the functional requirements of the system.
“Fast” is subjective. Verifiable requirements are quantifiable (for
example, “Response time to a query shall always be within 15 seconds.”). A
good expected result gives the tester a clear path to determine whether
the objective was met. If vague requirements do slip through,
however, at least define something quantifiable in the test via a textual
description of the test objectives.
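Using the 15-second example above, a quantified requirement can be verified objectively. In this sketch, `run_query` is an assumed stand-in for the real system call, not a real API; the point is recording a measured value against the stated limit:

```python
import time

RESPONSE_LIMIT_S = 15.0  # from the requirement: "... within 15 seconds"

def run_query(query):
    """Stand-in for the real system call -- an assumption for illustration."""
    time.sleep(0.01)
    return ["row-1", "row-2"]

def verify_response_time(query, limit_s=RESPONSE_LIMIT_S):
    """Execute the query and record pass/fail against the quantified limit,
    giving the tester an objective expected result to judge against."""
    start = time.monotonic()
    run_query(query)
    elapsed = time.monotonic() - start
    return {"elapsed_s": round(elapsed, 3), "pass": elapsed <= limit_s}

result = verify_response_time("SELECT * FROM batches")
print(result)
```

The recorded elapsed time (not just pass/fail) would go into the test record as objective evidence.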
Performance Qualification (PQ)
Performance Qualification (PQ) is where confirmation is made that a
system properly handles stress conditions applicable to the intended use of
the equipment. The origination of PQ was in manufacturing systems
validation, where PQ shows the ability of equipment to sustain operations
over an extended period, usually several shifts.
Other Test
Considerations
So if you do IQ, OQ, and PQ, do you have a bug-free system? No. Have
you met regulatory expectations? To some degree, yes. You are still,
however, expected to deliver software that is safe, effective, and, if you want
return business, as error free as possible.
There are a number of other tools in the validation toolbox that can be
used to supplement regulatory-required validation. These are not required,
but agencies such as the US FDA have been pushing for more test-based
analysis to minimize the likelihood of software bugs escaping to the
customer. They include:
· Static analysis;
· Unit-level test;
· Dynamic analysis;
· Ad-hoc, or exploratory, testing;
· Misuse testing.
We'll briefly look at these tools. Generally, you want to choose the
methods that best suit the verification effort.
Static analysis
Static analysis provides a variety of information, from coding style
compliance to complexity measurements. Static testing is gaining more and
more attention by companies looking to improve their software, and by
regulatory agencies. Recent publications by the US FDA have encouraged
companies to use static analysis to supplement validation efforts.
Static analysis is best done before formal testing. The results don't need
to be included in the formal test report. A static analysis report should,
however, be written, filed, controlled, and managed for subsequent
retrieval. The report should address all warnings and provide rationale for
not correcting them. The formal test report can reference the static analysis
report as supplemental information; doing so, however, will bring the report
under regulatory scrutiny, so take care in managing it.
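As a toy illustration of the kind of finding a static analysis tool produces — a real validation effort would use a dedicated, established tool — the sketch below uses Python's standard `ast` module to flag functions whose branching exceeds a threshold. The sample function and threshold are hypothetical:

```python
import ast

def branch_count(func) -> int:
    """Crude complexity proxy: count branching nodes inside a function."""
    return sum(isinstance(node, (ast.If, ast.For, ast.While, ast.Try))
               for node in ast.walk(func))

def analyze(source: str, max_branches: int = 3):
    """Return (function name, branch count) warnings for functions exceeding
    the threshold -- each warning left uncorrected would need a documented
    rationale in the static analysis report."""
    tree = ast.parse(source)
    return [(node.name, branch_count(node))
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)
            and branch_count(node) > max_branches]

sample = """
def release_batch(batch):
    if batch.qc_passed:
        for lot in batch.lots:
            if lot.expired:
                return False
            while lot.holds:
                lot.holds.pop()
    return True
"""
print(analyze(sample))  # flags release_batch with 4 branching nodes
```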
Unit-level test
Some things shouldn't be tested from a user perspective only. In such
cases a requirement can be tested in a stand-alone environment - for
example, by running the software in debug mode with breakpoints set, or
by verifying results via inspection of the software code.
Another good use of unit testing is for requirements that are not
operational functionality but do need to be thoroughly tested. For example,
an ERP system with a requirement to allow an administrator to customise a
screen. A well-structured unit test or set of tests can greatly simplify matters.
Unit tests and test results are quality records and need to be managed
as such. There are several methods you can use to cite objective evidence to
verify requirements using unit-level testing. One way is to collect all original
data (executed unit tests) and retain in a unit test folder or binder (in
document control). The test report could then cite the results. Another way
is to reference the unit tests in formal protocols. Using this method, the unit
tests can either be kept in the folder or attached to the formal protocols.
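A minimal sketch of such a stand-alone unit test, using Python's `unittest`. The `customise_screen` helper and its behaviour are hypothetical, standing in for the ERP screen-customisation requirement described above:

```python
import unittest

def customise_screen(layout: dict, field: str, visible: bool) -> dict:
    """Hypothetical admin helper: toggle a field's visibility on a screen
    layout without disturbing the other fields."""
    if field not in layout:
        raise KeyError(f"unknown field: {field}")
    updated = dict(layout)
    updated[field] = visible
    return updated

class CustomiseScreenUnitTest(unittest.TestCase):
    """Stand-alone unit test -- a quality record to be filed in document
    control and cited from the formal protocol, per the options above."""

    def setUp(self):
        self.layout = {"batch_id": True, "operator": True, "notes": False}

    def test_hide_field(self):
        result = customise_screen(self.layout, "operator", False)
        self.assertFalse(result["operator"])

    def test_other_fields_untouched(self):
        result = customise_screen(self.layout, "operator", False)
        self.assertTrue(result["batch_id"])

    def test_unknown_field_rejected(self):
        with self.assertRaises(KeyError):
            customise_screen(self.layout, "no_such_field", True)
```

Run with `python -m unittest`; the executed output then becomes the objective evidence retained in the unit test folder.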
Version identifiers for unit tests and for referencing units
tested greatly clarify matters when citing results. For example,
revision A of the unit test may be used on revision 1.5 of the
software for the first release of the system. The system then
changes, and a new validation effort is started. The unit test
needs to change, so you bump it to revision B and it cites
revision 2.1 of the software release. Citing the specific version
of everything involved in the test (including the unit test)
minimizes the risk of configuration questions or mistakes.
Dynamic analysis
Dynamic analysis provides feedback on the code covered in testing. It is
generally limited to non-embedded applications, because the code has to be
instrumented (typically an automated process done by the tool;
instrumenting allows the tool to know the state of the software and which
lines of code were executed, as well as providing other useful information)
and generates data that must be output as the software runs. The fact that
the software is instrumented adds a level of concern regarding the results,
but within a limited scope it gives an insight into testing not otherwise possible.
Dynamic analysis is also best done prior to and outside the scope of
formal testing. Results from a dynamic analysis are not likely needed in the
formal test report, but if there's a desire to show test coverage it can be
included. Again, referencing the report would bring it under regulatory
scrutiny, so be sure it's very clear if taking that route.
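The instrumentation idea can be sketched with Python's built-in tracing hook: record which lines of a function execute for a given input, which is exactly the information a dynamic-analysis tool aggregates. A real tool would use dedicated instrumentation; this is a toy with a hypothetical `clamp` function:

```python
import sys

def coverage_of(func, *args):
    """Record which lines of `func` (as offsets from its def line) execute
    for the given inputs -- a toy sketch of what a coverage tool does."""
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)  # always remove the hook, even on failure
    return executed

def clamp(value, low, high):
    if value < low:
        return low      # only covered when value < low
    if value > high:
        return high     # only covered when value > high
    return value

# Different inputs cover different lines; combining the runs shows
# whether the test set achieves full branch coverage of clamp().
low_path = coverage_of(clamp, -5, 0, 10)
mid_path = coverage_of(clamp, 5, 0, 10)
high_path = coverage_of(clamp, 50, 0, 10)
```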
The problem lies in how to manage exploratory testing and how to report
the results. All regulatory agencies expect protocols to be approved before
they are executed, which is not possible with exploratory testing, since the
actual tests aren't defined until the tester begins testing. This is
overcome by addressing the approach in the VMP and/or the product-
specific validation plan: explain how formal testing will satisfy all regulatory
requirements, and how exploratory testing will then be used to further
analyze the software in an unstructured manner.
Reporting results is more difficult. You can't expect a tester to jot down
every test attempted. This would be a waste of time and effort and would
not add any value. So, it's reasonable to have the tester summarize the
efforts undertaken. For example, using the user interface example, the
tester wouldn't report on every test attempted on the text box; instead, the
tester would report that exploratory testing was performed on the user
interface, focusing on challenging the text box data entry mechanism. Such
reporting would be more suited to a project reporting mechanism and
information sharing initiative rather than being formal validation testing.
Note: It is perfectly acceptable for the developers to test their work prior to formal
validation testing.
All failures must be captured, so it's a good idea to include tests
that failed in the formal protocols. Additionally, encourage the
tester to capture issues (things that didn't fail but didn't give
expected results) and observations (things that may just not
seem right). The change management process can control any
changes required.
Validation Execution
Preparing for a test
Executing and recording results
Reporting
Managing the results
Validation Execution
Preparing for a test
personnel decisions are not arbitrary.
Furthermore, you must make sure test personnel are not put in conflict-
of-interest positions. Ideally, a separate QA team should be used. Clearly,
members of the development team should not be selected, but the lines get
fuzzy quickly after that. Just make sure that your selection is defensible. For
example, if a developer is needed to perform a unit test (typical, since QA
folks may not have the code-reading skills of a developer), then ensure the
developer is not associated with the development of that unit.
Note: It is absolutely forbidden for anyone to review AND approve their own work.
Mistakes will invariably be made. Again, use Good Documentation Practice
(GDP) to correct the mistake and provide an explanation.
If a screen shot is captured to provide evidence of compliance, the
screen shot becomes a part of the test record. It's important, therefore, to
properly annotate the screen shot. Annotations must include:
If you want to learn more about Good Documentation Practices, why not
purchase a copy of our E-Book “An Easy to Understand Guide to Good
Documentation Practices”? Go to www.askaboutvalidation.com
Reporting
At its core, the test report summarizes the results (from IQ, OQ, and PQ)
of the test efforts. Testers and reviewers involved in testing, along with test
dates, should be recorded. In general, a report follows the following outline:
I. System Description
II. Testing Summary (test dates, protocols used, testers involved)
III. Results
IV. Deviations, Variances, and Incidents
V. Observations and Recommendations
VI. Conclusions
In many cases, there will be variances (for example, the test protocol
steps or the expected results were incorrect) and/or deviations (expected
results not met), which should be handled in accordance with the VMP.
Generally, variances and deviations are summarized in the test report,
showing that they are recognized and being dealt with properly.
situation the system runs extremely slow. Perhaps the requirements were
met, but the customer or end user may not be happy with the application.
Allowing recommendations in the report can highlight areas that may need
further investigation before launch. The report should draw a conclusion
based on objective evidence that the product:
If the validation effort is contracted out, however, the test report may
not be available or appropriate to determine whether the system can be
released with test failures—especially if the impact of some of the failures
cannot be fully assessed by the test team. In such cases, it's acceptable for
a report addendum or a separate “release report” to be produced with the
company's rationale for releasing the system (presuming it's justifiable).
Managing the results
Testing is done, the software is released, and you're making money.
What could go wrong? Audits and auditors—the reality that everyone in a
regulated industry faces. You can mitigate potential problems by preparing a
Validation Pack that contains:
· User requirements;
· Supporting documentation (internal and external - user manuals,
maintenance manuals, admin, and so on);
· Vendor data (functional specification, FAT, SAT, validation
protocols);
· Design documentation;
· Executed protocols;
· An archive copy of the software (including libraries, data, and so
on);
· Protocol execution results;
· The test report;
· A known bug list with impact assessment.
Then, when the auditor asks whether the software has been validated,
present the Validation Pack (or at least take them to the relevant library).
You'll make his or her day.
Maintaining the
Validated state
Assessing Change
Re-testing
Executing the re-test
Reporting
Maintaining
The Validated State
That's why it's critical to assess validated systems continually and take
action when the validation is challenged. As stated earlier, validation is a
journey, not a destination. Once you achieve the desired validated state,
you're not finished. In fact, it's possible the work gets harder.
Assessing Change
Change will happen, so you might as well prepare for it. Change comes
from just about everywhere, including:
These are just a few examples. The answer is not to re-run all validation
tests on a weekly basis. That would be a waste of time and money. So, how
do you know what changes push you out of validation and require re-test?
Timing releases
So, you can see that you should plan out updates and not push releases
out on a daily or weekly basis. Of course, if a customer demands a new
release, or there are critical bugs that need to be corrected, you don't want
to delay installing it. But delaying releases saves time because you can do the
re-test on a batch of changes where there are likely overlaps in testing.
Risk analysis and the Trace Matrix
Risk analysis is best facilitated through the Trace Matrix. Using the
matrix enables you to:
Indirect changes
How do you address indirect changes—that is, those changes that you
may not have direct control over, or potentially not even know about?
Again, assess all changes in terms of system impact. Most likely, the IQ
will be impacted and those tests may need to be re-developed before re-
executing.
For cases where the install base must be expanded (additional sites,
additional equipment, and so on), you'll need to assess the level of
validation required for each additional installation. If this has already been
addressed in the validation plan, the effort should be consistent with the
plan. If, however, it makes sense to expand the test effort based on
experience, you should update the validation plan and perform the extra
testing as required. If it's not addressed in the validation plan, perform a risk
assessment, update the validation plan (so you don't have to re-visit the
exercise should the need arise again in the future), and perform the
validation.
General Observations
Re-testing
Risk assessment gives you a good idea what to re-test. But is that
sufficient? Before launching the re-test effort, take a step back and re-read
the VMP. Make sure that you're staying compliant with what the master plan
says. Then, update the validation plan for the system. This will:
· Lay out what tests you plan to execute for the re-test;
· Provide the rationale from the risk analysis for the scope of the
retest.
In most cases, not all tests in the validation suite will need to be re-run.
For example, if the change is isolated to a user interface screen (a text box
went from 10 characters long to 20 characters), you likely don't need to re-
run IQ. Consider the back-end consequences, however, because it's possible
that such a change fills up a database faster, so be sure to look at the big
picture.
testing to just the UI protocol. If, however, you scatter UI tests throughout
the functional tests, it may not be possible to test the UI in isolation and,
depending on the change, all tests may need to be executed.
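The scoping decision described above can be sketched as a simple query against the trace matrix: re-run every test that traces to an impacted requirement. All IDs below are hypothetical:

```python
# Hypothetical trace data: which tests verify which requirements.
trace = {
    "REQ-UI-010": ["OQ-TC-04"],              # text-box length requirement
    "REQ-DB-020": ["OQ-TC-07", "PQ-TC-02"],  # database capacity
    "REQ-SEC-030": ["OQ-TC-09"],
}

# Requirements judged impacted by the change (direct UI change plus the
# back-end consequence identified in the risk analysis).
impacted = {"REQ-UI-010", "REQ-DB-020"}

def retest_scope(trace, impacted):
    """Select the tests to re-run: every test tracing to an impacted
    requirement. The rationale (the risk analysis itself) belongs in the
    updated validation plan."""
    return sorted({tc for req, tcs in trace.items()
                   if req in impacted for tc in tcs})

print(retest_scope(trace, impacted))  # ['OQ-TC-04', 'OQ-TC-07', 'PQ-TC-02']
```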
Regression testing
You may run into a case where something changed that was considered
isolated to a particular function, but testing revealed it also impacted
another function. These “regressions” can be difficult to manage. Such
relationships are generally better understood over time.
The best advice is to look for any connections between any changes and
existing functionality and be sure to include regression testing (testing
related functions even though no direct impact from the known changes) in
the validation plan. Better to expand test scope and find them in testing than
let a customer or user find them.
Test Invalidation
Hopefully, you never run into a situation where something turns up that
invalidates the original testing, but it's been known to happen.
possible that this is a systemic issue and auditors may not have any faith in
any of the results, so plenty of analysis and rationale will have to go into test
scope reduction. In fact, this may even require a CAPA to fully reveal the root
cause and fix the systemic issue. But better to find it, admit it, and fix it than
have an auditor find it.
Reporting
Report the re-test effort similar to the results from the original
execution. Show that the re-validation plans were met through testing and
report the results. Handle failures and deviations as before.
Special Considerations
Commercial
Open Source Systems
Excel Spreadsheets
Retrospective Validation
Commercial
There are generally three ways to incorporate a commercial system:
· Use it out-of-the-box;
· Tailor it for use in your facility;
· Customize it.
Using out-of-the-box
Since validation is performed against the system's intended use, for out-of-the-box
systems the testing part of validation covers only how the system is used in
your facility (per your specifications of how you expect it to work).
Tailoring
Tailoring takes the complexity to the next level. In addition to testing for
your intended use, you should also add tests to verify that the tailored
components function as required, consistently.
Customizing
Customized systems are the most complex in terms of test effort.
Customizations need to be thoroughly tested to ensure the custom
functions perform as specified.
Due diligence
Regardless of how the software is incorporated into use, due diligence
should be performed to obtain problem reports for the commercial
software. These may be available online through the vendor site, or may
need to be requested from the vendor. Once obtained, be sure to assess
each issue against how the software will be used at your site to determine if
concessions or workarounds (or additional risk mitigations) need to be
made to ensure the software will consistently meet its intended use. This
analysis and any decisions become part of the validation file.
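The assessment step described above can be partially mechanized: filter the vendor's known-issue list down to the features actually used at your site, leaving a shortlist that needs a documented workaround or risk mitigation. The issue records and feature names below are invented for illustration.

```python
# Hypothetical sketch: screening vendor-published known issues against
# the features in use at this site. All data here is invented.
KNOWN_ISSUES = [
    {"id": "KB-101", "feature": "unit_conversion", "summary": "Rounding error above 1e6"},
    {"id": "KB-102", "feature": "pdf_export",      "summary": "Crash on empty report"},
    {"id": "KB-103", "feature": "audit_trail",     "summary": "Timestamp off by DST shift"},
]

FEATURES_IN_USE = {"unit_conversion", "audit_trail"}

def issues_to_assess(issues, features_in_use):
    """Return only the issues touching features this site actually uses;
    each one needs an assessment, workaround, or risk mitigation on file."""
    return [i for i in issues if i["feature"] in features_in_use]

for issue in issues_to_assess(KNOWN_ISSUES, FEATURES_IN_USE):
    print(issue["id"], "-", issue["summary"])
```

The filtered list, together with the disposition of each item, is what ends up in the validation file.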
Protecting yourself
With commercial software, especially mission-critical software, you are
at risk if the vendor is sold or goes out of business. One way to protect your
company is to place the software source into “escrow”. Should the
vendor fold or decide to no longer support the software, at least the
source can be salvaged. This approach has inherent risks. For example, now
that you have the software, what do you do with it? Most complex
applications require a certain level of expertise to maintain. This all needs to
be considered when purchasing an application.
Documentation
A “How We Use the System Here” specification or SOP (Standard Operating
Procedure) facilitates the validation effort regardless of how
the system is incorporated into the environment. The validation plan
specifies how commercial systems will be validated.
Open Source Systems
The easy road is to avoid open source systems altogether. If, however,
the benefits outweigh the risks, you can probably expect some very critical
scrutiny. Thus, your validation efforts will need to be extremely robust. Since
the system is open source, you should plan on capturing the source code and
all of the tools to build the system into the configuration management
system. Testing should be very detailed and should include tests such as
misuse, exploratory, and dynamic analysis.
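One way to capture the source into configuration management, as suggested above, is to record a cryptographic manifest of every file in the baseline, so the exact validated source can later be re-verified. A minimal sketch, assuming a plain directory tree (the file contents below are invented):

```python
# Sketch of an assumed workflow: freeze an open source baseline by
# recording a SHA-256 digest of every file, so the validated source
# can later be verified against the configuration management copy.
import hashlib
import tempfile
from pathlib import Path

def baseline_manifest(root):
    """Map each file's relative path to its SHA-256 digest."""
    manifest = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            manifest[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

# Example: write two placeholder files into a scratch tree and baseline it.
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "main.c").write_text("int main(void){return 0;}\n")
    (Path(tmp) / "Makefile").write_text("all:\n\tcc main.c\n")
    m = baseline_manifest(tmp)
    print(sorted(m))  # → ['Makefile', 'main.c']
```

Re-running the manifest at any later date and comparing digests shows whether the captured baseline still matches what was validated.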
Excel Spreadsheets
It is a common myth that, according to 21 CFR Part 11 (the US FDA
regulation on Electronic Records/Electronic Signatures), spreadsheets
cannot be validated for e-signatures and that add-on packages are required
to do so. Whilst it is accepted that Microsoft Excel doesn't facilitate
electronic signatures (yet!), other spreadsheet packages may be able to do
so.
Retrospective
Validation
If a system is already in production use but has never been validated, and
validation is required, a “retrospective” validation exercise needs to be
performed. This is no different from a normal validation exercise, except
that validation wasn't incorporated into the system's initial project
delivery. The concern is what to do if anomalies are revealed during
validation.
Summary
Use validation efforts as a business asset. Doing validation for the sake
of checking a box shows no commitment and will likely result in problems in
the long run.
Frequently
Asked Questions
Q: Do I have to validate all of the software's capabilities, even those we don't use?
A: No, validate only what you use. Develop the requirements specification
to define exactly what you use, as you use them. This forms the basis
(and rationale) for what you actually validate. Should you begin using
additional capabilities later, you will need to update the requirements
specification and validate the new capabilities. You will also need to
perform a risk analysis to determine what additional testing needs to be
done (those validated capabilities that may interface or be influenced
by the new capabilities requiring regression or other testing).
Q: When do I start?
A: Now. Whether you have a system that's been in operation for years, or
whether you've started development of a new system, if your quality
system requires validated systems, they need to be validated.
Q: How do I start?
A: The Validation Master Plan is the foundation. It will also help you
determine (through risk analysis) what the next steps need to be.
Q: Do I have to validate Windows (or Linux, or…)?
validation. Typically, you as the client would establish contractual
requirements for the hosting company to meet basic standards for life
cycle management, change control, backup and recovery, and so on.
You might perform validation testing yourself or you might contract
with the hosting company to do so. In this situation, it's extremely
important that the contract be set up to require the hosting company to
notify you of any changes (to the software, the environment, and so on).
And should changes occur, an assessment would be required to
determine if re-validation is warranted. Many companies realize the
potential exposure for regulatory violations and tend to avoid using
hosted applications.
validation apply to every installation. This applies to both clients and
servers.
Q: Is there a required format for validation documentation?
A: Whatever works for your company! There is no right way. For regulatory
purposes, it should be readily retrievable in human readable format
(and complete). As long as those conditions are met, you've done it
right. This document outlines a generally accepted approach, but it's by
no means required.
Q: Do I need to capture screenshots as objective evidence for every test step?
A: No, but screen shots are a good way to show that the expected results
have been met. There's a tendency in practice to overdo it and take
screenshots for every action. This results in volumes of data, makes
it difficult to find specific data when needed, and adds overhead in
properly managing and maintaining the data. Judicious use of
screenshots can greatly enhance the justification for meeting expected
results. A small number of screenshots for very critical items is
sometimes helpful, but when screenshots are used for nearly every test
step, it more or less undermines the actual tester signing and
dating each test step and is also cumbersome for reviewers, especially
those (such as QA) who aren't entirely familiar with the system in the
first place.
Q: I have a system that has software controlling the fill volume of a vial. If
I change the amount of volume for the fill process (a system
parameter), do I have to re-validate?
Appendix A:
Handling Deviations
Deviations are considered original data, so they should not be “covered
up,” even if the protocol is updated and re-released.
Appendix B:
Handling Variances
It's often the case that, despite good intentions and careful reviews,
protocol errors slip through the cracks. There are several varieties of
protocol errors, and all are considered variances.
Procedural errors in a test step
Procedural errors in a test step generally require a bit more effort to
correct. The tester should make the necessary changes using mark-ups and
annotations with explanations, and execute the protocol in accordance with
the changes. Once the test is completed, the reviewer should review the
change and record an annotation indicating concurrence. Depending on the
extent of the variance, the tester may wish to involve the reviewer early and
get concurrence that the change is appropriate before executing the
changes. Procedural errors are individually summarized in the test report.
Use a red pen to mark up changes to correct protocol
variances. This helps the changes stand out during review. It
also helps the individuals maintaining the protocols quickly
identify protocol changes.
Note: Some companies allow only one colour of ink to be used on all
paperwork; where that is the case, the company rule takes priority.
Suspension of testing
Once execution begins, the software (and environment) will not
change, ideally. But this is not always the case. For example, if a fatal flaw is
exposed, especially one that has additional consequences (for example, one that will
cause other tests to fail), it's better to suspend testing, fix the problem, and
then resume testing. In some situations, it would not be appropriate to pick
up where testing was suspended.
Appendix C:
Test Development
Considerations
This is how a good tester thinks. What are the possible ways that a user
could perform this operation? Is there an input the system couldn't handle?
Will the system store or display the date differently depending on my
computer's regional settings?
Within a particular test, space must be provided for the tester to record
the name of the software and version under test. In addition to the software,
all equipment used should be recorded. All equipment should be uniquely
identified (serial or model number and/or revision number, for example).
Within a protocol, there should be a place for the tester to record his or
her name and date when the protocol was executed. Some companies
require this only on the first page of the test (section) and the last page of the
test (section). Others require some “mark” (signature or initials) by the
tester on every page on which data was entered. In addition to the tester's
“mark”, blocks for a reviewer's “mark” should also be provided.
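The records described above (software and version under test, uniquely identified equipment, tester and reviewer “marks”) can be modelled as a simple structure. This is an illustrative sketch only; the field names and example values are assumptions, not a prescribed format.

```python
# Illustrative sketch of a test execution record. Field names and the
# example values below are hypothetical, not a mandated layout.
from dataclasses import dataclass, field

@dataclass
class TestRecord:
    protocol: str
    software: str            # name of the software under test
    version: str             # version under test
    equipment: list = field(default_factory=list)  # uniquely identified (serial/model)
    tester_mark: str = ""    # tester signature or initials
    tester_date: str = ""
    reviewer_mark: str = ""  # reviewer signature or initials

    def is_complete(self):
        """A record needs both the tester's mark/date and a reviewer's mark."""
        return bool(self.tester_mark and self.tester_date and self.reviewer_mark)

rec = TestRecord("OQ-010", "LIMS Client", "4.2.1",
                 equipment=["Balance SN-88812"],
                 tester_mark="JD", tester_date="2012-05-14")
print(rec.is_complete())  # → False: no reviewer mark yet
rec.reviewer_mark = "QA-MM"
print(rec.is_complete())  # → True
```

Whether the record lives on paper or in an electronic system, the same completeness check applies: a page with data but no marks is a finding waiting to happen.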
Acceptance criteria
For each test performed, clear and unambiguous acceptance criteria
must be defined. This can take multiple forms. If a quantitative value is
defined or if there is any tolerance, be sure to provide the range.
In some cases, a screen shot that the tester can compare against can be
included in the protocol. Be sure to specify any portions which are in or out
of scope in case there is, for example, a time or date displayed (which would
fail a bit-by-bit comparison).
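For quantitative criteria, stating the nominal value and tolerance up front makes the pass/fail decision mechanical rather than a judgment call at execution time. A small sketch (the values are invented):

```python
# Sketch: once acceptance criteria state a range, the check is trivial.
# The nominal value and tolerance here are invented examples.
def within_tolerance(measured, nominal, tolerance):
    """Pass if measured is inside nominal ± tolerance (inclusive)."""
    return nominal - tolerance <= measured <= nominal + tolerance

# Acceptance criterion: result must be 10.0 mL ± 0.2 mL
print(within_tolerance(10.15, 10.0, 0.2))  # → True: inside the range
print(within_tolerance(10.25, 10.0, 0.2))  # → False: outside the range
```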
Appendix D:
Capturing Tester
Inputs and Results
Recording inputs
When inputs are made, the inputs should be recorded. If the input is
specified (for example, enter “2” in the field), then it can be pre-defined in
the protocol. If the tester has freedom to enter something different, provide
space in the protocol for the tester to record exactly what was entered.
Precision
Precision can be a headache if not carefully handled. Suppose, for example, a
requirement is stated as, “The software shall illuminate the LED for three
seconds when <some condition> occurs.” On the surface, this seems fairly
straightforward, but how do you accurately time this? You could take an
oscilloscope and hook it up to the hardware and measure the signal. This,
however, requires that you have a calibrated scope and extra equipment. It's
preferable to establish precision in the specification document. For
example, “The software shall illuminate the LED for approximately three
seconds (± 1 second).” As you can see, this greatly simplifies the test effort
and allows for manual timing.
So it's less than ideal to defer precision to the test. But if the
requirements do not establish a range, it can be established in the test but
should be thoroughly explained (for example, “There are no safety or efficacy
concerns regarding the three-second timing for LED illumination and, thus,
the timing will be done manually, allowing a ± 1 second difference for
manual timing inaccuracies.”) This should be stated in the protocol before
approval and not left to the tester to document. Regulatory agencies tend to
frown upon allowing testers to make such assertions on their own accord.
Expected results
Expected results must be specific and descriptive. They should never be
stated as, “Works as expected.” The expected results must have a very clear
definition of what constitutes a “pass” and the recorded results should
clearly substantiate the assertion.
Each page of the attachment must refer to the protocol and step, and
should be paginated in the form “Page x of y.” An example annotation for an
appendix might then look like “Attachment 1 for <product>
<protocol_reference>, step 15. Page 1 of 5,” (where <product> and
<protocol_reference> clearly establish the document to which the
attachment is being made). This way, if a page or pages become separated,
it's easy to locate the results package to which the pages belong. Backward
and forward traceability must always exist between validation
documentation.
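Generating the annotation strings described above is easy to mechanize so that no page goes out unmarked. The product name and protocol reference below are placeholders, not values from the text.

```python
# Sketch: produce one "Page x of y" annotation per attachment page.
# "WidgetApp" and "OQ-0042" are hypothetical placeholder values.
def attachment_annotations(attachment_no, product, protocol_ref, step, pages):
    """One annotation line per page, in the format described in the text."""
    return [
        f"Attachment {attachment_no} for {product} {protocol_ref}, "
        f"step {step}. Page {page} of {pages}"
        for page in range(1, pages + 1)
    ]

for line in attachment_annotations(1, "WidgetApp", "OQ-0042", 15, 3):
    print(line)
# → Attachment 1 for WidgetApp OQ-0042, step 15. Page 1 of 3  (and so on)
```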
References
Glossary
Eudralex European Union Regulations (Manufacturing, Laboratory, or Clinical)
IP Intellectual Property
IQ Installation Qualification
IT Information Technology
OQ Operational Qualification
PQ Performance Qualification
QA Quality Assurance
SOX Sarbanes-Oxley
UI User Interface
US United States
VP Validation Plan
SOFTWARE VALIDATION
QUIZ
4. True or False:
Qualification protocols typically have an associated report (sometimes
called a summary report).
5. As part of defining the validation framework, which document would
be generated, approved and act as the blueprint for the validation
effort?
7. Which US regulation should be closely followed and adhered to when
validating Excel spreadsheets?
ANSWERS
· Ensures users are trained on the system and are using it within its
intended purpose (for commercially-procured software, this means
in accordance with the manufacturer's scope of operations);
2. Failure Modes Effects and Analysis (FMEA)
3. IQ – Installation Qualification
4. True
7. 21 CFR Part 11
SCORE
True False
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
Your score
The Validation Specialists