Testing Best Practice
Testing is a crucial element of software development. It can also be a complex activity to structure
correctly, and in a way that supports maximum efficiency. Because of this complexity, it is always
helpful to review processes and guidelines to ensure you are following best practices, and a great
place to start is with the ISTQB (International Software Testing Qualifications Board), who list seven
fundamental principles of software testing.
These are the principles that have been collated and established by the ISTQB as testing and
software development have evolved over the years, and they are recognised as the absolute core of testing.
This article will explore these seven fundamental principles of software testing and the key
points to take from each.
Principle 1: Testing shows the presence of defects
We test software to discover issues so that they can be fixed before they are deployed to live
environments – this enables us to have confidence that our systems are working. However, this
testing process does not confirm that any software is completely correct and completely devoid of
issues. Testing helps greatly reduce the number of undiscovered defects hiding in software, but
finding and resolving these issues is not itself proof that the software or system is 100% issue-free.
This concept should always be accepted by teams, and effort should be made to manage client
expectations.
It is important to remember, however, that while testing shows the presence of bugs and not their
absence, thorough testing will give everyone confidence that the software will not fail. Having a
comprehensive test strategy that includes thorough test plans, reports and statistics along with testing
release plans can all help with this; reassuring clients as to testing progress and providing confidence
in the quality of what is being released.
Additionally, ongoing monitoring and testing after systems have gone into production is vital. Thinking
forward to potential issues that could arise is another good way to help mitigate against any future
problems, for example considering load testing if a site is launching a new marketing campaign, so
you can be confident the software will withstand any anticipated larger volumes of traffic.
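As a very rough illustration, a load-style check can be sketched with the Python standard library alone. The handler below is a stub standing in for a real endpoint call, and the latency budget is an invented figure; real load testing would normally use a dedicated tool against the deployed site.

```python
# Sketch: a minimal load-style check using only the Python standard
# library. handle_request is a stub standing in for a real HTTP call.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_: int) -> float:
    """Stub 'endpoint' call: simulate a little work, return its latency."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for real request/response time
    return time.perf_counter() - start

# Fire 100 simulated requests across 20 concurrent workers.
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(handle_request, range(100)))

print(f"requests: {len(latencies)}, slowest: {max(latencies):.3f}s")
```

In a real campaign-readiness check, the stub would be replaced by actual HTTP requests and the budget by your own service-level targets.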
Principle 2: Exhaustive testing is impossible
As much as we would like to believe or wish it true(!), it is absolutely impossible to test EVERYTHING
– all combinations of inputs and preconditions – and you could also argue that attempting to do so is
not an efficient use of time and budget. However, one of the skills of testing is assessing risks and
planning your tests around these – you can then cover vast areas, while making sure you are testing
the most important functions. With careful planning and assessment, your test coverage can remain
excellent and enable that necessary confidence in your software, without requiring that you test every
possible combination of inputs.
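As an illustration of this kind of risk-based selection, equivalence partitioning and boundary value analysis pick a handful of representative inputs instead of all of them. The eligibility rule below (valid ages 18 to 65) is an invented example, not a real requirement:

```python
# Sketch: reducing an input space via equivalence partitioning and
# boundary value analysis. Instead of testing every possible age, we
# test one representative per partition plus the values at each boundary.

def is_eligible(age: int) -> bool:
    """Hypothetical rule under test: eligible if 18 <= age <= 65."""
    return 18 <= age <= 65

# Partitions: below-valid, valid, above-valid.
# Boundaries: 17/18 and 65/66.
test_cases = {
    10: False,   # representative of the below-valid partition
    17: False,   # boundary: just below valid
    18: True,    # boundary: lowest valid value
    40: True,    # representative of the valid partition
    65: True,    # boundary: highest valid value
    66: False,   # boundary: just above valid
    90: False,   # representative of the above-valid partition
}

for age, expected in test_cases.items():
    assert is_eligible(age) == expected, f"failed for age={age}"

print("7 targeted cases stand in for hundreds of exhaustive ones")
```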
Principle 3: Test early
Testing early is fundamentally important in the software lifecycle. This could even mean testing
requirements before coding has started, for example – amending issues at this stage is a lot easier
and cheaper than doing so right at the end of the product’s lifecycle, by which time whole areas of
the software may have been built on top of the defect. Testing works best as an activity that runs
throughout the lifecycle, rather than as a phase (which in a traditional waterfall approach would be at the end),
because it enables quick and timely continuous feedback loops. When a team encounters hurdles or
impediments, early feedback is one of the best ways to overcome these, and testers are essential for
this. Consider the tester as the ‘information provider’ – a valuable role to play.
Essentially, testing early can even help you prevent defects in the first place!
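One way to make early testing concrete is to check the requirements themselves for testability before any code exists. A minimal sketch, using invented requirement records rather than any real project's backlog:

```python
# Sketch: a lightweight requirements check that can run before any code
# is written – every requirement must carry at least one testable
# acceptance criterion. The records below are invented examples.

requirements = [
    {"id": "REQ-1", "text": "User can reset their password",
     "criteria": ["reset email arrives within 60 seconds"]},
    {"id": "REQ-2", "text": "Search should be fast",
     "criteria": []},  # vague – not yet testable
]

def untestable(reqs):
    """Return the ids of requirements with no acceptance criteria."""
    return [r["id"] for r in reqs if not r["criteria"]]

print("Requirements needing acceptance criteria:", untestable(requirements))
```

Flagging "REQ-2" at this stage costs a conversation; discovering the same vagueness after release costs a rewrite.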
Principle 4: Defects cluster together
This is the idea that certain components or modules of software usually contain the most
issues, or are responsible for most operational failures. Testing, therefore, should be focused on
these areas (proportionally to the expected – and later observed – defect density of these areas). The
Pareto principle of 80:20 can be applied – 80% of defects are due to 20% of the code!
This is particularly the case with large and complex systems, but defect density can vary for a range
of reasons. Issues are not evenly distributed throughout the whole system, and the more complicated
a component, or the more third-party dependencies there are, the more likely it is that there will be
defects. Inheriting legacy code, and developing new features in certain components that are
undergoing frequent changes and are therefore more volatile, can also cause defect clustering.
Knowing this could prove very valuable for your testing; if we find one defect in a particular
module or area, there is a strong chance of discovering many more there. Identifying the more complex
components, or the areas that have more dependencies or are changing the most, can help you
focus testing effort where defects are most likely to cluster.
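A simple way to act on defect clustering is to rank modules by their share of known defects and weight testing effort accordingly. The module names and defect counts below are invented illustrative data, not real project figures:

```python
# Sketch: ranking modules by observed defect counts to focus testing
# where defects cluster. All names and numbers are invented examples.

defects_per_module = {
    "checkout": 42,
    "search": 7,
    "user-profile": 3,
    "reporting": 2,
}
total = sum(defects_per_module.values())

# Rank modules by share of total known defects, highest first.
ranked = sorted(defects_per_module.items(), key=lambda kv: kv[1], reverse=True)
for module, count in ranked:
    print(f"{module}: {count / total:.0%} of known defects")

# A Pareto-style observation: here one module out of four holds
# roughly 80% of the defects, so it earns the bulk of the test effort.
```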
Principle 5: Beware of the pesticide paradox
This is based on the theory that when you use pesticide repeatedly on crops, insects will eventually
build up an immunity, rendering it ineffective. Similarly with testing, if the same tests are run
continuously then – while they might confirm the software is working – eventually they will fail to find
new issues. It is important to keep reviewing your tests and modifying or adding to your scenarios to
help prevent the pesticide paradox from occurring – perhaps by varying your methods of testing, or by
regularly refreshing your test data and scenarios.
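One way to stop tests wearing out is to vary the inputs on every run while asserting properties rather than memorised answers. A sketch, with a hypothetical function under test and a fixed seed so that any failure is reproducible:

```python
# Sketch: countering the pesticide paradox with varied, generated test
# inputs instead of one fixed scenario. normalise_username is a
# hypothetical function under test.
import random

def normalise_username(name: str) -> str:
    """Hypothetical function under test: trim whitespace and lowercase."""
    return name.strip().lower()

rng = random.Random(1234)  # fixed seed: varied cases, reproducible failures
letters = "abcdefghijklmnopqrstuvwxyz"

for _ in range(100):
    core = "".join(rng.choice(letters) for _ in range(rng.randint(1, 12)))
    # Wrap the core in random noise: uppercase plus surrounding spaces.
    noisy = " " * rng.randint(0, 3) + core.upper() + " " * rng.randint(0, 3)
    result = normalise_username(noisy)
    # Properties that must hold for any input, not one memorised answer:
    assert result == core
    assert result == result.strip().lower()

print("100 varied cases passed")
```

Changing the seed (or removing it) gives the suite fresh inputs each run, so it keeps probing for new issues instead of re-confirming old ones.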
Principle 6: Testing is context dependent
Testing is ALL about the context. The methods and types of testing carried out can depend entirely
on the context of the software or systems – for example, an e-commerce website may require
different types of testing and approaches from an API application or a database reporting application.
Principle 7: Absence of errors is a fallacy
If your software or system is unusable (or does not fulfil users’ wishes) then it does not matter how
many defects are found and fixed – it is still unusable. So in this sense, it is irrelevant how issue- or
error-free your system is; if the usability is so poor that users are unable to navigate, and/or it does not
match business requirements, then it has failed, despite having few bugs.
It is important, therefore, to run tests that are relevant to the system’s requirements. You should also
be testing your software with users – this can be done against early prototypes (at the usability
testing phase), to gather feedback that can be used to ensure and improve usability. Remember, just
because there might be a low number of issues, it does not mean your software is shippable –
meeting client expectations and requirements is just as important as ensuring quality.
Validation strategies
Validation strategies in testing refer to the approaches and techniques used to ensure that a product or system
meets its intended requirements and specifications. Here are some common validation strategies used in
software testing:
1. **Requirements Validation**: Ensure that the product or system meets the specified requirements and
fulfils the needs of stakeholders. This may involve reviewing requirements documents, conducting
walkthroughs or inspections, and validating requirements through test cases.
2. **Functional Validation**: Validate that the product or system functions correctly according to its functional
specifications. This involves testing individual features and functionalities to ensure they behave as expected.
Techniques such as unit testing, integration testing, system testing, and user acceptance testing (UAT) are
commonly used for functional validation.
3. **Regression Validation**: Validate that recent changes or updates to the product or system have not
introduced new defects or regression issues. Regression testing involves retesting previously tested
functionalities to ensure they still work as expected after changes are made. Techniques such as automated
regression testing, selective regression testing, and impact analysis are used for regression validation.
4. **Data Validation**: Validate the correctness and integrity of data processed by the product or system. This
involves checking data input, storage, processing, and output to ensure consistency, accuracy, and compliance
with data requirements. Techniques such as boundary value analysis, equivalence partitioning, and data
validation rules are used for data validation.
5. **User Validation**: Validate that the product or system meets the needs and expectations of end-users.
This involves gathering feedback from users through usability testing, user interviews, surveys, and feedback
mechanisms to identify areas for improvement and ensure user satisfaction.
6. **Compliance Validation**: Validate that the product or system complies with relevant standards,
regulations, and industry best practices. This may involve conducting compliance audits, security assessments,
and validation against regulatory requirements and industry standards.
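As a sketch of functional validation at the unit level, the following uses Python's standard unittest module. The basket-total function and its expected behaviour are invented for illustration, not taken from any real system:

```python
# Sketch: functional validation at the unit level with the standard
# unittest module. basket_total is a hypothetical function under test,
# with an assumed spec: total = sum of price * quantity for each item.
import unittest

def basket_total(items):
    """Hypothetical function under test: items is a list of (price, qty)."""
    return sum(price * qty for price, qty in items)

class BasketTotalTest(unittest.TestCase):
    def test_empty_basket_is_zero(self):
        self.assertEqual(basket_total([]), 0)

    def test_totals_price_times_quantity(self):
        self.assertEqual(basket_total([(2.50, 2), (1.00, 3)]), 8.00)

# Run the suite programmatically rather than via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(BasketTotalTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())
```

The same shape scales up: integration and system tests differ in what sits behind the function under test, not in how the expectations are expressed.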
Overall, validation strategies in testing involve a combination of techniques and approaches to ensure that the
product or system meets quality standards, satisfies stakeholder needs, and delivers value to end-users.
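Data validation rules of the kind described above can be sketched as simple predicate checks over incoming records. The field names, rules, and sample records here are assumed examples:

```python
# Sketch: simple data-validation rules applied to incoming records.
# Field names, rules, and thresholds are invented for illustration.
import re

RULES = {
    "email": lambda v: isinstance(v, str)
        and re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
    "age": lambda v: isinstance(v, int) and 0 <= v <= 130,
    "country": lambda v: v in {"GB", "US", "DE"},
}

def validate(record: dict) -> list:
    """Return the names of fields that fail their validation rule."""
    return [field for field, rule in RULES.items()
            if not rule(record.get(field))]

good = {"email": "ada@example.com", "age": 36, "country": "GB"}
bad = {"email": "not-an-email", "age": -1, "country": "GB"}

print(validate(good))  # no failures
print(validate(bad))   # email and age both fail
```

Boundary value analysis and equivalence partitioning then choose which records to feed through such rules: values just inside and just outside each limit, plus one representative per partition.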
Types of testing
https://www.geeksforgeeks.org/types-software-testing/
Integration testing
https://www.geeksforgeeks.org/software-engineering-integration-testing/
https://www.getsoftwareservice.com/integration-test-strategies/
Smoke testing
https://www.geeksforgeeks.org/smoke-testing-software-testing/
Acceptance testing
https://www.geeksforgeeks.org/acceptance-testing-software-testing/
System testing
https://www.geeksforgeeks.org/system-testing/
Regression testing
https://www.javatpoint.com/regression-testing