Software Testing: Testing in the Software Development Life Cycle (SDLC)

Software Testing

Chapter 2
Testing in the Software Development Life Cycle (SDLC)
Contents

1 The Role of Software Testing in SDLC

2 Definitions: Verification, Validation, QA, QC

3 Test Levels

4 Types of Testing
1. The Role of Software Testing in SDLC
Software development models:
• A development life cycle for a software product involves
capturing the initial requirements from the customer,
expanding on these to provide the detail required for
code production, writing the code and testing the
product, ready for release.
• Some software development models: Waterfall, V-Model,
Prototype model, Spiral model, Agile model…
From the testing point of view, the V-model plays an especially
important role. The model shows that testing activities are as
valuable as development and programming.
V-Model
 Development and testing tasks are corresponding
activities of equal importance. The two branches
of the V symbolize this.
V-Model
 The left branch represents the development process:
• Requirements definition
capturing of user needs
• Functional specification
Definition of functions required to meet user needs
• Technical system design
technical design of functions identified in the functional
specification
• Component specification
detailed design of each module or unit to be built to meet
required functionality
• Programming: use programming language to build the
specified component (module, unit, class…)
V-Model
 The right branch defines a test level for each
specification and construction level
• Component test:
Verifies whether each software component correctly fulfills its
specification.
• Integration test
Checks if groups of components interact in the way that is specified
by the technical system design.
• System test
Verifies whether the system as a whole meets the specified
requirements.
• Acceptance test
Checks if the system meets the customer requirements, as specified
in the contract and/or if the system meets user needs and
expectations.
V-Model
V-model illustrates the testing aspects of
verification and validation (IEEE definition)
• Validation – The process of evaluating a system or
component during or at the end of the development
process to determine whether it satisfies specified
requirements.
• Verification – The process of evaluating a system or
component to determine whether the products of a
given development phase satisfy the conditions
imposed at the start of that phase.
Verification refers to only a single phase of the development
process.
V-Model
The most important characteristics and ideas behind the
V-model:
• Implementation and testing activities are separated
but are equally important (left side/right side)
• V-model illustrates the testing aspects of verification
and validation.
• Each test level tests against its corresponding development
level.
2.1 Validation and Verification
By IEEE definition:
• Validation – The process of evaluating a system or
component during or at the end of the development
process to determine whether it satisfies specified
requirements.
- Is the deliverable fit for the purpose?
- Does it provide a solution to the problem?

• Verification – The process of evaluating a system or
component to determine whether the products of a given
development phase satisfy the conditions imposed at the
start of that phase.
- Is the deliverable built according to the specification?
2.2 Quality Assurance vs Quality Control
Quality Control is defined as “a set of activities designed to
evaluate the quality of a developed or manufactured product”
(IEEE)
• We have QC inspections during development and before
deployment
• QC activities are only a part of the total range of QA activities.
Quality Assurance’s objective is to minimize the cost of
guaranteeing quality by a variety of activities performed
throughout the development and manufacturing processes /
stages.
• Activities prevent causes of errors; detect and correct them
early in the development process
• QA substantially reduces the rate of products that do not qualify
for shipment and, at the same time, reduces the costs of
guaranteeing quality in most cases
3. Test Levels
Component Testing
Integration Testing
System Testing
Acceptance Testing
Regression Testing
3.1 Component Testing
Is also known as Unit Testing
Performed by the developer
Tests individual program units, such as procedures,
functions, methods, or classes, in isolation.
Unit testing is intended to ensure that the code
written for the unit meets its specification, prior to
its integration with other units.
Unit testing is often supported by a unit test
framework
An approach to unit testing is called test-driven
development (TDD)
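For illustration, a minimal sketch of a component test using a unit test framework, assuming pytest and a hypothetical apply_discount() unit (the function and its values are invented for this example):

```python
# test_discount.py -- a minimal component (unit) test sketch using pytest.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical unit under test: price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    assert apply_discount(200.0, 10) == 180.0

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Running `pytest` discovers and executes these tests; under TDD, the tests would be written before the unit itself.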
3.1 Component Testing
Most often stubs and drivers are used to replace
the missing software and simulate the interface
between the software components in a simple
manner. A stub is called from the software
component to be tested; a driver calls a component
to be tested.
Stub and driver:
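A minimal sketch of the idea in code (the OrderProcessor component, the payment-gateway stub, and the driver are all invented for illustration):

```python
# Component under test: depends on a payment gateway that is not built yet.
class OrderProcessor:
    def __init__(self, gateway):
        self.gateway = gateway          # collaborator, replaced by a stub here

    def checkout(self, amount: float) -> str:
        return "OK" if self.gateway.charge(amount) else "DECLINED"

# Stub: called *from* the component under test; simulates the missing
# gateway's interface in a simple manner with canned answers.
class PaymentGatewayStub:
    def charge(self, amount: float) -> bool:
        return amount <= 1000.0

# Driver: calls the component under test and checks its results.
def driver():
    processor = OrderProcessor(PaymentGatewayStub())
    assert processor.checkout(500.0) == "OK"
    assert processor.checkout(5000.0) == "DECLINED"
    print("component test passed")

if __name__ == "__main__":
    driver()
```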
3.2 Integration testing
Once the units have been written, the next stage is
to put them together to create the system. This is
called integration. It involves building something
large from a number of smaller pieces.
Integration testing tests interfaces between components,
interactions with different parts of a system (such as the
operating system, file system and hardware), and interfaces
between systems.
3.2 Integration testing
Before integration testing can be planned, an
integration strategy is required. This involves making
decisions on how the system will be put together
prior to testing.
There are three commonly quoted integration
strategies:
• Big-bang integration
• Top-down integration
• Bottom-up integration
3.2 Integration testing
Big-bang integration:
• All components or modules are integrated simultaneously,
after which everything is tested as a whole.
• In this approach, individual modules are not integrated until
all the modules are ready.
3.2 Integration testing
Big-bang integration:
• Advantage: everything is finished before integration testing
starts.
• Disadvantages:
in general it is very time consuming
it is very difficult to trace the cause of failures because of
this late integration
the chance of critical failures is higher because all the
components are integrated at the same time
if a bug is found, it is very difficult to detach the modules
in order to find its root cause
there is a high probability that critical bugs will occur in
the production environment
3.2 Integration testing
Top-down integration:
• Testing takes place from top to bottom. High-level modules
are tested first, then low-level modules, and finally the
low-level modules are integrated with the high-level ones to
ensure the system works as intended.
• Stubs are used as temporary modules when a module is not
ready for integration testing.
3.2 Integration testing
Bottom-up integration:
• Testing takes place from the bottom up. The lowest-level
modules are tested first, then the high-level modules, and
finally the high-level modules are integrated with the
low-level ones to ensure the system works as intended.
• Drivers are used as temporary modules for integration testing.
3.2 Integration testing
There may be more than one level of integration
testing:
• Component integration testing focuses on the interactions
between software components and is done after component
(unit) testing. This type of integration testing is usually carried
out by developers.
• System integration testing focuses on the interactions
between different systems and may be done after system
testing each individual system. For example, a trading system
in an investment bank will interact with the stock exchange to
get the latest prices for its stocks and shares on the
international market. This type of integration testing is usually
carried out by testers.
3.3. System testing
The process of testing an integrated system to verify that
it meets specified requirements (IEEE definition)
It may include tests based on risks and/or requirement
specifications, business process, use cases, or other high
level descriptions of system behavior, interactions with
the operating systems, and system resources.
System testing is most often the final test to verify that
the system to be delivered meets the specification and its
purpose.
System testing is carried out by specialist testers or
independent testers.
3.3. System testing
System testing should investigate both the functional and
non-functional requirements of the system.
• Non-functional tests include performance and reliability. Testers may
also need to deal with incomplete or undocumented requirements.
• Testing of functional requirements starts by using the most appropriate
specification-based (black-box) techniques for the aspect of the
system to be tested.
System testing requires a controlled test environment
with regard to, amongst other things, control of the
software versions, testware and the test data
• A system test is executed by the development organization in a
(properly controlled) environment.
• The test environment should correspond to the final target or
production environment as much as possible in order to minimize the
risk of environment-specific failures not being found by testing.
3.4 Acceptance testing
After the defects found during system testing have been corrected,
the system is delivered to the user or customer
for Acceptance Testing or User Acceptance Testing (UAT).
Acceptance testing is basically done by the user or
customer although other stakeholders may be involved as
well.
The purpose of acceptance testing is to provide the end
users with confidence that the system will function
according to their expectations.
How much acceptance testing?
• It depends on the product risk.
3.4 Acceptance testing
Acceptance testing may occur at more than just a single
level:
• A Commercial Off The Shelf (COTS) software product may be
acceptance tested when it is installed or integrated.
• Acceptance testing of the usability of a component may be done
during component testing.
• Acceptance testing of a new functional enhancement may come
before system testing
There are four typical forms of acceptance testing:
• Contract acceptance testing and compliance acceptance testing
• User acceptance testing
• Operational acceptance testing
• Field testing (Alpha and beta testing)
3.4 Acceptance testing
Contract acceptance testing and compliance
acceptance testing
• Contract acceptance testing – sometimes the criteria for
accepting a system are documented in a contract. Testing is
then conducted to check that these criteria have been met,
before the system is accepted
• Regulation acceptance testing – in some industries, systems
must meet governmental, legal or safety standards. Examples
of these are the defence, banking and pharmaceutical
industries
3.4. Acceptance testing
User acceptance testing:
• Testing by user representatives to check that the system meets
their business needs. This can include factory acceptance
testing, where the system is tested by the users before moving
it to their own site.
• Site acceptance testing could then be performed by the users
at their own site
3.4. Acceptance testing
 Operational acceptance testing
• Often called operational readiness testing. This
involves checking that the processes and procedures
are in place to allow the system to be used and
maintained.
• This can include checking:
 back-up facilities;
 procedures for disaster recovery;
 training for end users;
 maintenance procedures;
 data load and migration tasks;
 security procedures.
3.4 Acceptance testing
Field testing (Alpha and beta testing)
• Alpha testing takes place at the developer’s site – the
operational system is tested while still at the
developer’s site by internal staff, before release to
external customers.
Note that testing here is still independent of the
development team.
• Beta testing takes place at the customer’s site – the
operational system is tested by a group of customers,
who use the product at their own locations and
provide feedback, before the system is released.
 This is often called ‘field testing’.
4. Types of testing
The following types of testing can be distinguished:
• Functional testing.
• Non-functional testing.
• Structural testing.
• Testing related to changes.
4.1 Functional testing
Functional testing is a type of software testing that
validates the software system against the functional
requirements.
• The purpose of functional tests is to test each function of
the software application by providing appropriate input and
verifying the output against the functional requirements.
• It verifies that each function of the software application
behaves as specified in the requirements document, checking
whether the actual output matches the expected output.
• Functional testing mainly involves black-box testing and is
not concerned with the source code of the application
4.1 Functional testing
Functional testing is also called specification-based
testing: testing against a specification.
Functional testing can be done in the following methods:
• Requirements-based testing
uses a specification of the functional requirements for the system as the
basis for designing tests (Test cases are derived from specifications)
prioritize the requirements based on risk criteria and use this to
prioritize the tests; this ensures that the most important and most
critical tests are included in the testing effort.
• Business-process-based testing
uses knowledge of the business processes (test cases are derived from
business processes)
Business processes describe the scenarios involved in the day-to-day
business use of the system
Use cases are a very useful basis for test cases from a business
perspective.
4.1 Functional testing
According to ISO 9126 Functionality consists of:
Suitability: the capability to provide an appropriate set
of functions for specified tasks and user objectives.
Accuracy: the capability to provide the right or agreed
results or effects with the needed degree of precision.
Security: the capability to prevent unauthorized
access, whether accidental or deliberate, to programs
and data
Interoperability (compatibility): capability of the
software product to interact with one or more specified
components or systems.
Compliance: the capability of the software product to
adhere to standards, conventions or regulations in laws
and similar prescriptions.
4.1 Functional testing example
Task: Test the Save feature of the Notepad application.

Functional Testing Procedure: test different flows of the Save
functionality (save a new file, save an updated file, test
Save As, save to a protected folder, save with an incorrect
name, overwrite an existing document, cancel saving, etc.).

Defect: While trying to save a file using the Save As command,
only the default file name can be used. The user cannot change
the filename because the edit box is disabled.
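As a hedged sketch of how such functional checks might be automated, the tests below exercise a hypothetical save_file() function at the API level (a real Notepad test would drive the GUI; the function and file names are invented):

```python
# Minimal functional-test sketch (pytest style, using its tmp_path fixture).
import pathlib

def save_file(path: str, content: str) -> None:
    """Hypothetical 'Save' implementation under test."""
    pathlib.Path(path).write_text(content, encoding="utf-8")

def test_save_new_file(tmp_path):
    target = tmp_path / "note.txt"
    save_file(str(target), "hello")     # provide input per the specification
    assert target.read_text(encoding="utf-8") == "hello"   # expected output

def test_overwrite_existing_document(tmp_path):
    target = tmp_path / "note.txt"
    save_file(str(target), "v1")
    save_file(str(target), "v2")        # the 'overwrite existing file' flow
    assert target.read_text(encoding="utf-8") == "v2"
```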
4.2 Non-Functional testing
Non-functional testing is testing of the attributes of a
component or system that do not relate to the functionality of
a software application, such as performance, usability,
reliability and security.
According to ISO 9126, Non-functional characteristics
are:
• Reliability
• Usability
• Efficiency
• Maintainability
• Portability
4.2 Non-Functional testing
Reliability: maturity (robustness), fault-tolerance,
recoverability and compliance.
Usability: understandability, learnability,
operability, attractiveness and compliance.
Efficiency: performance, resource utilization and
compliance.
Maintainability: analyzability, changeability,
stability, testability and compliance;
Portability: adaptability, installability, co-existence,
replaceability and compliance.
4.2 Non-Functional testing
Some important facts about non-functional testing:
• It increases the ease of use, efficiency, maintainability
and portability of the product.
• It helps reduce production risk and the cost associated
with non-functional aspects of the product.
• It optimizes the way the product is installed, configured,
executed, managed and monitored.
• It improves knowledge of the behavior of the product and
its uses, ensuring customer satisfaction with a smoothly
performing application.
4.2 Non-Functional testing
Types of Non-Functional testing:
1. Performance testing
Load testing
Stress testing
Endurance testing
Volume testing
Spike Testing
2. Document Testing
3. Installation Testing
4. Reliability Testing
5. Security Testing
4.2.1 Performance testing
Testing with the intent of determining how
efficiently a product handles a variety of
events.
Tester needs to check product’s speed,
scalability and stability
• Speed: verify whether the application responds quickly.
• Scalability: verify that the application can handle the
maximum expected user load.
• Stability: verify that the application is stable under
varying load.
4.2.1 Performance testing
Testing with the intent of determining how
efficiently a product handles a variety of
events.
Purposes:
 demonstrate that the system meets performance criteria;
 compare two systems to find which performs better;
 measure which parts of the system or workload cause the
system to perform badly.
4.2.1 Performance testing: Example
Task: The server should respond in less than 2 seconds when up
to 100 users access it concurrently, and in less than 5 seconds
when up to 300 users access it concurrently.

Performance Testing Procedure: emulate different numbers of
requests to the server in the range (0; 300); for instance,
measure the response time for 10, 50, 100, 240 and 290
concurrent users.

Defect: starting from 200 concurrent requests, the response
time is 10-15 seconds.
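A minimal sketch of how such a measurement might be scripted using only Python's standard library (the URL and user counts are placeholders, not a real endpoint):

```python
# Sketch: measure worst-case response time under N concurrent requests.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # placeholder target under test

def fetch(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

def measure(concurrent_users: int) -> float:
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        return max(pool.map(fetch, range(concurrent_users)))

for users in (10, 50, 100, 240, 290):
    print(f"{users:>3} users -> worst response {measure(users):.2f}s")
```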
4.2.1.1 Load testing
Load testing is a type of testing which involves
evaluating the performance of the system under the
expected workload, e.g. numbers of parallel users
and/or numbers of transactions, to determine what
load can be handled by the component or system.
Purposes:
 evaluation of the performance and efficiency of the software
 performance optimization (code optimization, server
configuration)
 selection of appropriate hardware and software platforms for
the application
4.2.1.1 Load testing: Example
Task: The server should allow up to 500 concurrent connections.

Load Testing Procedure: emulate different numbers of requests
to the server close to the peak value; for instance, measure
the time for 400, 450 and 500 concurrent users.

Defect: The server returns "Request Time Out" starting from 490
concurrent requests.
4.2.1.2 Stress testing
Stress testing is a type of performance testing conducted
to evaluate a system or component at or beyond the limits
of its anticipated or specified workloads, or with reduced
availability of resources such as access to memory or
servers.
Purposes:
 the general study of the behavior of the system under
extreme loads
 examination of the handling of errors and exceptions under
extreme load
 examination of certain areas of the system or its components
under disproportionate load
 testing the system capacity
4.2.1.2 Stress testing: Example
Task: The server should allow up to 500 concurrent connections.

Stress Testing Procedure: emulate numbers of requests to the
server greater than the peak value; for instance, check system
behavior for 500, 510 and 550 concurrent users.

Defect: The server crashes starting from 500 concurrent
requests and user data is lost.
Data should not be lost even in stress situations; if possible,
a system crash should also be avoided.
4.2.1.3 Endurance testing
Endurance testing checks system performance under a specified
load over an extended or longer period of time.
It is also known as Soak testing.
Purposes:
 help find memory leaks in the system (a memory leak is the
failure of a program to release discarded memory, causing
impaired performance or failure);
 help check the response time of the system over a longer
period.
4.2.1.3 Endurance testing: Example
In software testing, a system may behave exactly as expected
when tested for 1 hour, but when the same system is tested for
3 hours, problems such as memory leaks cause the system to fail
or behave unpredictably.
For an application like income tax filing, the application is
used continuously for a very long duration by different users,
so memory management is critical. For applications like these,
we can run the test for 24 hours to 2 days and monitor memory
utilization during the whole test execution
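A minimal sketch of how memory growth might be monitored during such a run, using Python's standard tracemalloc module (the workload and the 60-second duration are placeholders; a real soak test runs for hours or days):

```python
# Sketch: watch for memory growth during a long-running soak test.
import time
import tracemalloc

def do_one_transaction():
    # Placeholder for the real workload (e.g. one tax-filing request).
    return [x * x for x in range(1000)]

tracemalloc.start()
baseline, _ = tracemalloc.get_traced_memory()

end = time.time() + 60                 # placeholder duration
while time.time() < end:
    do_one_transaction()

current, peak = tracemalloc.get_traced_memory()
print(f"baseline={baseline}B current={current}B peak={peak}B")
# A 'current' value that grows steadily between checkpoints suggests a leak.
```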
4.2.1.4 Volume testing
Volume testing refers to testing a software
application or the product with a certain amount of
data.
E.g., if we want to volume test our application
with a specific database size, we need to expand
our database to that size and then test the
application’s performance on it.
Purposes:
 to determine system performance
with increasing volumes of data in the
database.
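A minimal sketch of the idea using Python's built-in sqlite3 module (the in-memory database, table, row count and query are placeholders for a production-like setup):

```python
# Sketch: expand a database to a target volume, then time a query against it.
import sqlite3
import time

conn = sqlite3.connect(":memory:")     # placeholder for a production-like DB
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

# Volume step: load a large, known quantity of data.
conn.executemany(
    "INSERT INTO orders (amount) VALUES (?)",
    ((float(i % 100),) for i in range(1_000_000)),
)
conn.commit()

# Measurement step: check query performance at this data volume.
start = time.perf_counter()
total, = conn.execute("SELECT SUM(amount) FROM orders").fetchone()
print(f"sum={total}, query took {time.perf_counter() - start:.3f}s at 1M rows")
```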
4.2.1.5 Spike testing
Spike testing refers to tests conducted by subjecting the
system to a short burst of concurrent load. This test might be
essential when conducting performance tests on an auction site,
where a sudden load is expected.
• Example – For an e-commerce application running an advertisement
campaign, the number of users can increase suddenly in a very short
duration. Spike testing is done to analyze these types of scenarios.
Purposes:
 to determine system behavior under sudden, short bursts of
concurrent load.
4.2.2 Document testing
Document testing means verifying the technical accuracy and
readability of the user manuals, tutorials and the online help.
Document testing ranges from spelling and grammar checking
using available tools to manually reviewing the documentation
to remove any ambiguity or inconsistency.
Documentation testing can start at the very beginning of the
software process and hence save large amounts of money, since
the earlier a defect is found, the less it costs to fix.
4.2.3 Installation testing
 Installation testing is performed to check that the software
has been correctly installed, with all its inherent features,
and that the product works as expected.
 Also known as implementation testing.
It is done in the last phase of testing, before the end user
has his or her first interaction with the product.
Its purpose is to validate the quality and correctness of the
installation process and to ensure that users receive an
optimal user experience.
4.2.4 Reliability testing
Reliability testing is performed to ensure that the software is
reliable: that it satisfies the purpose for which it was made,
for a specified amount of time in a given environment, and that
it is capable of rendering fault-free operation.
Purposes:
 To find the faults present in the system and the reasons
behind them.
 To ensure the quality of the system.
4.2.5 Security testing
Security testing is a testing technique to determine whether an
information system protects data and maintains functionality as
intended.
Security testing is involved throughout the SDLC.
4.2.5 Security testing: Example
 A password should be stored in encrypted format.
 The application or system should not allow invalid users.
 Check cookies and session time for the application.
This is an example of a very basic security test which anyone
can perform on a web site/application:
• Log into the web application.
• Log out of the web application.
• Click the BACK button of the browser (check if you are asked to log in
again or if you are taken back into the logged-in application).
Most types of security testing involve complex steps and
out-of-the-box thinking but, sometimes, simple tests like the
one above help expose the most severe security risks.
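A hedged sketch of automating that back-button idea as a session-invalidation check (assumes the third-party requests library; the URLs, form fields and credentials are placeholders for a real application):

```python
# Sketch: after logout, the old session cookie must no longer grant access.
import requests

BASE = "http://localhost:8080"         # placeholder application under test

session = requests.Session()
session.post(f"{BASE}/login", data={"user": "alice", "password": "secret"})

# Sanity check: the protected page is reachable while logged in.
assert session.get(f"{BASE}/account").status_code == 200

session.post(f"{BASE}/logout")

# The crux: replaying the old session cookie (the browser BACK button
# scenario) must NOT return the protected page.
resp = session.get(f"{BASE}/account", allow_redirects=False)
assert resp.status_code in (301, 302, 401, 403), "session not invalidated!"
print("logout correctly invalidates the session")
```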
4.2.5 Security testing
A few security checkpoints to test within a web application:
Test role and privilege manipulation to access resources.
Test for cookie and parameter tampering using web spider tools.
Test for HTTP request tampering and check whether it can be
used to gain illegal access to reserved resources.
Perform code injection testing to identify input validation
errors.
Send a large number of requests that perform database
operations and observe any slowdown or new error messages.
Perform manual source code analysis and submit inputs of
varying lengths to the application.
4.3 Structural testing
Structural testing is the testing of the structure of the
system or component.
Structural testing is often referred to as 'white-box' or
'glass-box' testing, because in structural testing we are
interested in what is happening 'inside the
system/application'.
In structural testing, testers are required to have knowledge
of the internal implementation of the code: how the software is
implemented and how it works.
4.3 Structural testing
During structural testing the tester is concentrating on how
the software does it.
Structural testing can be used at all levels of testing.
Developers use structural testing in component testing and
component integration testing, especially where there is
good tool support for code coverage.
 Coverage measurement tools assess the percentage of
executable elements (e.g. statements or decision outcomes)
that have been exercised (i.e. covered) by a test suite.
The techniques used for structural testing are structure-
based techniques, also referred to as white-box techniques.
Control flow models are often used to support structural
testing.
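As a small illustration of decision (branch) coverage, assuming pytest plus the coverage.py tool (the classify() function is invented), the point is that every decision outcome needs a test that exercises it:

```python
# structure.py -- a function with one decision, i.e. two branch outcomes.
def classify(age: int) -> str:
    if age >= 18:          # both the True and the False outcome of this
        return "adult"     # decision must be exercised for full coverage
    return "minor"

# test_structure.py
from structure import classify

def test_adult_branch():
    assert classify(30) == "adult"

def test_minor_branch():
    # Without this test, the False outcome would remain uncovered even
    # though every *statement* on the True path was executed.
    assert classify(5) == "minor"
```

Running `coverage run --branch -m pytest` followed by `coverage report` would then show the percentage of statements and branches exercised by the suite.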
4.4 Testing related to changes
When a defect has been corrected, two types
of tests should be executed:
• Confirmation testing (retesting)
• Regression testing
4.4 Testing related to changes
Re-testing (Confirmation testing) is testing a defect
again – after it has been fixed.
When doing retesting, it is important to ensure that
the test is executed in exactly the same way as it was
the first time, using the same inputs, data and
environment.
The fix may have introduced or uncovered a different
defect elsewhere in the software. The way to detect
these 'unexpected side-effects' of fixes is to do
regression testing.
4.4 Testing related to changes
Regression testing is a testing of a previously
tested program following modification to
ensure that defects have not been introduced
or uncovered in unchanged areas of the
software, as a result of the changes made. It is
performed when the software or its
environment is changed.
Purpose:
 verifies that the system still meets its requirements.
Regression testing may be any type of software testing
(functional, GUI, etc.).
4.4 Testing related to changes
 Regression testing = find new bugs. Retesting = confirm bugs
were fixed.
Regression testing locates bugs after updates or changes to the
code or UI.
Retesting makes sure that a bug that was found and addressed by
a developer was actually fixed.
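A hedged sketch of the difference as it appears in a pytest suite (the normalize_name() function and the bug number are invented):

```python
# Confirmation test (retest): reproduces the exact scenario from a fixed
# bug report, with the same inputs, to confirm the fix.
def normalize_name(name: str) -> str:
    return " ".join(name.split()).title()

def test_retest_bug_1234_trailing_spaces():
    # Hypothetical bug #1234: trailing spaces produced a wrong result.
    assert normalize_name("ada lovelace  ") == "Ada Lovelace"

# Regression tests: previously passing cases kept in the suite and re-run
# after every change, to catch unintended side effects of the fix.
def test_regression_single_word():
    assert normalize_name("ada") == "Ada"

def test_regression_already_normalized():
    assert normalize_name("Ada Lovelace") == "Ada Lovelace"
```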
