
Hindustan Institute of Management and Computer Studies

Department of Computer Application


MCA 3 Sem

KCA302: Software Engineering


Unit-4

Software Testing: Testing Objectives, Unit Testing, Integration Testing,
Acceptance Testing, Regression Testing, Testing for Functionality and Testing for
Performance, Top-Down and Bottom-Up Testing Strategies: Test Drivers and
Test Stubs, Structural Testing (White Box Testing), Functional Testing (Black
Box Testing), Test Data Suite Preparation, Alpha and Beta Testing of Products.
Static Testing Strategies: Formal Technical Reviews (Peer Reviews), Walk
Through, Code Inspection, Compliance with Design and Coding Standards.

Software Testing
Software testing can be stated as the process of verifying and validating whether a software
or application is bug-free, meets the technical requirements as guided by its design and
development, and meets the user requirements effectively and efficiently by handling all
the exceptional and boundary cases. The process of software testing aims not only at
finding faults in the existing software but also at finding measures to improve the software
in terms of efficiency, accuracy, and usability.
Software Testing is a method to assess the functionality of the software program. The
purpose of software testing is to identify the errors, faults, or missing requirements in
contrast to actual requirements. It mainly aims at measuring the specification, functionality,
and performance of a software program or application.

Software testing can be divided into two steps:


1. Verification: It refers to the set of tasks that ensure that the software correctly
implements a specific function. It means “Are we building the product right?”.
2. Validation: It refers to a different set of tasks that ensure that the software that has
been built is traceable to customer requirements. It means “Are we building the right
product?”.

Importance of Software Testing:


 Defects can be identified early: Software testing is important because if there are any
bugs, they can be identified early and fixed before the delivery of the software.
 Improves quality of software: Software testing uncovers the defects in the software,
and fixing them improves the quality of the software.
 Increased customer satisfaction: Software testing ensures reliability, security, and
high performance, which results in saved time, saved costs, and customer satisfaction.
 Helps with scalability: Non-functional testing helps to identify scalability issues and
the point at which an application might stop working.
 Saves time and money: After the application is launched, it is very difficult to trace
and resolve issues, and doing so incurs more cost and time. Thus, it is better to conduct
software testing at regular intervals during software development.
 Security: Security testing is a type of software testing that is focused on testing the
application for security vulnerabilities from internal or external sources.

Disadvantages of Software Testing


 Time-consuming and adds to the project cost.
 It can slow down the development process.
 Not all defects can be found.
 It can be difficult to fully test complex systems.
 There is potential for human error during the testing process.

Principles of Testing
 All the tests should meet the customer’s requirements.
 To make our software testing effective, it should be performed by a third party.
 Exhaustive testing is not possible; we need an optimal amount of testing based on
the risk assessment of the application.
 All the tests to be conducted should be planned before implementation.
 Testing follows the Pareto rule (80/20 rule), which states that 80% of errors come
from 20% of program components.
 Start testing with small parts and extend it to larger parts.

Need for Software Testing


Software bugs can cause potential monetary and human loss. There are many examples in
history that clearly depict that, without a testing phase in software development, a lot of
damage was incurred.

Goals (Objectives) of Software Testing


The main goal of software testing is to find bugs as early as possible, fix them, and make
sure that the software is bug-free.

Important Goals of Software Testing:


 Detecting bugs as early as feasible in any situation.
 Avoiding errors in the final versions of a project and product.
 Inspecting whether the customer requirements criteria have been satisfied.
 Last but not least, gauging the quality level of the project and product.

The goals of software testing may be classified into three major categories as follows:
1. Immediate Goals
2. Long-term Goals
3. Post-Implementation Goals

1. Immediate Goals: These objectives are the direct outcomes of testing. These
objectives may be set at any time during the SDLC process. Some of these are:
 Bug Discovery: The immediate goal of software testing is to find errors at any
stage of software development. The primary purpose is to detect flaws at any step of
the development process: the higher the number of issues detected at an early stage,
the higher the software testing success rate.
 Bug Prevention: This is the consequent action of bug discovery. From the
behavior and analysis of the issues detected, everyone in the software development
team learns how to code better, ensuring that bugs are not repeated in subsequent
phases or future projects.
2. Long-Term Goals: These objectives have an impact on product quality in the long run
after one cycle of the SDLC is completed. Some of these are:
 Quality: This goal enhances the quality of the software product. Because software
is also a product, the user’s priority is its quality. Superior quality is ensured by
thorough testing. Correctness, integrity, efficiency, and reliability are all aspects that
influence quality.
 Customer Satisfaction: This goal verifies the customer’s satisfaction with a
developed software product. The primary purpose of software testing, from the
user’s standpoint, is customer satisfaction. Testing should be extensive and thorough
if we want the client and customer to be happy with the software product.
 Reliability: It is a matter of confidence that the software will not fail. In short,
reliability means gaining the confidence of the customers by providing them with a
quality product.
 Risk Management: Risk is the probability of occurrence of uncertain events in the
organization and the potential loss that could result in negative consequences. Risk
management must be done to reduce the failure of the product and to manage risk in
different situations.
3. Post-Implementation Goals: After the product is released, these objectives become
critical. Some of these are:
 Reduce Maintenance Cost: Post-release errors are costlier to fix and more difficult to
identify. Because effective software does not wear out, the maintenance cost of a
software product is not like the maintenance cost of a physical product: the failure of
the software due to faults is its only maintenance expense. As a result, if testing
is done thoroughly and effectively, the risk of failure is lowered, and maintenance
costs are reduced.
 Improved Software Testing Process: These goals improve the testing process for
future use or software projects. These goals are known as post-implementation
goals. A project’s testing procedure may not be completely successful, and there
may be room for improvement. As a result, the bug history and post-implementation
results can be evaluated to identify stumbling blocks in the current testing process
that can be avoided in future projects.
Different Types of Software Testing

Software Testing can be broadly classified into 3 types:


1. Functional Testing: Functional testing is a type of software testing that validates the
software systems against the functional requirements. It is performed to check whether
the application is working as per the software’s functional requirements or not. Various
types of functional testing are Unit testing, Integration testing, System testing, Smoke
testing, and so on.
2. Non-functional Testing: Non-functional testing is a type of software testing that
checks the application for non-functional requirements like performance, scalability,
portability, stress, etc. Various types of non-functional testing are Performance testing,
Stress testing, Usability testing, and so on.
3. Maintenance Testing: Maintenance testing is the process of changing, modifying, and
updating the software to keep up with the customer’s needs. It involves regression
testing that verifies that recent changes to the code have not adversely affected other
previously working parts of the software.

Apart from the above classification, software testing can be further divided into 2 more
ways of testing:
1. Manual Testing: Manual testing includes testing software manually, i.e., without
using any automation tool or script. In this type, the tester takes over the role of an end-
user and tests the software to identify any unexpected behavior or bug. There are
different stages for manual testing such as unit testing, integration testing, system
testing, and user acceptance testing. Testers use test plans, test cases, or test scenarios
to test software to ensure the completeness of testing. Manual testing also includes
exploratory testing, as testers explore the software to identify errors in it.
2. Automation Testing: Automation testing, which is also known as Test Automation, is
when the tester writes scripts and uses another software to test the product. This
process involves the automation of a manual process. Automation Testing is used to re-
run the test scenarios quickly and repeatedly, that were performed manually in manual
testing.
Apart from regression testing, automation testing is also used to test the application
from a load, performance, and stress point of view. It increases the test coverage,
improves accuracy, and saves time and money when compared to manual testing.
Different Types of Manual Testing Techniques
1. Black Box Testing: A technique in which the tester does not have access to the
source code of the software; testing is conducted at the software interface without any
concern for the internal logical structure of the software.
2. White-Box Testing: A technique in which the tester is aware of the internal workings
of the product and has access to its source code; testing is conducted by making sure
that all internal operations are performed according to the specifications.
3. Grey Box Testing: A technique in which the testers should have knowledge of the
implementation; however, they need not be experts.
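To make the white-box idea concrete, here is a minimal sketch in Python (the `classify_triangle` function and its inputs are hypothetical, written only for illustration): the test cases are chosen by reading the source code so that every internal branch executes at least once.

```python
def classify_triangle(a, b, c):
    """Classifies a triangle by its side lengths (hypothetical unit under test)."""
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# White-box test cases: one per branch in the source, so every path is exercised.
assert classify_triangle(0, 1, 1) == "invalid"       # first branch
assert classify_triangle(2, 2, 2) == "equilateral"   # second branch
assert classify_triangle(2, 2, 3) == "isosceles"     # third branch
assert classify_triangle(2, 3, 4) == "scalene"       # fall-through path
print("all branches covered")
```

A black-box tester, by contrast, would pick these inputs from the specification alone, without knowing how many branches the code contains.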

S No. | Black Box Testing | White Box Testing
1 | Internal workings of an application are not required. | Knowledge of the internal workings is a must.
2 | Also known as closed box/data-driven testing. | Also known as clear box/structural testing.
3 | Done by end users, testers, and developers. | Normally done by testers and developers.
4 | Testing can only be done by a trial and error method. | Data domains and internal boundaries can be better tested.

Black Box testing is carried out to test the functionality of the program; it is also called
‘Behavioral’ testing. The tester in this case has a set of input values and the respective desired
results. On providing an input, if the output matches the desired result, the program is tested
‘ok’, and problematic otherwise. In this testing method, the design and structure of the
code are not known to the tester, and testing engineers and end users conduct this test on the
software. Black-box testing techniques are equivalence class partitioning, boundary value
analysis and cause-effect testing.

Equivalence Class Partitioning


Input values to a program are partitioned into equivalence classes. Partitioning is done such
that the program behaves in a similar way for every input value belonging to an equivalence
class. If one element of a class passes the test, it is assumed that the whole class passes.
Testing the code with just one representative value from each equivalence class is as good as
testing with any other value from that class. To determine the equivalence classes, examine
the input data; a few general guidelines can be used:
1. If the input data is specified by a range of values (for example, numbers between 1 and
5000), one valid and two invalid equivalence classes are defined.
2. If the input is an enumerated set of values (for example {a, b, c}), one equivalence class
for valid input values and another equivalence class for invalid input values should be
defined.
Example: A program reads an input value in the range of 1 to 5000 and computes the square
root of the input number. There are three equivalence classes: the set of negative integers, the
set of integers in the range 1 to 5000, and integers larger than 5000. The test suite must
include a representative from each of the three equivalence classes; a possible test suite is
{-5, 500, 6000}.
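A minimal sketch of this example in Python (the `square_root` function shown is hypothetical, written only to make the three classes testable): one representative value per equivalence class is enough.

```python
import math

def square_root(n):
    """Hypothetical program under test: computes the square root of an
    input in the valid range 1..5000, rejecting the two invalid classes."""
    if not 1 <= n <= 5000:
        raise ValueError("input must be between 1 and 5000")
    return math.sqrt(n)

# Test suite {-5, 500, 6000}: one representative per equivalence class.
# Invalid class 1: negative integers.
try:
    square_root(-5)
    assert False, "expected ValueError for -5"
except ValueError:
    pass
# Valid class: integers in 1..5000.
assert abs(square_root(500) - 22.360679) < 1e-5
# Invalid class 2: integers larger than 5000.
try:
    square_root(6000)
    assert False, "expected ValueError for 6000"
except ValueError:
    pass
print("one representative per class tested")
```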

Boundary Value Analysis


Some typical programming errors occur at the boundaries of equivalence classes and might
be purely due to psychological factors: programmers often fail to see the special processing
required at the boundaries of equivalence classes, for example improperly using < instead
of <=. Boundary value analysis selects test cases at the boundaries of the different
equivalence classes. For example, for a function that computes the square root of an integer
in the range 1 to 5000, the test cases must include the values {0, 1, 5000, 5001}.
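The boundary values above can be checked with a short sketch in Python (the `in_valid_range` function is a hypothetical stand-in for the range check inside such a program):

```python
def in_valid_range(n):
    """Hypothetical range check for the valid input class 1..5000.
    A common defect here is writing < where <= is needed; testing the
    boundary values on both sides exposes exactly that mistake."""
    return 1 <= n <= 5000

# Boundary value analysis: just outside and just inside each boundary.
boundary_cases = {0: False, 1: True, 5000: True, 5001: False}
for value, expected in boundary_cases.items():
    assert in_valid_range(value) == expected, value
print("boundary cases pass")
```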

Different Levels of Functional Testing


1. Unit Testing: Unit testing is a level of the software testing process where individual
units/components of a software/system are tested. The purpose is to validate that each
unit of the software performs as designed.
2. Integration Testing: Integration testing is a level of the software testing process where
individual units are combined and tested as a group. The purpose of this level of testing
is to expose faults in the interaction between integrated units.
3. System Testing: System testing is a level of the software testing process where a
complete, integrated system/software is tested. It is carried out on the whole system in
the context of either system requirement specifications or functional requirement
specifications or in the context of both. The software is tested such that it works fine
for the different operating systems. In this, we have security testing, recovery testing,
stress testing, and performance testing. This includes functional as well as non-
functional testing.
4. Acceptance Testing: Acceptance testing is a level of the software testing process
where a system is tested for acceptability. The purpose of this test is to evaluate the
system’s compliance with the business requirements and assess whether it is acceptable
for delivery.

Different Levels of Non-Functional Testing


1. Performance Testing: It is designed to test the run-time performance of software
within the context of an integrated system. It is used to test the speed and effectiveness
of the program. It is also called load testing; in it, we check the performance of the
system under a given load. Example: checking the number of processor cycles used.
2. Usability Testing: Usability testing in software testing is a type of testing, that is done
from an end user’s perspective to determine if the system is easily usable. Usability
testing is generally the practice of testing how easy a design is to use on a group of
representative users. Usability testing is a method of testing the functionality of a
website, app, or other digital product by observing real users as they attempt to
complete tasks on it. The primary goals of usability testing are – discovering problems
(hidden issues) and opportunities, comparing benchmarks, and comparison against other
websites. The parameters tested during usability testing are efficiency, effectiveness,
and satisfaction.
3. Compatibility Testing: Compatibility testing is software testing that comes under
the non-functional testing category; it is performed on an application to check its
compatibility (running capability) on different platforms/environments. This testing is
done only when the application becomes stable. Simply put, this compatibility test
aims to check how the developed software application functions on various software
and hardware platforms, networks, browsers, etc. Compatibility testing is very
important from a product production and implementation point of view, as it is
performed to avoid future issues regarding compatibility.

Unit Testing
Unit testing is a method of testing individual units or components of a software application.
It is typically done by developers and is used to ensure that the individual units of the
software are working as intended. Unit tests are usually automated and are designed to test
specific parts of the code, such as a particular function or method. Unit testing is done at
the lowest level of the software development process, where individual units of code are
tested in isolation.

Advantages of Unit Testing:


 It helps to identify bugs early in the development process before they become more
difficult and expensive to fix.
 It helps to ensure that changes to the code do not introduce new bugs.
 It makes the code more modular and easier to understand and maintain.
 It helps to improve the overall quality and reliability of the software.

It’s important to keep in mind that Unit Testing is only one aspect of software testing and it
should be used in combination with other types of testing such as integration testing,
functional testing, and acceptance testing to ensure that the software meets the needs of its
users.
Examples:
a) Checking whether a loop, method, or function in a program is working fine.
b) Misunderstood or incorrect arithmetic precedence.
c) Incorrect initialization.
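For illustration, a minimal unit test sketch in Python using the standard `unittest` module (the `add_interest` function under test is hypothetical): each test checks one behavior of the unit in isolation.

```python
import unittest

def add_interest(balance, rate):
    """Hypothetical unit under test: applies a yearly interest rate."""
    if balance < 0 or rate < 0:
        raise ValueError("balance and rate must be non-negative")
    return round(balance * (1 + rate), 2)

class TestAddInterest(unittest.TestCase):
    def test_normal_case(self):
        self.assertEqual(add_interest(100.0, 0.05), 105.0)

    def test_zero_rate(self):
        self.assertEqual(add_interest(100.0, 0.0), 100.0)

    def test_negative_balance_rejected(self):
        with self.assertRaises(ValueError):
            add_interest(-1.0, 0.05)

# Run the test case programmatically so the script keeps executing.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAddInterest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Such tests are usually automated and re-run on every code change, which is what makes them useful for catching regressions early.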

Integration Testing
Integration testing is a method of testing how different units or components of a software
application interact with each other. It is used to identify and resolve any issues that may
arise when different units of the software are combined. Integration testing is typically done
after unit testing and is used to verify that the different units of the software work together
as intended.
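A minimal sketch of an integration test in Python (both units, `parse_order` and `price_order`, are hypothetical): each unit is assumed to have passed its own unit tests, and the integration test checks that the data flowing between them fits together.

```python
def parse_order(line):
    """Unit 1 (hypothetical): parses 'item,quantity' into a (str, int) tuple."""
    item, qty = line.split(",")
    return item.strip(), int(qty)

def price_order(order, price_table):
    """Unit 2 (hypothetical): looks up the unit price and returns the line total."""
    item, qty = order
    return price_table[item] * qty

# Integration test: the output of parse_order is fed directly into
# price_order, exposing any mismatch in the interface between the units.
prices = {"pen": 10, "book": 50}
order = parse_order("pen, 3")
assert price_order(order, prices) == 30
print("integration of parse_order -> price_order works")
```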

Advantages of Integration Testing


1. It helps to identify and resolve issues that may arise when different units of the
software are combined.
2. It helps to ensure that the different units of the software work together as intended.
3. It helps to improve the overall reliability and stability of the software.
4. It’s important to keep in mind that Integration testing is essential for complex systems
where different components are integrated together.
5. As with unit testing, integration testing is only one aspect of software testing and it
should be used in combination with other types of testing such as unit testing,
functional testing, and acceptance testing to ensure that the software meets the needs of
its users.

Types of Integration Testing


o Incremental integration testing
o Non-incremental integration testing

Incremental Approach
In the Incremental Approach, modules are added in ascending order one by one or according
to need. The selected modules must be logically related. Generally, two or more than two
modules are added and tested to determine the correctness of functions. The process
continues until the successful testing of all the modules. In this type of testing, there is a
strong relationship between the dependent modules. Suppose we take two or more modules
and verify that the data flow between them is working fine. If it is, then add more modules
and test again.
Incremental integration testing is carried out by further methods:
o Top-Down approach
o Bottom-Up approach

Top-Down Approach
The top-down testing strategy deals with the process in which higher-level modules are
tested first and lower-level modules are integrated step by step until the successful
completion of testing of all the modules. Major design flaws can be detected and fixed early
because critical modules are tested first. In this method, we add the modules incrementally,
one by one, and check the data flow in the same order.

In the top-down approach, we ensure that each module we add is the child of the previous
one: for example, Child C is a child of Child B, and so on.
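The syllabus mentions test stubs; here is a minimal sketch in Python of how a stub supports top-down testing (all names are hypothetical): the high-level `checkout_total` module is real, while the lower-level tax module it depends on is replaced by a stub returning a fixed value.

```python
def tax_stub(amount):
    """Test stub standing in for the not-yet-integrated lower-level tax
    module. It returns a fixed, predictable value so the caller's own
    logic can be tested in isolation from the real tax calculation."""
    return 5.0

def checkout_total(prices, tax_fn):
    """Higher-level module under test; depends on a lower-level tax module."""
    subtotal = sum(prices)
    return subtotal + tax_fn(subtotal)

# The top-level logic is exercised before the real tax module even exists.
assert checkout_total([10.0, 20.0], tax_stub) == 35.0
print("top-down test with stub passed")
```

When the real tax module is ready, it replaces `tax_stub` and the same test is re-run against the integrated pair.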

Advantages:
o An early prototype is possible.
o Critical modules are tested first, so there are fewer chances of defects.
Disadvantages:
o Identification of defects is difficult.
o Due to the high number of stubs, it gets quite complicated.
o Lower-level modules are tested inadequately.
Bottom-Up Method
The bottom-up testing strategy deals with the process in which lower-level modules are
tested first and higher-level modules are integrated until the successful completion of testing
of all the modules. Top-level critical modules are tested last, so defects in them may surface
late. In other words, we add the modules from bottom to top and check the data flow in the
same order.

In the bottom-up method, we ensure that each module we add is the parent of the previous
one.
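The syllabus also mentions test drivers; here is a minimal sketch in Python of how a driver supports bottom-up testing (all names are hypothetical): the low-level `compute_tax` module is real, and the driver calls it the way a future higher-level module would.

```python
def compute_tax(amount):
    """Lower-level module already implemented and under test
    (hypothetical 10% flat tax rate)."""
    return round(amount * 0.10, 2)

def tax_test_driver():
    """Test driver: plays the role of the not-yet-integrated higher-level
    module by calling the lower-level module and checking its results."""
    cases = {0.0: 0.0, 100.0: 10.0, 33.33: 3.33}
    for amount, expected in cases.items():
        assert compute_tax(amount) == expected, amount
    return True

# The low-level module is verified before any higher-level module exists.
assert tax_test_driver()
print("bottom-up test with driver passed")
```

When the real higher-level module is ready, it replaces the driver and integration continues upward.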

Advantages
o Identification of defect is easy.
o Do not need to wait for the development of all the modules as it saves time.
Disadvantages
o Critical modules are tested last due to which the defects can occur.
o There is no possibility of an early prototype.

Hybrid Testing Method


In this approach, both Top-Down and Bottom-Up approaches are combined for testing. In this
process, top-level modules are tested with lower level modules and lower level modules
tested with high-level modules simultaneously. There is less possibility of occurrence of
defect because each module interface is tested.

Advantages
o The hybrid method provides features of both the Bottom-Up and Top-Down methods.
o It is the most time-saving method.
o It provides complete testing of all modules.
Disadvantages
o This method needs a higher level of concentration, as the process is carried out in both
directions simultaneously.
o It is a complicated method.
Non-incremental integration testing
We go for this method when the data flow is very complex and it is difficult to identify
which module is a parent and which is a child. In such a case, we create the data in one
module, bang it against all the other existing modules, and check whether the data is present.
Hence, it is also known as the Big Bang method.
In this approach, testing is done by integrating all modules at once. It is convenient for
small software systems; if used for large software systems, identification of defects is
difficult.
Since this testing can be done only after the completion of all modules, the testing team has
less time for executing this process, so internally linked interfaces and high-risk critical
modules can be missed easily.

Advantages:
o It is convenient for small size software systems.
Disadvantages:
o Identification of defects is difficult, because finding where an error came from is a
problem and we don't know the source of the bug.
o Small modules are missed easily.
o The time provided for testing is very short.
o We may miss testing some of the interfaces.

Example
In the below example, the development team develops the application and sends it to the
CEO of the testing team. The CEO logs in to the application, generates a username and
password, and sends a mail to the manager. After that, the CEO tells the team to start testing
the application.
The manager then manages the usernames and passwords, produces a username and
password, and sends them to the test leads, who send them to the test engineers for further
testing. This order from the CEO down to the test engineers is top-down incremental
integration testing.
In the same way, when the test engineers are done with testing, they send a report to the test
leads, who then submit a report to the manager, and the manager sends a report to the CEO.
This order from the test engineers up to the CEO is bottom-up incremental integration
testing.
Acceptance Testing
Acceptance testing is done by the customers to check whether the delivered products
perform the desired tasks or not, as stated in the requirements. It is a method of software
testing where a system is tested for acceptability. The major aim of this test is to evaluate
the compliance of the system with the business requirements and assess whether it is
acceptable for delivery or not.
It is a formal testing according to user needs, requirements and business processes
conducted to determine whether a system satisfies the acceptance criteria or not and to
enable the users, customers or other authorized entities to determine whether to accept the
system or not.
Acceptance Testing is the last phase of software testing performed after System Testing and
before making the system available for actual use.

Types of Acceptance Testing:


1. User Acceptance Testing (UAT): User acceptance testing is used to determine
whether the product is working for the user correctly. This is also termed as End-
User Testing.
2. Business Acceptance Testing (BAT): BAT is used to determine whether the product
meets the business goals and purposes or not. BAT mainly focuses on business profits,
which are quite challenging due to changing market conditions and new
technologies, so the current implementation may have to be changed, which results in
extra budget.
3. Contract Acceptance Testing (CAT): CAT is a contract that specifies that once the
product goes live, within a predetermined period, the acceptance test must be
performed and it should pass all the acceptance use cases. Here is a contract termed a
Service Level Agreement (SLA), which includes the terms where the payment will be
made only if the Product services are in-line with all the requirements, which means
the contract is fulfilled.
4. Regulations Acceptance Testing (RAT): RAT is used to determine whether the
product violates the rules and regulations that are defined by the government of the
country where it is being released. This may be unintentional but will impact
negatively on the business.
5. Operational Acceptance Testing (OAT): OAT is used to determine the operational
readiness of the product and is non-functional testing. It mainly includes testing of
recovery, compatibility, maintainability, reliability, etc. OAT assures the stability of
the product before it is released to production.

Advantages of Acceptance Testing:


 This testing helps the project team to know the further requirements from the users
directly, as it involves the users in testing.
 Automated test execution is possible.
 It brings confidence and satisfaction to the clients, as they are directly involved in the
testing process.
 It is easier for the users to describe their requirements.
 It covers only the Black-Box testing process, and hence the entire functionality of the
product will be tested.
Disadvantages of Acceptance Testing:
 Users should have basic knowledge about the product or application.
 Sometimes, users don’t want to participate in the testing process.
 The feedback from the testing takes a long time, as it involves many users, and
opinions may differ from one user to another.
 The development team does not participate in this testing process.

Regression Testing
Regression testing is a method of testing that is used to ensure that changes made to the
software do not introduce new bugs or cause existing functionality to break. It is typically
done after changes have been made to the code, such as bug fixes or new features, and is
used to verify that the software still works as intended. Regression testing is a black box
testing technique. Test cases are re-executed to check that the previous functionality of the
application is working fine and that the new changes have not produced any bugs.
Regression testing can be performed on a new build when there is a significant change in the
original functionality. It ensures that the code still works even when changes are
occurring. Regression means re-testing those parts of the application which are unchanged.
Regression tests are also known as the Verification Method. Test cases are often
automated, as they are required to be executed many times, and running the same test cases
again and again manually is time-consuming and tedious.

Example of Regression testing


Consider a product Y in which one of the functionalities is to trigger confirmation,
acceptance, and dispatch emails. These also need to be tested to ensure that a change in the
code has not affected them. Regression testing does not depend on any programming
language like Java, C++, C#, etc. This method is used to test the product for modifications or
any updates done. It ensures that any change in a product does not affect the existing
modules of the product. It verifies that fixed bugs and newly added features have not created
any problem in the previous working version of the software.

Regression Testing Cases


1. When new functionality is added to the application.
Example: a website has login functionality which allows users to log in only with
email; a new feature is now provided to log in using Facebook.
2. When there is a change in requirements.
Example: the ‘remember password’ option, previously available, is removed from the
login page.
3. When a defect is fixed.
Example: assume the login button on a login page is not working, and a tester reports a
bug stating that the button is broken. Once the bug is fixed by the developers, the
tester tests it to make sure the login button works as per the expected result.
Simultaneously, the tester tests other functionality related to the login button.
4. When there is a performance issue fix.
Example: loading of the home page takes 5 seconds; the load time is reduced to 2
seconds.
5. When there is an environment change.
Example: updating the database from MySQL to Oracle.
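Cases 2 and 3 above can be sketched as a small automated regression suite in Python (the login module and its users are hypothetical): the same suite is re-executed after every change to confirm that previously working behavior is unaffected.

```python
# Hypothetical login module after a change request: the "remember
# password" option was removed, but login itself must still work.
USERS = {"alice@example.com": "secret123"}

def login(email, password):
    """Returns True when the credentials match a registered user."""
    return USERS.get(email) == password

# Regression suite: re-run after every bug fix or change requirement.
regression_suite = [
    ("alice@example.com", "secret123", True),   # valid login still works
    ("alice@example.com", "wrong", False),      # bad password still rejected
    ("bob@example.com", "secret123", False),    # unknown user still rejected
]
for email, password, expected in regression_suite:
    assert login(email, password) == expected, (email, password)
print("regression suite passed")
```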

Regression Testing Techniques


The need for regression testing comes when software maintenance includes enhancements,
error corrections, optimization, and deletion of existing features. These modifications may
affect system functionality. Regression Testing becomes necessary in this case. Regression
testing can be performed using the following techniques:

1. Re-test All:
Re-test is one of the approaches to regression testing. In this approach, all the test
case suites are re-executed. Re-testing means that when a test fails and we determine
that the cause of the failure is a software fault, the fault is reported, and we can expect
a new version of the software in which the defect is fixed. In this case, we need to
execute the test again to confirm that the fault is fixed. This is known as re-testing;
some refer to it as confirmation testing. Re-testing everything is very expensive, as it
requires enormous time and resources.

2. Regression test Selection:


In this technique, a selected test case suite is executed rather than the entire test case
suite. The selected test cases are divided into two categories:
o Reusable test cases, which can be used in succeeding regression cycles.
o Obsolete test cases, which cannot be used in succeeding regression cycles.

3. Prioritization of test cases:


Prioritize the test cases depending on business impact, criticality, and frequency of use.
Selecting test cases this way reduces the size of the regression test suite.

Regression Testing Challenges


Regression Testing is a vital part of the QA process; while performing regression we may
face the below challenges:
o Time-consuming: Regression testing consumes a lot of time to complete. It involves
running existing tests again, so testers are often not excited to re-run them.
o Complex: Regression testing becomes complex when there is a need to update the
product, as the list of tests keeps increasing.
o Communicating business rules: Regression testing ensures the existing product features
are still in working order. Communicating about regression testing with a non-technical
leader can be a difficult task.

Advantages of Regression Testing


o Regression Testing increases the product's quality.
o It ensures that bug fixes and changes do not impact the existing functionality of the
product.
o Automation tools can be used for regression testing.
o It makes sure that issues which have been fixed do not occur again.
Disadvantages of Regression Testing
o Regression Testing should be done even for small changes in the code, because even a
slight change can create issues in the existing functionality.
o If automation is not used in the project, executing the tests again and again is a
time-consuming and tedious task.

Performance Testing
Performance Testing is a type of software testing that ensures software applications
perform properly under their expected workload. It is a testing technique carried out to
determine system performance in terms of sensitivity, reactivity, and stability under a
particular workload.
Performance testing is a type of software testing that focuses on evaluating the performance
and scalability of a system or application. The goal of performance testing is to identify
bottlenecks, measure system performance under various loads and conditions, and ensure
that the system can handle the expected number of users or transactions.

Performance Testing Attributes:


 Speed: It determines whether the software product responds rapidly.
 Scalability: It determines the amount of load the software product can handle at a time.
 Stability: It determines whether the software product is stable in case of varying
workloads.
 Reliability: It determines whether the software product performs its functions
consistently and without failure over time.

Objective of Performance Testing:


1. To eliminate performance bottlenecks (congestion) in the system.
2. To uncover what needs to be improved before the product is launched in the market.
3. To make the software fast, stable, and reliable.
4. To evaluate the performance and scalability of a system or application under various
loads and conditions. This helps identify bottlenecks, measure system performance, and
ensure that the system can handle the expected number of users or transactions and
remain reliable and stable in a production environment.
Types of Performance Testing:
1. Load Testing
Load testing checks how a system handles many users at one time like in real life.
Testers want to see how the system responds, find issues, and discover its user
capacity. It helps find problems like slow response times, bottlenecks, code issues, and
memory leaks. Companies use load testing to prevent failures, save costs, and make
customers happier.
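As a minimal sketch of the idea (not a substitute for dedicated tools such as JMeter), the following simulates many concurrent users invoking one operation and measures response times. The user count and the `handle_request` function are illustrative assumptions.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Stand-in for the operation under test (e.g. loading the home page).
    Returns the time taken to serve this simulated request."""
    start = time.perf_counter()
    sum(range(10_000))              # simulated server-side work
    return time.perf_counter() - start

def load_test(num_users=50):
    """Fire num_users concurrent 'requests' and report simple statistics."""
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        times = list(pool.map(handle_request, range(num_users)))
    return {"users": num_users,
            "avg_s": sum(times) / len(times),
            "max_s": max(times)}

stats = load_test()
```

In a real load test the average and maximum response times would be compared against the performance requirements (e.g. "home page loads in under 2 seconds").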
2. Stress Testing
In stress testing, we subject the system to unfavorable conditions and check how it
performs under them. This test particularly determines the system's robustness and
error handling under extremely heavy load conditions.
Example:
i. Test cases that require maximum memory or other resources are executed.
ii. Test cases that may cause thrashing in a virtual operating system.
iii. Test cases that may cause excessive disk requirements.
3. Scalability Testing
Scalability testing is a type of non-functional testing in which the performance of a
software application, system, network, or process is tested in terms of its capability to
scale the user request load (or other such performance attributes) up or down. It can be
carried out at the hardware, software, or database level. Scalability is defined as the
ability of a network, system, application, product, or process to perform its function
correctly when the size or volume of the system is changed to meet a growing need.
4. Stability Testing
Stability testing is a type of software testing that checks the quality and behavior of the
software under different environmental parameters. Stability is defined as the ability of
the product to continue to function over time without failure.
Stability testing checks the efficiency of a developed product beyond its normal
operational capacity, up to what is known as the break point. It places greater significance
on error handling, software reliability, robustness, and scalability under heavy load than
on system behavior under normal circumstances.
5. Endurance testing:
It is performed to ensure the software can handle the expected load over a long period.
6. Spike testing:
It tests the product’s reaction to sudden large spikes in the load generated by users.
7. Volume testing:
In volume testing, a large volume of data is stored in the database and the overall
behavior of the software system is observed. The objective is to check the product's
performance under varying database volumes.

Advantages of Performance Testing:


 Performance testing ensures the speed, load capability, accuracy, and other
performances of the system.
 It identifies, monitors, and resolves the issues, if anything occurs.
 It ensures the great optimization of the software and also allows many users to use it at
the same time.
 It ensures the client's as well as the end customer's satisfaction.
Performance testing has several further advantages that make it an important aspect of
software testing:
 Identifying bottlenecks: Performance testing helps identify bottlenecks in the system
such as slow database queries, insufficient memory, or network congestion. This helps
developers optimize the system and ensure that it can handle the expected number of
users or transactions.
 Improved scalability: By identifying the system’s maximum capacity, performance
testing helps ensure that the system can handle an increasing number of users or
transactions over time. This is particularly important for web-based systems and
applications that are expected to handle a high volume of traffic.
 Improved reliability: Performance testing helps identify any potential issues that may
occur under heavy load conditions, such as increased error rates or slow response
times. This helps ensure that the system is reliable and stable when it is deployed to
production.
 Reduced risk: By identifying potential issues before deployment, performance testing
helps reduce the risk of system failure or poor performance in production.
 Cost-effective: Performance testing is more cost-effective than fixing problems that
occur in production. It is much cheaper to identify and fix issues during the testing
phase than after deployment.
 Improved user experience: By identifying and addressing bottlenecks, performance
testing helps ensure that users have a positive experience when using the system. This
can help improve customer satisfaction and loyalty.
 Better Preparation: Performance testing can also help organizations prepare for
unexpected traffic patterns or changes in usage that might occur in the future.
 Compliance: Performance testing can help organizations meet regulatory and industry
standards.
 Better understanding of the system: Performance testing provides a better
understanding of how the system behaves under different conditions, which can help in
identifying potential issue areas and improving the overall design of the system.

Disadvantages of Performance Testing:


 Sometimes, users may find performance issues in the real-time environment.
 Team members who are writing test scripts or test cases in the automation tool should
have a high level of knowledge.
 Team members should have high proficiency in debugging the test cases or test scripts.
 Poor performance in the real environment may lead to the loss of a large number of users.
Performance testing also has some further disadvantages, which include:
 Resource-intensive: Performance testing can be resource-intensive, requiring
significant hardware and software resources to simulate many users or transactions.
This can make performance testing expensive and time-consuming.
 Complexity: Performance testing can be complex, requiring specialized knowledge and
expertise to set up and execute effectively. This can make it difficult for teams with
limited resources or experience to perform performance testing.
 Limited testing scope: Performance testing is focused on the performance of the system
under stress, and it may not be able to identify all types of issues or bugs. It’s important
to combine performance testing with other types of testing such as functional testing,
regression testing, and acceptance testing.
 Inaccurate results: If the performance testing environment is not representative of the
production environment or the performance test scenarios do not accurately simulate
real-world usage, the results of the test may not be accurate.
 Difficulty in simulating real-world usage: It’s difficult to simulate real-world usage,
and it’s hard to predict how users will interact with the system. This makes it difficult
to know if the system will handle the expected load.
 Complexity in analysing the results: Performance testing generates a large amount of
data, and it can be difficult to analyse the results and determine the root cause of
performance issues.

Other Types of Testing


 Smoke Testing
 Object-Oriented Testing
 Alpha Testing
 Beta Testing

Smoke Testing
Smoke Testing is done to make sure that the software under test is ready or stable enough
for further testing. It is called a smoke test by analogy with hardware testing: an initial
pass is made to check that the device does not catch fire (or smoke) when first switched on.
Example: If the project has 2 modules, then before moving on to module 2, make sure that
module 1 works properly.
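A smoke test for the two-module example above might look like the following minimal sketch; the module name, its behavior, and the individual checks are hypothetical assumptions.

```python
def module1_login(user, password):
    """Stand-in for Module 1 (login): accepts any non-empty credentials."""
    return bool(user) and bool(password)

def smoke_test():
    """Shallow checks only: is the build stable enough for deeper testing?
    Returns an overall verdict plus the per-check results."""
    checks = {
        "module1 is callable":    callable(module1_login),
        "module1 accepts input":  module1_login("admin", "secret") is True,
        "module1 rejects empty":  module1_login("", "") is False,
    }
    return all(checks.values()), checks

ok, report = smoke_test()
```

Only if the smoke test passes would the deeper functional tests for module 2 be started.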

Object-Oriented Testing
Object-Oriented Testing is a combination of various testing techniques that help to
verify and validate object-oriented software. This testing is done in the following manner:
 Testing of Requirements,
 Design and Analysis of Testing,
 Testing of Code,
 Integration testing,
 System testing,
 User Testing.

Alpha Testing
Alpha testing is a type of validation testing. It is a type of acceptance testing that is done
before the product is released to customers. It is typically done by QA people.
Example: When software testing is performed internally within the organisation.

Beta Testing
The beta test is conducted at one or more customer sites by the end-user of the software.
This version is released to a limited number of users for testing in a real-time environment.
Example: When software testing is performed by a limited number of end users.

Stubs and Drivers in the Software Testing


In software testing, the terms stub and driver refer to replicas of modules that act as
substitutes for new or missing modules.
Stubs are mainly used in top-down integration testing; drivers, on the other hand, are mainly
used in bottom-up integration testing, and both are designed to enhance the testing process.
To satisfy the essential requirements of the inaccessible modules or components, stubs and
drivers are precisely constructed, and they are extremely beneficial in obtaining the
anticipated outcomes.

Both stubs and drivers are an essential part of the software development and software
testing process.

Stubs
o A stub is a replica of a module that accepts inputs and produces plausible outputs. It
behaves like an actual module and is mainly used to test other modules.
o Generally, stubs are created by software developers to use in place of modules that are
missing or not yet developed.
o By using these test stubs, test engineers can simulate the behavior of the lower-level
modules that are not yet joined to the software. Furthermore, stubs compensate for the
activity of the missing modules.

Types of Stubs
In the top-down approach of incremental integration testing, stubs fall into four essential
categories:
o Stubs that display a trace message.
o Stubs that display the parameter values.
o Stubs that return consistent values for the calling modules or components to consume.
o Stubs that return the values of the specific parameters used by the components or modules
under test.

Drivers
o Drivers establish the test environment, take care of communication, evaluate results, and
send reports.
o Like stubs, they are used by software test engineers to fulfil the requirements of missing
or incomplete modules or components.
o Drivers are mainly developed in the bottom-up approach of incremental integration
testing.
o Generally, drivers are a bit more complex than stubs.
o They can test the lower levels of the code when the upper-level modules are missing or
not yet developed.
o In other words, drivers act as pseudo-code that is used when the called (lower-level)
modules are complete but the main calling modules or components are not yet prepared.
Examples of Stubs and Drivers
Suppose we have one web application that contains four different modules, such as:
o Module-P
o Module-Q
o Module-R
o Module-S

And all the modules, as mentioned earlier, are responsible for some individual activities or
functionality, as we can observe in the following table:
Different Modules Individual Activities

Module-P Login page of the web application

Module-Q Home-page of the web application

Module-R Print Setup

Module-S Log out page

It is always a better approach to develop and test all the modules in parallel. As soon as
each module is developed, it can be combined with the others and tested according to its
dependencies.

Once Module-P is developed, it goes through the testing process. But to perform and
validate the tests for Module-P, we need Module-Q, which is not yet fully developed and is
still in progress.

Since it is not possible to test Module-P properly in the absence of Module-Q, in such
scenarios we take the help of stubs and drivers in the software testing process.

A stub replicates the basic functionality and features expected of the real Module-Q, and
it is combined with Module-P in order to execute the testing process effectively.

We can then validate the expected functionality of the Login page (Module-P): for correct
and valid inputs it must lead to the Home Page, which is the responsibility of Module-Q.

In the same way, stubs and drivers are used to fulfil the requirements of the other modules,
for example the Log-out page (Module-S), which must redirect to the Login page (Module-P)
after the user successfully logs out of the application.

Likewise, we may use stubs or drivers in place of Module-R and Module-S if they are not
available.

Similarly, if Module-P is inaccessible, stubs and drivers work as a substitute for it while
testing Module-S.
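The Module-P/Module-Q scenario above can be sketched in code. Here a stub stands in for the undeveloped Home Page module while a driver exercises the Login module; all function names, credentials, and return values are illustrative assumptions, not part of any real framework.

```python
# Module-P (developed): the login page under test.
def login(username, password, home_page):
    """Validates credentials and then hands control to the home page module."""
    if username == "user" and password == "pass":
        return home_page()          # normally a call into Module-Q
    return "login failed"

# Stub for Module-Q (not yet developed): returns a fixed, consistent value
# so that Module-P can be integration-tested top-down.
def home_page_stub():
    return "home page displayed"

# Driver: imitates the calling environment, supplies test data, invokes the
# module under test, and collects the results (bottom-up style).
def login_driver():
    return {
        "valid credentials":   login("user", "pass", home_page_stub),
        "invalid credentials": login("user", "wrong", home_page_stub),
    }

outcome = login_driver()
```

When the real Module-Q is ready, `home_page_stub` is simply replaced by the actual home page function and the same driver can be re-run.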
Comparing the functionality of drivers and stubs
Both stubs and drivers carry out similar functions and serve similar objectives: each acts
as a substitute for a missing or absent module. The difference between them becomes visible
during the integration testing process.

Stubs vs Drivers

1. Stubs: A section of code that imitates the called function is known as a stub.
   Drivers: A section of code that imitates the calling function is known as a driver.

2. Stubs: Used to test the functionality of modules; they replicate the behavior of
   lower-level modules that are not yet merged and stand in for the activity of the missing
   modules or components.
   Drivers: Used when the main module is prepared or ready; generally, drivers are a bit
   more complex than stubs.

3. Stubs: Developed during the top-down approach of incremental integration testing.
   Drivers: Developed during the bottom-up approach of incremental integration testing.

4. Stubs: Replicate the activity of missing or not-yet-developed modules or components.
   Drivers: Pass test cases to another system and invoke the modules under test.

5. Stubs: Created by the team of test engineers.
   Drivers: Mostly created by the developers and the unit test engineers.

6. Stubs: Developed when high-level modules are being tested and the lower-level modules
   are not yet formed.
   Drivers: Used when lower-level modules are being tested and the higher-level modules are
   not yet developed.

7. Stubs: Stand in for modules of the software that are still under development.
   Drivers: Used to invoke the component that needs to be tested.

8. Stubs: Simulate the low-level modules.
   Drivers: Simulate the high-level modules.

9. Stubs: Also known as the called programs; initially used in top-down integration
   testing.
   Drivers: Also known as the calling programs; mainly used in bottom-up integration
   testing.

10. Stubs: Reserved for testing the features and functionality of the modules.
    Drivers: Used when the core (main) module of the software isn't established for
    testing.

Test Data Suit Preparation


A Test Suite is a collection of test cases or scripts organized to test a software
application. It serves as a container for tests, aiding in execution and reporting, ensuring
the application functions as expected. Structured to validate various scenarios, Test Suites
are pivotal in automation testing, facilitating organized and comprehensive testing efforts.

In other words, it is a container encompassing a collection of test cases assembled to
perform test execution and report its status. In the context of unit testing, it can be a
class, module, or another piece of code created to hold a collection of unit tests.

Grouping tests into test suites helps in managing, executing, and reporting the test results
efficiently. Effectively acting as a container for these test cases, the suite showcases
precise details and objectives for each individual test case. Furthermore, it includes vital
information regarding the system configuration necessary for the testing process.

For a product purchase scenario, a well-structured test suite may encompass an array of
crucial test cases, seamlessly contributing to the overall validation process:
 Test Case 1: Login.
 Test Case 2: Adding Products.
 Test Case 3: Checkout.
 Test Case 4: Logout
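The four test cases of the purchase scenario above can be grouped into a suite with Python's built-in `unittest` module (JUnit works analogously in Java). The test bodies here are placeholder assumptions; real tests would drive the application.

```python
import unittest

class PurchaseFlowTests(unittest.TestCase):
    # Placeholder bodies: real tests would exercise the application here.
    def test_login(self):        self.assertTrue(True)
    def test_add_products(self): self.assertTrue(True)
    def test_checkout(self):     self.assertTrue(True)
    def test_logout(self):       self.assertTrue(True)

def build_suite():
    """Collect the four test cases, in order, into one executable test suite."""
    suite = unittest.TestSuite()
    for name in ("test_login", "test_add_products",
                 "test_checkout", "test_logout"):
        suite.addTest(PurchaseFlowTests(name))
    return suite

# Running the suite executes all contained cases and reports their status.
runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(build_suite())
```

Because the suite is just a container, the same cases can also be regrouped, for example into a smaller smoke suite holding only `test_login`.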
In some instances a test suite is used simply to collect related test cases. Depending on
the system, it may cover all of the system's functionality, or it may be a smoke test suite
that contains only smoke tests. Alternatively, it may consist of all tests and merely
indicate whether a given test should be used as a smoke test or for a particular
functionality.

As depicted in the image below, a test plan is divided into test suites, which may be
further segmented based on the number of test cases.
Characteristics of a test suite
It provides several benefits for the testing team and the organization. Some of the
essential characteristics are:
 They are developed after a test plan.
 There are several tests and test cases included in it.
 It explains the aims and objectives of the test cases.
 Test parameters like application, environment, version, and others are incorporated in it.
 These can be created based on the test cycle and scope.
 It includes several kinds of testing, including functional testing and non-functional
testing.
 It offers a way to test and evaluate the applications quickly.
 It may be used with many automation testing tools, including JUnit and Selenium.

Types of Test suites


(Differences: Test plan vs Test scenario vs Test case vs Test suite)

Test plan:
o Defines the scope, aim, and strategy of testing.
o It has three types: master test plan, type-specific, and level-specific.
o It is created from a use case document, a product description, or a software requirement.
o It adheres to a standard template that provides information about the testing process.

Test scenario:
o It describes the functionality of a software application's features that may be tested.
o It is carried out from the end user's perspective.
o Scenarios are developed from the use cases and ensure exhaustive test coverage.
o It outlines the different operations performed by the team on the software application.

Test case:
o A test case is a significant document that contains the necessary testing-related details.
o It has two types: formal test cases and informal test cases.
o Test cases are created using the Software Requirement Specification (SRS) and are
generated based on the test scenarios.
o It specifies a set of requirements that help verify whether the software application
complies with the specified functionality.

Test suite:
o Test cases make up a test suite, which is created after the test plan.
o It has two types: abstract and executable.
o The team can benefit from separate test suites, which make testing simple and flexible.
o It specifies the purpose and objectives of the test cases created to test the software
application.

Organization of test cases in test suite


Test suites help organize test cases into groups and structure them logically; each suite
includes a collection of test cases that are either directly related to it or grouped under
several sub-suites.
Building the required structure as a tree is flexible, since there are no restrictions on
the number of layers that may be constructed.
To structure and arrange test cases into logical components, a user can group them by
application module, component, or feature set, which makes it easier to find a specific set
of test cases.
QA teams can easily plan their testing by developing a test suite for each testing purpose,
such as regression or smoke testing. In addition, QA teams can add test cases to, or remove
them from, these suites.

Best practices to make a good test suite


A good test suite doesn't take long to execute, and it ensures your software application
works as intended. If it encounters a bug, it automatically returns feedback that helps you
identify the bug's source and fix it. The following properties make a test suite a good fit
for software developers:
 Fast: If a test suite includes an extensive collection of integration tests and a few unit
tests, it can take much longer to execute it, whereas a fast one will give feedback more
quickly and make your development process even more efficient.
 Complete: If your test suite covers 100% of your codebase, it will identify any errors
arising from tweaks to your code application. Therefore, a complete test suite gives you
confidence that your software applications are working fine as intended.
 Reliable: It provides consistent feedback, irrespective of changes that can occur outside
the test scope, whereas an unreliable one can have tests that fail intermittently, with no
valuable feedback about changes you've done to your application.
 Isolated: It runs test cases without hampering other tests in the suite. However, you may
require cleaning up existing test data after running a test case in your suite.
 Maintainable: A maintainable test suite that is organized is easy to manipulate. You
easily add, change, or remove test cases. To maintain your test suite, you can follow best
coding practices and develop a uniform process that suits you and your team.
 Expressive: If your test suites are easy to read, they can double as documentation.
Always write test scripts that are descriptive of the features you are testing, and try to
keep them descriptive and understandable for any developer who reads them.

Differences between bug, defect, error, fault, and failure


Bug
o Definition: An informal name given to a defect.
o Raised by: The test engineers submit the bug.
o Types: Logic bugs, algorithmic bugs, resource bugs.
o Reasons: Missing coding, wrong coding, extra coding.
o Prevention: Test-driven development; support for several programming languages;
adjusted, advanced, and effective development procedures; evaluating the code
systematically.

Defect
o Definition: The difference between the actual outcomes and the expected outputs.
o Raised by: The testers identify the defect, and it is solved by the developer during the
development phase.
o Types: Based on priority: high, medium, low; based on severity: critical, major, minor,
trivial.
o Reasons: Giving incorrect and wrong inputs; dilemmas and errors in the external behavior
and the internal structure and design; an error in coding or logic that affects the
software and causes it to break down or fail.
o Prevention: Implementing innovative programming methods; use of proper and correct
software development techniques; peer review, i.e. consistent code reviews to evaluate
quality and correctness.

Error
o Definition: A mistake made in the code, because of which the code cannot be compiled or
executed.
o Raised by: The developers and automation test engineers raise the error.
o Types: Syntactic error, user interface error, flow control error, error handling error,
calculation error, hardware error, testing error.
o Reasons: Errors in the code; mistakes in some values; a developer being unable to
compile or run a program successfully; confusion and issues in programming; invalid
logic, loops, and syntax; inconsistency between actual and expected outcomes; blunders in
design or requirement activities; misperception in understanding the requirements of the
application.
o Prevention: Enhance software quality with systematic review and programming; detect the
issues and prepare a suitable mitigation plan; validate the fixes and verify their quality
and precision.

Fault
o Definition: A state that causes the software to fail to accomplish its essential
function.
o Raised by: Human mistakes cause the fault.
o Types: Business logic faults, functional and logical faults, faulty GUI, performance
faults, security faults, software/hardware faults.
o Reasons: A fault may occur through an improper step in the initial stage, in a process,
or in a data definition; inconsistency or issues in the program; an irregularity or
loophole in the software that leads it to perform improperly.
o Prevention: Peer review; assess the functional necessities of the software; execute
detailed code analysis; verify the correctness of the software design and programming.

Failure
o Definition: If the software has lots of defects, it leads to, or causes, failure.
o Raised by: The failure is found by the manual test engineer through the development
cycle.
o Reasons: Environmental conditions, system usage, users, human error.
o Prevention: Confirm re-testing; review the requirements and revisit the specifications;
implement current protective techniques; categorize and evaluate errors and issues.

Static Testing Strategies


Static testing is a software testing method performed to check for defects in software
without actually executing the code of the application, whereas in dynamic testing the code
is executed to detect defects. Static testing is performed at an early stage of development
to avoid errors, since it is easier there to find the sources of failures, and they can be
fixed easily. Errors that cannot be found using dynamic testing can often be found easily
by static testing.

Static Testing Techniques:

1. Review: In static testing, a review is a process or technique performed to find
potential defects in the design of the software. It is a process to detect and remove
errors and defects in the various supporting documents, such as the software requirements
specification. People examine the documents and sort out errors, redundancies, and
ambiguities. Reviews are of four types:
 Informal: In an informal review, the creator of the documents puts the contents in
front of an audience and everyone gives their opinion; thus, defects are identified
at an early stage.
 Walkthrough: It is basically performed by an experienced person or expert to check
for defects so that there are no problems later in the development or testing
phase.
 Peer review (Formal Technical Review): Peer review means checking one another's
documents to detect and fix defects. It is basically done within a team of
colleagues.
 Inspection: Inspection is the verification of a document by a higher authority, for
example the verification of the software requirements specification (SRS).

2. Static Analysis: Static analysis includes evaluation of the quality of the code written
by developers. Different tools are used to analyze the code and compare it against the
coding standard. Static analysis also helps in identifying the following defects:
(a) Unused variables
(b) Dead code
(c) Infinite loops
(d) Variable with undefined value
(e) Wrong syntax
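The defect categories above can be seen in a short snippet. A static analyzer (for example pylint or flake8) would flag these without running the code; the function itself is a hypothetical example.

```python
def compute_total(prices):
    """Illustrates defects that static analysis reports without execution."""
    tax_rate = 0.18        # (a) unused variable: assigned but never read
    total = sum(prices)
    return total
    print("done")          # (b) dead code: unreachable after the return

# Further defects a static analyzer would flag (kept as comments so the
# snippet remains runnable):
# (c) infinite loop:              while True: pass   (condition never changes)
# (d) variable with undefined value:  return total2  (total2 never assigned)
# (e) wrong syntax:               def broken( :      (rejected before any run)

result = compute_total([100, 250])
```

Note that the function still runs and returns the correct total; static analysis finds these latent defects even when dynamic tests pass.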

Static Analysis is of three types:


 Data Flow: Data flow analysis examines how data values are defined and used as they
move through the program.
 Control Flow: Control flow is basically how the statements or instructions are
executed.
 Cyclomatic Complexity: Cyclomatic complexity defines the number of
independent paths in the control flow graph made from the code or flowchart so that
minimum number of test cases can be designed for each independent path.
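Cyclomatic complexity can be computed as V(G) = E - N + 2P (edges, nodes, and connected components of the control flow graph), or equivalently as the number of decision points plus one. The following is a minimal sketch of the second formulation, assuming a simple count of branching keywords in the source text; production tools build a real control flow graph instead.

```python
import re

def cyclomatic_complexity(source):
    """Approximate V(G) as (number of decision points) + 1 by counting
    common branching constructs in a Python source string."""
    decision_keywords = r"\b(if|elif|for|while|and|or|case)\b"
    return len(re.findall(decision_keywords, source)) + 1

sample = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    elif score >= 60:
        return "C"
    return "F"
"""
# Three decision points (one if, two elif) give V(G) = 4, so at least four
# test cases are needed to cover the four independent paths.
vg = cyclomatic_complexity(sample)
```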

Advantages of Static Testing


o Improved product quality: Static testing enhances product quality because it identifies
flaws or bugs at the initial stage of software development.
o Improved efficiency of dynamic testing: Static testing improves the efficiency of
dynamic testing because the code gets cleaner and better after static testing, although
static testing does require some effort and time to prepare and maintain good-quality
reviews and checklists.
o Reduced SDLC cost: Static testing reduces the cost of the SDLC because it identifies bugs
in the earlier stages of the software development life cycle, so less work and time are
needed to change the product and fix them.
o Immediate evaluation and feedback: Static testing gives us immediate evaluation of, and
feedback on, the software during each phase of development.
o Exact location of bugs: With static testing we can identify the exact location of a bug
more easily than with dynamic testing.

Compliance with Design and Coding Standards


The different modules specified in the design document are coded in the coding phase
according to the module specifications. The main goal of the coding phase is to translate
the design document, prepared after the design phase, into code in a high-level language,
and then to unit test this code.
Good software development organizations want their programmers to adhere to a well-defined,
standard style of coding called coding standards. They usually make their own coding
standards and guidelines, depending on what suits their organization best and on the types
of software they develop. It is very important for programmers to follow the coding
standards; otherwise the code will be rejected during code review.

Purpose of Having Coding Standards:


 A coding standard gives a uniform appearance to the code written by different
engineers.
 It improves the readability and maintainability of the code, and it also reduces
complexity.
 It helps in code reuse and helps to detect errors easily.
 It promotes sound programming practices and increases the efficiency of the
programmers.
