SQM notes
A software defect can be regarded as any failure to address end-user requirements. Common
defects include missed or misunderstood requirements and errors in design, functional logic,
data relationships, process timing, validity checking, and coding errors.
The software defect management approach is based on counting and managing defects.
Defects are commonly categorized by severity, and the numbers in each category are used for
planning. More mature software development organizations use tools, such as defect leakage
matrices (for counting the numbers of defects that pass through development phases prior to
detection) and control charts, to measure and improve development process capability.
This approach to software quality is best exemplified by fixed quality models, such as
ISO/IEC 25010:2011. This standard describes a hierarchy of eight product quality
characteristics, each composed of sub-characteristics:
1. Functional suitability
2. Reliability
3. Usability
4. Performance efficiency
5. Security
6. Compatibility
7. Maintainability
8. Portability
ISO/IEC 25010:2011 also defines five quality-in-use characteristics, which describe the
outcome of using the software in a particular context:
1. Effectiveness
2. Efficiency
3. Satisfaction
4. Freedom from risk
5. Context coverage
A fixed software quality model is often helpful for developing an overall understanding of
software quality. In practice, however, the relative importance of particular software
characteristics typically depends on the software domain, product type, and intended usage.
Software characteristics should therefore be defined for, and used to guide the development
of, each product.
SQA tests every block of this process individually to identify issues before they become
major problems.
1. PLANNING
This is the first phase of the SDLC. During this phase, it is determined whether a need exists
for a new system to improve business operations. Once the need has been established (or the
problem identified), solutions must be found.
Information and resources are gathered during this phase to support the case for a new
system or for improvement of an existing one. Based on the information supporting the
need, solutions are devised and submitted for approval.
During this phase, it is also a good idea to review current web development industry trends
so that the most up-to-date information and resources are available to meet the need.
2. ANALYSIS
This is where the proposed solutions are examined until one is found that best matches the
overall strategy and goals of the company. Planning continues during this phase, but at a
much deeper analytical level.
The problem and the associated systems are analysed to determine the functional
requirements for the project or solution: the requirements the new system must meet in
order to solve the problem and align with corporate strategy.
This is also where a timeline is chosen, responsibility for individual parts is assigned, and it
is determined how the needs of the business will be met.
3. SYSTEMS DESIGN
Just as it sounds, this is where the new system or software is designed. The parameters are
discussed with stakeholders, along with the technologies to be used, project constraints, and
time and budget. After review, the design approach that best meets the requirements
determined in the second phase is chosen.
The chosen design approach will need to define all components to be developed, user flows
and database communications, and communications with third-party services.
4. DEVELOPMENT
The development phase is where actual work begins on the new system or software.
Typically, a programmer, network engineer, database developer, or some combination of
these will be brought on to begin writing source code.
It is important during this phase to have a flow chart created. This flow chart is used to
ensure that the system's processes are properly organized.
While this phase usually covers development of the actual software to be used, in the
prototype model it is the phase in which the prototype is developed. The prototype is then
continuously refined and tested until it meets the needs of the customer and the customer is
satisfied. After that, the prototype returns one final time to this phase, where it is developed
into the actual software or system to be used.
6. IMPLEMENTATION
Again, just as it sounds, this is the phase in which the new system is put into normal
business operations. The new software or system is installed, which may require more code
to be written as well as migration of files or data to the new system.
Because of the risk of interrupting business operations during the install, this phase will
usually occur during non-peak hours. Errors with integration or data transfer are always
possible, and while the goal is to minimize them, if they occur during peak hours the
company can lose productivity and revenue.
End-users and analysts should now be seeing the first glimpses of the finished system and
the changes it will bring to the company.
IT will be able to install new updates remotely, while also assisting in customizing the
system to continuously meet the needs of the company. IT is also responsible for correcting
any future errors or issues that arise. No system is perfect, and ongoing maintenance is a
necessary part of any new system or software project.
VIEWS OF QUALITY
1. VALUE BASED: Quality depends on the amount the customer is willing to pay for it.
The value-based view relates the concepts discussed in the previous section. It deals
with the multiplicity of stakeholder views and the fact that no single view of quality
is always right. It is in line with the fitness-for-purpose definition and with another
influential definition of quality: "Quality is all the features that allow a product to
satisfy stated or implied needs at an affordable cost" (ISO 8402).
2. USER BASED: Quality is fitness for purpose. This view evaluates the product in a
task context: how well the product meets the user's needs. It can thus be a highly
personalized view, and it is often measured by reliability and usability. The user view
is grounded in product characteristics that meet the user's needs, and it takes an
evaluative stance, examining the product in terms of meeting user requirements. It
contrasts with the manufacturing view adopted by Crosby (1979), who put forth one
of the most influential and well-publicized definitions of quality: quality is
conformance to requirements.
Hierarchical models serve two purposes. One purpose is methodological; the other is
substantive.
In 1978, B.W. Boehm introduced his software quality model. Like the McCall Quality
Model, it is a hierarchical quality model that defines software quality using a predefined set
of attributes and metrics, each of which contributes to the overall quality of software.
Boehm's model has three levels of quality attributes, divided based on their
characteristics. These levels are:
1. High-level characteristics (the primary uses): as-is utility, maintainability, and
portability.
2. Intermediate-level characteristics (the quality factors).
3. Primitive characteristics (the metrics).
The intermediate level of Boehm's hierarchical model consists of seven quality factors
associated with the three primary uses: portability, reliability, efficiency, usability (human
engineering), testability, understandability, and flexibility (modifiability).
TYPES OF DEFECTS: The following are some of the basic types of defects in
software development:
1. ARITHMETIC DEFECTS: These are mistakes made by the developer in an
arithmetic expression, or in finding the solution of such an expression.
Defects of this type are usually introduced through excess workload or lack
of knowledge. Code congestion may also lead to arithmetic defects, as the
programmer is then unable to properly review the written code.
2. LOGICAL DEFECTS: Logical defects are mistakes in the implementation
of the code. They occur when the programmer does not understand the
problem clearly or reasons about it incorrectly. They also occur when the
programmer fails to handle corner cases while implementing the code.
Logical defects relate to the core of the software.
3. SYNTAX DEFECTS: Syntax defects are mistakes in the writing of the
code itself, including the small slips a developer makes while typing it.
They often arise because a small symbol has been omitted; for example,
when writing code in C++ it is easy to leave out a semicolon (;).
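To make the distinction concrete, here is a minimal, hypothetical Python sketch of a logical defect: the arithmetic and syntax are both correct, but a corner case (an empty list) is not handled.

```python
def average_price_buggy(prices):
    # Logical defect: the empty-list corner case is not handled, so this
    # raises ZeroDivisionError instead of returning a sensible value.
    return sum(prices) / len(prices)

def average_price_fixed(prices):
    # Corrected version: the empty-list corner case is handled explicitly.
    if not prices:
        return 0.0
    return sum(prices) / len(prices)

# The arithmetic itself is correct in both versions; the defect is in the
# logic (a missing corner case), which is why such bugs survive casual review.
```

Both functions compile and run, which is exactly why logical defects are harder to catch than syntax defects: only a test (or production input) exposes the missing case.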
Low-quality software can also hinder your ability to seize new business opportunities. If your
software is unreliable or lacks critical features, potential clients or partners may be hesitant to
collaborate with your company. This missed revenue can have a significant long-term impact
on your bottom line.
3. DAMAGE TO REPUTATION
Your company's reputation is one of its most valuable assets. Poorly developed software can
lead to negative reviews, customer complaints, and a tarnished brand image. Rebuilding trust
after a reputation hit can be a long and costly process.
5. INEFFICIENT WORKFLOWS
If your software doesn't align with your business processes or hampers productivity, your
employees may spend more time working around its limitations. This inefficiency can lead to
wasted time and resources.
DEFINITIONS USED IN SOFTWARE QUALITY
ENGINEERING
Software quality is defined as a field of study and practice that describes the desirable
attributes of software products. There are two main approaches to software quality: defect
management and quality attributes.
SOFTWARE QUALITY ASSURANCE
Software Quality Assurance (SQA) is a process that ensures that all software engineering
processes, methods, activities, and work items are monitored and comply with the defined
standards. These defined standards could be one or a combination of frameworks such as
ISO 9000, the CMMI model, ISO/IEC 15504, etc.
SQA spans all software development processes, from defining requirements to coding and
release. Its prime goal is to ensure quality.
Software quality assurance plays a vital role in the software development life cycle.
Enterprises are constantly churning out software applications left, right, and centre to keep up
with the increasing demand. While releasing software applications is one thing, it’s crucial to
ensure that the product works the way you want it to.
Software quality assurance (SQA) is a methodology to ensure that the quality of the software
product complies with a predetermined set of standards.
What is the purpose of software quality assurance? SQA is not just a step in the development
process; it functions in parallel with the software development life cycle. Businesses must
ascertain that every part of the software, internal and external, is up to the predefined
standard. SQA tests every block of this process individually to identify issues before they
become major problems.
To implement SQA effectively, it is essential to follow a structured approach. You can follow
the below-mentioned steps to implement SQA:
1. DEFINE QUALITY STANDARDS: Clearly define the quality standards that your
software product must meet. This includes defining requirements, acceptance criteria,
and performance metrics. These standards should be agreed upon by all stakeholders,
including the development team, management, and customers.
2. PLAN SQA ACTIVITIES: Develop a plan for the SQA activities that will be
performed throughout the software development life cycle. This plan should include
reviews, testing, and documentation activities. It should also specify who will be
responsible for each activity and when it will be performed.
5. MONITOR AND MEASURE: Monitor and measure the quality of the software
product throughout the development process. This includes tracking defects,
analysing metrics such as code coverage and defect density, and conducting root
cause analysis.
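As a rough illustration of the monitoring step, defect density is commonly computed as defects per thousand lines of code (KLOC). The figures below are hypothetical; a minimal sketch:

```python
def defect_density(defect_count, lines_of_code):
    """Defects per thousand lines of code (KLOC), a common SQA metric."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defect_count / (lines_of_code / 1000)

# Hypothetical example: 30 defects found in a 12,000-line module.
density = defect_density(30, 12000)  # 2.5 defects per KLOC
```

Tracking this number per module over time is what makes root cause analysis actionable: a module whose density keeps climbing is a candidate for review or refactoring.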
We have divided this section into parts based on the approaches to software quality
assurance.
The software quality defect management approach focuses on counting and managing
defects. Defects are generally categorized by severity level. Software development
teams use tools like defect leakage matrices and control charts to measure and
enhance the capability of their software development process.
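A defect leakage matrix can be sketched as a small table of counts indexed by the phase in which a defect was injected and the phase in which it was detected. The phase names and counts below are hypothetical:

```python
# leakage[injected_phase][detected_phase] -> number of defects (hypothetical data)
leakage = {
    "requirements": {"requirements": 4, "design": 3, "coding": 2, "testing": 1},
    "design":       {"design": 5, "coding": 2, "testing": 2},
    "coding":       {"coding": 10, "testing": 6},
}

def leaked_defects(matrix):
    """Count defects that escaped the phase in which they were introduced."""
    return sum(count
               for injected, detected_in in matrix.items()
               for detected, count in detected_in.items()
               if detected != injected)

total = sum(c for row in leakage.values() for c in row.values())  # 35 defects
leaked = leaked_defects(leakage)                                  # 16 escaped their phase
```

The off-diagonal counts are the "leakage": the larger they are relative to the diagonal, the later defects are being caught and the costlier they are to fix.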
The software quality attributes approach works by helping software engineers analyse
the performance of a software product. This approach focuses on directing the
engineer’s attention to several quality factors. While some of these attributes may
overlap or fall under another, there are five essential quality characteristics that you
should consider:
3. RELIABILITY.
Reliability reflects the system's ability to continue operating over time under different
working environments and conditions. The application should consistently return
correct results.
4. USABILITY.
Software applications should be easy to learn and navigate. This user-friendliness and
effectiveness of utilizing the product are called usability.
5. EFFICIENCY.
This software QA attribute indicates how well the system uses all the available
resources. It is shown by the amount of time the system needs to finish any task.
6. MAINTAINABILITY.
It shows how easy it is to maintain different system versions and support changes and
upgrades cost-effectively.
7. PORTABILITY.
This software quality assurance attribute demonstrates the system’s ability to run
effectively on various platforms — for example, data portability, viewing, hosting,
and more.
1. TRADITIONAL APPROACH:
The traditional approach, also known as the Waterfall model, includes a sequential
process where each phase of the software development lifecycle is completed before
moving on to the next phase. Similarly, SQA is performed at the end of each phase to
ensure that the requirements have been met before moving to the next phase. This
approach involves requirement analysis, design, coding, testing, and maintenance to
ensure that the software product is developed with minimal errors and defects and
meets the desired quality standards.
2. AGILE APPROACH:
The Agile approach to SQA is an iterative, incremental, and flexible approach that
focuses on delivering software products in small increments. This approach
emphasizes collaboration between the development team and the stakeholders for a
seamless and quick development process. Agile SQA is quite popular and focuses on
self-organizing teams, continuous integration and testing, continuous delivery, and
continuous feedback to ensure a high-quality software product.
4. SIX SIGMA APPROACH:
This is a data-driven approach that focuses on reducing defects and errors in the
software product. The approach uses statistical tools and techniques to measure and
improve the quality of the software product. It is suitable for projects that prioritize
reducing defects and errors.
5. LEAN APPROACH:
This is an approach that focuses on efficiency and waste reduction in the software
development process. It emphasizes continuous improvement and the elimination of
non-value-added activities. It is suitable for projects that require a focus on efficiency
and waste reduction.
This approach involves writing automated tests before writing the code to ensure that
the code meets the requirements and specifications of the software product. TDD
SQA involves various activities, such as writing unit tests, running the tests, and
refactoring the code, to ensure that the software product is of high quality.
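As a minimal illustration of the test-first cycle described above, the hypothetical example below shows a unit test for a small order-total function; in TDD the test is written first and fails until the function is implemented, after which the code is refactored with the test kept green. The function and its rules are assumptions for illustration only.

```python
import unittest

def total_order_amount(item_prices):
    # Minimal implementation written only after the tests below existed,
    # as TDD prescribes (red -> green -> refactor).
    if any(p < 0 for p in item_prices):
        raise ValueError("item price cannot be negative")
    return sum(item_prices)

class TestTotalOrderAmount(unittest.TestCase):
    def test_sums_item_prices(self):
        self.assertEqual(total_order_amount([10, 20, 5]), 35)

    def test_rejects_negative_price(self):
        # This corner-case test exists before the guard clause does,
        # forcing the implementation to handle it.
        with self.assertRaises(ValueError):
            total_order_amount([10, -5])
```

Running the suite before the function exists yields the "red" state; the smallest implementation that passes gives "green".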
8. RISK-BASED APPROACH:
Last but not least, the risk-based approach to SQA involves identifying and managing
the risks associated with the software product. This approach is made up of risk
assessment, risk mitigation, and risk monitoring to ensure that the software product
meets the established standards.
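One common (though not the only) way to operationalize risk assessment is to score each area of the product as likelihood × impact and test the riskiest areas first. The areas and ratings below are hypothetical:

```python
def risk_score(likelihood, impact):
    """Risk = likelihood x impact, each rated on a 1-5 scale (a common scheme)."""
    return likelihood * impact

# Hypothetical feature areas with assessed (likelihood, impact) of failure.
areas = [
    ("payment processing", 4, 5),
    ("report export", 2, 2),
    ("user login", 3, 4),
]

# Risk-based prioritization: test the riskiest areas first.
prioritized = sorted(areas, key=lambda a: risk_score(a[1], a[2]), reverse=True)
```

Risk monitoring then means re-scoring as the product evolves and re-ordering the test effort accordingly.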
QUALITY CONTROL
Quality Control, commonly abbreviated as QC, is needed in a variety of industrial sectors,
from manufacturing to hand production.
Quality Control is a process that, in essence, makes the entity a quality reviewer of all
factors involved in a production activity.
Quality control can involve developing systems to ensure that products and services are
designed and produced to meet or exceed the requirements of both customers and the
producers themselves.
There are 3 aspects that are emphasized in this quality control, which are as follows:
No matter the industry sector in which they work, QC personnel's main goal is to control
quality and test products against factory or company specifications.
When they find a defect in production, they are authorized to send the defective product
back for repair.
The essence of their task is to test, check, research, and analyse product quality so that what
is produced meets company standards and is fit to be circulated in the market.
Monitor the production process from the beginning until the product becomes finished
goods.
Notify Quality Control supervisors of any process discrepancies.
Approve finished products (finished goods).
Perform sampling and keep retained samples (retains).
Make a daily process observation report.
Carry out various other duties assigned by the Quality Control supervisor.
Ensure that products and services that have been designed and produced meet the
requirements of the customers or of the producers themselves.
The first step in Quality Control is to genuinely understand one's own situation, both
weaknesses and strengths.
From there, one can reduce one's own mistakes.
After finding the causes of a problem, act on causes number 1 and 2, and discard causes
number 3, 4, and so on.
Do not just look at the results; check the process step by step.
Check and confirm the facts in the field, with the products and the data.
Be careful when observing the average value of the data, because an average can hide an
imbalance.
Do not stop at the investigation; check the results of the investigation step by step as well.
Working methods and work sequences should not only be conveyed orally but also in
written form.
If you see something abnormal, take action immediately: stop the machine, contact
maintenance, find the cause, and take corrective action.
Do not let the same mistake repeat itself.
1. Quality Control (QC) education and experience: a high school diploma, or a Diploma or
Bachelor's degree in a field relevant to the work above.
2. Good oral and written communication skills.
3. Good arithmetic skills and, when necessary, mechanical aptitude.
4. More than 2 years of experience is usually required to become a Quality Control (QC)
officer in the required field.
5. The ability to use computers and utilities is also a must for Quality Control (QC).
6. Training and certification programs offered by international organizations can help in
getting a job as a Quality Control (QC) officer.
7. A working knowledge of the other departments of a company, and of the relevant rules
and regulations, helps maintain quality standards in a more effective way.
Quality Assurance (QA) focuses on defect prevention while Quality Control (QC)
focuses on identifying or finding defects.
In Quality Assurance (QA), we look for the most effective way to be able to avoid
defects while in Quality Control (QC) we strive to be able to detect defects and then
look for ways of improvement to make the quality of a product better.
Quality Assurance (QA) is a pro-active process while Quality Control (QC) is a
reactive process.
Quality Assurance (QA) is a process-based approach while Quality Control (QC) is a
product-based approach.
Quality Assurance (QA) involves the process of handling a quality problem while
Quality Control (QC) verifies the quality of the product itself (on the product).
Quality Audit is an example of a process in Quality Assurance (QA) while Inspection
and Testing of a product is an example of a process in Quality Control (QC).
SOFTWARE CONFIGURATION MANAGEMENT
Configuration Identification.
Baselines.
Change Control.
Configuration Status Accounting.
Configuration Audits and Reviews.
CONFIGURATION IDENTIFICATION: -
Configuration identification is a method of determining the scope of the software
system; you cannot manage or control an item that has not been identified. Each
identification is a description that contains the CSCI type (Computer Software
Configuration Item), a project identifier, and version information.
Configuration items such as source code modules, test cases, and requirements
specifications are identified.
Each CSCI in the SCM repository is identified using an object-oriented approach:
The process starts with basic objects, which are grouped into aggregate objects.
Details of what, why, when, and by whom changes to an item are made are recorded.
Every object has its own features, including a name that distinguishes it from all
other objects.
A list of required resources, such as documents, files, tools, etc., is maintained.
BASELINE
A baseline is a formally accepted version of a software configuration item. It is
designated and fixed at a specific time while conducting the SCM process. It can only
be changed through formal change control procedures.
CHANGE CONTROL
Change control is a procedural method which ensures quality and consistency when
changes are made to a configuration object. In this step, a change request is
submitted to the software configuration manager.
CONFIGURATION STATUS ACCOUNTING
Configuration status accounting keeps a record of all the changes made to the previous
baseline to reach a new baseline. It:
Identifies all items that define the software configuration
Monitors the status of change requests
Provides a complete listing of all changes since the last baseline
Allows tracking of progress towards the next baseline
Allows previous releases/versions to be extracted for testing
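The record-keeping described above can be sketched minimally: each change against the current baseline is logged with what, why, when, and by whom, so a complete listing since the last baseline is always available. A toy Python sketch (the field names and sample entries are assumptions):

```python
# In-memory log of change requests applied since the last baseline.
changes = []

def record_change(item, reason, author, date):
    """Record what was changed, why, by whom, and when."""
    changes.append({"item": item, "reason": reason,
                    "author": author, "date": date})

def changes_since_baseline():
    """Complete listing of all recorded changes since the last baseline."""
    return list(changes)

# Hypothetical usage:
record_change("order_service.py", "fix rounding defect", "dev1", "2024-03-01")
record_change("Test_v1", "add corner-case tests", "dev2", "2024-03-02")
```

A real SCM tool persists this log and ties each entry to a change-request identifier; the sketch only shows the bookkeeping idea.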
1.Configuration Manager
2. Developer
The developer needs to change the code as per standard development activities or
change requests, and is responsible for maintaining the configuration of the code.
The developer should check the changes and resolve conflicts.
3. Auditor
4. Project Manager:
5. User
The end user should understand the key SCM terms to ensure he has the latest version of the
software.
The SCMP can follow a public standard like IEEE 828 or an organization-specific
standard.
It defines the types of documents to be managed and the document naming convention
(for example, Test_v1).
SCMP defines the person who will be responsible for the entire SCM process and
creation of baselines.
Fix policies for version management & change control
Define tools which can be used during the SCM process
Configuration management database for recording configuration information.
Concurrency Management:
When two or more tasks are happening at the same time, it is known as concurrent
operation. Concurrency in context to SCM means that the same file being edited by
multiple persons at the same time. If concurrency is not managed correctly with SCM
tools, then it may create many pressing issues.
Version Control:
SCM uses archiving method or saves every change made to file. With the help of
archiving or save feature, it is possible to roll back to the previous version in case of
issues.
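The archiving idea can be sketched as a toy in-memory example: every save appends a new version, and any earlier version can be restored when a change causes issues. This is only an illustration of the concept, not how a real SCM tool stores data:

```python
class VersionedFile:
    """Toy sketch of version control: every change is archived, never overwritten."""

    def __init__(self, content=""):
        self.versions = [content]  # version 0 is the initial content

    def save(self, new_content):
        # Archiving: append a new version instead of replacing the old one.
        self.versions.append(new_content)

    def rollback(self, version):
        """Restore an earlier version when a change introduces issues."""
        return self.versions[version]

# Hypothetical usage:
f = VersionedFile("v1 content")
f.save("v2 content with a defect")
restored = f.rollback(0)  # roll back to the original content
```

Real tools store deltas and metadata rather than full copies, but the roll-back guarantee is the same.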
Synchronization:
Users can check out more than one file or an entire copy of the repository. The user
then works on the needed files and checks the changes back into the repository. They
can synchronize their local copy to stay updated with the changes made by other team
members.
UNIT II
SOFTWARE TESTING
Testing is a broad and fast-growing space that is decades-old, rich and equipped
with the agility to constantly subsume new processes, tools and methodologies.
Testing follows certain foundational principles that remain unchanged irrespective
of its age, recent technology trends and domain.
Testing is all about playing the role of end-users. We could be completely distracted by
business needs and the technical implementation details, but we should always operate with a
focus on the end-user’s interest.
That calls for us to go beyond verifying a story’s acceptance criteria and explore the
application, like a typical end-user would - requiring us to understand the targeted user
personas before testing even begins.
Often, teams tend to make trade-offs on end-user needs when they are weighed against
factors like development complexity or timelines. However, the tester’s role is to
systematically leverage the end-user’s perspective and negotiate such trade-offs.
Macro-level testing uses a broader lens to cover functional flows, data propagation between
modules, integration between components and more. For example, the ‘total order amount’
calculation feature could help build the ‘order creation’ flow. And, we can proceed to test the
order creation flow by testing the database, third-party integrations, UI flows, failures in
order creation and more.
What I have realized is most teams focus only on macro-level testing, which usually results in
multiple issues in production – because this kind of testing disregards minor details. Let me
elaborate using the same ‘order creation’ example. Teams relying only on macro-level testing
would have checked for valid order creation and business error flows. But, when the item
prices in production are negative or show an unexpected number of decimals, order creation
breaks – because when testing, they did not focus on micro-level testing.
Micro-level tests can be added as unit and integration tests, while macro-level tests can be
covered as part of functional automation tests, visual tests and so on. Our recommendation is
to constantly zoom in and out of the micro and macro details while testing. Failing to focus
on any of these levels could result in a dip in team confidence due to the unexpected issues in
production.
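As a hypothetical micro-level test for the 'order creation' example above, a unit-level check can reject the negative prices and unexpected decimal precision that macro-level flows tend to miss. The two-decimal currency rule and the function itself are assumptions for illustration:

```python
def validate_item_price(price):
    """Micro-level check: reject inputs that macro-level flows tend to miss."""
    if price < 0:
        raise ValueError("price cannot be negative")
    # Assumed currency rule: at most two decimal places.
    if round(price, 2) != price:
        raise ValueError("price has unexpected precision")
    return price
```

Unit tests exercising this function with negative and over-precise prices are micro-level tests; the end-to-end order-creation flow that calls it is covered by macro-level tests.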
3. FASTER FEEDBACK: -
This principle talks about the early detection of defects so that the defect-fixing cycle and,
consequently, the release cycle can be faster. Defects tend to become costlier when they are
discovered later in the delivery cycle.
For instance, imagine a high-priority bug found two weeks post feature development. First,
this becomes extremely time and effort intensive – creating bug cards, triaging them, tracking
them in iterations and identifying the right developer’s time to fix them. Second, in worst-
case scenarios, we might find it impossible to fix the defect without a major refactoring,
hence delaying the release. I would say, that’s the ultimate cost to pay for a defect!
There’s another notable correlation between time taken to fix a defect and how late it is
found. When a feature is in development, the developer has complete context and can easily
understand bugs’ root causes. This makes fixing them a quick process. However, when said
developer moves on to other features and the codebase continues to grow every day
(refactoring), context is lost and debugging the root cause becomes a longer and costlier
process.
You might ask: how early should we test a piece of code to ensure earlier feedback cycles?
Shift-left testing is centred on faster feedback. Some shift-left testing practices that work
efficiently are dev-box testing, running automated tests on the developer’s machine and CI
and coverage metrics. Implementing the test pyramid should also produce faster feedback.
Additionally, story sign-offs by product owners and a regular cadence of showcases every
sprint to all stakeholders ensures faster feedback on missing business cases.
4. CONTINUOUS FEEDBACK
Fast feedback should always be backed by continuous feedback. It is not enough to test a
feature once and leave it idle until release. We have to keep performing regression testing
on the feature to confirm that the integrations are intact and that refactoring has not
hampered the old functionalities.
These continuous feedback mechanisms help in early fixing of regression defects and prevent
release timeline disruptions. One of the prominent ways teams achieve this is by integrating
all automated tests to the CI – ideally, running all the tests for every commit. If tests take too
long to run, we will have to adopt parallelization techniques. The test pyramid will help avoid
separating the tests, allowing for continuous feedback for every commit.
5. QUANTIFY QUALITY
When trying to achieve high quality via testing, we should correctly measure it as well. Some
of the recommended metrics are defects caught by automated tests in all layers, time taken
from commit to deployment, number of automated deployments to testing environments,
regression defects caught during story testing, automation backlog based on the severity of
test cases, production defects and their severity, usability scores with end-users, failures due
to infrastructure issues and metrics around cross-functional aspects.
Many of these metrics align with the ‘Four key metrics’ that measure quality in terms of code
stability and the team’s delivery tempo. The team’s delivery tempo is derived from the time
taken from commit to deployment and the number of deployments in a day to testing
environments. For instance, one of the Four key metrics, is the ‘deployment frequency,’ and
it needs to be ‘on demand’ for a high-quality performer. Production defects will inform us of
the ‘change fail percentage’ – percentage of changes made to production that fail – which
should be 0-15% for a high performer. When tracked and discussed consistently, metrics like
these empower the team to build high-quality software.
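The change fail percentage mentioned above is a simple ratio; a minimal sketch of the calculation, with hypothetical counts:

```python
def change_fail_percentage(failed_changes, total_changes):
    """One of the Four Key Metrics: share of production changes that fail."""
    if total_changes == 0:
        raise ValueError("no changes recorded")
    return 100 * failed_changes / total_changes

# Hypothetical example: 3 failed changes out of 40 deployed to production.
cfp = change_fail_percentage(3, 40)  # 7.5, within the 0-15% high-performer band
```

Tracking this alongside deployment frequency gives the team both sides of the picture: how often it ships and how often shipping goes wrong.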
Testing is not a siloed activity but requires adequate knowledge of business requirements,
domain knowledge, technical implementation, environment details and similar. This calls for
efficient collaboration and communication amongst all roles in a project team.
The communication could take place via agile ‘ceremonies’ like stand-ups, story kick-offs,
Iteration Planning Meetings (IPMs), dev box testing and through proper documentation like
story cards, Architecture Decision Records (ADRs), test strategies, test coverage reports, etc.
While the communication might not be synchronous within distributed teams, we should
ensure hand-overs via asynchronous mediums like video recordings, documentation, chats,
emails and more.
While testing focuses on finding issues in the application, we should strive to prevent defects
in the first place. An obvious reason to avoid defects is cost – right? I’d compare defect fixes
to painting over a rough patch on an otherwise seamlessly painted wall. Sometimes, the
newly painted patch might not fit with the existing wall, and we will have to paint the entire
wall all over again.
Similarly, software defects could lead to significant architectural changes. Which is why, we
should adopt practices, tools and methods that allow ‘defect prevention’ right from the start.
Here are a few of the practices in today’s software world that could fulfil this principle – the
Three Amigos process, IPMs, story kick-offs, ADRs, shift-left testing, test-driven
development (TDD), pair programming, showcases, story sign-offs by POs, and more.
COMPOSITION OF A TESTING TEAM
A ‘Testing Project Team’ is a team of experts dedicated to ensuring the best quality of
software products. In software development, the testing project team is responsible for
maintaining high levels of quality in software applications and making sure they meet all
user and business requirements.
It closely relates to various project roles: QA project manager, test analyst/designer, tester,
testing developers, and SQA members. QA project managers initiate tests and work together
with testing leads and the client to define acceptance tests.
The development team's job is to address and remove defects within the software and to
cooperate with the project testing team after analysing them.
High-quality software can give businesses a significant competitive advantage and play a key
role in the success of companies. However, in markets with tough competition high-quality
software is not just an advantage but also a basic requirement for staying relevant.
The project testing team helps companies detect hidden errors and defects in applications.
Identifying and detecting these problems before the product goes live is essential for its
success. Without the project testing team, these defects can seep into the final product and
affect its performance in the live environment.
If a company does not stop bugs before the production phase, the cost of resolving these
errors later is much higher. Furthermore, since these defects and errors directly impact the
end-user, they lead to a bad user experience, which is a prime reason for software failure.
The project testing team not only detects these issues before they affect the user but also
helps software teams prevent bugs proactively. Needless to say, investing in a project testing
team is not only beneficial to your product but indispensable given the fierce competition in
IT.
The project testing team needs to work together to achieve the overall objective of projects.
QA Test Managers are the people responsible for building these teams. These teams must
work through shared environments and high levels of interdependence while ensuring
cooperative QA agility.
Project testing teams build the testing infrastructure related to automated test design and
integration. Since complex software products rely on several testing procedures, QA project
managers help create large, multi-dimensional project testing teams in order to maintain the
best quality of software.
Here we will give a step-by-step procedure for building the best project testing team. To build
and manage an effective project testing team, follow the steps below:
a) ESTIMATE DEMAND
First, QA project managers have to forecast which human resources they will need based
on different project plans. Your team composition and size can change based on the type
and requirements of the project.
If you can estimate what kind of team you need, it will be easier to shortlist software
testing team roles and responsibilities. You can also vary the number of members within
your team according to the project complexity and the volume of the project activities.
Designation and Responsibilities:
Tester – Prepares specified automated and manual tests; builds up the test cases; generates
test suites; performs and executes the various tests; records the results; clearly describes
the errors; reports the defects.
Developers in Test teams – Create programs to test; create test automation scripts.
SQA Members – Take charge of quality assurance.
This is why you may need at least 5 members in a project testing team. Everyone has a
different role and you have to assign tasks according to each member’s core
competencies.
b) EVALUATE EXPERTISE
Evaluating the expertise of your testing resources is an essential part of human
resource planning. Unless you assign your team members the right tasks, they can
fail to maintain the quality of software your client expects.
For instance, suppose you have a developer within your team, but you use him as a tester in
the testing team. You assign him the execution of test cases and request him to report
defects to the QA Test Manager within a week.
However, since the developer isn't skilled in testing, he or she will fail to deliver results
comparable to an experienced software tester's. Being a skilled programmer, he may
resolve defects within the software more easily, but asking him to speed through
testing will delay the project.
This is why it’s essential to identify the key competencies of your team members
before you assign them any task. Not only that, but it’s also important to see what
kind of skills a certain task demands.
You should measure team members' skills and abilities against the project goal
and mission. When you have to assign members tasks that they lack the necessary
skills for, it's important to fill in the skill gaps.
Planning Skill Upgrade
Identify Skill Gaps
Sometimes, there are gaps between your team members' skills and the type of expertise
the project demands. As a QA project manager, you have to identify which skills
your testing resources lack so you can create an appropriate training plan for
them. For instance, if some of your team members need to improve their testing
skills, you should help them close that skill gap.
Training and Assessment
In human resource planning, you should consider how existing members are
trained and developed to gain the required skills and develop expertise. QA
project managers need to create a training plan and apply it promptly to fill in the
talent gap.
c) EVALUATION
Putting your testing resources through training is not sufficient on its own. It's important
for QA project managers to monitor and evaluate these programs frequently, so they can
ensure they are effective.
For instance, you can assess your developer's progress by assigning him different tasks to
see whether he has learned from the training initiatives. If the developer finds testing
difficult, the manager should either replace him or consider an alternative training method.
2. BUILD THE PROJECT TESTING TEAM
After you are finished with the human resource plan, you should start building your project
team.
TEAM MISSION
Creating a team mission helps you set your testing team in the right direction. The team
mission allows all testing resources to focus on key objectives and goals, collaborate, and
come to an agreement.
TEAM RESPONSIBILITY
Ensuring your team members are aware of their responsibility is something you should
sort out in the very beginning. Unless testing resources have clear roles and
responsibilities defined, they can’t do what’s expected of them.
TEAM RULES
Team rules set guidelines regarding how testing resources should behave, helping them
work more collaboratively. Although it isn’t mandatory to impose rules on your testing
team, doing so can help your team reach a consensus regarding responsibilities and share
responsibilities.
TEAM MOTIVATION
Motivation is a critical factor in the workplace and a key driver behind performance.
It enables the management to meet the organizational goals, enhances productivity, and
improves overall output.
Unmotivated employees don't focus on work and are not as engaged. Such
unproductive behaviour not only causes unnecessary delays for the company but also
wastes considerable resources, especially if the majority of the workforce is affected by
this. This is why companies need to pay attention to the level of motivation in
employees.
Keeping teams motivated has always been a long-standing challenge for most project
managers. To be a good project manager, you should have the expertise to manage both
the functional and emotional factors of your team.
Negligence in this matter can also cause high-performing, strongly skilled employees
to exhibit negative attitudes due to their emotional burden. If you don't manage both
factors simultaneously, you'll end up delaying important projects that cost far more
than you expected.
1) AUTOMATION SKILLS
You will need more than manual testing because of the incorporation of the
newest technologies, rising software complexity, and integrations in applications.
Software testers and QA engineers should gain automation skills for testing browser
compatibility, performance, and database and integration layers, since automation brings
more accuracy to the business logic and technical checks involved.
Additionally, there are a number of test automation tools that fully complement the
testing approach and have the capabilities to accomplish the duties efficiently.
1. TestGrid
2. RFT
3. Perfecto
4. Katalon Studio
5. Appium
6. Selenium
7. ACCELQ
2) PROFICIENCY IN TEST MANAGEMENT TOOLS
Test management systems help you design test cases, meet test requirements, manage
resources, and do a lot more. To prevent errors from entering production, test
management expertise is essential. You should be familiar with the following test
management tools:
TestLink, one of the top test management solutions, includes test specs, planning,
reporting, and requirement tracking.
Test Pad is a compact test planner that strives to offer sufficient test procedures
without requiring a complex test management infrastructure.
QA Deputy is a fully functional test management application created for small to
medium-sized teams that significantly increases testing efficiency.
TestRail is a test management platform that aids teams in organizing and monitoring
software testing activities.
4) KNOWLEDGE OF SDLC
SDLC stands for the software development life cycle. Testers must understand the
SDLC to organize testing cycles efficiently. With more in-depth knowledge of the
software development life cycle, they will be better able to understand software
complexities and prevent them in the future.
Thanks to the SDLC’s overall framework, they will be able to comprehend the tasks
involved in application development and arrange the testing cycle properly.
A thorough understanding of the SDLC cycle will also enable testers to foresee
application complexity, which will help them decide on the best course of action in
advance. Testers must also get familiar with alternative development methodologies, such
as Kanban, Waterfall, Scrum, Lean, etc.
5) AGILE METHODOLOGY
Agile testing follows the same concepts as agile software development in the software
testing process. Agile testing is consistent with an iterative development approach in
which the development aligns with the client’s needs.
Agile testing is a continuous, non-sequential process. Testing begins at the project’s
outset, and testing and development are continuously integrated. The fundamental
objective of agile development and testing is to produce high-quality products.
In this method, the team pushes itself outside its comfort zone and produces
high-quality results. Software testers should also be well-versed in agile testing tools.
6) ANALYTICAL SKILLS
A successful software tester needs to be a strong analyst. Analytical skills let the tester
simplify complex systems and thoroughly understand the code. They also support the
creation of better test cases, increasing the system's overall productivity.
The tester’s primary responsibility is to pinpoint the issue and offer the most effective
action to resolve it. To succeed, they must be analytically inclined to assess the problems,
faults, and security flaws.
7) COMMUNICATION SKILLS
You should be precise when expressing your views and ideas regarding the
problem and its solution.
You should be able to explain the flaw or process improvement to your coworkers who
test software, as well as to developers, managers, designers, clients, and occasionally
even the CEO.
Write objective bug reports that every employee in the company should be able to
understand.
You can offer actual observations and recommendations for the product’s
enhancement from the customer’s viewpoint.
8) PROJECT MANAGEMENT
Software testing involves both technical and business considerations. Any tester must
be able to take charge of the project to manage both. This means that after testing is
finished, a tester delivers the project.
By learning project management techniques, testers develop problem-solving skills. In
this manner, testers carry the responsibility for managing the end-to-end testing project
and remain answerable for their work to the relevant person.
9) PROBLEM-SOLVING
Problem-solving is a crucial skill for software testers, as it allows them to identify and
resolve issues in the software they are testing. To be effective at problem-solving,
software testers must deeply understand the software they are testing and the underlying
technologies and systems that support it.
They must also be familiar with industry best practices for testing and quality assurance.
By developing this skill, software testers can improve their performance and contribute to
the quality and success of the software they are testing.
10) PLANNING AND DOCUMENTATION
Planning and documentation are essential skills for any software tester. Effective planning
ensures that testing is carried out efficiently and effectively, while thorough
documentation allows testers to keep track of their progress and results.
Planning should include creating a test plan that outlines the scope, objectives, and
approach for testing and identifies any risks or potential issues.
Documentation should include detailed records of the tests performed, the results
obtained, and any identified defects or problems. These skills are critical for ensuring that
software is of high quality and meets users’ needs.
11) CONTINUOUS LEARNING
Continuous learning is also one of the crucial skills for software testers. Today,
technology is constantly evolving, and to be an effective tester you should keep up with
the latest developments and techniques.
This may involve attending workshops, conferences, or online courses, regularly reading
industry publications and staying updated with new tools and software.
Additionally, being open to feedback and learning from mistakes will help you improve
your skills and become a better tester. By continuously learning, you can stay at the
forefront of the field and provide valuable insights to your team.
TYPES OF TESTING
Testing is the process of executing a program to find errors. To make our software
perform well, it should be as error-free as possible. Successful testing uncovers and
removes many of the errors in the software, though it cannot guarantee there are none.
In this section, we first discuss the principles of testing and then the different types
of testing.
PRINCIPLES OF TESTING
All the tests should meet the customer's requirements.
To keep testing objective, it should be performed by a third party.
Exhaustive testing is not possible; we need an optimal amount of testing
based on the risk assessment of the application.
All the tests to be conducted should be planned before being implemented.
Testing follows the Pareto rule (80/20 rule), which states that 80% of errors come
from 20% of program components.
Start testing with small parts and extend to larger parts.
TYPES OF TESTING
As software is being developed, it’s tested to ensure everything works properly and
identify bugs, vulnerabilities, or other issues. Testing can be done manually (and it often
is), but manual testing is repetitive and time-consuming. So, developers turn to
automation testing.
Automation testing is both practical and cost-effective. As the name suggests, it involves
automating the testing process and the management and application of test data and
results to improve software.
THE BENEFITS OF AUTOMATION TESTING:-
Given the benefits, how does one know when to automate the testing process? The
following are some of the test types that should be automated:
Tests that are repetitive and time-consuming.
Tests that run for multiple builds.
Tests that are vulnerable to human error.
Tests of high-risk, frequently used functions within the software.
Tests that can’t be done manually.
Tests that need to be run on multiple software or hardware configurations and
platforms.
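The last item above, tests repeated across many configurations, is where automation pays off most directly. A minimal Python sketch of the idea; the configurations and the `render_title` feature under test are hypothetical stand-ins for a real application:

```python
# One automated check repeated across several hypothetical platform
# configurations - the kind of repetition manual testing handles poorly.

CONFIGS = [
    {"os": "Windows", "browser": "Chrome"},
    {"os": "Linux", "browser": "Firefox"},
    {"os": "macOS", "browser": "Safari"},
]

def render_title(os_name: str, browser: str) -> str:
    # Stand-in for the feature under test; a real suite would launch
    # the application on each platform instead.
    return f"App on {browser}/{os_name}"

def run_suite() -> list:
    """Run the same check under every configuration and collect failures."""
    failures = []
    for cfg in CONFIGS:
        title = render_title(cfg["os"], cfg["browser"])
        if not title.startswith("App on "):
            failures.append(f"{cfg}: unexpected title {title!r}")
    return failures
```

Adding a new platform becomes a one-line change to `CONFIGS` rather than another manual test pass.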
Setting up these automated tests requires careful planning, so production teams create an
automation plan or strategy first. Different automated tests occur at various stages in the
development process, so goals and milestones must be established early to avoid haphazard
testing and redundancy.
Automated testing is commonly divided into small, manageable units with focused
objectives. This makes it easier to update, edit, or augment tests.
AUTOMATED TESTING PROCESS
The following steps are followed in an automation process.
Step 5) Maintenance
Execution can be performed using the automation tool directly or through a test
management tool that invokes the automation tool.
Example: Quality Center is a test management tool that in turn invokes QTP for the
execution of automation scripts. Scripts can be executed on a single machine or a group of
machines. Execution can be done overnight to save time.
2) MANUAL TESTING:-
It is a type of software testing in which test cases are executed manually by a tester
without using any automated tools. The purpose of Manual Testing is to identify the bugs,
issues, and defects in the software application. Manual software testing is the most
primitive technique of all testing types and it helps to find critical bugs in the software
application.
Any new application must be manually tested before its testing can be automated. Manual
Software Testing requires more effort but is necessary to check automation feasibility.
Manual Testing concepts do not require knowledge of any testing tool. One of the
fundamentals of software testing is that “100% automation is not possible“. This makes
Manual Testing imperative.
COMPARISON: BLACK BOX vs GRAY BOX vs WHITE BOX TESTING
1. Granularity: Black box testing has low granularity; gray box testing has a medium level
of granularity; white box testing has high-level granularity.
2. Performed by: Black box testing is done by end-users (called user acceptance testing)
and also by testers and developers; gray box testing is likewise done by end-users as well
as by testers and developers; white box testing is generally done by testers and developers.
4. Exhaustiveness: Black box testing is likely to be less exhaustive than the other two;
gray box testing is in-between; white box testing is the most exhaustive among all three.
Test-case basis: Black box test cases rest on the functional specifications, as the internals
are not known; gray box test cases gain variety/depth on account of high-level knowledge
of the internals; white box test cases use a relevant variety of data.
10. Test design techniques: Black box – decision table testing, all-pairs testing,
equivalence partitioning, error guessing; gray box – matrix testing, regression testing,
pattern testing, orthogonal array testing; white box – control flow testing, data flow
testing, branch testing.
A) FUNCTIONAL TESTING:-
Functional testing is a type of software testing in which the system is tested against
the functional requirements and specifications. Functional testing ensures that the
requirements or specifications are properly satisfied by the application. This type of
testing is particularly concerned with the result of processing. It focuses on
simulation of actual system usage but does not develop any system structure
assumptions. It is basically defined as a type of testing which verifies that each
function of the software application works in conformance with the requirement and
specification. This testing is not concerned about the source code of the application.
Each functionality of the software application is tested by providing appropriate test
input and comparing the actual output with the expected output.
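The input/expected-output comparison described above can be sketched as a small table-driven test. The `validate_login` function and its rules are hypothetical, invented only to illustrate the black-box style:

```python
# Black-box functional check: only inputs and expected outputs are used;
# the internals of validate_login are never inspected.

def validate_login(username: str, password: str) -> str:
    # Hypothetical function under test.
    if not username or not password:
        return "error: missing credentials"
    if len(password) < 8:
        return "error: password too short"
    return "ok"

# (input, expected output) pairs drawn from the functional specification.
CASES = [
    (("alice", "s3cretpass"), "ok"),
    (("", "s3cretpass"), "error: missing credentials"),
    (("alice", "short"), "error: password too short"),
]

def run_functional_tests() -> int:
    """Compare actual output with expected output; return failure count."""
    failures = 0
    for (user, pwd), expected in CASES:
        if validate_login(user, pwd) != expected:
            failures += 1
    return failures
```

New requirements become new rows in `CASES`; the comparison loop never changes.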
FUNCTIONAL TESTING:
It helps to enhance the behaviour of the application.
It tests what the product does.
Examples: Unit Testing, Smoke Testing, Integration Testing, Regression Testing.
NON-FUNCTIONAL TESTING:
It helps to improve the performance of the application.
It describes how the product does it.
Examples: Performance Testing, Load Testing, Stress Testing, Scalability Testing.
A) UNIT TESTING
B) INTEGRATION TESTING
C) SYSTEM TESTING
1) UNIT TESTING
Unit testing is a method of testing individual units or components of a
software application. It is typically done by developers and is used to
ensure that the individual units of the software are working as intended.
Unit tests are usually automated and are designed to test specific parts of
the code, such as a particular function or method. Unit testing is done at
the lowest level of the software development process, where individual
units of code are tested in isolation.
It helps to identify bugs early in the development process before they become
more difficult and expensive to fix.
It helps to ensure that changes to the code do not introduce new bugs.
It makes the code more modular and easier to understand and maintain.
It helps to improve the overall quality and reliability of the software.
Note: Some popular frameworks and tools that are used for unit testing
include JUnit, NUnit, and xUnit.
It’s important to keep in mind that Unit Testing is only one aspect of software
testing and it should be used in combination with other types of testing such as
integration testing, functional testing, and acceptance testing to ensure that the
software meets the needs of its users.
It focuses on the smallest unit of software design. In this, we test an individual unit
or group of interrelated units. It is often done by the programmer by using sample
input and observing its corresponding outputs.
Examples:
Checking that a loop, method, or function in the program works fine.
Catching misunderstood or incorrect arithmetic precedence.
Catching incorrect initialization.
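Defects like those listed, loop errors, arithmetic precedence, incorrect initialization, are exactly what unit tests pin down. A sketch using Python's built-in unittest framework; `running_average` is a hypothetical unit under test:

```python
import unittest

def running_average(values):
    """Mean of a list of numbers; returns 0.0 for an empty list."""
    total = 0.0  # an incorrect initialization here is a classic unit-level bug
    for v in values:
        total += v
    return total / len(values) if values else 0.0

class RunningAverageTest(unittest.TestCase):
    def test_simple_mean(self):
        self.assertEqual(running_average([2, 4, 6]), 4.0)

    def test_empty_input(self):
        # Boundary case that a loop or initialization bug would break.
        self.assertEqual(running_average([]), 0.0)
```

Saved as a file, the tests run in isolation with `python -m unittest <file>`, exercising the unit without any of the surrounding system.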
2) INTEGRATION TESTING
EXAMPLE:
(a) BLACK BOX TESTING:- It is used for validation. In this, we ignore the internal
working mechanisms and focus on the output.
3) SYSTEM TESTING
System Testing is a type of software testing that is performed on a complete
integrated system to evaluate the compliance of the system with the
corresponding requirements. In system testing, components that have passed integration
testing are taken as input. While the goal of integration testing is to detect any
irregularity between the units that are integrated together, system testing detects
defects within both the integrated units and the whole system. The result of
system testing is the observed behaviour of a component or a system when it is
tested. System Testing is carried out on the whole system in the context of either
system requirement specifications or functional requirement specifications or in
the context of both. System testing tests the design and behaviour of the system
and also the expectations of the customer.
SYSTEM TESTING PROCESS: System Testing is performed in the following
steps:
Test Environment Setup: Create testing environment for the better quality
testing.
Create Test Case: Generate test case for the testing process.
Create Test Data: Generate the data that is to be tested.
Execute Test Case: After the generation of the test case and the test data, test
cases are executed.
Defect Reporting: Defects in the system are detected and reported.
Regression Testing: It is carried out to test for side effects of the testing
process.
Log Defects: Detected defects are logged and then fixed.
Retest: If a test is not successful, the test is performed again after the fix.
A) PERFORMANCE TESTING
B) USABILITY TESTING
C) COMPATIBILITY TESTING
1) PERFORMANCE TESTING:-
Performance testing is a form of software testing that focuses on how a system
performs under a particular load. It is not about finding software bugs or defects.
The different performance testing types measure against benchmarks and standards.
Performance testing gives developers the diagnostic information they need to eliminate
bottlenecks.
I. LOAD TESTING:-
2. Identifying Bottlenecks
Through load testing, weak points and areas that require optimization
can be detected, allowing for targeted improvements that enhance
overall performance.
3. Evaluating Scalability
As user bases and demands grow, load testing helps determine whether
a system can adapt and handle the increased workload without
compromising functionality or performance.
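A toy illustration of these load-testing ideas in Python: concurrent simulated users drive a hypothetical request handler while response times and throughput are recorded. A real load test would target a deployed service with a dedicated tool, so treat this purely as a sketch of the mechanics:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload: str) -> str:
    # Hypothetical system under load; a real test would hit a live service.
    time.sleep(0.01)  # simulated processing latency
    return payload.upper()

def load_test(users: int = 20, requests_per_user: int = 5):
    """Fire concurrent requests and report response time and throughput."""
    timings = []  # list.append is thread-safe in CPython

    def one_user(uid: int):
        for i in range(requests_per_user):
            start = time.perf_counter()
            handle_request(f"user{uid}-req{i}")
            timings.append(time.perf_counter() - start)

    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for uid in range(users):
            pool.submit(one_user, uid)
    elapsed = time.perf_counter() - started

    total = users * requests_per_user
    return {
        "requests": total,
        "avg_response_s": sum(timings) / total,
        "throughput_rps": total / elapsed,
    }
```

Raising `users` until `avg_response_s` degrades is exactly the bottleneck-hunting and scalability evaluation described above.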
1. Baseline Testing
Stress testing is crucial for identifying the system's breaking point and
maximum operating capacity, understanding its failure behaviour, and ensuring
that it can recover gracefully when pushed beyond its limits.
3. Soak Testing
Soak testing ensures that the system can maintain its performance and
reliability over time.
4. Spike Testing
5. Volume Testing
Scalability testing lets you determine how your application scales with
increasing workload.
Determine the user limit for the Web application.
Determine client-side degradation and end user experience under load.
Determine server-side robustness and degradation.
Response Time
Screen transition
Throughput
Time (Session time, reboot time, printing time, transaction time, task
execution time)
Performance measurement with a number of users
Requests per second, transactions per second, hits per second
Network Usage
CPU / Memory Usage
Web Server (request and response per seconds)
Performance measurement under load.
To determine the scope and objective of the testing, and to ensure that the
application server(s) do not crash during the load test executions.
To determine the business issues and verify the system performance and load from
the end-user perspective.
To assign the different roles and responsibilities, like creating the test plan, test
case design, test case review, test execution, etc.
To ensure the test deliverables are produced within the specified time.
To ensure proper load testing tools and an experienced team are present.
To measure the risk and cost involved in the testing; this will determine the
cost of each execution in terms of CPU utilization and memory.
To determine the defect tracking and reporting process and its proper mapping to
the requirements.
2) USABILITY TESTING:-
Usability testing is the practice of testing how easy a design is to use with a group
of representative users. It usually involves observing users as they attempt to
complete tasks and can be done for different types of designs. It is often
conducted repeatedly, from early development until a product’s release.
2. TEST CASE DESIGNING: The usability test cases are all manual test cases.
Finding experienced manual testers with good design knowledge is crucial.
Usability testing is mostly an R&D-based task; one needs to do a thorough market
and competitor analysis.
3. TEST CASES: Develop test cases to have maximum test coverage.
This needs to ensure all the features, like memorability and efficiency, are
covered and that the product is error-free. The same product on different screen
sizes should provide a good experience.
4. DATA ANALYSIS: Usability testing is all about data analysis: how much
time the user takes to reach the payment screen across multiple UI options, the
impact of advertisements in particular sections of the product on revenue, and
whether changing certain features has helped gain or lose traffic.
5. REPORT GENERATION: Reports give a clear understanding of the
entire STLC, and the gains and losses become evident. Experimenting with the user
interface at the starting stage is essential; otherwise, at a later stage, changes could
cause adverse damage and impact to the company.
Usability testing gives real-time feedback to the development team about market
validation.
Usability testing is cheaper at the development stage; once the product reaches
production, an exponentially larger investment is required to fix problems.
It helps to make the product efficient and applicable.
Usability testing increases the budget and timeline manifold, which is tough for any
startup.
Usability testing can lead to the leak of private information about the product
and company.
Finding the correct users is always challenging, as everyone works a fixed job,
so finding good volunteers is tough.
3) COMPATIBILITY TESTING:-
Compatibility testing is software testing that comes under the non-functional
testing category; it is performed on an application to check its compatibility
(running capability) on different platforms and environments. This testing is done
only once the application becomes stable. Simply put, the compatibility test
aims to check the developed software application's functionality on various
software and hardware platforms, networks, browsers, etc. Compatibility
testing is very important from a product production and implementation point of
view, as it is performed to avoid future issues regarding compatibility.
1. SOFTWARE :
2. HARDWARE :
Checking compatibility with a particular size of
RAM
ROM
Hard Disk
Memory Cards
Processor
Graphics Card
3. SMARTPHONES :
Checking compatibility with different mobile platforms like Android, iOS, etc.
4.NETWORK :
Checking compatibility with different :
Bandwidth
Operating speed
Capacity
Along with these, other types of compatibility testing are also performed, such as
browser compatibility (checking software compatibility with different browsers like
Google Chrome, Internet Explorer, etc.), device compatibility, software version
compatibility, and others.
Testing the application in the same environment but with different versions. For
example, to test the compatibility of the Facebook application on your Android mobile,
first check compatibility with Android 9.0 and then with Android 10.0 for the
same version of the Facebook app.
Testing the application with the same environment version but different application
versions. For example, first check the compatibility of a lower version of the Facebook
application with Android 10.0 (or a version of your choice), and then a higher version
of the Facebook application with the same version of Android.
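The version-pair checks described above generalize to a compatibility matrix: every application version crossed with every platform version. A hypothetical Python sketch; the version numbers and the minimum-support rule are invented for illustration:

```python
# A compatibility matrix in the spirit of the Facebook/Android example above:
# run the same check for every (app version, platform version) combination.

APP_VERSIONS = ["11.0", "12.0"]
PLATFORM_VERSIONS = ["9.0", "10.0"]
# Assumed rule: each app version has a minimum supported platform version.
MIN_SUPPORTED_PLATFORM = {"11.0": "9.0", "12.0": "10.0"}

def is_compatible(app: str, platform: str) -> bool:
    """An app runs on any platform at or above its minimum supported one.
    (float() works for simple x.y versions; real schemes need real parsing.)"""
    return float(platform) >= float(MIN_SUPPORTED_PLATFORM[app])

def compatibility_matrix():
    """Return {(app, platform): result} for every combination."""
    return {
        (app, plat): is_compatible(app, plat)
        for app in APP_VERSIONS
        for plat in PLATFORM_VERSIONS
    }
```

In practice each cell would launch the app on that platform; here the matrix just makes the combinations explicit so none is skipped.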
WHY COMPATIBILITY TESTING IS IMPORTANT?
1. It ensures complete customer satisfaction.
2. It provides service across multiple platforms.
3. It identifies bugs during the development process.
COMPATIBILITY TESTING DEFECTS:
1. Variations in the user interface.
2. Changes with respect to font size.
3. Alignment issues.
4. Issues related to the existence of broken frames.
5. Issues related to overlapping content.
A. INCREMENTAL TESTING
B. NON-INCREMENTAL TESTING
A. INCREMENTAL TESTING:-
Like development, testing is also a phase of the SDLC (Software Development
Life Cycle). Different tests are performed at different stages of the
development cycle. Incremental testing is one of the approaches commonly used
in the software field during the integration testing phase, which is performed
after unit testing. Several stubs and drivers are used to test the modules one
after another, which helps in discovering errors and defects in specific modules.
It is an approach in which developers integrate the modules one after another
using stubs or drivers to unfold the defects.
2. BOTTOM-UP INTEGRATION –
This type of integration testing occurs from bottom to top. Control flow
also takes place in an upward direction. Unavailable components or
systems are easily substituted by drivers.
The methodologies for incremental testing include the following steps:
All the modules are individually tested using unit tests.
The modules are combined and tested, incrementing by one.
The most recent module is added to the previously integrated modules and then goes
through the test process.
Finally, the last module is added and all the modules are tested together
for a successful integration.
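The stubs and drivers mentioned above can be sketched with Python's unittest.mock: an unfinished payment module is replaced by a stub so the data-processing module can be integrated and tested first. All module and field names here are illustrative assumptions:

```python
from unittest.mock import Mock

# Real module, already built and unit-tested.
def process_order(order, payment_gateway):
    """Data-processing module: charges via the (possibly stubbed) gateway."""
    receipt = payment_gateway.charge(order["amount"])
    return {"order": order["id"], "receipt": receipt}

def integration_step():
    # The payment module is not built yet, so a stub stands in for it,
    # letting integration proceed one module at a time.
    gateway_stub = Mock()
    gateway_stub.charge.return_value = "receipt-001"

    result = process_order({"id": 42, "amount": 9.99}, gateway_stub)

    # Driver-side checks: the integrated pair behaves as expected.
    assert result == {"order": 42, "receipt": "receipt-001"}
    assert gateway_stub.charge.call_count == 1
    return result
```

When the real payment module arrives, the stub is swapped out and the same checks rerun, incrementing the integration by one module.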
Each module has its specific significance, and each gets a role to play during
testing as the modules are incremented individually.
Defects are detected in smaller modules rather than having to locate errors in,
and then edit and re-correct, large files.
It's more flexible and cost-efficient as per requirements and scope.
The customer gets the chance to respond to each build.
B. NON-INCREMENTAL TESTING:-
In non-incremental (big-bang) testing, all the modules are integrated at once
and tested as a whole. For example, consider a system with the following modules:
UI module,
Data Processing module,
Database module,
Reporting module
UI module: This module is responsible for providing the user interface for the
system.
Data Processing module: This module is responsible for processing data from the
user interface and passing it to the Database module.
Database module: This module is responsible for storing and retrieving data for the
system.
Reporting module: This module is responsible for generating reports based on data
stored in the Database module.
ADVANTAGES:
DISADVANTAGES:
4) REGRESSION TESTING
EXAMPLE
In a school records system, suppose we have staff, student, and finance modules.
Combining these modules and checking whether their integration still works fine
after a change is regression testing.
Smoke Testing
Smoke Testing is done to make sure that the software under testing is ready or
stable for further testing
It is called a smoke test because an initial pass is done to check whether the
build "catches fire or smoke" when first switched on.
Example:
If the project has 2 modules, then before moving to module 2, make sure that module 1
works properly.
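A smoke test can be sketched as a handful of shallow checks run before any deeper testing; the application functions here are hypothetical:

```python
# A smoke test: a few shallow checks that the build is stable enough
# for further testing. The app functions here are invented stand-ins.

def app_starts():
    return True  # stand-in for launching the application

def login(user, password):
    # stand-in for the critical login path
    return user == "admin" and password == "secret"

def smoke_test():
    # If any critical path fails, stop: the build is not worth testing further.
    checks = [app_starts(), login("admin", "secret")]
    return all(checks)

assert smoke_test() is True  # build accepted for further testing
```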
Alpha Testing
Alpha testing is a type of validation testing. It is a type of acceptance testing that is
done before the product is released to customers. It is typically done by QA people.
Example:
When software testing is performed internally within the organisation.
Beta Testing
The beta test is conducted at one or more customer sites by the end-users of the
software. This version is released to a limited number of users for testing in a
real-time environment.
Example:
When software testing is performed by a limited number of external users.
System Testing
System Testing is carried out on the whole system in the context of either system
requirement specifications or functional requirement specifications or in the context
of both. The software is tested such that it works fine for the different operating
systems. It is covered under the black box testing technique. In this, we just focus on
the required input and output without focusing on internal work. In this, we have
security testing, recovery testing, stress testing, and performance testing.
Example:
This includes functional as well as nonfunctional testing.
Stress Testing
In Stress Testing, we give unfavorable conditions to the system and check how it
performs under those conditions.
Example:
i. Test cases that require maximum memory or other resources are executed.
ii. Test cases that may cause thrashing in a virtual operating system.
iii. Test cases that may cause excessive disk requirements.
Performance Testing
It is designed to test the run-time performance of software within the context of an
integrated system. It is used to test the speed and effectiveness of the program. It is
also called load testing. In it, we check what the performance of the system is
under a given load.
Example:
Checking several processor cycles.
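A minimal load-check sketch; the request handler and the time budget are invented for illustration:

```python
import time

def handle_request(n):
    # Hypothetical unit of work whose run-time we want to measure.
    return sum(range(n))

def measure(load, budget_seconds):
    # Run `load` requests and check the total time stays within budget.
    start = time.perf_counter()
    for _ in range(load):
        handle_request(10_000)
    elapsed = time.perf_counter() - start
    return elapsed <= budget_seconds

# e.g. 1000 requests should finish within a generous 5-second budget
assert measure(1000, 5.0)
```

A real load test would use many concurrent clients and a dedicated tool; this only illustrates the idea of checking speed against a stated budget.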
Object-Oriented Testing
Object-Oriented Testing is a combination of various testing techniques that
help to verify and validate object-oriented software. This testing is done in the
following manner:
Testing of Requirements,
Design and Analysis of Testing,
Testing of Code,
Integration testing,
System testing,
User Testing.
Acceptance Testing
Acceptance testing is done by the customers to check whether the delivered product
performs the desired tasks, as stated in the requirements. Acceptance test plans
are used for discussing the tests with the customer and for executing the project.
Advantages of Software Testing
Improved software quality and reliability.
Early identification and fixing of defects.
Improved customer satisfaction.
Increased stakeholder confidence.
Reduced maintenance costs.
Disadvantages of Software Testing
Time-Consuming and adds to the project cost.
This can slow down the development process.
Not all defects can be found.
Can be difficult to fully test complex systems.
Potential for human error during the testing process.
EVALUATING THE QUALITY OF TEST CASES
Software testing is an essential process for ensuring the quality and reliability of software
products. The efficiency of testing activities depends largely on test case quality, which is
one of the major concerns of software testing. Unfortunately, at the moment there is no
clear guideline that software testers can refer to for producing good-quality test cases.
Hence, producing such a guideline is certainly required. To construct a pragmatic
guideline, it is crucial to identify the factors that lead to designing good-quality test cases.
The existing test case quality factors are not comprehensive and need further investigation
and improvement. Therefore, a content analysis was conducted to identify test case
quality factors from the points of view of software testing experts, as published on
software testing websites. These websites provide explicit information about the quality of
test cases in order to help avoid poorly designed test cases.
Software testing is important because the impact of untested or underperforming software can
have a trickle-down or domino effect on thousands of users and employees.
For example, if a web application that sells a product works too slowly, customers may get
impatient and buy a similar product elsewhere. Or, if a database within an application outputs
the wrong information for a search query, people may lose trust in the website or the company
in general.
Software Testers help prevent these kinds of corporate faux pas. Plus, software testing can
help ensure the safety of users or those impacted by its use, particularly if an application is
used to run a critical element of a town or city’s infrastructure.
In the software engineering process, testing is a key element of the development lifecycle. In
a waterfall development system, Software Testers may be called in after an application has
been created to see if it has any bugs and how it performs. The Testers’ feedback is critical to
the process because it helps engineers fine-tune the end product.
In a DevOps environment, software testing is often done at various stages of development
because the DevOps system relies on constant feedback. In this development framework,
Testers may assess a certain aspect of the software’s function according to the team’s current
phase of development.
For example, if a web app needs to integrate well with mobile devices, one group of Software
Testers may focus on the app’s performance on iOS and Android devices, while another
group of Testers checks how it performs on macOS or Windows.
Similarly, granular elements of an application can be run through tests. This can include how
well it processes information from interactive databases or the flow and feel of the user
interface.
The input from Testers can make it easier and faster to fine-tune key elements of an
application’s performance, particularly from the perspective of an end-user.
TEST SUITE REDUCTION TECHNIQUES
1. REQUIREMENT BASED
The main purpose of test suite reduction is to satisfy all the testing requirements with a
minimum number of test cases. One such way is to generate test cases based on requirements
by Requirement Optimization.
All the test cases of each testing requirement are generated, and then the greedy algorithm is
applied to the constructed test suites for reduction. The redundancy in test suites and size can
be reduced using model-checker-based techniques to create test cases. The requirement
optimization is good when dealing with a finite Boolean expression that classifies the
requirements as true or false test cases.
In order to maintain the effectiveness of fault localization, a technique called dynamic
domain reduction (DDR) is also used, which helps in keeping the system free from errors at
the same time, keeping the efficiency by reducing the number of test cases. DDR is good
when dealing with arrays, loops, and expressions.
The third one is the Ping-Pong technique, which uses a heuristic to reorder the test
cases and provides a good, but not optimal, solution. It takes requirements expressed
in natural language.
2. COVERAGE BASED
The main purpose of the Coverage-based reduction technique is to ensure that the maximum
number of paths of a given program is executed. Fault detection preservation is an important
aspect of test case reduction in Regression testing.
This is done with the help of CBR (Case Base Reasoning). CBR has three classifications,
namely, Case, Auxiliary, and Pivotal.
A Case-based system searches for the most similar past problems to solve a new problem, i.e., it acts as a memory.
An Auxiliary-based case can be deleted without affecting competence, but it does
affect the system’s efficiency.
A Pivotal-based case has a direct impact on the system competence if deleted.
CBR uses three methods for test case reduction:
Test Case Complexity for Filtering (TCCF): Coverage set, Reachability set, and
Auxiliary set are determined, and the complexity for each test case is calculated. The
test case with the minimum complexity value is removed.
Test Case Impact for Filtering (TCIF): The impact of the test cases is checked
based on their ability to detect faults when these test cases are removed.
Path Coverage for Filtering (PCF): It is a structure testing that chooses test cases
that determine the path to be taken within the program structure.
3. GENETIC ALGORITHM
A fitness value depending on the coverage and runtime of test cases is calculated.
Only the tests that are fit enough are allowed into the reduced suite.
This process is repeated until an optimized test suite is found.
The results showed that the proposed test suite reduction technique was cost-effective and
had generality.
One of the major advantages of this algorithm is that it helps in test case reduction along with
a simultaneous decrease in the total run time. However, it fails when an examination of the
fault detection capability along with other criteria is asked for.
4. CLUSTERING
The data mining approach of clustering is used to reduce the test cases in the test
suite and improve efficiency. With the help of clustering, the program can be checked
with any one of the clustered test cases rather than with all the test cases produced
by the independent paths.
It is simply based on selecting the test cases based on coverage and distribution. The most
common techniques include Construction algorithms, Graph theoretical algorithms,
Optimization algorithms, and Hierarchical algorithms. They do produce a smaller set of test
cases but with reduced fault detection ability.
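A toy sketch of clustering-based reduction, grouping test cases by an invented coverage signature and keeping one representative per cluster:

```python
# Clustering-based reduction sketch: test cases with identical coverage
# signatures are grouped, and one representative per cluster is kept.
# The coverage data here is invented for illustration.

coverage = {
    "t1": frozenset({"path_a", "path_b"}),
    "t2": frozenset({"path_a", "path_b"}),   # same cluster as t1
    "t3": frozenset({"path_c"}),
    "t4": frozenset({"path_c"}),             # same cluster as t3
}

def reduce_by_cluster(cov):
    clusters = {}
    for test, paths in sorted(cov.items()):
        # The first test seen with a given signature represents its cluster.
        clusters.setdefault(paths, test)
    return sorted(clusters.values())

print(reduce_by_cluster(coverage))  # ['t1', 't3']
```

Note how the reduced suite still touches every path, but any fault that only one member of a cluster would have caught is lost, matching the reduced fault-detection caveat above.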
5. GREEDY ALGORITHM
It is one of the popular code-based reduction techniques and is applied to test suites
obtained from model-based techniques. It repeatedly selects the test case that satisfies
the maximum number of unsatisfied requirements, until all the testing requirements are
satisfied, producing a reduced test suite. This algorithm works on the basis of the
relationship that exists between testing requirements and test cases.
An advantage of the greedy algorithm is that it provides a significant reduction in the
total number of test cases, but it involves a random selection of test cases when a tie
occurs. It also needs to be optimized for large-scale test suites.
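The greedy selection described above can be sketched as follows; the requirement coverage data is invented:

```python
# Greedy reduction sketch: repeatedly pick the test case that satisfies
# the most still-unsatisfied requirements. Requirement data is invented.

tests = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3", "r4"},
    "t3": {"r4"},
    "t4": {"r1", "r5"},
}

def greedy_reduce(tests):
    uncovered = set().union(*tests.values())
    suite = []
    while uncovered:
        # Pick the test covering the most uncovered requirements
        # (ties broken by name, mirroring the "random selection" caveat).
        best = max(sorted(tests), key=lambda t: len(tests[t] & uncovered))
        suite.append(best)
        uncovered -= tests[best]
    return suite

print(greedy_reduce(tests))  # ['t2', 't4'] covers r1..r5
```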
6. FUZZY LOGIC
Another way to perform the optimization of test suites is by using fuzzy logic. This is
considered a safe technique, as it helps in reducing the regression testing size along
with the execution time. The level of testing used here is based on an objective
function, which is quite similar to human judgment.
Genetic algorithm and Swarm optimization combined with fuzzy logic can be used to make
optimizations in the test suite for multi-objective selection criteria. Some CI-based
approaches are often used to achieve test suite optimization and test suite analysis for safe
reduction, which can then be executed using control flow graphs.
These graphs are used for traversing test cases of optimal solutions. Often recommended, this
method is considered to be safer than other methods for regression testing.
7. PROGRAM SLICING
It is a technique used to check a program with respect to a specific property and build
a slice set. This set consists of the statements that affect the value computed at a
particular statement, i.e., the output statement of a program, for some given input values.
This technique helps to show the control flow for each test case in a program. There are three
types of slicing techniques:
Static Slicing
Dynamic Slicing
Relevant Slicing
The number of required test cases can be decreased using slicing techniques, thereby
decreasing the time and cost of testing.
8. HYBRID ALGORITHM
This algorithm combines the efficient approximation of the genetic algorithm with the greedy
approach to produce high-quality Pareto fronts in order to achieve multiple objectives. Here,
the objective functions are considered as a mathematical description of the test criterion.
A cost-effective version of the Greedy algorithm is used for Statement coverage and
Computational cost. For Fault detection, code coverage, fault coverage, and execution time
are also considered for optimization.
REQUIREMENTS FOR EFFECTIVE TESTING
Testing is essential because we all make mistakes. Some of those mistakes are not
important, but some are expensive or could be life-threatening. We have to test
everything that we produce because things can go wrong; humans can make mistakes
at any time.
Human errors can cause a defect or failure at any stage of the software development
life cycle. The results are classified as trivial or catastrophic, depending on the
consequences of the error.
The requirement of rigorous testing and their associated documentation during the
software development life cycle arises because of the below reasons:
To identify defects
To reduce flaws in the component or system
To increase the overall quality of the system
There can also be a requirement to perform software testing to comply with legal
requirements or industry-specific standards. These standards and rules can specify
what kinds of techniques we should use for product development. For example, the
motor, avionics, medical, and pharmaceutical industries all have standards covering
the testing of the product.
The points below show the significance of testing for a reliable and easy-to-use
software product:
The testing is important since it discovers defects/bugs before the delivery to the
client, which guarantees the quality of the software.
It makes the software more reliable and easier to use.
Thoroughly tested software ensures reliable and high-performance software operation.
For example, assume you are using a net banking application to transfer an amount
to your friend's account. You initiate the transaction, get a successful transaction
message, and the amount is also deducted from your account. However, your friend
confirms that his/her account has not received any credit yet. Likewise, your account
is not reflecting a reversed transaction either. This will surely make you upset and
leave you an unsatisfied customer.
Now, the question arises, why did it happen? It is because of the improper testing of
the net banking application before release. Thorough testing of the website for all
possible user operations would lead to early identification of this problem. Therefore,
one can fix it before releasing it to the public for a smoother experience.
TESTING'S CONTRIBUTION TO SUCCESS
In the above example, we can observe that due to the presence of defects, the system
failed to perform the required operation and didn't meet the client's requirements.
Appropriate testing techniques applied at each test level, along with a proper level of
test expertise, ensure a definite reduction in the frequency of such software
failures.
Sometimes, we test a fully developed software product against the user requirement
and find that some basic functionality was missing. It may happen because of a
mistake in the requirement gathering or the coding phase. Then to fix such types of
errors, we may have to start the development again from scratch. Fixing such kinds of
mistakes becomes very tedious, time-consuming, and expensive. Therefore, it is
always desirable to test the software in its development phase.
Ease of use is a simple concept; it specifies how easily the intended users can use the
final product. The software testing ensures the construction of the software product in
a way that meets the user's expectations regarding compliance with the requirements
in a comfortable, satisfactory, and simplistic manner.
Software tests help developers find errors and scenarios to reproduce the error, which
in turn helps them to fix it quickly. Besides, software testers can work in parallel with
the development team, thus understanding the design, risk areas, etc. in detail. This
knowledge exchange between testers and developers accelerates the entire
development process.
TEST ORACLE
A test oracle is a mechanism, different from the program itself, that can be used to check
the correctness of a program's output for test cases. Conceptually, we can consider testing
as a process in which test cases are given to both the oracle and the program under test.
The outputs of the two are then compared to determine whether the program behaves correctly
for the test cases. This is shown in the figure.
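A small sketch of the oracle idea, using a slow but trusted reference implementation as an automated oracle for the program under test (both functions are illustrative):

```python
# Oracle sketch: a trusted reference implementation serves as the oracle
# for a faster program under test. Both functions are invented examples.

def program_under_test(xs):
    # Optimized implementation being tested.
    return sorted(xs)

def oracle(xs):
    # Trusted reference: naive selection sort as an independent check.
    xs = list(xs)
    out = []
    while xs:
        m = min(xs)
        xs.remove(m)
        out.append(m)
    return out

# For each test case, compare the program's output against the oracle's.
for case in ([3, 1, 2], [], [5, 5, 1]):
    assert program_under_test(case) == oracle(case)
```

If a comparison fails, the oracle itself must be checked before declaring a defect in the program, as the text below explains.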
Testing oracles are required for testing. Ideally, we want an automated oracle, which
always gives the correct answer. However, oracles are often human beings, who mostly
calculate by hand what the output of the program should be. As it is often very difficult to
determine whether the behavior corresponds to the expected behavior, human oracles
may make mistakes. Consequently, when there is a discrepancy between the program's
output and the oracle's result, we must verify the result produced by the oracle before
declaring that there is a defect in the program.
Human oracles typically use the program's specifications to decide what the correct
behavior of the program should be. To help the oracle determine the correct behavior, it is
important that the behavior of the system or component is explicitly specified and that the
specification itself is error-free; in other words, that it actually specifies the true and
correct behavior.
There are some systems where oracles are automatically generated from the specifications
of programs or modules. With such oracles, we are assured that the output of the oracle
conforms to the specifications. However, even this approach does not solve all our
problems, as there is a possibility of errors in the specifications. An oracle generated
from the specifications will produce correct results only if the specifications are correct,
and it will not be reliable if the specifications contain errors. In addition, systems that
generate oracles from specifications require formal specifications, which are often not
produced during design.
ECONOMICS OF SOFTWARE TESTING
Testing as a profession is new, and it is part of the software world, which is new itself.
Testing, however, has been needed since the first piece of software. Back then, developers
tested the software they wrote; it was part of the job. After World War II, the software field
grew bigger. With bigger markets came economic opportunities, but also big risks: big foul-ups
can lead to reputation loss, market loss, and the loss of big piles of cash.
The first testers were bug-catchers and later gate-keepers. Since then, the testing profession
has grown, and it now involves many responsibilities. Testers report the status of the product
from all sides: inside, outside and sideways. They give the business tools to make business
decisions.
In order to do this valuable work, testers need to understand the risks, the market and the
users. They find ways to reduce risks, by exploring areas of uncertainty in the product,
proving and disproving assumptions, and suggesting corrections.
Software testing is a technical task, but it also involves some important considerations of
economics and human psychology. The most important considerations in software testing are
issues of psychology. One of the primary causes of poor application testing is the fact that
most programmers begin with a false definition of the term; these definitions are upside
down. Understanding the true definition of software testing can make a profound difference
in the success of your efforts. Human beings tend to be highly goal-oriented, and establishing
the proper goal has an important psychological effect on them. The myriad implications
related to the varied distorted definitions of software testing give rise to psychology
problems. It is often impractical, often impossible, to find all the errors in a program. This
fundamental problem causes implications for the economics of testing. To combat the
challenges associated with testing economics, there are particular strategies: black-box testing
and white-box testing. Apart from discussing the psychology problems of testing, this chapter
explores the testing economics strategies. It also introduces a set of vital testing principles or
guidelines. Most of these principles may seem obvious, yet they are all too often overlooked.
There is a definite economic impact of software testing. One economic impact is from the
cost of defects. This is a very real and very tangible cost. Another economic impact is from
the way we perform testing. It is possible to have very good motivations and testing goals
while testing in a very inefficient way.
WHERE DEFECTS ORIGINATE
To understand the dynamics and costs of defects, we need to know some things about them.
One of the most commonly understood facts about defects is that most defects originate in the
requirements definition phase of a project. The next runner-up is the design phase.
• The English language is ambiguous and even what we consider clear language can be
interpreted differently by different people.
We saw that most defects originate in requirements and design, but most of the testing effort
occurs in a traditional “testing” phase toward the end of the project. This is called the “big
bang” approach from the concentration of effort at one big phase. “Big bang” could also
describe the sound of the project as it fails. The problem with the big bang approach to testing
is that defects are not found until toward the end of the project. This is the most costly and
risky time to fix defects. Some complex defects may even be impossible to fix.
HANDLING DEFECTS
Defects are considered destructive in all software development stages. Anything
unexpected that occurs at a software stage is a defect in that particular software. Establishing
a defect management process is the best way to increase and improve the quality of
software. There is no software that is free of defects. Defects are present throughout the
life of software because software is developed by humans, and "to err is human", i.e., it is
natural for human beings to make mistakes. The number of defects can be reduced by
resolving or fixing them, but it is impossible to make software completely error- or
defect-free. The Defect Management Process (DMP), as the name suggests, is a process
of managing defects by identifying and resolving or fixing them. If the defect
management process is carried out efficiently and with full focus, less buggy
software will reach the market.
The Defect Management Process is a systematic approach to identify, track, and resolve
defects in software development. It typically includes the following steps:
1. DEFECT IDENTIFICATION – Defects are identified through various testing
activities, such as unit testing, integration testing, and user acceptance testing.
2. DEFECT LOGGING – Defects are logged in a defect tracking system, along
with details such as description, severity, and priority.
3. DEFECT TRIAGE – The triage process involves evaluating the defects to
determine their priority and the resources required to resolve them.
4. DEFECT ASSIGNMENT – Defects are assigned to developers or testers for
resolution, based on their expertise and availability.
5. DEFECT RESOLUTION – The assigned personnel work on resolving the
defects by fixing the code, updating the documentation, or performing other
necessary actions.
6. DEFECT VERIFICATION – Once the defect is resolved, it is verified by the
tester to ensure that it has been fixed correctly and does not introduce any new
defects.
7. DEFECT CLOSURE – Once the defect has been verified, it is closed and the
status is updated in the defect tracking system.
8. DEFECT REPORTING – Regular reports on the status of defects, including
the number of open defects, the number of defects resolved, and the average
time to resolve defects, are generated to provide visibility into the defect
management process.
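The lifecycle above can be sketched as a state machine; the state names and transitions are a simplified assumption, not any particular tool's workflow:

```python
# Defect lifecycle sketch mirroring the steps above. States and transitions
# are illustrative, not a real tracking tool's workflow.

ALLOWED = {
    "new": {"triaged"},
    "triaged": {"assigned"},
    "assigned": {"resolved"},
    "resolved": {"verified", "assigned"},   # verification may send it back
    "verified": {"closed"},
    "closed": set(),
}

class Defect:
    def __init__(self, description, severity):
        self.description = description
        self.severity = severity
        self.status = "new"

    def move_to(self, status):
        # Reject transitions the process does not allow.
        if status not in ALLOWED[self.status]:
            raise ValueError(f"cannot go from {self.status} to {status}")
        self.status = status

d = Defect("Login button unresponsive", severity="high")
for step in ("triaged", "assigned", "resolved", "verified", "closed"):
    d.move_to(step)
print(d.status)  # closed
```

Reporting (step 8) would then simply aggregate over a collection of such defect records, e.g. counting how many are still open.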
ADVANTAGES OF DMP :
DISADVANTAGES OF DMP :
1. If the DMP is not handled properly, there will be huge cost creep, i.e., an
increase in the cost of the product.
2. If errors or defects are not managed properly at an early stage, a defect
might later cause greater damage, and the cost to fix or resolve it will
also increase.
3. There are other disadvantages as well, such as loss of revenue, loss of
customers, and damaged brand reputation, if the DMP is not done properly.
4. OVERHEAD – The Defect Management Process requires a significant amount
of overhead, including time spent logging and triaging defects, and managing
the defect tracking system.
5. RESOURCE CONSTRAINTS – The Defect Management Process may require
a significant amount of resources, including personnel, hardware, and software,
which may be challenging for smaller organizations.
6. RESISTANCE TO CHANGE – Some stakeholders may resist the Defect
Management Process, particularly if they are used to a more informal approach
to managing defects.
7. DEPENDENCE ON TECHNOLOGY – The Defect Management Process
relies on technology, such as a defect tracking system, to manage defects. If the
technology fails, the process may be disrupted, leading to delays and
inefficiencies.
8. LACK OF STANDARDIZATION – Without a standard approach to Defect
Management, different organizations may have different processes, leading to
confusion and inefficiencies when working together on software development
projects.
RISK IN SOFTWARE TESTING
We often see situations where we have applied the best testing techniques and processes, and
yet the testing wasn't completed on time or with quality. This happens when we have not
planned for risks in our testing process.
Risk is the possibility of an event in the future, which has negative consequences. We need to
plan for these negative consequences in advance so we can either eliminate the risk or reduce
the impacts.
From the testing perspective, a QA manager needs to be aware of these risks so that he/she
can minimize their impact on the quality of the software. Does this mean that the QA manager
should address every risk that the project could face? In an ideal world, yes, but in
practice there will never be the time and resources to plan for every risk. Therefore, we
need to prioritize the risks that would have severe consequences for the software. How do we
do that? We do that by determining the level of risk.
Software testing needs time and skill. Most testers work on a deadline and follow a series of
test steps to mark a product deployment ready. But among the several features you have, how
do you know where to start the testing process? The most common practice is to work
according to different testing approaches: analytical, model-based, methodical, risk-based
testing, and more.
DIMENSIONS OF RISK
There are two dimensions of risk that we should know.
PROBABILITY - The likelihood that the risk event will actually occur.
IMPACT - Risk by its very nature has a negative impact. However, the size of the
impact varies from one risk to another. We need to determine the impact on the
project if the risk occurs. Continuing with the same example - what is the impact if the
server goes down? The site will not be accessible, so the impact is very high!
While several scenarios exist to give rise to risk occurrences, they fall into varying categories
as per what they affect. Software risks can manifest in various forms, and understanding these
types of risks is crucial for effective risk management in software development.
TECHNICAL RISK
As per its name, technical risks consist of complex and often uncertain defects in
design, functionality, system performance, and data. It encompasses all the technical
aspects of a project, from challenges arising from software complexity and new or
unproven technologies to integration difficulties, performance bottlenecks, and
security vulnerabilities.
OPERATIONAL RISK
Any risk arising from operational aspects of a project, including potential operational
failures (e.g., lack of disaster recovery plans), differences between testing and
production environments, and geographical factors affecting data centers, come under
this type.
BUSINESS RISKS
Business risks are risks arising from changes in market conditions, economic factors
affecting budget or funding, and competitive forces influencing the project's success.
ORGANIZATIONAL RISK
These are the risks related to changes in project or team leadership and team
dynamics, including conflicts and communication issues.
EXTERNAL RISK
It consists of risks associated with external factors, including third-party risks linked
to vendors or services outside the project team’s direct control.
Risk analysis is a highly critical activity; if it is not done right, any software can lose
its quality and credibility. Developers and testers often analyse the source code and the
corresponding front-end features to understand the interactions between different
components. This review results in identifying risks and figuring out the mitigation
process.
A risk's impact is determined by how much damage it could do to the system. A
security risk is certainly a huge red flag. Yet we need to analyze the impact by calculating
the level of the risk. For this, we need a probability value and an impact value; both
parameters range over High, Medium, and Low. The security risk discussed in the
example above would be given a High value for both probability and impact, making it an
immediate threat to the system. Based on this classification, you would need to look for a
solution urgently. Let's look at a few more possible risks and classify them accordingly for
a broader understanding:
Risk | Probability | Impact | Risk level / Priority
Insufficient human resources for the project | High (3) | High (3) | High (9)
Testing environment lacking features similar to the production environment | Medium (2) | High (3) | Medium (6)
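The risk-level arithmetic behind the table can be sketched as follows; the bucketing thresholds are an assumption chosen to match the two rows above:

```python
# Risk level sketch: level = probability x impact on the 1-3 scale from
# the table, then bucketed into Low / Medium / High. The bucket
# thresholds are an assumption for illustration.

SCALE = {"Low": 1, "Medium": 2, "High": 3}

def risk_level(probability, impact):
    score = SCALE[probability] * SCALE[impact]
    if score >= 7:
        bucket = "High"
    elif score >= 4:
        bucket = "Medium"
    else:
        bucket = "Low"
    return score, bucket

print(risk_level("High", "High"))    # (9, 'High')
print(risk_level("Medium", "High"))  # (6, 'Medium')
```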
Start risk analysis during project initiation. Identify and document potential risks and
uncertainties as early as possible.
Consider technical, operational, and business-related risks. Ensure you have a holistic
view of potential issues.
Categorize risks by type (technical, schedule, operational, etc.) to facilitate better risk
management and prioritization.
Conduct regular risk reviews at various project stages, including planning, execution,
and closure.
Develop clear and actionable mitigation plans for high-risk items.
Define criteria for assessing and categorizing risks, including probability, impact, and
risk levels (low, medium, high).
REQUIREMENT TRACEABILITY MATRIX
It is used to track the requirements and to check that the current project requirements are met.
The Requirements Traceability Matrix (RTM) is a tool to help ensure that the project's scope,
requirements, and deliverables remain "as is" when compared to the baseline. Thus, it
"traces" the deliverables by establishing a thread for each requirement, from the project's
initiation to the final implementation.
Project Organization
As mentioned above, you’ll be doing a lot of tests to verify the viability of your software
projects. You’ll also have a list of requirements from multiple sources. In order to ensure that
you've tested every variable correctly, you need to use an RTM. These documents increase
productivity by reducing team errors and gathering all essential data in one place.
Improved Communication
Because every test your team conducts is recorded, RTM documents expedite
communication. Issues are easier to identify and teams can work faster on completing their
projects. A digital requirements traceability matrix can be used simultaneously by multiple
project members, and previous data is readily available. This allows teams to better delegate
tasks and transfer responsibilities without sacrificing quality.
An RTM typically records at least the following fields:
Requirement ID
Test Case ID
Status
FORWARD TRACEABILITY:
This matrix is used to check whether the project progresses in the desired direction
and for the right product. It makes sure that each requirement is applied to the product
and that each requirement is tested thoroughly. It maps requirements to test cases.
BACKWARD TRACEABILITY:
It is used to ensure whether the current product remains on the right track. The
purpose behind this type of traceability is to verify that we are not expanding the
scope of the project by adding code, design elements, test or other work that is not
specified in the requirements. It maps test cases to requirements.
BIDIRECTIONAL TRACEABILITY:
This traceability matrix (forward and backward together) ensures that all requirements
are covered by test cases. It also analyzes the impact of a change in the requirements
on a work product affected by a defect, and vice versa.
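A minimal RTM sketch showing forward traceability (requirement to test cases) and its inversion for backward traceability; all IDs are invented:

```python
# RTM sketch: a mapping giving forward traceability
# (requirement -> test cases) and its inverse for backward traceability.
# Requirement and test-case IDs are invented.

rtm = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],            # uncovered requirement: a coverage gap
}

def backward(rtm):
    # Invert the matrix: test case -> requirements it verifies.
    inv = {}
    for req, cases in rtm.items():
        for tc in cases:
            inv.setdefault(tc, []).append(req)
    return inv

uncovered = [r for r, cases in rtm.items() if not cases]
print(uncovered)              # ['REQ-3']
print(backward(rtm)["TC-1"])  # ['REQ-1']
```

The forward view answers "is every requirement tested?" while the inverted view answers "which requirements does this test case exist for?", flagging scope creep.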