
Software Development Life Cycle

SDLC is a process followed for a software project within a software organization. It consists
of a detailed plan describing how to develop, maintain, replace, and alter or enhance specific
software. The life cycle defines a methodology for improving the quality of the software and the
overall development process.

A typical Software Development Life Cycle consists of the following stages −

Stage 1: Planning and Requirement Analysis


Requirement analysis is the most important and fundamental stage in SDLC. It is performed
by the senior members of the team with inputs from the customer, the sales department,
market surveys and domain experts in the industry. This information is then used to plan the
basic project approach and to conduct a product feasibility study in the economic,
operational and technical areas. Planning for the quality assurance requirements and
identification of the risks associated with the project is also done in the planning stage. The
outcome of the technical feasibility study is to define the various technical approaches that
can be followed to implement the project successfully with minimum risks.

Stage 2: Defining Requirements


Once the requirement analysis is done, the next step is to clearly define and document the
product requirements and get them approved by the customer or the market analysts.
This is done through an SRS (Software Requirement Specification) document, which consists
of all the product requirements to be designed and developed during the project life cycle.

Stage 3: Designing the Product Architecture


The SRS is the reference for product architects to come out with the best architecture for the
product to be developed. Based on the requirements specified in the SRS, usually more than one
design approach for the product architecture is proposed and documented in a DDS - Design
Document Specification.

This DDS is reviewed by all the important stakeholders and, based on various parameters such as
risk assessment, product robustness, design modularity, and budget and time constraints, the
best design approach is selected for the product.

A design approach clearly defines all the architectural modules of the product along with their
communication and data flow representation with the external and third-party modules (if
any). The internal design of all the modules of the proposed architecture should be clearly
defined down to the minutest detail in the DDS.

Stage 4: Building or Developing the Product


In this stage of the SDLC the actual development starts and the product is built. The
programming code is generated as per the DDS during this stage. If the design has been
performed in a detailed and organized manner, code generation can be accomplished without
much hassle.

Developers must follow the coding guidelines defined by their organization, and
programming tools like compilers, interpreters and debuggers are used to generate the
code. Different high-level programming languages such as C, C++, Pascal, Java and PHP are
used for coding. The programming language is chosen with respect to the type of software
being developed.

Stage 5: Testing the Product


In modern SDLC models, testing activities are involved in almost all the stages, so this stage is
usually a subset of all of them. However, this stage refers to the testing-only stage of the
product, where product defects are reported, tracked, fixed and retested until the product
reaches the quality standards defined in the SRS.

Stage 6: Deployment in the Market and Maintenance


Once the product is tested and ready to be deployed, it is released formally in the appropriate
market. Sometimes product deployment happens in stages as per the business strategy of the
organization. The product may first be released in a limited segment and tested in the
real business environment (UAT - user acceptance testing). Then, based on the feedback, the
product may be released as it is or with suggested enhancements in the targeted market
segment. After the product is released in the market, its maintenance is done for the existing
customer base.

SDLC Models

There are various software development life cycle models defined and designed
which are followed during the software development process. These models are also
referred to as "Software Development Process Models". Each process model follows a series
of steps unique to its type to ensure success in the process of software development.
Following are the most important and popular SDLC models followed in the industry −
● Waterfall Model
● Iterative Model
● Spiral Model
● V-Model
● Big Bang Model
● Agile development model

Agile Model

The meaning of Agile is swift or versatile. The "Agile process model" refers to a software
development approach based on iterative development. Agile methods break tasks into
smaller iterations, or parts, and do not directly involve long-term planning. The project scope and
requirements are laid down at the beginning of the development process. Plans regarding
the number of iterations, and the duration and scope of each iteration, are clearly defined in
advance.

Each iteration is considered a short time "frame" in the Agile process model, which
typically lasts from one to four weeks. The division of the entire project into smaller parts
helps to minimize the project risk and to reduce the overall project delivery time. Each
iteration involves a team working through a full software development life cycle, including
planning, requirements analysis, design, coding, and testing, before a working product is
demonstrated to the client.


Phases of Agile Model:

Following are the phases in the Agile model:

1. Requirements gathering

2. Design the requirements

3. Construction/ iteration

4. Testing/ Quality assurance

5. Deployment

6. Feedback

1. Requirements gathering: In this phase, you must define the requirements. You should
explain business opportunities and plan the time and effort needed to build the project.
Based on this information, you can evaluate technical and economic feasibility.

2. Design the requirements: When you have identified the project, work with stakeholders
to define the requirements. You can use a user flow diagram or a high-level UML diagram to
show the working of new features and how they will apply to your existing system.

3. Construction/ iteration: When the team defines the requirements, the work begins.
Designers and developers start working on their project, which aims to deploy a working
product. The product will undergo various stages of improvement, so it includes simple,
minimal functionality.

4. Testing: In this phase, the Quality Assurance team examines the product's performance
and looks for bugs.

5. Deployment: In this phase, the team issues a product for the user's work environment.

6. Feedback: After releasing the product, the last step is feedback. In this, the team receives
feedback about the product and works through the feedback.


Agile Testing Methods:

○ Scrum

○ Crystal

○ Dynamic Software Development Method(DSDM)

○ Feature Driven Development(FDD)

○ Lean Software Development

○ eXtreme Programming(XP)

Scrum

So, why is it called Scrum? People often ask, “Is Scrum an acronym for something?”
and the answer is no. It is actually inspired by a scrum in the sport of rugby. In rugby, the
team comes together in what they call a scrum to work together to move the ball forward. In
this context, Scrum is where the team comes together to move the product forward.

SCRUM is an agile development process focused primarily on ways to manage tasks in team-
based development conditions.

There are three roles in it, and their responsibilities are:

○ Scrum Master: The Scrum Master sets up the team, arranges the meetings and
removes obstacles in the process.

○ Product owner: The product owner creates the product backlog, prioritizes the backlog
and is responsible for the delivery of functionality in each iteration.

○ Scrum Team: The team manages its work and organizes the work to complete the
sprint or cycle.


Scrum Framework

The scrum framework outlines a set of values, principles, and practices that scrum
teams follow to deliver a product or service. It details the members of a scrum team and their
accountabilities, “artifacts” that define the product and work to create the product, and
scrum ceremonies that guide the scrum team through work.

Members of a scrum team

A scrum team is a small and nimble team dedicated to delivering committed product
increments. A scrum team’s size is typically small, at around 10 people, but it’s large enough
to complete a substantial amount of work within a sprint. A scrum team needs three specific
roles: product owner, scrum master, and the development team. And because scrum teams
are cross-functional, the development team includes testers, designers, UX specialists, and
ops engineers in addition to developers.

The scrum product owner

Product owners are the champions for their product. They are focused on
understanding business, customer, and market requirements, then prioritizing the work to
be done by the engineering team accordingly.


Effective product owners:


● Build and manage the product backlog.
● Closely partner with the business and the team to ensure everyone understands the
work items in the product backlog.
● Give the team clear guidance on which features to deliver next.
● Decide when to ship the product with a predisposition towards more frequent
delivery.

The product owner is not always the product manager. Product owners focus on ensuring
the development team delivers the most value to the business. Also, it's important that the
product owner be an individual. No development team wants mixed guidance from multiple
product owners.

Scrum master

Scrum masters are the champions of scrum within their teams. They coach teams, product
owners, and the business on the scrum process, and look for ways to fine-tune their practice
of it. An effective scrum master deeply understands the work being done by the team and can
help the team optimize their transparency and delivery flow. As the facilitator-in-chief,
he/she schedules the needed resources (both human and logistical) for sprint planning,
stand-up, sprint review, and the sprint retrospective.

Scrum development team

Scrum teams get things done. They are the champions of sustainable development practices.
The most effective scrum teams are tight-knit, co-located, and usually five to seven members.
One way to work out the team size is to use the famous ‘two pizza rule’ coined by Jeff Bezos,
the CEO of Amazon (the team should be small enough to share two pizzas). Team members
have differing skill sets, and cross-train each other so no one person becomes a bottleneck
in the delivery of work. Strong scrum teams are self-organizing and approach their projects
with a clear ‘we’ attitude. All members of the team help one another to ensure a successful
sprint completion.


The scrum team drives the plan for each sprint. They forecast how much work they believe
they can complete over the iteration using their historical velocity as a guide. Keeping the
iteration length fixed gives the development team important feedback on their estimation
and delivery process, which in turn makes their forecasts increasingly accurate over time.
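As a rough illustration (the sprint numbers below are made up, not from this text), forecasting
from historical velocity can be as simple as averaging the story points completed in recent sprints:

    # Hypothetical sketch: forecast next sprint's capacity from historical velocity.
    completed_points = [21, 18, 24, 20]  # story points finished in the last four sprints (made-up numbers)

    average_velocity = sum(completed_points) / len(completed_points)
    print(f"Forecast for the next sprint: about {average_velocity:.0f} story points")

In practice, teams also adjust such a forecast for holidays, absences, and planned non-feature work.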

Scrum artifacts

Scrum artifacts are important pieces of information used by the scrum team that help define
the product and the work to be done to create it. There are three artifacts in
scrum: the product backlog, the sprint backlog, and an increment with your definition of "done".
They are the three constants a scrum team should reflect on during sprints and over time.

● Product Backlog is the primary list of work that needs to get done, maintained by
the product owner or product manager. This is a dynamic list of features,
requirements, enhancements, and fixes that acts as the input for the sprint backlog.
It is, essentially, the team's "To Do" list. The product backlog is constantly revisited,
re-prioritized and maintained by the Product Owner because, as we learn more or as
the market changes, items may no longer be relevant or problems may get solved in
other ways.
● Sprint Backlog is the list of items, user stories, or bug fixes, selected by the
development team for implementation in the current sprint cycle. Before each sprint,
in the sprint planning meeting (which we’ll discuss later in the article) the team
chooses which items it will work on for the sprint from the product backlog. A sprint
backlog may be flexible and can evolve during a sprint. However, the fundamental
sprint goal – what the team wants to achieve from the current sprint – cannot be
compromised.
● Increment (or Sprint Goal) is the usable end-product from a sprint. At Atlassian, we
usually demonstrate the "increment" during the end-of-sprint demo, where the team
shows what was completed in the sprint. You may not hear the word "increment" out
in the world, as it's often referred to as the team's definition of "Done", a milestone,
the sprint goal, or even a full version or a shipped epic. It just depends on how your
team defines "Done" and how you define your sprint goals. For example, some teams
choose to release something to their customers at the end of every sprint, so their
definition of 'done' would be 'shipped'. However, this may not be realistic for other
types of teams. Say you work on a server-based product that can only ship to your
customers every quarter. You may still choose to work in 2-week sprints, but your
definition of 'done' may be finishing part of a larger version that you plan to ship
together. But of course, the longer it takes to release software, the higher the risk that
software will miss the mark.

Sprint

The heart of Scrum is a Sprint, a time-box of two weeks or one month during which a
potentially releasable product increment is created. A new Sprint starts immediately after
the conclusion of the previous Sprint. Sprints consist of the Sprint planning, daily scrums, the
development work, the Sprint review, and the Sprint retrospective.

In Sprint planning, the work to be performed in the Sprint is planned collaboratively by the
Scrum Team. The Daily Scrum Meeting is a 15-minute time-boxed event for the Scrum Team
to synchronize the activities and create a plan for that day.

A Sprint Review is held at the end of the Sprint to inspect the Increment and make changes
to the Product Backlog, if needed. The Sprint Retrospective occurs after the Sprint Review
and prior to the next Sprint Planning. In this meeting, the Scrum Team is to inspect itself and
create a plan for improvements to be enacted during the subsequent Sprint.


Introduction to Software Testing

Testing is a group of techniques used to determine the correctness of an application under
a predefined script; however, testing cannot find all the defects of an application. The main intent
of testing is to detect failures of the application so that failures can be discovered and
corrected. It does not demonstrate that a product functions properly under all conditions, but
only that it does not work under some specific conditions.

Testing furnishes a comparison of the behavior and state of the software against mechanisms
through which a problem can be recognized. These mechanisms may include past versions of the
same product, comparable products, interfaces of expected purpose, relevant standards, or other
criteria, but are not limited to these.

Testing includes an examination of code, the execution of the code in various
environments and conditions, and an examination of all other aspects of the code. In the current
scenario of software development, a testing team may be separate from the development
team so that information derived from testing can be used to correct the process of software
development.

Role of Software Testing

Rigorous testing is necessary during software development and maintenance to


● Identify defects
● Reduce failures in the operational environment
● Increase quality of the operational system.
● Meet contractual or legal requirements
● Meet industry-specific standards, which may specify the type of techniques that
must be used or the percentage of the software code that must be executed.

Objectives of Software Testing


Following are the objectives of software testing:
● Finding defects and preventing their occurrence in production.
● Gaining confidence in the quality of the software application.
● Providing information that helps GO or NO-GO decision-making while moving to the
next phase.
● Defect analysis in one phase can also help identify the root cause and prevent
defects in the subsequent phases.


Why software testing is important?

Few can argue against the need for quality control when developing software. Late
delivery or software defects can damage a brand’s reputation — leading to frustrated and
lost customers. In extreme cases, a bug or defect can degrade interconnected systems or
cause serious malfunctions.

Consider Nissan having to recall over 1 million cars due to a software defect in the airbag
sensor detectors. Or a software bug that caused the failure of a USD 1.2 billion military
satellite launch. The numbers speak for themselves. Software failures in the US cost the
economy USD 1.1 trillion in assets in 2016. What’s more, they impacted 4.4 billion customers.

Though testing itself costs money, companies can save millions per year in development and
support if they have a good testing technique and QA processes in place. Early software
testing uncovers problems before a product goes to market. The sooner development teams
receive test feedback, the sooner they can address issues such as:

● Architectural flaws
● Poor design decisions
● Invalid or incorrect functionality
● Security vulnerabilities
● Scalability issues

When development leaves ample room for testing, it improves software reliability and high-
quality applications are delivered with few errors. A system that meets or even exceeds
customer expectations leads to potentially more sales and greater market share.

Verification and validation

Verification is the process of checking that software achieves its goal without any
bugs. It is the process of ensuring that the product being developed is right. It
verifies whether the developed product fulfills the requirements that we have. Verification
is static testing.

Verification means: Are we building the product right?

Validation is the process of checking whether the software product is up to the mark,
or in other words, whether the product meets the high-level requirements. It is the process of
checking the validity of a product, i.e. it checks that what we are developing is the right product.
It is validation of the actual product against the expected product. Validation is dynamic testing.

Validation means: Are we building the right product?


Difference between verification and validation testing

● Verification checks whether we are building the product right; validation checks whether
we have built the right product.
● Verification is also known as static testing; validation is also known as dynamic testing.
● Verification includes methods like inspections, reviews, and walkthroughs; validation
includes testing like functional testing, system testing, integration testing, and user
acceptance testing.
● Verification is a process of checking the work-products (not the final product) of a
development cycle to decide whether the product meets the specified requirements;
validation is a process of checking the software during development or at the end of the
development cycle to decide whether the software follows the specified business
requirements.
● Quality assurance comes under verification testing; quality control comes under
validation testing.
● The execution of code does not happen in verification testing; in validation testing, the
execution of code happens.
● In verification testing, we can find bugs early in the development phase of the product; in
validation testing, we can find those bugs which are not caught in the verification process.
● Verification testing is executed by the quality assurance team to make sure that the
product is developed according to the customers' requirements; validation testing is
executed by the testing team to test the application.
● Verification is done before validation testing; validation testing takes place after
verification testing.
● In verification testing, we verify whether the inputs follow the outputs or not; in
validation testing, we validate whether the user accepts the product or not.

What is Debugging?

In the development process of any software, the software program is rigorously
tested, troubleshot, and maintained for the sake of delivering bug-free products. Nothing is
error-free in the first go.

It is an obvious thing everyone can relate to: when software is created, it contains a lot of
errors. The reason is that nobody is perfect; getting an error in the code is not an issue, but not
fixing or preventing it is an issue.

All those errors and bugs are discarded regularly, so we can conclude that debugging is
nothing but a process of eradicating or fixing the errors contained in a software program.

Debugging works stepwise, starting from identifying the errors, then analyzing and finally
removing the errors. Whenever software fails to deliver the expected result, we need the
software tester to test the application and help resolve the issue.

Since the errors are resolved at each step of debugging in software testing, we can
conclude that it is a tiresome and complex task, regardless of how efficient the result is.


Types of Testing

Manual testing

The process of checking the functionality of an application as per the customer's needs
without taking any help from automation tools is known as manual testing. While performing
manual testing on any application, we do not need any specific knowledge of any testing
tool; rather, we need a proper understanding of the product so we can easily prepare the test
document.

Manual testing can be further divided into three types of testing, which are as follows:

○ White box testing

○ Black box testing

○ Gray box testing


White-box testing

White box testing is done by the Developer, where they check every line of the code
before giving it to the Test Engineer. Since the code is visible to the Developer during the
testing, it is known as white box testing.

Black box testing

Black box testing is done by the Test Engineer, where they check the
functionality of an application or the software according to the customer's/client's needs. In
this, the code is not visible while performing the testing; that is why it is known as black box
testing.

Gray Box testing

Gray box testing is a combination of white box and black box testing. It can be
performed by a person who knows both coding and testing. If a single person performs
white box as well as black box testing for the application, it is known as gray box testing.

How to perform Manual Testing

○ First, the tester observes all documents related to the software, to select testing areas.

○ The tester analyzes the requirement documents to cover all requirements stated by the
customer.

○ The tester develops the test cases according to the requirement document.

○ All test cases are executed manually by using black box testing and white box testing.

○ If bugs occur, the testing team informs the development team.

○ The development team fixes the bugs and hands the software to the testing team for a retest.

Automation testing

Automation testing is the process of converting manual test cases into test
scripts with the help of automation tools or a programming language. With the help of
automation testing, we can enhance the speed of our test execution because, here, we do not
require much human effort; we need to write a test script and execute those scripts.
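As a hedged illustration of the idea (the function under test and its expected behaviour are
assumptions made up for this example, not part of the original text), a manual check can be
turned into a small automated script using Python and pytest:

    # Minimal pytest sketch: an automated version of a manual check.
    # apply_discount is a hypothetical function used only for this example.

    def apply_discount(price, percent):
        """Return the price after applying a percentage discount."""
        return round(price * (1 - percent / 100), 2)

    def test_apply_discount_ten_percent():
        # Manual step "enter price 200 and discount 10%, expect 180" becomes an assertion.
        assert apply_discount(200, 10) == 180.0

    def test_apply_discount_zero_percent():
        assert apply_discount(99.99, 0) == 99.99

Running a tool such as pytest against this file executes both checks automatically, which is what
replaces the manual execution of the corresponding test cases.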


Manual testing vs. Automation testing

Software testing is performed to discover bugs in software during its development. The key
differences between manual and automation testing are as follows:

● In manual testing, a human tester executes the test cases; in automation testing,
automation tools are used to execute the test cases.
● Manual testing involves human resources, which is why it is time-consuming; automation
testing is much faster than manual testing.
● Manual testing is repetitive and error-prone; in automation testing, automated tools are
used, which makes it more interesting and accurate.
● BVT (build verification testing) is time-consuming and tough in manual testing; with
automation, build verification testing is easy.
● Instead of frameworks, manual testing uses a checklist, guidelines, and a stringent process
for drafting test cases; automation testing uses frameworks like keyword-driven, hybrid,
and data-driven to accelerate the automation process.
● The process turnaround time of manual testing is higher than that of automation testing
(one testing cycle takes a lot of time); automation completes a single round of testing in
record time, so its process turnaround time is much lower.
● The main goal of manual testing is user-friendliness or improved customer experience;
automation testing alone cannot guarantee a positive customer experience and
user-friendliness.
● Manual testing is best for usability, exploratory and ad hoc testing; automation is widely
used for performance testing, load testing and regression testing.
● Manual testing has a low return on investment; automation testing has a high return on
investment.

Software Testing Life Cycle (STLC)

The Software Testing Life Cycle (STLC) is a systematic approach that defines the
testing activities and processes to ensure the quality and reliability of software applications.
It encompasses the planning, preparation, execution, and closure of testing activities.

The main goal of the STLC is to identify and document any defects or issues in the software
application as early as possible in the development process. This allows for issues to be
addressed and resolved before the software is released to the public.

Overall, the STLC is an important process that helps to ensure the quality of software
applications and provides a systematic approach to testing. It allows organizations to release
high-quality software that meets the needs of their customers, ultimately leading to
customer satisfaction and business success.


The following are the typical phases of the Software Testing Life Cycle:

Requirement Analysis: In this phase, testers analyze the software requirements,
specifications, and user stories to gain a clear understanding of the application's
functionalities, expected behavior, and scope of testing. Testers collaborate with
stakeholders to clarify any ambiguities and identify potential risks.

Test Planning: Test planning involves defining the overall testing objectives, strategies, and
test deliverables. Testers identify the test scope, determine the test levels (e.g., unit testing,
integration testing, system testing), and create a test plan document. The test plan outlines
the test approach, test environments, test schedules, and resource allocation.

Test Design: In this phase, test cases are designed based on the requirements and
specifications. Testers create test scenarios, identify test conditions, and define test data and
expected results. The test cases should cover various functional and non-functional aspects
of the software application.

Test Environment Setup: Testers set up the required test environments, which may include
hardware, software, networks, and databases, to execute the test cases effectively. They
ensure that the test environments are stable, consistent, and reflect the production
environment as closely as possible.

Test Execution: Test execution involves running the test cases in the test environment and
comparing the actual results with the expected results. Testers record the test outcomes, log
any defects or issues encountered during testing, and communicate the results to the
relevant stakeholders.

Defect Management: Defect management involves tracking, reporting, and managing
defects found during the testing process. Testers log defects into a defect tracking system,
assign them to the development team, and monitor their resolution. They also perform defect
retesting to verify that the reported issues have been fixed.

Test Closure: In the test closure phase, testers summarize the testing activities and evaluate
the overall test coverage, quality, and effectiveness. They prepare test closure reports, which
include the test summary, key findings, metrics, and recommendations for future testing
efforts. The test closure phase helps stakeholders make informed decisions about the
application's readiness for release.

Test Cycle Evaluation: Test cycle evaluation involves assessing the effectiveness and
efficiency of the testing process. Testers analyze the test metrics, evaluate the test coverage,
and identify areas for improvement. Lessons learned from the testing cycle are documented
and used to enhance future testing activities.


It's worth noting that the Software Testing Life Cycle may vary depending on the specific
methodologies or approaches used, such as waterfall, agile, or DevOps. The key objective of
the STLC is to ensure that comprehensive and systematic testing activities are performed to
deliver high-quality software products.

Test Case Design Techniques

Test case design techniques help ensure that test cases cover a wide range of
scenarios and adequately validate the software application. Here are some commonly used
test case design techniques:

Equivalence Partitioning: This technique divides the input data into equivalence classes or
groups, where each class should exhibit similar behavior. Test cases are then designed to
cover representative values from each equivalence class, reducing the redundancy of test
cases.
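For illustration only (the is_adult check and its age ranges are assumptions invented for this
sketch), each partition of a simple input can be covered by one representative value:

    # Equivalence partitioning sketch for a hypothetical is_adult(age) check.
    # Partitions: invalid (age < 0), minor (0-17), adult (18 and above).
    import pytest

    def is_adult(age):
        if age < 0:
            raise ValueError("age cannot be negative")
        return age >= 18

    @pytest.mark.parametrize("age, expected", [(10, False), (35, True)])  # one value per valid partition
    def test_valid_partitions(age, expected):
        assert is_adult(age) == expected

    def test_invalid_partition():
        with pytest.raises(ValueError):
            is_adult(-5)  # representative of the invalid partition

Three test values stand in for the whole input range, which is exactly the redundancy reduction
described above.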

Boundary Value Analysis: Boundary value analysis focuses on testing the boundaries or
limits of input data. Test cases are designed to cover values at the lower and upper
boundaries, as well as just below and above these boundaries. This technique helps identify
defects that are often found near the boundaries.
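Staying with the same kind of assumed age check (simplified here), boundary value analysis
would pick the values at and immediately around the 18-year boundary:

    # Boundary value analysis sketch: test just below, at, and just above the boundary (18).
    import pytest

    def is_adult(age):
        return age >= 18

    @pytest.mark.parametrize("age, expected", [
        (17, False),  # just below the boundary
        (18, True),   # on the boundary
        (19, True),   # just above the boundary
    ])
    def test_age_boundary(age, expected):
        assert is_adult(age) == expected

An off-by-one mistake, such as writing age > 18, would be caught by the value on the boundary.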

Decision Table Testing: Decision tables are used to represent complex business logic or
rules. Test cases are derived by considering different combinations of conditions and
corresponding actions or outcomes. This technique helps ensure that all possible
combinations of conditions are tested.
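As a sketch (the shipping rule below is an assumption made up for the example, not a rule from
this text), a decision table maps naturally onto parametrized test data, one row per combination
of conditions:

    # Decision table sketch: every combination of conditions maps to an expected action.
    import pytest

    def free_shipping(is_member, order_total):
        return is_member and order_total >= 50

    @pytest.mark.parametrize("is_member, order_total, expected", [
        (True,  60, True),    # member, large order     -> free shipping
        (True,  40, False),   # member, small order     -> no free shipping
        (False, 60, False),   # non-member, large order -> no free shipping
        (False, 40, False),   # non-member, small order -> no free shipping
    ])
    def test_free_shipping_rules(is_member, order_total, expected):
        assert free_shipping(is_member, order_total) == expected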

State Transition Testing: State transition testing is useful for applications that have
different states or modes. Test cases are designed to validate the transition between states
and the behavior of the application in each state. This technique ensures comprehensive
coverage of all possible state transitions.
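One possible sketch (the order states and events are assumptions for the example) keeps the
allowed transitions in a table and checks both valid and invalid moves:

    # State transition sketch for a hypothetical order workflow.
    import pytest

    ALLOWED = {
        ("new", "pay"): "paid",
        ("paid", "ship"): "shipped",
        ("shipped", "deliver"): "delivered",
    }

    def transition(state, event):
        try:
            return ALLOWED[(state, event)]
        except KeyError:
            raise ValueError(f"invalid transition: {state} --{event}-->")

    def test_valid_transition_chain():
        assert transition("new", "pay") == "paid"
        assert transition("paid", "ship") == "shipped"

    def test_invalid_transition_is_rejected():
        with pytest.raises(ValueError):
            transition("new", "ship")  # cannot ship an unpaid order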

Error Guessing: Error guessing is an informal technique where testers use their experience
and intuition to anticipate and design test cases based on potential errors or defects. Testers
think from the perspective of a user or a developer and create test cases that focus on error-
prone areas.

Exploratory Testing: Exploratory testing is a technique where testers dynamically explore
the software application without predefined test cases. Testers interact with the application,
try different inputs, and observe the behavior. This technique allows for flexibility and
creativity in finding defects.

How to Write Test Cases

Writing effective test cases is crucial for ensuring comprehensive and reliable testing.
Here are some key steps to consider when writing test cases:

Understand the Requirements: Gain a clear understanding of the software requirements,
user stories, or specifications. Identify the key functionalities to be tested and the expected
behavior.

Define Test Scenarios: Identify different test scenarios that cover a range of possible
interactions and conditions. Each test scenario should focus on a specific aspect or
functionality of the application.

Determine Test Data: Define the necessary test data required for each test case. Test data
should cover various combinations, boundary values, and edge cases to ensure thorough
testing.

Write Test Steps: Document the step-by-step instructions for executing the test case.
Clearly specify the inputs to be provided, the actions to be performed, and the expected
results for each step.

Include Preconditions and Postconditions: Specify any necessary preconditions, such as
system configuration or data setup, that need to be in place before executing the test case.
Also, document any postconditions or cleanup activities that need to be performed after the
test execution.

Keep Test Cases Independent: Ensure that each test case is independent and does not
depend on the execution or outcome of other test cases. This allows for better isolation and
identification of defects.

Be Clear and Concise: Write test cases in a clear and concise manner, using simple language
and avoiding ambiguity. Use bullet points or numbering to make the test steps easy to follow.

Provide Expected Results: Clearly state the expected results for each test step or test case.
The expected results should be specific, measurable, and verifiable.

Review and Validate: Review the test cases to ensure they cover all the necessary scenarios
and requirements. Validate the test cases with stakeholders to confirm their accuracy and
completeness.


Maintain Test Documentation: Keep the test cases well-organized and up to date.
Regularly review and update them as the application evolves or new requirements emerge.

By following these steps and leveraging appropriate test case design techniques, you can
create effective test cases that provide comprehensive coverage and help identify defects in
the software application.

The Objective of Writing Test Cases in Software Testing

• To validate specific features and functions of the software.


• To guide testers through their day-to-day hands-on activity.
• To record a catalog of steps undertaken, which can be revisited in the event of a bug
popping up.
• To provide a blueprint for future projects and testers so they don’t have to start work
from scratch.
• To help detect usability issues and design gaps early on.

• To help new testers and devs quickly pick up testing, even if they join in the middle
of an ongoing project.

Standard Test Case Format

Test Case ID
Test Scenario
Test Steps
Prerequisites
Test Data
Expected/Intended Results
Actual Results
Test Status – Pass/Fail
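For teams that keep test cases in code or export them to a test management tool, the same
fields can be captured in a small data structure. This is only a sketch; the field names simply
mirror the format above:

    # Sketch: the standard test case fields as a simple data structure.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TestCase:
        test_case_id: str
        test_scenario: str
        test_steps: List[str]
        prerequisites: str
        test_data: str
        expected_results: str
        actual_results: str = ""
        status: str = "Not Run"  # Pass / Fail / Not Run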


While writing test cases, remember to include:

● A reasonable description of the requirement


● A description of the test process
● Details related to testing setup: version of the software under test, data points, OS,
hardware, security clearance, date, time, prerequisites, etc.
● Any related documents or attachments testers will require
● Alternatives to prerequisites, if they exist

How to write Test Cases (Test Case Example)

Let’s build a test case example based on a specific scenario. Here is a sample case.

Test Case ID: #BST001


Test Scenario: To authenticate a successful user login on Gmail.com
Test Steps:

The user navigates to Gmail.com.


The user enters a registered email address in the ’email’ field.
The user clicks the ‘Next’ button.
The user enters the registered password.
The user clicks ‘Sign In.’

Prerequisites: A registered Gmail ID with a unique username and password.


Browser: Chrome v 86. Device: Samsung Galaxy Tab S7.
Test Data: Legitimate username and password.
Expected/Intended Results: Once username and password are entered, the web
page redirects to the user’s inbox, displaying and highlighting new emails at the top.
Actual Results: As Expected
Test Status – Pass/Fail: Pass


Test Criteria

Test criteria, also known as exit criteria or completion criteria, are the conditions or
standards that need to be met in order to determine when testing activities can be
considered complete. These criteria help assess whether the software application is ready
for release or the next phase of the development process.

Test criteria may include factors such as:

Test Coverage: The extent to which the software application has been tested. This includes
functional coverage, code coverage, and requirements coverage.

Defect Density: The number of defects found (often expressed relative to the size of the work
product) and their severity. The criteria may specify a maximum allowed defect density or a
certain level of defect resolution; a short worked example follows this list.

Test Case Execution: The completion of test case execution, ensuring that all planned test
cases have been executed.

Test Environment: The availability and stability of the test environment, including
hardware, software, networks, and databases.

Performance Targets: If performance testing is included, the test criteria may specify
certain performance targets or thresholds that need to be met.

Documentation: The completion and accuracy of test documentation, including test plans,
test cases, test scripts, and test reports.
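As a worked illustration (the numbers are made up), defect density is commonly computed as
defects per thousand lines of code (KLOC):

    # Worked example: defect density as defects per KLOC (numbers are hypothetical).
    defects_found = 45
    lines_of_code = 30_000

    defect_density = defects_found / (lines_of_code / 1000)
    print(f"Defect density: {defect_density:.1f} defects per KLOC")  # 1.5 defects per KLOC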

Test Plan

A test plan is a formal document that outlines the approach, objectives, scope, and
schedule of testing activities for a specific software project or release. It provides a roadmap
for the testing effort and guides the testing team throughout the project.

A test plan typically includes the following components:

Introduction: An overview of the software project, including the purpose and scope of
testing, project goals, and stakeholders involved.

Test Objectives: The specific goals and objectives of the testing effort, such as ensuring
functionality, performance, reliability, and security.

Test Scope: The areas or modules of the software application that will be tested, as well as
any excluded areas or functionalities.


Test Approach: The overall approach to testing, including the test levels (e.g., unit testing,
integration testing, system testing) and the types of testing (e.g., functional, non-functional,
regression).

Test Deliverables: The list of test artifacts or deliverables that will be produced during the
testing process, such as test plans, test cases, test scripts, and test reports.

Test Schedule: The timeline and sequencing of testing activities, including milestones,
resource allocation, and dependencies on other project activities.

Test Environment: The hardware, software, and network configurations required for
testing, including any specific tools or test management systems.

Test Execution: The procedures and guidelines for executing test cases, capturing test
results, and reporting defects. This may include test data, test procedures, and the roles and
responsibilities of the testing team.

Test Risks and Mitigation: The identification of potential risks and issues that may impact
the testing process, along with mitigation strategies or contingency plans.

Test Strategy

A test strategy is a high-level document that defines the overall approach, goals, and
guidelines for testing activities across multiple projects or releases. It provides a framework
for the testing process and sets the direction for the testing team.

A test strategy typically includes the following elements:

Test Levels: The different levels of testing to be performed, such as unit testing, integration
testing, system testing, and user acceptance testing.

Test Types: The various types of testing to be conducted, including functional, non-
functional, performance, security, and usability testing.

Test Techniques: The specific techniques or methodologies to be used for test case design,
such as equivalence partitioning, boundary value analysis, or state transition testing.

Test Automation: The extent of test automation to be implemented, including the tools and
frameworks to be used and the criteria for selecting test cases for automation.

Test Environment: The requirements and setup of the test environment, including
hardware, software, networks, and databases.


Test Data Management: The approach for managing test data, including the creation,
selection, and maintenance of test data sets.

Defect Management: The process for logging, tracking, and managing defects, including the
tools and systems to be used and the roles and responsibilities of the stakeholders involved.

Test Metrics and Reporting: The metrics to be collected during testing, such as test
coverage, defect density, and test execution progress. The strategy also outlines the
reporting mechanisms and frequency of test status updates.

Test Team Organization: The roles and responsibilities of the testing team members,
including the coordination with other project stakeholders, such as developers, business
analysts, and project managers.

Both the test plan and test strategy are important documents in the testing process. The test
plan provides detailed information about the testing activities for a specific project, while
the test strategy outlines the overall approach and guidelines for testing across multiple
projects or releases. These documents help ensure a structured and systematic approach to
testing, leading to improved software quality and successful project delivery.

Defect tracking tool

A defect tracking tool, also known as a bug tracking tool or issue tracking tool, is
software designed to help track and manage software defects or issues throughout their life
cycle. It provides a centralized platform for capturing, documenting, and monitoring defects
from identification to resolution.

Defect tracking tools offer features such as:

Defect Logging: The ability to log defects with detailed information, including the steps to
reproduce, the expected and actual results, severity, priority, and assigned resources.

Workflow Management: Defect tracking tools facilitate the management of defect
workflows, including assigning defects to specific team members, tracking the status and
progress of each defect, and ensuring proper escalation and resolution.

Collaboration and Communication: These tools enable collaboration among team
members by allowing them to add comments, attach files, and discuss specific defects. They
also provide notifications and alerts for updates and changes to defects.

Defect Prioritization: Defect tracking tools allow users to prioritize defects based on their
severity, impact on the system, and customer needs. This helps in efficiently allocating
resources and addressing critical defects first.


Reporting and Metrics: These tools generate reports and metrics related to defect trends,
defect density, defect resolution time, and other key performance indicators. This data helps
stakeholders assess the quality of the software and make data-driven decisions.

Integration: Defect tracking tools often offer integration capabilities with other
development and testing tools, such as project management systems, version control
systems, test management tools, and continuous integration tools.

Defect Life Cycle

Defect Life Cycle, also known as Bug Life Cycle, refers to the various stages that a
defect goes through from its identification to its closure. The specific stages may vary
depending on the organization and the defect tracking process in place.

However, the general stages in the defect life cycle include:

New: The defect is initially reported and logged into the defect tracking tool.

Assigned: The defect is assigned to a developer or a team member responsible for
investigating and resolving the issue.

Open: The defect is being actively worked on by the assigned person/team.

Fixed: The defect has been fixed by the developer and is ready for retesting.

Verified: The fixed defect is retested to ensure that it has been resolved successfully.

Closed: The defect is considered closed if it has been fixed, verified, and approved for
closure. The defect is no longer active in the defect tracking system.

Reopened: If the defect is found again or if it is not resolved satisfactorily, it may be
reopened for further investigation and resolution.

Deferred: In some cases, a defect may be deferred for a future release or iteration if it is not
critical or cannot be addressed immediately.
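As a rough sketch only (the exact states and allowed moves vary between organizations and
tools), the life cycle above can be modelled as a small state-transition table:

    # Sketch of a defect life cycle as a state-transition table.
    # The states and transitions are illustrative; real defect trackers differ.

    DEFECT_TRANSITIONS = {
        "New":      ["Assigned", "Deferred"],
        "Assigned": ["Open", "Deferred"],
        "Open":     ["Fixed", "Deferred"],
        "Fixed":    ["Verified", "Reopened"],
        "Verified": ["Closed", "Reopened"],
        "Closed":   ["Reopened"],
        "Reopened": ["Assigned"],
        "Deferred": ["Assigned"],
    }

    def move(current, new):
        if new not in DEFECT_TRANSITIONS.get(current, []):
            raise ValueError(f"cannot move a defect from {current} to {new}")
        return new

    state = move("New", "Assigned")   # allowed
    state = move(state, "Open")       # allowed
    # move("New", "Closed") would raise ValueError - a defect cannot be closed before it is worked on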

The defect life cycle helps in tracking the progress of defect resolution, monitoring the status
of defects, and facilitating effective communication among team members.


Thunder Client Extension in VS Code

Thunder Client is an alternative to the famous Postman tool used for testing client
APIs. The Thunder Client VS Code extension is lightweight and lets you test APIs on the fly
within the editor.

You might not want to download another tool to test the APIs you're building. Instead, how
about downloading an extension in VSCode that offers a wide range of functionalities like:

● collections,

● environment variables,

● support for standard HTTP verbs,

● navigation tabs (Query, Headers, Auth, Body, Test), and

● Support for JSON Responses

Thunder Client vs Postman


Thunder Client is lightweight and is suitable for users who want a simple user interface and a
fantastic user experience with zero complexity. It also runs flawlessly offline and provides
documentation with markdown support. Keep in mind that Postman is more robust and
has a broader range of features built to industry standards. It allows a community of
developers to explore the largest network of APIs, workspaces, and collections all over the
world. It also has features like creating teams, reporting, monitors (periodically checking API
performance and responses), and mock servers (which help simulate endpoints and their
corresponding responses without a backend).

How Thunder Client Works


If you want to use Thunder Client, you'll need to go to the VS Code marketplace to download
the extension and then launch it. Once you've done that, here are a few basic things you can
use the extension to do:

Track Activity: Thunder Client keeps track of recent API requests a user has made in the
past. You can also filter the activity to narrow it down to a preferred activity search. It is also
called History.


Use Collections: You can organize APIs so it's easier to access them. Collections are a group
of APIs, so you can create a User collection to include APIs like create user, edit user, delete
user, and so on.

Environment Variables: With Envs, you can store credentials like tokens, base URLs, and
public and private keys and then use the variables within the request body.

Make Requests: You can specify your preferred HTTP verb to go along with the request, like
POST, and then the endpoint. For each request, Thunder Client also supports Query
Parameters, HTTP Headers (raw or not), Authentication (None, Basic, Bearer, OAuth 2, AWS
and NTLM Authentication), Body (the payload attached to an individual request) and Test (you
select the test type, which can be a response code, and set a value to assert).
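For comparison only, the same pieces Thunder Client configures through its tabs (query
parameters, headers, bearer auth, a JSON body, and a status-code assertion) might look like this
in code; the URL, token and payload are placeholders, not real endpoints:

    # Sketch: the parts Thunder Client sets up in its UI, expressed with Python's requests library.
    import requests

    response = requests.post(
        "https://api.example.com/users",              # endpoint (placeholder)
        params={"notify": "true"},                    # Query tab
        headers={
            "Accept": "application/json",
            "Authorization": "Bearer <token>",        # Auth tab (Bearer token, placeholder)
        },
        json={"name": "Asha", "role": "tester"},      # Body tab (JSON payload)
        timeout=10,
    )

    assert response.status_code == 201                # Test tab: assert on the response code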

Responses: Thunder Client offers a well-crafted response section with the response body,
response status, and size and time it took for the request. It also lets users add markdown
supported documentation, making it even more enjoyable.

How to Download and Install Thunder Client


To download Thunder Client, you can find it on VS Code marketplace. Just search for
"Thunder Client" when you're prompted and then install it.

Search Thunder Client On Marketplace



Install the Thunder Client extension by clicking on the install button.

Install Thunder Client

How to Launch Thunder Client


Click on the new icon that's been added in VS Code to launch Thunder Client.

Launch Thunder Client


Then you can start using Thunder Client.


How to Make a Client Request


Depending on the type of Request, Thunder Client offers a list of HTTP VERBS for requests
such as GET, POST, PUT, DELETE, and PATCH.

HTTP Verbs in Thunder Client


There is also support for Query parameters, Headers, Authorization, Body and Tests. At the
time of writing, there is no support for file attachments for requests yet; you can check the
extension's release notes for upcoming features.


Query Parameters allow you to append query parameters to the request.

Query Params
Headers let you set HTTP headers like authorization, content-type, origin, user-agent,
accept-language, referrer, and so on.

If you want any headers to be optional, just make sure to leave them unchecked for the
request. There is also an autocomplete suggestion enabled for your preferred type of header.


Http Headers
To access resources, you need to have tokens that authenticate them. With Thunder Client,
the Auth tab lets you select your preferred type of Auth and add credentials.

In my case, I choose Bearer; then, I have a token pasted into the text area and an auto-
generated token prefix for the request.


Authentication
You can include a payload when making a request. To add the payload, select the Body tab,
and you will see different data formats supported by the extension.

Request Payload


Sample Request and Response


The image below shows a sample request with query parameters and a sample JSON
response.

Sample Request & Response
