Testing Report
SDLC is a process followed for a software project within a software organization. It consists
of a detailed plan describing how to develop, maintain, replace, and alter or enhance specific
software. The life cycle defines a methodology for improving the quality of software and the
overall development process. The following figure is a graphical representation of the
various stages of a typical SDLC.
TESTING Page | 1
JB PORTALS
Identification of the risks associated with the project is also done in the planning stage. The
outcome of the technical feasibility study is to define the various technical approaches that
can be followed to implement the project successfully with minimum risks.
This DDS is reviewed by all the important stakeholders and, based on various parameters such as
risk assessment, product robustness, design modularity, and budget and time constraints, the
best design approach is selected for the product.
A design approach clearly defines all the architectural modules of the product along with their
communication and data flow representation with the external and third-party modules (if
any). The internal design of all the modules of the proposed architecture should be clearly
defined, down to the minutest detail, in the DDS.
Developers must follow the coding guidelines defined by their organization, and
programming tools such as compilers, interpreters, and debuggers are used to generate the
code. Different high-level programming languages such as C, C++, Pascal, Java, and PHP are
used for coding. The programming language is chosen with respect to the type of software
being developed.
SDLC Models
There are various software development life cycle models defined and designed
which are followed during the software development process. These models are also
referred to as "Software Development Process Models". Each process model follows a series
of steps unique to its type to ensure success in the process of software development.
Following are the most important and popular SDLC models followed in the industry −
● Waterfall Model
● Iterative Model
● Spiral Model
● V-Model
● Big Bang Model
● Agile development model
Each iteration is considered a short time "frame" in the Agile process model, which
typically lasts from one to four weeks. The division of the entire project into smaller parts
helps to minimize the project risk and to reduce the overall project delivery time
requirements. Each iteration involves a team working through a full software development
life cycle including planning, requirements analysis, design, coding, and testing before a
working product is demonstrated to the client.
1. Requirements gathering
2. Design the requirements
3. Construction/iteration
4. Testing
5. Deployment
6. Feedback
1. Requirements gathering: In this phase, you must define the requirements. You should
explain business opportunities and plan the time and effort needed to build the project.
Based on this information, you can evaluate technical and economic feasibility.
2. Design the requirements: When you have identified the project, work with stakeholders
to define requirements. You can use the user flow diagram or the high-level UML diagram to
show the work of new features and show how it will apply to your existing system.
3. Construction/ iteration: When the team defines the requirements, the work begins.
Designers and developers start working on their project, which aims to deploy a working
product. The product will undergo various stages of improvement, so it includes simple,
minimal functionality.
4. Testing: In this phase, the Quality Assurance team examines the product's performance
and looks for bugs.
5. Deployment: In this phase, the team issues a product for the user's work environment.
6. Feedback: After releasing the product, the last step is feedback. In this, the team receives
feedback about the product and works through the feedback.
○ Scrum
○ Crystal
○ eXtreme Programming(XP)
Scrum
So, why is it called Scrum? People often ask, “Is Scrum an acronym for something?”
and the answer is no. It is actually inspired by a scrum in the sport of rugby. In rugby, the
team comes together in what they call a scrum to work together to move the ball forward. In
this context, Scrum is where the team comes together to move the product forward.
SCRUM is an agile development process focused primarily on ways to manage tasks in team-
based development conditions.
○ Scrum Master: The scrum master sets up the team, arranges the meetings, and
removes obstacles from the process.
○ Product owner: The product owner creates the product backlog, prioritizes the
backlog, and is responsible for the delivery of functionality at each iteration.
○ Scrum Team: The team manages its work and organizes the work to complete the
sprint or cycle.
Scrum Framework
The scrum framework outlines a set of values, principles, and practices that scrum
teams follow to deliver a product or service. It details the members of a scrum team and their
accountabilities, “artifacts” that define the product and work to create the product, and
scrum ceremonies that guide the scrum team through work.
A scrum team is a small and nimble team dedicated to delivering committed product
increments. A scrum team’s size is typically small, at around 10 people, but it’s large enough
to complete a substantial amount of work within a sprint. A scrum team needs three specific
roles: product owner, scrum master, and the development team. And because scrum teams
are cross-functional, the development team includes testers, designers, UX specialists, and
ops engineers in addition to developers.
Product owners are the champions for their product. They are focused on
understanding business, customer, and market requirements, then prioritizing the work to
be done by the engineering team accordingly.
The product owner is not always the product manager. Product owners focus on ensuring
the development team delivers the most value to the business. Also, it's important that the
product owner be an individual. No development team wants mixed guidance from multiple
product owners.
Scrum master
Scrum masters are the champions of scrum within their teams. They coach teams, product
owners, and the business on the scrum process, and look for ways to fine-tune their practice
of it. An effective scrum master deeply understands the work being done by the team and can
help the team optimize their transparency and delivery flow. As the facilitator-in-chief,
he/she schedules the needed resources (both human and logistical) for sprint planning,
stand-up, sprint review, and the sprint retrospective.
Scrum teams get s*%& done. They are the champions for sustainable development practices.
The most effective scrum teams are tight-knit, co-located, and usually five to seven members.
One way to work out the team size is to use the famous ‘two pizza rule’ coined by Jeff Bezos,
the CEO of Amazon (the team should be small enough to share two pizzas). Team members
have differing skill sets, and cross-train each other so no one person becomes a bottleneck
in the delivery of work. Strong scrum teams are self-organizing and approach their projects
with a clear ‘we’ attitude. All members of the team help one another to ensure a successful
sprint completion.
The scrum team drives the plan for each sprint. They forecast how much work they believe
they can complete over the iteration using their historical velocity as a guide. Keeping the
iteration length fixed gives the development team important feedback on their estimation
and delivery process, which in turn makes their forecasts increasingly accurate over time.
Scrum artifacts
Scrum artifacts are important information used by the scrum team that helps define
the product and what work to be done to create the product. There are three artifacts in
scrum: product backlog, a sprint backlog, and an increment with your definition of “done”.
They are the three constants a scrum team should reflect on during sprints and over time.
● Product Backlog is the primary list of work that needs to get done, maintained by
the product owner or product manager. This is a dynamic list of features,
requirements, enhancements, and fixes that acts as the input for the sprint backlog.
It is, essentially, the team’s “To Do” list. The product backlog is constantly revisited,
re-prioritized and maintained by the Product Owner because, as we learn more or as
the market changes, items may no longer be relevant or problems may get solved in
other ways.
● Sprint Backlog is the list of items, user stories, or bug fixes, selected by the
development team for implementation in the current sprint cycle. Before each sprint,
in the sprint planning meeting (which we’ll discuss later in the article) the team
chooses which items it will work on for the sprint from the product backlog. A sprint
backlog may be flexible and can evolve during a sprint. However, the fundamental
sprint goal – what the team wants to achieve from the current sprint – cannot be
compromised.
● Increment (or Sprint Goal) is the usable end-product from a sprint. At Atlassian, we
usually demonstrate the “increment” during the end-of-sprint demo, where the team
shows what was completed in the sprint. You may not hear the word “increment” out
in the world, as it’s often referred to as the team’s definition of “Done”, a milestone,
the sprint goal, or even a full version or a shipped epic. It just depends on how your
team defines “Done” and how you define your sprint goals. For example, some teams
choose to release something to their customers at the end of every sprint. So their
definition of ‘done’ would be ‘shipped’. However, this may not be realistic for other
types of teams. Say you work on a server-based product that can only ship to your
customers every quarter. You may still choose to work in 2-week sprints, but your
definition of ‘done’ may be finishing part of a larger version that you plan to ship
together. But of course, the longer it takes to release software, the higher the risk that
software will miss the mark.
Sprint
The heart of Scrum is a Sprint, a time-box of one month or less (commonly two weeks) during which a
potentially releasable product increment is created. A new Sprint starts immediately after
the conclusion of the previous Sprint. Sprints consist of the Sprint planning, daily scrums, the
development work, the Sprint review, and the Sprint retrospective.
In Sprint planning, the work to be performed in the Sprint is planned collaboratively by the
Scrum Team. The Daily Scrum Meeting is a 15-minute time-boxed event for the Scrum Team
to synchronize the activities and create a plan for that day.
A Sprint Review is held at the end of the Sprint to inspect the Increment and make changes
to the Product Backlog, if needed. The Sprint Retrospective occurs after the Sprint Review
and prior to the next Sprint Planning. In this meeting, the Scrum Team is to inspect itself and
create a plan for improvements to be enacted during the subsequent Sprint.
Testing furnishes a comparison of the behavior and state of the software against
mechanisms by which a problem can be recognized. These mechanisms may include past
versions of the same product, comparable products, interfaces of expected purpose, and
relevant standards, but are not limited to these.
Testing includes an examination of the code as well as the execution of the code in various
environments and conditions, examining all aspects of the code's behavior. In the current
scenario of software development, the testing team may be separate from the development
team, so that the information derived from testing can be used to correct the software
development process.
Few can argue against the need for quality control when developing software. Late
delivery or software defects can damage a brand’s reputation — leading to frustrated and
lost customers. In extreme cases, a bug or defect can degrade interconnected systems or
cause serious malfunctions.
Consider Nissan having to recall over 1 million cars due to a software defect in the airbag
sensor detectors. Or a software bug that caused the failure of a USD 1.2 billion military
satellite launch. The numbers speak for themselves. Software failures in the US cost the
economy USD 1.1 trillion in assets in 2016. What’s more, they impacted 4.4 billion customers.
Though testing itself costs money, companies can save millions per year in development and
support if they have a good testing technique and QA processes in place. Early software
testing uncovers problems before a product goes to market. The sooner development teams
receive test feedback, the sooner they can address issues such as:
● Architectural flaws
● Poor design decisions
● Invalid or incorrect functionality
● Security vulnerabilities
● Scalability issues
When development leaves ample room for testing, software reliability improves and high-
quality applications are delivered with few errors. A system that meets or even exceeds
customer expectations leads to potentially more sales and greater market share.
Verification is the process of checking that the software achieves its goal without any
bugs. It ensures that the product being developed is built correctly, i.e., that it fulfills the
requirements we have. Verification is static testing. Verification means: Are we building the
product right?
Validation is the process of checking whether the software product is up to the mark,
i.e., whether it meets the high-level requirements. It checks that what we are developing is
the right product, by comparing the actual product against the expected product. Validation
is dynamic testing.
Validation means: Are we building the right product?
Verification
○ We check whether the product being developed meets the specified requirements.
○ The execution of code does not happen in verification testing.
○ In verification testing, we can find bugs early in the development phase of the product.
○ Verification is done by the quality assurance team to make sure that the product is
developed according to customers' requirements.
○ Verification is done before validation testing.
○ In this type of testing, we can verify whether the inputs follow the outputs or not.
Validation
○ We check whether the developed software follows the specified business requirements.
○ In validation testing, the execution of code happens.
○ In validation testing, we can find those bugs which are not caught in the verification
process.
○ Validation is done with the involvement of the testing team to test the application.
○ After verification testing, validation testing takes place.
○ In this type of testing, we can validate whether the user accepts the product or not.
What is Debugging?
It is an obvious fact that when software is created, it contains many errors; the reason
being that nobody is perfect, and getting an error in the code is not an issue, but failing to
prevent or remove it is!
All these errors and bugs are removed regularly, so we can conclude that debugging is
a process of eradicating or fixing the errors contained in a software program.
Debugging works stepwise, starting from identifying the errors, then analyzing them, and
finally removing them. Whenever software fails to deliver the expected result, we need the
software tester to test the application and help resolve the problem.
Since the errors are resolved at each step of debugging, we can conclude that it is a
tiresome and complex task, regardless of how efficient the result is.
Types of Testing
Manual testing
The process of checking the functionality of an application as per the customer's needs,
without taking the help of any automation tool, is known as manual testing. While performing
manual testing on an application, we do not need any specific knowledge of a testing
tool; rather, we need a proper understanding of the product so we can easily prepare the test
documents.
Manual testing can be further divided into three types of testing, which are as follows:
White-box testing
White box testing is done by the developer, who checks every line of the code
before giving it to the test engineer. Since the code is visible to the developer during the
testing, it is known as white box testing.
Black box testing is done by the test engineer, who checks the
functionality of the application or the software according to the customer's/client's needs. In
this, the code is not visible while performing the testing; that is why it is known as black box
testing.
Gray box testing is a combination of white box and black box testing. It can be
performed by a person who knows both coding and testing. If a single person performs
white box as well as black box testing for the application, it is known as gray box testing.
○ First, the tester observes all documents related to software, to select testing areas.
○ All test cases are executed manually by using Black box testing and white box testing.
○ If bugs occur, the testing team informs the development team.
○ The development team fixes the bugs and hands the software to the testing team for a retest.
Automation testing
Automation testing is the process of converting manual test cases into test
scripts with the help of automation tools or a programming language. With the help of
automation testing, we can enhance the speed of test execution, because here we do not
require human effort for every run; we need to write the test scripts once and execute them.
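As a minimal sketch, a manual check such as "verify that an order total is computed correctly" can be converted into an automated script using Python's built-in unittest module; the `order_total` function here is a hypothetical unit under test:

```python
import unittest

# Hypothetical unit under test: sums item prices and applies a tax rate.
def order_total(prices, tax_rate=0.1):
    return round(sum(prices) * (1 + tax_rate), 2)

class OrderTotalTest(unittest.TestCase):
    # Each manual check becomes a repeatable, scripted test.
    def test_total_with_default_tax(self):
        self.assertEqual(order_total([10.0, 20.0]), 33.0)

    def test_empty_order(self):
        self.assertEqual(order_total([]), 0.0)

# Run the suite programmatically (no command line needed).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(OrderTotalTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Once written, such a script can be re-run on every build at no extra human cost, which is where the speed advantage of automation comes from.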
Software testing is performed to discover bugs in software during its development. The key
differences between manual and automation testing are as follows:
○ In manual testing, a human tester executes the test cases; in automation testing,
automation tools are used to execute test scripts.
○ Manual testing requires human resources throughout; automation testing is much
faster than manual testing once the scripts are in place.
○ Manual testing is repetitive and error-prone; in automation, tools make the repeated
runs consistent and reliable.
○ Build verification testing (BVT) is time-consuming when done manually; with
automation it is easy to run build verification testing on every build.
○ Manual testing is performed without frameworks; automation testing uses
frameworks like keyword-driven, hybrid, and data-driven.
○ The process turnaround time of manual testing is higher than that of automation
(one testing cycle takes a lot of time); automation completes a single round of testing
in record time, so its turnaround time is much lower.
○ The main goal of manual testing is to verify the application from the end user's
perspective; automation testing can only guarantee what its scripts are written to check.
○ Manual testing is best for usability, exploratory, and ad-hoc testing; automation is
widely used for performance testing, load testing, and regression testing.
The Software Testing Life Cycle (STLC) is a systematic approach that defines the
testing activities and processes to ensure the quality and reliability of software applications.
It encompasses the planning, preparation, execution, and closure of testing activities.
The main goal of the STLC is to identify and document any defects or issues in the software
application as early as possible in the development process. This allows for issues to be
addressed and resolved before the software is released to the public.
Overall, the STLC is an important process that helps to ensure the quality of software
applications and provides a systematic approach to testing. It allows organizations to release
high-quality software that meets the needs of their customers, ultimately leading to
customer satisfaction and business success.
The following are the typical phases of the Software Testing Life Cycle:
Test Planning: Test planning involves defining the overall testing objectives, strategies, and
test deliverables. Testers identify the test scope, determine the test levels (e.g., unit testing,
integration testing, system testing), and create a test plan document. The test plan outlines
the test approach, test environments, test schedules, and resource allocation.
Test Design: In this phase, test cases are designed based on the requirements and
specifications. Testers create test scenarios, identify test conditions, and define test data and
expected results. The test cases should cover various functional and non-functional aspects
of the software application.
Test Environment Setup: Testers set up the required test environments, which may include
hardware, software, networks, and databases, to execute the test cases effectively. They
ensure that the test environments are stable, consistent, and reflect the production
environment as closely as possible.
Test Execution: Test execution involves running the test cases in the test environment and
comparing the actual results with the expected results. Testers record the test outcomes, log
any defects or issues encountered during testing, and communicate the results to the
relevant stakeholders.
Test Closure: In the test closure phase, testers summarize the testing activities and evaluate
the overall test coverage, quality, and effectiveness. They prepare test closure reports, which
include the test summary, key findings, metrics, and recommendations for future testing
efforts. The test closure phase helps stakeholders make informed decisions about the
application's readiness for release.
Test Cycle Evaluation: Test cycle evaluation involves assessing the effectiveness and
efficiency of the testing process. Testers analyze the test metrics, evaluate the test coverage,
and identify areas for improvement. Lessons learned from the testing cycle are documented
and used to enhance future testing activities.
It's worth noting that the Software Testing Life Cycle may vary depending on the specific
methodologies or approaches used, such as waterfall, agile, or DevOps. The key objective of
the STLC is to ensure that comprehensive and systematic testing activities are performed to
deliver high-quality software products.
Test case design techniques help ensure that test cases cover a wide range of
scenarios and adequately validate the software application. Here are some commonly used
test case design techniques:
Equivalence Partitioning: This technique divides the input data into equivalence classes or
groups, where each class should exhibit similar behavior. Test cases are then designed to
cover representative values from each equivalence class, reducing the redundancy of test
cases.
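As a sketch, equivalence partitioning can be illustrated in Python; the discount rule and its age ranges here are hypothetical:

```python
# Equivalence partitioning sketch: a hypothetical discount rule treats
# ages 0-17, 18-64, and 65+ as three equivalence classes.
def discount(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return 0.50   # minor class
    if age < 65:
        return 0.00   # adult class
    return 0.30       # senior class

# One representative value per class is enough to cover each partition,
# instead of testing every possible age.
representatives = {"minor": 10, "adult": 40, "senior": 70}
for name, age in representatives.items():
    print(name, discount(age))
```

Three test cases cover the whole input space here, which is the redundancy reduction the technique aims for.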
Boundary Value Analysis: Boundary value analysis focuses on testing the boundaries or
limits of input data. Test cases are designed to cover values at the lower and upper
boundaries, as well as just below and above these boundaries. This technique helps identify
defects that are often found near the boundaries.
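A minimal Python sketch, assuming a hypothetical rule that accepts quantities from 1 to 100 inclusive:

```python
# Hypothetical rule under test: valid quantities are 1..100 inclusive.
def accept_quantity(qty):
    return 1 <= qty <= 100

# Boundary value analysis clusters test values at the limits:
# just below, at, and just above each boundary.
boundary_values = [0, 1, 2, 99, 100, 101]
expected =        [False, True, True, True, True, False]
for qty, exp in zip(boundary_values, expected):
    assert accept_quantity(qty) is exp
```

An off-by-one mistake such as `1 < qty < 100` would be caught immediately by the values 1 and 100.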
Decision Table Testing: Decision tables are used to represent complex business logic or
rules. Test cases are derived by considering different combinations of conditions and
corresponding actions or outcomes. This technique helps ensure that all possible
combinations of conditions are tested.
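A small Python sketch of a decision table, using a hypothetical login rule:

```python
# Decision table sketch for a hypothetical login rule:
# conditions (valid_user, valid_password) map to an action.
decision_table = {
    (True,  True):  "grant access",
    (True,  False): "show password error",
    (False, True):  "show user error",
    (False, False): "show user error",
}

def login_outcome(valid_user, valid_password):
    return decision_table[(valid_user, valid_password)]

# Every combination of conditions becomes one test case.
for conditions, action in decision_table.items():
    assert login_outcome(*conditions) == action
```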
State Transition Testing: State transition testing is useful for applications that have
different states or modes. Test cases are designed to validate the transition between states
and the behavior of the application in each state. This technique ensures comprehensive
coverage of all possible state transitions.
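The idea can be sketched in Python with a hypothetical ATM-style state machine:

```python
# State transition sketch: a hypothetical ATM card state machine.
# Valid transitions are (state, event) -> next state.
transitions = {
    ("idle", "insert_card"):          "card_inserted",
    ("card_inserted", "correct_pin"): "authenticated",
    ("card_inserted", "wrong_pin"):   "card_inserted",
    ("authenticated", "eject_card"):  "idle",
}

def next_state(state, event):
    # An invalid (state, event) pair is itself a test-worthy condition.
    return transitions.get((state, event), "error")

assert next_state("idle", "insert_card") == "card_inserted"
assert next_state("idle", "correct_pin") == "error"  # invalid transition
```

Test cases are derived both from the valid transitions and from invalid ones, which exercises how the application reacts to events it should reject.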
Error Guessing: Error guessing is an informal technique where testers use their experience
and intuition to anticipate and design test cases based on potential errors or defects. Testers
think from the perspective of a user or a developer and create test cases that focus on error-
prone areas.
Exploratory Testing: In this informal technique, testers explore the application without
predefined test cases, try different inputs, and observe the behavior. This technique allows
for flexibility and creativity in finding defects.
Writing effective test cases is crucial for ensuring comprehensive and reliable testing.
Here are some key steps to consider when writing test cases:
Define Test Scenarios: Identify different test scenarios that cover a range of possible
interactions and conditions. Each test scenario should focus on a specific aspect or
functionality of the application.
Determine Test Data: Define the necessary test data required for each test case. Test data
should cover various combinations, boundary values, and edge cases to ensure thorough
testing.
Write Test Steps: Document the step-by-step instructions for executing the test case.
Clearly specify the inputs to be provided, the actions to be performed, and the expected
results for each step.
Keep Test Cases Independent: Ensure that each test case is independent and does not
depend on the execution or outcome of other test cases. This allows for better isolation and
identification of defects.
Be Clear and Concise: Write test cases in a clear and concise manner, using simple language
and avoiding ambiguity. Use bullet points or numbering to make the test steps easy to follow.
Provide Expected Results: Clearly state the expected results for each test step or test case.
The expected results should be specific, measurable, and verifiable.
Review and Validate: Review the test cases to ensure they cover all the necessary scenarios
and requirements. Validate the test cases with stakeholders to confirm their accuracy and
completeness.
Maintain Test Documentation: Keep the test cases well-organized and up to date.
Regularly review and update them as the application evolves or new requirements emerge.
By following these steps and leveraging appropriate test case design techniques, you can
create effective test cases that provide comprehensive coverage and help identify defects in
the software application.
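As an illustration of these guidelines, here is a small Python sketch; the `add_to_cart` function is a hypothetical system under test, and each test is independent, with its own data and explicit expected results:

```python
# Hypothetical system under test.
def add_to_cart(cart, item, qty):
    if qty <= 0:
        raise ValueError("quantity must be positive")
    cart[item] = cart.get(item, 0) + qty
    return cart

def test_add_new_item():
    # Step 1: start from an empty cart (independent of other tests).
    cart = {}
    # Step 2: add one item. Expected result: cart holds that item.
    result = add_to_cart(cart, "book", 2)
    assert result == {"book": 2}

def test_rejects_zero_quantity():
    # Independent negative test with its own data and expected result.
    try:
        add_to_cart({}, "book", 0)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_add_new_item()
test_rejects_zero_quantity()
```

Because neither test depends on the other's outcome, a failure in one isolates the defect without cascading.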
• To help new testers and devs quickly pick up testing, even if they join in the middle
of an ongoing project.
A standard test case format includes the following fields:
Test Case ID
Test Scenario
Test Steps
Prerequisites
Test Data
Expected/Intended Results
Actual Results
Test Status – Pass/Fail
Let’s build a test case example based on a specific scenario. Here is a sample case.
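As a sketch with hypothetical values, such a sample case can be expressed as a Python dictionary using the fields listed above (the scenario, data, and IDs are illustrative, not from a real project):

```python
# A hypothetical sample test case, expressed as a Python dictionary
# using the standard fields listed earlier in this report.
sample_test_case = {
    "Test Case ID": "TC_LOGIN_001",
    "Test Scenario": "Verify login with valid credentials",
    "Prerequisites": "A registered user account exists",
    "Test Steps": [
        "Open the login page",
        "Enter a valid username and password",
        "Click the Login button",
    ],
    "Test Data": {"username": "testuser", "password": "Passw0rd!"},
    "Expected/Intended Results": "User is redirected to the dashboard",
    "Actual Results": "",           # filled in during execution
    "Test Status": "Not Executed",  # becomes Pass/Fail after the run
}
print(sample_test_case["Test Case ID"])
```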
Test Criteria
Test criteria, also known as exit criteria or completion criteria, are the conditions or
standards that need to be met in order to determine when testing activities can be
considered complete. These criteria help assess whether the software application is ready
for release or the next phase of the development process.
Common test criteria include the following:
Test Coverage: The extent to which the software application has been tested. This includes
functional coverage, code coverage, and requirements coverage.
Defect Density: The number of defects found and their severity. The criteria may specify a
maximum allowed defect density or a certain level of defect resolution.
Test Case Execution: The completion of test case execution, ensuring that all planned test
cases have been executed.
Test Environment: The availability and stability of the test environment, including
hardware, software, networks, and databases.
Performance Targets: If performance testing is included, the test criteria may specify
certain performance targets or thresholds that need to be met.
Documentation: The completion and accuracy of test documentation, including test plans,
test cases, test scripts, and test reports.
Test Plan
A test plan is a formal document that outlines the approach, objectives, scope, and
schedule of testing activities for a specific software project or release. It provides a roadmap
for the testing effort and guides the testing team throughout the project.
A typical test plan includes the following sections:
Introduction: An overview of the software project, including the purpose and scope of
testing, project goals, and stakeholders involved.
Test Objectives: The specific goals and objectives of the testing effort, such as ensuring
functionality, performance, reliability, and security.
Test Scope: The areas or modules of the software application that will be tested, as well as
any excluded areas or functionalities.
Test Approach: The overall approach to testing, including the test levels (e.g., unit testing,
integration testing, system testing) and the types of testing (e.g., functional, non-functional,
regression).
Test Deliverables: The list of test artifacts or deliverables that will be produced during the
testing process, such as test plans, test cases, test scripts, and test reports.
Test Schedule: The timeline and sequencing of testing activities, including milestones,
resource allocation, and dependencies on other project activities.
Test Environment: The hardware, software, and network configurations required for
testing, including any specific tools or test management systems.
Test Execution: The procedures and guidelines for executing test cases, capturing test
results, and reporting defects. This may include test data, test procedures, and the roles and
responsibilities of the testing team.
Test Risks and Mitigation: The identification of potential risks and issues that may impact
the testing process, along with mitigation strategies or contingency plans.
Test Strategy
A test strategy is a high-level document that defines the overall approach, goals, and
guidelines for testing activities across multiple projects or releases. It provides a framework
for the testing process and sets the direction for the testing team.
A test strategy typically covers the following:
Test Levels: The different levels of testing to be performed, such as unit testing, integration
testing, system testing, and user acceptance testing.
Test Types: The various types of testing to be conducted, including functional, non-
functional, performance, security, and usability testing.
Test Techniques: The specific techniques or methodologies to be used for test case design,
such as equivalence partitioning, boundary value analysis, or state transition testing.
Test Automation: The extent of test automation to be implemented, including the tools and
frameworks to be used and the criteria for selecting test cases for automation.
Test Environment: The requirements and setup of the test environment, including
hardware, software, networks, and databases.
Test Data Management: The approach for managing test data, including the creation,
selection, and maintenance of test data sets.
Defect Management: The process for logging, tracking, and managing defects, including the
tools and systems to be used and the roles and responsibilities of the stakeholders involved.
Test Metrics and Reporting: The metrics to be collected during testing, such as test
coverage, defect density, and test execution progress. The strategy also outlines the
reporting mechanisms and frequency of test status updates.
Test Team Organization: The roles and responsibilities of the testing team members,
including the coordination with other project stakeholders, such as developers, business
analysts, and project managers.
Both the test plan and test strategy are important documents in the testing process. The test
plan provides detailed information about the testing activities for a specific project, while
the test strategy outlines the overall approach and guidelines for testing across multiple
projects or releases. These documents help ensure a structured and systematic approach to
testing, leading to improved software quality and successful project delivery.
A defect tracking tool, also known as a bug tracking tool or issue tracking tool, is
software designed to help track and manage software defects or issues throughout their life
cycle. It provides a centralized platform for capturing, documenting, and monitoring defects
from identification to resolution. Typical features include:
Defect Logging: The ability to log defects with detailed information, including the steps to
reproduce, the expected and actual results, severity, priority, and assigned resources.
Defect Prioritization: Defect tracking tools allow users to prioritize defects based on their
severity, impact on the system, and customer needs. This helps in efficiently allocating
resources and addressing critical defects first.
Reporting and Metrics: These tools generate reports and metrics related to defect trends,
defect density, defect resolution time, and other key performance indicators. This data helps
stakeholders assess the quality of the software and make data-driven decisions.
Integration: Defect tracking tools often offer integration capabilities with other
development and testing tools, such as project management systems, version control
systems, test management tools, and continuous integration tools.
Defect Life Cycle, also known as Bug Life Cycle, refers to the various stages that a
defect goes through from its identification to its closure. The specific stages may vary
depending on the organization and the defect tracking process in place, but commonly include:
New: The defect is initially reported and logged into the defect tracking tool.
Fixed: The defect has been fixed by the developer and is ready for retesting.
Verified: The fixed defect is retested to ensure that it has been resolved successfully.
Closed: The defect is considered closed if it has been fixed, verified, and approved for
closure. The defect is no longer active in the defect tracking system.
Deferred: In some cases, a defect may be deferred for a future release or iteration if it is not
critical or cannot be addressed immediately.
The defect life cycle helps in tracking the progress of defect resolution, monitoring the status
of defects, and facilitating effective communication among team members.
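The stages above can be sketched as a simple state machine. The allowed transitions below are an assumption chosen for illustration; real workflows vary by organization and by the defect tracking tool in use.

```python
# A sketch of the defect life cycle as a simple state machine, using only
# the stages listed above. The transition table is an illustrative assumption.
from enum import Enum

class Status(Enum):
    NEW = "New"
    FIXED = "Fixed"
    VERIFIED = "Verified"
    CLOSED = "Closed"
    DEFERRED = "Deferred"

ALLOWED = {
    Status.NEW:      {Status.FIXED, Status.DEFERRED},
    Status.FIXED:    {Status.VERIFIED},
    Status.VERIFIED: {Status.CLOSED},
    Status.DEFERRED: {Status.FIXED},
    Status.CLOSED:   set(),  # closed defects are no longer active
}

class Defect:
    def __init__(self, summary: str):
        self.summary = summary
        self.status = Status.NEW  # every defect starts as New

    def move_to(self, new_status: Status) -> None:
        if new_status not in ALLOWED[self.status]:
            raise ValueError(
                f"illegal transition {self.status.value} -> {new_status.value}")
        self.status = new_status

d = Defect("Login button unresponsive on mobile")
d.move_to(Status.FIXED)
d.move_to(Status.VERIFIED)
d.move_to(Status.CLOSED)
print(d.status.value)  # Closed
```

Rejecting illegal transitions (for example, closing a defect that was never verified) is exactly what a defect tracking tool's workflow rules enforce.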
Thunder Client is an alternative to the popular Postman tool for testing APIs. The
Thunder Client VS Code extension is lightweight and lets you test APIs on the fly, directly
within the editor.
You might not want to download yet another tool just to test the APIs you're building.
Instead, how about installing a VS Code extension that offers a wide range of features, such as:
● collections
● environment variables
Track Activity: Thunder Client keeps a record of the API requests you have made recently,
also called the History. You can filter this activity to narrow it down to a specific
request.
Use Collections: You can organize APIs so they are easier to access. Collections are groups
of related requests; for example, you can create a User collection containing requests such
as create user, edit user, and delete user.
Environment Variables: With Envs, you can store credentials like tokens, base URLs, and
public and private keys and then use the variables within the request body.
Make Requests: You specify your preferred HTTP verb for the request, such as POST, followed
by the endpoint. Thunder Client also supports Query Parameters, HTTP Headers (raw or not),
Authentication (None, Basic, Bearer, OAuth 2, AWS, and NTLM), Body (the payload attached to
an individual request), and Tests (for example, selecting a response-code test and setting
the value to assert).
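For comparison, the sketch below builds the same kind of request with Python's standard library: a POST with query parameters, headers, Bearer authentication, and a JSON body. The URL, token placeholder, and payload are all made up for illustration, and the request is constructed but never actually sent.

```python
# A rough equivalent of the Thunder Client request described above, built
# with the standard library. Endpoint, token, and payload are placeholders.
import json
import urllib.parse
import urllib.request

base_url = "https://api.example.com/users"                  # hypothetical endpoint
params = urllib.parse.urlencode({"page": 1, "limit": 10})   # query parameters
payload = json.dumps({"name": "Ada", "role": "tester"}).encode()

req = urllib.request.Request(
    url=f"{base_url}?{params}",
    data=payload,                     # body attached to this request
    method="POST",                    # the HTTP verb
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <your-token-here>",  # Bearer authentication
    },
)

print(req.get_method())                   # POST
print(req.get_full_url())
print(req.get_header("Authorization"))
```

Thunder Client fills in the same pieces through its tabs (Query, Headers, Auth, Body) instead of code.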
Responses: Thunder Client offers a well-crafted response section showing the response body,
response status, and the size and time of the request. It also lets users add
Markdown-supported documentation, making it even more enjoyable to use.
Query Params
Headers let you set HTTP headers like authorization, content-type, origin, user-agent,
accept-language, referrer, and so on.
If you want any headers to be optional, just make sure to leave them unchecked for the
request. There is also an autocomplete suggestion enabled for your preferred type of header.
HTTP Headers
To access protected resources, you need a token that authenticates the request. In Thunder
Client, the Auth tab lets you select your preferred type of authentication and add
credentials.
In my case, I choose Bearer, paste a token into the text area, and the token prefix for the
request is generated automatically.
Authentication
You can include a payload when making a request. To add the payload, select the Body tab,
and you will see different data formats supported by the extension.
Request Payload