
Software testing

By Lavinia Netoi

V1.4 7th Sept 2021


Contents
● What is testing?
● Test types
● Test phases
● Test roles
● Test artifacts
● Test case management
● Defect management
● Risk management
● Test reporting
● Test management
What is testing?

● Software testing is the activity of checking that a software application meets the
business requirements.
● To achieve this goal, different types of testing can be performed using certain
processes and tools.
● The IT industry could not evolve without software testing, as testing provides
confidence that the product is in good shape before deployment to
production.
● Software testing is a very good career choice.
Test types

There are different types of testing based on different criteria.

● Manual testing vs automated testing:


○ Manual testing is testing a software product following the business requirements without writing
code.
○ Automated testing is testing by writing code and is done by testers with programming skills.
Automated testing is mostly used for test scenarios with a large volume of test data, multiple
combinations of test data, regression testing and performance testing.
Test types

● Black box testing vs white box testing:


○ Black box testing is testing a software product without knowledge of the code that was
written to develop the product. The code is a black box for the tester, who tests by
following the business scenarios.
○ White box testing is testing a software product while being knowledgeable of the code that was
written to develop the product.
Test types

Functional vs non-functional testing:

● Functional testing tests the features of a software product following the business
requirements.
● Non-functional testing tests other aspects of the software product: security,
performance, portability, accessibility etc.
Test types
● Security testing checks whether the product is vulnerable to security attacks. Example:
a user should not be able to write a SQL statement in an input field of a web
application that retrieves sensitive information from the database.
● Performance testing checks that the product is performing at expected parameters,
e.g. web page loading time, CPU usage, memory usage should fall under certain values
etc.
● Portability testing checks that the product performs as expected on different
platforms, e.g. operating systems, browsers, phones, tablets etc.
● Accessibility testing checks that the product is available for use to people with
physical impairment, e.g. audio content for people with visual impairment.
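The SQL injection example above can be illustrated with a minimal sketch. The table, data and `find_user` function below are invented for illustration; the point is that a parameterized query treats user input as data, not as SQL, so a classic injection payload returns nothing instead of dumping the table.

```python
import sqlite3

# Hypothetical lookup function using a parameterized query:
# the "?" placeholder keeps user input out of the SQL text itself.
def find_user(conn, username):
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('Mary')")

# A classic injection payload should match no rows.
malicious = "' OR '1'='1"
assert find_user(conn, malicious) == []
assert find_user(conn, "Mary") == [("Mary",)]
```

A security test would try payloads like `malicious` against every input field and check that no unintended data is returned.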
Test types

Planned testing vs exploratory testing:

○ Planned testing means writing test cases based on the business
requirements and executing them. Planned testing is done by professional
testers.
○ Exploratory testing means testing without a test plan, following your
intuition. It is done by professional testers when there is not enough time
for a structured approach, or by non-professional testers and business users.
Test types

Positive testing vs negative testing vs edge case scenario testing:

○ Positive testing means testing the most frequent positive scenario, e.g. logging in to an
ecommerce web application with a correct username and password and pressing
the Login button.
○ Negative testing means checking how the application reacts to unexpected user
behaviour, e.g. the user tries to log in with a correct username but no password.
○ Edge case scenario testing (or corner case testing) means testing rare scenarios, e.g.
creating a new user with a long name or a name with special characters.
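The three scenario types can be shown side by side. The `login` validator below is hypothetical, standing in for the application under test; its rules (both fields required, username at most 50 characters) are invented for illustration.

```python
# Hypothetical login check, used only to illustrate the scenario types;
# a real test would drive the actual application instead.
def login(username, password):
    if not username or not password:
        return "error: missing credentials"
    if len(username) > 50:
        return "error: username too long"
    return "ok"

# Positive test: the most frequent happy path.
assert login("mary", "Pass123$") == "ok"
# Negative test: unexpected user behaviour (no password).
assert login("mary", "") == "error: missing credentials"
# Edge case: a rare input (a very long username).
assert login("x" * 51, "Pass123$") == "error: username too long"
```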
Test phases
● Unit testing
● Component testing
● Component integration testing
● System testing
● System integration testing
● End-to-end testing
● Regression testing
● UAT (User Acceptance Testing)
● Production testing
Test phases
● Unit testing checks the smallest components of the product at code level. It is done by
developers.
● Component testing (CT) checks the behaviour of a component of the product in
isolation. For example testing the product page of an e-commerce website.
● Component integration testing (CIT) tests the interaction between different
components of the product e.g the product page and the Wishlist page.
● System testing (ST) checks the behaviour of a system of the product, e.g. the CRM
system.
● System integration testing (SIT) checks the interaction between different systems of
the product, e.g. the interaction between the ecommerce system and the CRM system.
Test phases
● End-to-end testing (e2e) checks that the end-to-end business scenarios that span the
whole product are working as expected e.g. from user login on an ecommerce website,
to selecting 3 products and purchasing them.
● Regression testing checks that the old main features of the product are still working as
expected after the latest enhancements and/or defect fixing.
● UAT (User Acceptance Testing) is done by the internal or external customer. They test
the application from a business point of view.
● Production testing is a test done on production and is generally a short test which is
focused on the main features of the product.
Test phases
Notes:

● Sometimes the terms component and system are used in an interchangeable way
in the testing literature.
● Based on available time, budget and team members, a test team will decide if they
apply all phases of testing or just some of them. The approach should be
pragmatic, not dogmatic.
Test phases
Other test phases:
● Sanity check: a short test that checks the stability and validity of a new
feature.
● Smoke test: a short test that checks the critical features of an application.
Test roles
The testing roles can depend on methodology (waterfall vs agile) and company culture.

Testing roles:

● Test manager
● Tester
Test roles
● Test manager: coordinates a team of testers, assigns work to each tester,
collaborates with the other stakeholders of the project (development team lead,
architect, project manager, customer etc) and reports to the stakeholders. The test
manager can strictly coordinate or can be hands-on.
● Tester: is a member of the test team who is assigned testing activities: writing test
cases, executing test cases, opening defects, reporting to the test manager.
Test roles
Testers can be involved in different activities or can be specialized in a certain activity
and thus their title can be specific:

● Manual tester
● Automation tester
● Security tester
● Performance tester
● etc
Test artifacts
Test artifacts are the following:

● Test cases
● Traceability matrix - a mapping of test cases to business requirements to check
whether we have 100% test coverage
● Automated scripts
● Defect record
● Test Strategy
● Reports
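The traceability matrix above can be kept as simple tabular data. The sketch below uses a plain dict (requirement and test case IDs are made up) to compute the coverage percentage and list uncovered requirements.

```python
# Sketch of a traceability matrix: each business requirement maps to the
# test cases that cover it (all IDs here are invented examples).
traceability = {
    "REQ-001 login": ["TC-01", "TC-02"],
    "REQ-002 sort by price": ["TC-03"],
    "REQ-003 wishlist": [],  # not covered yet
}

covered = [req for req, tcs in traceability.items() if tcs]
coverage = 100 * len(covered) / len(traceability)
print(f"Coverage: {coverage:.0f}%")  # → Coverage: 67%
print("Gaps:", [req for req, tcs in traceability.items() if not tcs])
```

Anything short of 100% coverage points at requirements that still need test cases.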
Test artifacts
Test strategy:

● Is a document written by the test manager that contains the high-level approach to
testing.
● Test strategy is presented to the project team and updated as per feedback.
● Test strategy can be written as a .doc, .ppt or Confluence page based on company and
project culture.
Test artifacts
Test strategy chapters:

● Project goal
● Technical architecture
● Test goal
● Test types
● Test tools
● Test team membership
● Test schedule
● Test reporting
● Test environments
● Test data
● Test phases
● Assumptions and dependencies
● Risks and constraints
Test case management
● A test case is a set of steps that the tester has to take to check that the application
behaves as expected as per the business requirements.
● A test case has a title and a number of steps. A test case can have prerequisites
(e.g. user is already logged in) and for some test cases the tester has to prepare test
data.
● A test case needs to be written in a coherent, clear, logical and detailed way.
Test case management
Example of a test case for the Login page of a web application:

● Step 1
  ○ Action: User accesses the web application using its URL.
  ○ Test data: Url={url}
  ○ Expected result: The Login page of the web application loads.
● Step 2
  ○ Action: User inserts username and password and clicks the Login button.
  ○ Test data: Username=Mary, Password=Pass123$
  ○ Expected result: The homepage of the web application loads and the user is logged in.
Test case management
● Example of a test case for sorting TVs on an ecommerce website ascending by
price (prerequisite - user is already logged in):
● Step 1
  ○ Action: User chooses category TVs in the main menu.
  ○ Expected result: Products from the TVs category are listed.
● Step 2
  ○ Action: User chooses option Sort Ascending By Price in the display options menu.
  ○ Expected result: TVs are displayed ascending by price.
Test case management
● Example of a test case for adding a TV to the Wishlist on an ecommerce website
(prerequisite - user is already logged in):
● Step 1
  ○ Action: User chooses category TVs in the main menu.
  ○ Expected result: Products from the TVs category are listed.
● Step 2
  ○ Action: User chooses option Add To Wishlist for a product from the TVs category.
  ○ Expected result: The Wishlist counter increases by one.
● Step 3
  ○ Action: User clicks the Wishlist link in the page header.
  ○ Expected result: The Wishlist page loads and it contains the TV.
Test case management
● Test cases are written and kept in a test case repository that is available to the test
team for reuse and updates.
● Test management tools: Quality Center, XRay addon for Jira etc.
● The test cases are written during the test case design phase.
● Test cases are executed/run during the test case execution phase.
● During test case execution, test case status should be changed to reflect current
status: Not run, Passed, Failed, Blocked, In progress.
Defect management
● A defect is an unexpected behaviour of a product e.g. the search mechanism on the
homepage of an ecommerce website is not working.
● A defect should be opened in a defect management tool like Jira and tracked until
fixed and closed.
● Throughout its lifecycle, a defect can have the following status values:
Open, In review, Rejected, In testing, Reopened, Fixed, Closed. There can also
be other status values based on the default tool settings or customized settings.
● The defect is usually opened by a tester or business person and is fixed by a
developer.
Defect management
When logging a defect, the following aspects should be specified:

● Title: should be short and clear


● Description: should contain the steps followed until the defect was found. These
steps will help the developer to reproduce the defect and fix it.
● Severity
● Priority
● Operating system, browser, device etc
● Attached screenshots, log files
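A defect record with the fields above can be sketched as a small data structure. The field names and sample values below are illustrative; real trackers like Jira store these as configurable issue fields.

```python
from dataclasses import dataclass, field

# Sketch of a defect record mirroring the fields listed above.
@dataclass
class Defect:
    title: str
    description: str       # steps to reproduce
    severity: str          # e.g. Minor, Medium, Major, Blocker
    priority: str          # e.g. Low, Moderate, High, Critical
    environment: str       # operating system, browser, device
    attachments: list = field(default_factory=list)
    status: str = "Open"   # new defects start in Open

bug = Defect(
    title="Search broken on homepage",
    description="1. Open homepage 2. Type 'TV' in search 3. Press Enter -> no results",
    severity="Major",
    priority="High",
    environment="Windows 11, Chrome 120",
    attachments=["screenshot.png"],
)
assert bug.status == "Open"
```

A clear title plus reproduction steps in the description is what lets the developer reproduce and fix the defect.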
Defect management
Severity: a technical classification of the defects which shows the functional impact of
the defect on the product. Severity can be classified as: Minor, Medium, Major,
Blocker. Severity values can depend on the default tool settings or customized settings.

Severity is usually set by a technical person, be it the tester or the developer.


Defect management
Example of defects of different severity values:

● Minor: spelling mistake


● Medium: a link has a wrong url
● Major: search mechanism is not working on a web page
● Blocker: When accessing the web application url, the application is crashing and
consequently no feature can be used
Defect management
● Priority is the business urgency with which the defect should be fixed, for
reasons related to end user experience, reputation, money, law etc.
● Priority is a business classification of defects.
● Priority values can be: Low, Moderate, High, Critical. The priority values can
depend on the default tool settings or customized settings.
● Priority can be set by a tester or by a business representative.
Defect management
Example of defects of different priority values:

● Low: incorrect spelling on a page of a web application that is not frequently
accessed
● Moderate: the sorting mechanism is not working on an administrative page
accessed by a limited number of users
● High:
  ○ the search mechanism is not working on the homepage of an ecommerce website
  ○ incorrect spelling of the web application name on the homepage
● Critical:
  ○ the web application crashes when accessing its URL
Defect management
There isn’t a direct relationship between severity and priority:

● Major severity, Low priority: the search mechanism isn’t working on an
administrative page accessed by 3 admins.
● Major severity, High priority: the search mechanism isn’t working on the
homepage of an ecommerce website.
● Minor severity, Low priority: a typo on a page of a web application that is
infrequently accessed by users.
● Minor severity, High priority: the website name is misspelled on the homepage.
Risk management
● A risk is the probability that a negative event will happen.
● Risk management is the activity of identifying risks and taking measures ahead of
time to reduce or eliminate their probability of occurrence or their impact.
● Risk management is a section in the test strategy which lists the risks identified at
the beginning of the project and throughout the project when major updates are
made to the Test Strategy.
● Risk management can also be done as a continuous activity during project
iterations, be it waterfall iterations, agile scrum sprints or SAFe (Scaled Agile
Framework) program increments (PIs).
Risk management
● When logging a risk, the following aspects should be mentioned: title, description,
priority, mitigation plan, owner and deadline.
● A risk log is a list of known risks. The risk log should be constantly updated with
current information.
Risk management
Risk log example:

● Risk: Not enough time to write detailed test cases
  ○ Description: There is not enough time to write detailed test cases, as the deadline
for delivery to production is in 2 weeks.
  ○ Priority: Medium
  ○ Mitigation plan: Write high-level test cases with a relevant title and steps with the
most important actions.
  ○ Owner: Test team
  ○ Deadline: dd.mm.yy

● Risk: No security testing specialist
  ○ Description: There is no security testing specialist in the test team.
  ○ Priority: High
  ○ Mitigation plan: A developer with cybersecurity skills will do security testing.
  ○ Owner: John
  ○ Deadline: dd.mm.yy
Test reporting
● Test reporting is reporting the status of testing to stakeholders to assess if testing
activities are on track. Test reports also highlight the High and Critical priority
defects that must be handled first.
● Based on company culture and project culture, the frequency of reporting can be
daily, weekly, at the end of each iteration etc.
● Reporting can be done manually by email or in a ppt uploaded in a sharepoint or it
can be done automatically through tools like Jira, XRay which generate reports
based on existing artifacts and their current status.
● The risk log should be part of the test report.
Test reporting
KPIs (Key Performance Indicators) to report:

● Test case statistics by status %, #


○ Overall and per test phase
○ Example: Passed 45%, Failed 30%, In progress 25%
● Defect statistics by status %, #
○ Overall and per test phase
○ Example: Open 50%, Rejected 5%, Closed 45%
● Defect statistics by priority %, #
○ Overall and per test phase
○ Example: Low 5%, Moderate 65%, High 25%, Critical 5%
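The percentage KPIs above are derived from raw counts. The sketch below computes a status breakdown from a list of test case statuses; the sample data is invented to match the example figures.

```python
from collections import Counter

# Sample test-case statuses (invented data matching the example above).
statuses = ["Passed"] * 9 + ["Failed"] * 6 + ["In progress"] * 5

counts = Counter(statuses)
total = sum(counts.values())
# Percentage per status, rounded to whole percent for the report.
kpis = {status: round(100 * n / total) for status, n in counts.items()}
print(kpis)  # → {'Passed': 45, 'Failed': 30, 'In progress': 25}
```

Tools like Jira and XRay compute the same breakdowns automatically from artifact status; the arithmetic is identical.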
Test reporting
Example of a test report:

[Pie charts: Test Case Execution, Defect Status Statistics]

● RAG: Amber
● Test case statistics: Passed 90%, Failed 10%
● Defect statistics: Open 5%, Closed 95%
● Risk log:
  ○ Risk: Not enough time to test the new Wishlist feature
  ○ Description: There is not enough time to test the new Wishlist feature until the
deployment-to-production deadline.
  ○ Mitigation plan: Deploy the Wishlist feature in the next release.
  ○ Priority: High
  ○ Owner: Mary
  ○ Deadline: dd.mm.yyyy
Test management
○ Test management is the activity of managing the test process: reading the
business requirements, writing test cases, executing test cases, opening
defects, test reporting, communicating with project stakeholders, participating
in a Go/No-Go decision before the product deployment to production based on
the test status and open risks.
○ Test management is performed by the test manager in a traditional test team
or by test engineers or DevOps engineers in agile teams.
