
Quality - Testing

Introduction
QA-Introduction
1. Quality
2. Quality Control vs Quality Assurance
3. Software Testing
4. Testing Principles
5. Testing Life Cycle
6. Software Defects
7. Defect’s Life Cycle
8. Severity and Priority
9. Defect Reporting
Quality
What is Quality?

The ability of a set of inherent characteristics of a product, system or process


to fulfill requirements of customers and other interested parties

[ISO 9000:2000]
What is Software Quality?

Software Quality is conformance to:

• Functional and performance requirements explicitly established

• Explicitly documented development standards

• Implicit characteristics that are expected of all professionally developed software.
McCall’s Quality Factors

• Correctness: The extent to which a program satisfies its specifications and fulfills the user’s mission objectives.

• Reliability: The extent to which a program can be expected to perform its

intended function with required precision.


McCall’s Quality Factors

• Maintainability: The effort required to locate and fix a defect in an operational program.

• Usability: The effort required to learn, operate, prepare input, and interpret

output of a program.

• Flexibility: The effort required to modify an operational program.


McCall’s Quality Factors

• Testability: The effort required to test a program to ensure that it performs

its intended functions.

• Interoperability: The effort required to couple one system with another.


McCall’s Quality Factors

• Portability: The effort required to transfer a program from one hardware

configuration and/or software system environment to another.

• Integrity: The extent to which illegal access to the programs and data of a

product can be controlled.


McCall’s Quality Factors

• Reusability: The extent to which parts of a software system can be reused

in other applications.

• Efficiency: The amount of computing resources and code required by a

program to perform a function.


Quality
Assurance
What is Software Quality Assurance?

Software Quality Assurance (SQA) is a set of activities (facilitation, training,

measurement and analysis) required to provide adequate confidence that the

processes are established and continuously improved in order to produce a

product or service that meets the requirements and is ready to be used.


Software Quality Assurance Activities

• Procedure definition

o Configuration Management

o Nonconformities Report

o Corrective Actions Report

o Testing

o Formal Inspections
Software Quality Assurance Activities

• Standards definition

o Documentation standards

o Design standards

o Coding standards
Software Quality Assurance Activities

• Improve methodologies and the analysis, design, programming and testing tools.

• Review and inspection procedures


Software Quality Assurance Activities

• Test strategy definition

• Measurement definition

• Controls

• Process audits
o Documentation

• Verification and validation
o Changes
o Configuration management
Software Quality Assurance Characteristics

• Helps establish the process

• Establishes measurement programs to evaluate the process

• Identifies process weaknesses and improves them

• A management responsibility

• Assesses and controls the quality process

• Relates to all products of the process


Quality
Control
What is Software Quality Control?

Software Quality Control is defined as the processes and methods used to compare product quality against the applicable requirements and standards, and the actions taken when nonconformities are detected.


Software Quality Control Activities

• Keep the process under control.

• Eliminate the causes of bugs.


Software Quality Control Characteristics

• Relates to a specific product or service.

• Verifies particular attributes of a product or service.

• Identifies defects, with the main goal of correcting them.

• A working-level responsibility.
Verification
vs Validation
What is Verification in Software Testing?

Verification makes sure that the product is designed to deliver all

functionality to the customer.

• Verification is done from the starting of the development process. It

includes reviews and meetings, walk-throughs, inspection, etc. to

evaluate documents, plans, code, requirements and specifications.


What is Verification in Software Testing?

• It answers the questions like: Are we building the product right?

• Are we accessing the data right (in the right place; in the right way).

• It is a low-level activity.

• Performed during development on key artifacts, through techniques like walkthroughs, reviews and inspections, mentor feedback, training, checklists and standards.
What is Verification in Software Testing?

• Demonstration of consistency, completeness, and correctness of the

software at each stage and between each stage of the development

life cycle.
[ISTQB]
What is Validation in Software Testing?

Validation is determining if the system complies with the requirements and

performs functions for which it is intended and meets the organization’s

goals and user needs.

• Validation is done at the end of the development process for the

artifact being validated and takes place after verifications are

completed.
What is Validation in Software Testing?

• It answers the question like: Did we build the right product?


• It is a high-level activity.

• Performed after a work product is produced against established

criteria ensuring that the product integrates correctly into the

environment.
What is Validation in Software Testing?

• Determination of correctness of the final software product by a

development project with respect to the user needs and

requirements [ISTQB]
Difference
between SQA
and SQC
Quality Assurance vs. Quality Control

Definition:
- QA is a set of activities for ensuring quality in the processes by which products are developed.
- QC is a set of activities for ensuring quality in products; the activities focus on identifying defects in the actual products produced.

Focus:
- QA aims to prevent defects, with a focus on the process used to make the product. It is a proactive quality process.
- QC aims to identify (and correct) defects in the finished product. Quality control, therefore, is a reactive process.

Goal:
- The goal of QA is to improve development and test processes so that defects do not arise while the product is being developed.
- The goal of QC is to identify defects after a product is developed and before it's released.

How:
- QA: Establish a good quality management system and assess its adequacy; perform periodic conformance audits of the operations of the system.
- QC: Find and eliminate sources of quality problems through tools and equipment so that the customer's requirements are continually met.

What:
- QA: Prevention of quality problems through planned and systematic activities, including documentation.
- QC: The activities or techniques used to achieve and maintain the quality of the product, process and service.

Responsibility:
- QA: Everyone on the team involved in developing the product is responsible for quality assurance.
- QC: Quality control is usually the responsibility of a specific team that tests the product for defects.

Example:
- Verification is an example of QA; validation/software testing is an example of QC.

As a tool:
- QA is a managerial tool; QC is a corrective tool.

Orientation:
- QA is process oriented; QC is product oriented.

Software
Testing
What is Testing?

It is the process of evaluating a system or its components, manually or automatically, to verify that it satisfies specified requirements, or to identify differences between actual and expected results.

Testing is a comparison between actual behavior and the product specification.


What is Software Testing?

Software testing is a process of executing a program or application with the

intent of finding the software bugs.


[ISTQB]
Which are Software Testing’s goals?

•Finding bugs

•Gaining confidence about the level of quality

•Providing information for decision making

•Preventing defects

[ISTQB]
Testing
Principles
1. Testing shows presence of defects

Testing can show that defects are present, but cannot prove that there are no defects. Even after testing the application or product thoroughly, we cannot say that the product is 100% defect free. Testing always reduces the number of undiscovered defects remaining in the software, but even if no defects are found, that is not a proof of correctness. [ISTQB]


2. Exhaustive testing is impossible

Testing everything, including all combinations of inputs and preconditions, is not possible. So, instead of doing exhaustive testing, we can use risks and priorities to focus testing efforts. Assessing and managing risk is therefore one of the most important activities, and a reason for testing, in any project.

[ISTQB]
3. Early testing

In the software development life cycle testing activities

should start as early as possible and should be focused on

defined objectives. [ISTQB]


4. Defect clustering

A small number of modules contains most of the defects

discovered during pre-release testing or shows the most

operational failures. [ISTQB]


5. Pesticide paradox
If the same kinds of tests are repeated again and again, eventually the

same set of test cases will no longer be able to find any new bugs. To

overcome this “Pesticide Paradox”, it is really very important to review the

test cases regularly and new and different tests need to be written to

exercise different parts of the software or system to potentially find more

defects. [ISTQB]
6. Testing is context dependent

Testing is basically context dependent. Different kinds of software are tested differently. For example, safety-critical software is tested differently from an e-commerce site.

[ISTQB]
7. Absence of errors fallacy

If the system built is unusable and does not fulfil the user’s

needs and expectations then finding and fixing defects does

not help. [ISTQB]


To find out more, watch this video: Testing Principles

If you cannot play the video from here, find it attached at the Confluence page:
https://confluence.avantica.com:8443/display/AVANTESE/3.+QA+Introduction
Quality Assurance and Testing

[Diagram: Software QA process activities — audits and sampling, standards, procedure definition, on-going improvements, peer reviews, metrics definition, coordination and control — feeding the software testing activities: white-box and black-box testing, functional tests, regression tests, performance and load testing, sanity and smoke tests, test implementation, bug analysis and verification.]
Testing Life
Cycle
Testing Life Cycle

[ISTQB]
1. Planning and Control
Planning is the activity of defining the objectives of testing and the

specification of test activities in order to meet the objectives and mission

Control is the ongoing activity of comparing actual progress against the

plan and reporting status, including deviations from the plan.


2. Analysis and Design
It is the activity during which general testing objectives are transformed

into tangible test conditions and test cases.


Activities:

• Reviewing the test basis

• Evaluating testability

• Identifying and prioritizing test conditions

• Designing and prioritizing high-level test cases

• Identifying necessary test data

• Designing the test environment set-up

• Creating bi-directional traceability between the test basis and test cases
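
The traceability activity above can be sketched as two lookups: requirements to test cases, and the reverse. This is a minimal Python sketch with invented requirement and test case IDs:

```python
# Minimal sketch of bi-directional traceability between a test basis
# (requirements) and test cases. All IDs are invented for illustration.
req_to_tests = {
    "REQ-1": ["TC-001", "TC-002"],
    "REQ-2": ["TC-003"],
    "REQ-3": [],                     # not yet covered by any test case
}

# Derive the reverse mapping: test case -> requirements it covers.
test_to_reqs = {}
for req, tests in req_to_tests.items():
    for tc in tests:
        test_to_reqs.setdefault(tc, []).append(req)

# An uncovered requirement signals a coverage gap to address during design.
uncovered = [r for r, tests in req_to_tests.items() if not tests]
print("Uncovered requirements:", uncovered)  # Uncovered requirements: ['REQ-3']
```

Keeping both directions lets you answer "which tests cover this requirement?" and "which requirements does this test verify?" from the same data.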


3. Implementation and Execution
It is the activity where test procedures or scripts are specified by

combining the test cases in a particular order and including any other

information needed for test execution, the environment is set up and the

tests are run.


Activities:

• Finalizing, implementing and prioritizing test cases

• Developing and prioritizing test procedures, creating test data

• Creating test suites

• Verifying and updating bi-directional traceability between the test basis and

test cases

• Executing test procedures


Activities:

• Logging the outcome of test execution

• Comparing actual results with expected results

• Reporting discrepancies as incidents and analyzing them

• Repeating test activities as a result of the action taken for each discrepancy
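
The compare-and-report step above can be sketched in a few lines. This is a minimal Python illustration; the test case IDs and results are invented:

```python
# Sketch of the execution activity: compare actual results against
# expected results and report discrepancies as incidents.
executions = [
    ("TC-001", "Home page shown", "Home page shown"),   # actual == expected
    ("TC-002", "Error 500",       "Profile saved"),     # discrepancy
]

incidents = []
for case_id, actual, expected in executions:
    if actual != expected:
        # Each discrepancy becomes an incident to analyze and re-test later.
        incidents.append({"case": case_id, "actual": actual, "expected": expected})

print(len(incidents), "incident(s) to analyze")  # 1 incident(s) to analyze
```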
4. Evaluating Exit Criteria and Reporting
It is the activity where test execution is assessed against the defined

objective.

• Checking test logs against the exit criteria specified in test planning

• Assessing if more tests are needed or if the exit criteria specified

should be changed

• Writing a test summary report for stakeholders
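
Checking test logs against the exit criteria can be sketched as a simple rule. The thresholds below (a 95% pass rate and zero open critical defects) are invented examples, not values from the slides:

```python
# Illustrative sketch of evaluating exit criteria defined in test planning.
def exit_criteria_met(executed, passed, open_critical_defects,
                      min_pass_rate=0.95):
    """Return True when the run satisfies the planned exit criteria."""
    if executed == 0:
        return False            # nothing executed yet: criteria cannot be met
    pass_rate = passed / executed
    return pass_rate >= min_pass_rate and open_critical_defects == 0

# 192/200 = 96% pass rate, no critical defects open -> criteria met
print(exit_criteria_met(executed=200, passed=192, open_critical_defects=0))  # True
# 188/200 = 94% pass rate -> more testing (or a criteria change) is needed
print(exit_criteria_met(executed=200, passed=188, open_critical_defects=0))  # False
```

If the check fails, the team decides whether more tests are needed or whether the planned criteria themselves should change, as described above.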


5. Closure activities
Collect data from completed test activities to consolidate experience, testware, facts and numbers. Test closure activities occur at project milestones, such as when a software product is released, a test project is completed, a milestone has been achieved, or a maintenance release has been completed.
Activities:
• Checking which planned deliverables have been delivered

• Closing incident reports or raising change records for any that remain

open

• Documenting the acceptance of the system

• Finalizing and archiving the testware, the test environment and the test infrastructure for later reuse


Activities:

• Handing over the testware to the maintenance organization

• Analyzing lessons learned to determine changes needed for future

releases and projects

• Using the information gathered to improve test maturity


Test Case
What is a Test Case?

A set of test inputs, execution conditions, and expected results developed

for a particular objective, such as: to exercise a particular program path

or to verify compliance with a specific requirement.


Format of a Test Case

• ID
• * Title
• * Description
• * Prerequisites / preconditions
• * Steps (input data, action)
• * Expected result
• Status
• Created by
• Date of creation
• Executed by
• Date of execution
• Priority

* Basic structure of a test case
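
The starred basic fields above can be modeled as a small data structure. This is an illustrative Python sketch; the field names and the sample test case are invented and not tied to any specific test-management tool:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical model of the basic (*) test case fields from the slide,
# plus a couple of the optional metadata fields.
@dataclass
class TestCase:
    case_id: str
    title: str
    description: str
    preconditions: List[str]
    steps: List[str]            # each step is an action (with input data, if any)
    expected_result: str
    priority: str = "Medium"    # optional metadata field
    status: str = "Not Run"     # optional metadata field

# Invented example: a login test case.
login_tc = TestCase(
    case_id="TC-001",
    title="Login with valid credentials",
    description="Verify a registered user can log in",
    preconditions=["User account exists", "Login page is reachable"],
    steps=["Enter valid username", "Enter valid password", "Click 'Log in'"],
    expected_result="User lands on the home page",
)

print(login_tc.case_id, "-", login_tc.title)  # TC-001 - Login with valid credentials
```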


Test Case Example

Test Case created in Testlink tool


Basic Test Case Fields

1. Test Case ID: unique test case identification number.

2. Title: what is being tested, in a short sentence.

3. Summary: the test case description.

4. Precondition: the required state of a test item and its environment prior to execution.

5. Steps: execution guidance.

6. Expected Result: the correct outcome for each step or for the test case as a whole.
Test Cases Characteristics

• Precise: states exactly what is being tested

• Effective: finds bugs

• Traceable: related to requirements

• Evolutionary: adaptable as the product changes

• Efficient: contains only the necessary steps

• Initial status: returns the test environment to its original state

[ISTQB]
Good Test Cases

• Unambiguous: based on current customer requirements

• Atomic: unique, simple and clear

• Not repeated elsewhere in the suite

• Shows the presence of defects, with a high coverage rate

• Uses test design techniques

• Peer reviewed
To find out more, watch this video: How to Write a Test Case

Source: https://www.guru99.com/test-case.html

If you cannot play the video from here, find it attached at the Confluence page:
https://confluence.avantica.com:8443/display/AVANTESE/3.+QA+Introduction
Error, Defect
and Failure
Error, Defect and Failure. What is the difference?

Error: a mistake made by a programmer is known as an 'error'. It can happen for reasons such as:

• Confusion in understanding the functionality of the software

• Miscalculation of values

• Misinterpretation of a value, etc. [ISTQB]


Defect: a flaw introduced by the programmer inside the code is known as a defect. It results from a programming mistake. Different organizations call it by different names: bug, issue, incident or problem.

[ISTQB]

[ISTQB]
Failure: if, under certain circumstances, a defect is executed (for example, by the tester during testing), the resulting deviation in behavior is known as a software failure.

[ISTQB]
Error, Defect and Failure

• Error: human action that produces an

incorrect result.

• Defect: result of coding error or fault in the

program.

• Failure: deviation from expected result.


To find out more, watch this video: Difference between Error, Defect and Failure

If you cannot play the video from here, find it attached at the Confluence page:
https://confluence.avantica.com:8443/display/AVANTESE/3.+QA+Introduction
Causes of Software Defects
Defects occur because human beings are fallible and because there

is pressure, complex code, complexity of infrastructure, changing

technologies and/or many system interactions.

• Human error

• Environmental conditions

• Changes in hardware

• Executing Software (e.g. Antivirus)


Defect’s Life
Cycle
The defect life cycle is the cycle a defect goes through during its lifetime. It starts when the defect is found and ends when the defect is closed, after ensuring it is no longer reproducible. The defect life cycle applies to the bugs found during testing.
Defect’s Life Cycle

1. A defect is found and the tester proceeds to create it. Its status is 'OPEN'.

2. The team lead assigns the defect to a developer, so the status is 'ASSIGNED'.

3. The developer reviews the assigned bug. If the issue is considered a genuine bug, the developer works on the defect in order to fix it; once fixed, its status is set to 'FIXED'. If the developer feels that the bug is not genuine, he rejects it, and the status changes to 'REJECTED'.

Note: if the bug is rejected, it should be assigned back to QA to validate the resolution; once QA agrees that the appropriate resolution has been taken, it will be closed.

4. Once the bug is in the status 'FIXED', it is assigned to the QA team to execute the bug verification.

5. If the issue was fixed, the tester proceeds to close it, so the status is 'CLOSED'.

6. If the issue was not fixed, the tester re-opens the bug, so the status is 'RE-OPEN' and the cycle is repeated.
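
The numbered steps above describe a state machine. Below is a minimal Python sketch of those transitions; the transition table is one interpretation of the steps, not a prescribed workflow from any particular tool:

```python
# Sketch of the defect life cycle as a simple state machine.
# Status names follow the slide; the transition table is illustrative.
ALLOWED = {
    "OPEN":     {"ASSIGNED"},
    "ASSIGNED": {"FIXED", "REJECTED"},
    "REJECTED": {"CLOSED"},          # QA validates the rejection, then closes
    "FIXED":    {"CLOSED", "RE-OPEN"},
    "RE-OPEN":  {"ASSIGNED"},        # the cycle repeats
    "CLOSED":   set(),               # terminal state
}

def transition(current, new):
    """Move to a new status, rejecting transitions the workflow forbids."""
    if new not in ALLOWED[current]:
        raise ValueError(f"Illegal transition {current} -> {new}")
    return new

# Walk one full cycle: the first fix fails verification and is re-opened.
status = "OPEN"
for step in ["ASSIGNED", "FIXED", "RE-OPEN", "ASSIGNED", "FIXED", "CLOSED"]:
    status = transition(status, step)
print(status)  # CLOSED
```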


Defect’s Status vs. Defect’s Resolutions

Status: Represents the state of an issue at a particular point in a specific

workflow. Example: New, Open, In Progress, Close, Re Open, etc.

Resolutions: They are the ways in which an issue can be closed.


Defect’s Resolutions

Some of the most common resolutions are:

• Fixed: “Fixed” implies that there really was a problem in the code and

it has been addressed now.

• Invalid: The problem described is not a bug.

• Won't Fix: The problem described is a bug which will never be fixed.

• Duplicate: The problem is a duplicate of an existing bug. Marking a bug as a duplicate requires the number of the bug it duplicates; that bug number should at least be put in the description field.
• Worksforme or Cannot Reproduce: All attempts at

reproducing this bug were futile, and reading the code produces

no clues as to why the described behavior would occur. If more

information appears later, the bug can be reopened.

• As Designed: The program works as it’s supposed to.


• Enhancement: The tester has not found a defect per se; the issue is a request for a new feature or a feature modification. In other words, this is not a defect but a request that has not been implemented in the current release. This information is valuable for the future, as these records can be distinguished from the others for easy collection and inclusion in the requirements document and help files.


Severity and
Priority
There are two key attributes of a defect in software testing:

1) Severity
2) Priority

But… what is the difference between severity and priority?
Severity

Severity is the extent to which the defect can affect the software; in other words, it defines the impact that a given defect has on the system. For example: if an application or web page crashes when a remote link is clicked, clicking the remote link is a rare action for a user, but the impact of the application crashing is severe. So the severity is high but the priority is low.

[ISTQB]

[ISTQB]
Severity is absolute. It is a measure of the impact of the issue on the application, without taking into consideration the pending activities and the defined schedule.

The severity does not change during the defect life cycle.
Priority
Priority defines the order in which defects should be resolved: how important it is to fix the defect before the next release. Should we fix it now, or can it wait? The priority is set by the tester, indicating to the developer the time frame for fixing the defect; if the priority is high, the developer has to fix it at the earliest. Priority is set based on customer requirements. For example: if the company name is misspelled on the home page of the website, the priority is high and the severity is low.

[ISTQB]
Priority is relative. It is a subjective evaluation based on the importance of the issue found vs. the pending activities and the defined schedule. It is a decision from the point of view of the business.

The priority can change during the defect life cycle.
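
One way to see the difference in practice: priority, not severity, decides fix order. Here is a small Python sketch with invented defects, sorting a backlog by priority first and severity second:

```python
# Illustrative triage sketch: severity measures impact, priority drives
# fix order. Lower rank value = handled earlier.
PRIORITY = {"High": 0, "Medium": 1, "Low": 2}
SEVERITY = {"High": 0, "Medium": 1, "Low": 2}

# Invented defects mirroring the slides' two examples, plus one more.
defects = [
    {"id": "BUG-1", "summary": "Crash on rare remote link",
     "severity": "High", "priority": "Low"},
    {"id": "BUG-2", "summary": "Company name misspelled on home page",
     "severity": "Low", "priority": "High"},
    {"id": "BUG-3", "summary": "Slow report export",
     "severity": "Medium", "priority": "Medium"},
]

# Sort by priority first, then severity as a tie-breaker.
fix_order = sorted(defects,
                   key=lambda d: (PRIORITY[d["priority"]], SEVERITY[d["severity"]]))
print([d["id"] for d in fix_order])  # ['BUG-2', 'BUG-3', 'BUG-1']
```

Note that the low-severity misspelled logo (BUG-2) is fixed before the high-severity but low-priority crash (BUG-1), exactly as the text describes.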


Example
Example Explanation

This bug is considered low severity because it has no impact on the functionality of the program; everything works correctly with or without it. But it is a high-priority bug, because the logo is an important part of a company and a wrong logo could cause a bad impression of the company.
Activity

Please investigate what types of severity and priority exist and explain each of them.

Send your findings to your coach for review and feedback.


Defect
Reporting
Defect Reporting

A bug report must contain:

• Summary
• Related component
• Priority and severity
• Environment information (OS, DB, server, browser…)
• Steps to reproduce
• Current results
• Expected results
• Version
• Evidence (screenshots, videos, logs, etc.)
• Additional information / comments

Defect Reporting Template
Defect Example

Example taken from Jira: https://jira.avantica.net


Activity #1

See the instructions in the Confluence page:
https://confluence.avantica.com:8443/display/AVANTESE/3.+QA+Introduction
