
 Saves Money

 Security
 Product Quality
 Customer Satisfaction
 Imagine you just downloaded a banking app that charges one of the smallest
commissions on the market for sending money fast. You try to “Sign Up” and
an “Error” message shows up. Because of that problem, not only you but many
other users cannot sign up and use the product. The business has already
lost money because transactions are not being made and the issue was left
unresolved at the very start. Users will go and find a similar app that
works, and will probably never return to this one because of the bad
experience they had. Testing essentially helps you save time and money in
the long run, because issues are resolved before bigger problems occur.
Maintenance costs are also lower, and if the product works exactly as it
should, no exponential cost-wise damage is done to your business.
 In May of 1996, a software bug caused the bank accounts of 823
customers of a major U.S. bank to be credited with 920 million US dollars.
 Suncorp Bank – a malfunction during a routine upgrade caused the
disappearance of money from customers’ bank accounts. Additional
customer complaints included overdrawn and locked out accounts.
 Cairns Hospital – A catastrophic glitch affecting five Australian hospitals
was introduced during the application of security patches designed to
counter potential future cyber-attacks. It required more than two weeks
for the hospitals to recover their electronic medical record systems.
 China Airlines Airbus A300 crashed due to a software bug on April 26,
1994, killing 264 innocent lives.
 British Airways – for the sixth time in a year, a major IT
software failure led to massive cancellations of local flights
and significant delays on international flights. According to
NPR.org, it took over three days of cancellation chaos to
resolve the problems that plagued BA during this outage.
MYTH: Quality Control = Testing.
 FACT: Testing is just one component of software quality control.
Quality Control includes other activities such as Reviews.
MYTH: The objective of Testing is to ensure a 100% defect-free
product.
 FACT: The objective of testing is to uncover as many defects as
possible while ensuring that the software meets the requirements.
Identifying and getting rid of all defects is impossible.
MYTH: Testing is easy.
 FACT: Testing can be difficult and challenging (sometimes, even
more so than coding).
MYTH: Anyone can test.
 FACT: Testing is a rigorous discipline and requires many kinds of
skills.
MYTH: There is no creativity in testing.
 FACT: Creativity can be applied when formulating test approaches, when
designing tests, and even when executing tests.
MYTH: Automated testing eliminates the need for manual testing.
 FACT: 100% test automation cannot be achieved. Manual Testing, to some
level, is always necessary.
MYTH: When a defect slips, it is the fault of the Testers.
 FACT: Quality is the responsibility of all members/stakeholders of a
project, including developers.
MYTH: Software Testing does not offer opportunities for career growth.
 FACT: Gone are the days when users had to accept whatever product was
dished out to them, no matter the quality. With the abundance of
competing software and increasingly demanding users, the need for
software testers to ensure high quality will continue to grow.
The Evolving Profession of Software Engineering

• Software systems are becoming more challenging to build.
• Software developers are playing an increasingly
important role in society.
• People with software development skills are in
demand.
• New methods, techniques, and tools are available
to support development and maintenance tasks.
• Software has an important role in our lives both
economically and socially.
• Pressure is growing for software professionals to focus on
quality issues.
▪ Poor quality software that can cause loss of life
or property is no longer acceptable to society.
▪ Failures can result in catastrophic losses.
• Highly qualified staff ensure that
▪ Software products are built on time, within
budget.
▪ Highest quality with respect to attributes such as
➢ Reliability
➢ Correctness
➢ Ability to meet all user requirements.
In response to the demand for high-quality software,
and the need for well-educated software
professionals, there is a movement to change the way
software is developed and maintained, and the way
developers and maintainers are educated.
In fact, the profession of software engineering is
slowly emerging as a formal engineering discipline.
As a new discipline it will be related to other
engineering disciplines, and have associated with it a
defined body of knowledge, a code of ethics, and a
certification process.
The education and training of engineers in each
engineering discipline is based on
• the teaching of related scientific principles
• engineering processes
• standards, methods
• tools
• measurement
• best practices
These quality attributes need to be ensured
✓ Usability
✓ Testability
✓ Maintainability
✓ Reliability
 In software engineering, usability is the degree to
which software can be used by specified consumers
to achieve quantified objectives with effectiveness,
efficiency, and satisfaction in a quantified context of use.
 Software testability is the degree to which a software
artifact (i.e., a software system, software module, or
requirements or design document) supports testing in a
given test context. If the testability of the software artifact
is high, then finding faults in the system (if it has any) by
means of testing is easier.
 Many software systems are untestable, or not
immediately testable.
 Software is not static. If you build a valuable product that works
perfectly but is difficult to modify and adapt to new
requirements, it will not survive in today’s market.
Maintainability is a long-term aspect that describes how easily
software can evolve and change, which is especially important in
today’s agile environment.
 Maintainability refers to the ease with which you can repair,
improve and understand software code. Software maintenance is
a phase in the software development cycle that starts after the
customer has received the product.
 Developers take care of maintainability by continuously
adapting software to meet new customer requirements and
address problems faced by customers. This includes fixing
bugs, optimizing existing functionality and adjusting code to
prevent future issues. The longevity of a product depends on
a developer’s ability to keep up with maintenance
requirements.
 Software reliability is defined as the probability of
failure-free software operation for a specified period of
time in a specified environment. It can be an important
factor affecting system reliability.
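The definition above can be made concrete with a commonly used model (an assumption, not part of this text): under a constant failure rate λ, the probability of failure-free operation for time t is R(t) = e^(−λt). A minimal sketch:

```python
import math

def reliability(failure_rate: float, t: float) -> float:
    """Probability of failure-free operation for time t,
    assuming a constant failure rate (exponential model)."""
    return math.exp(-failure_rate * t)

# e.g., with 0.001 failures/hour, probability of surviving 100 hours:
print(round(reliability(0.001, 100), 4))  # 0.9048
```

Other reliability growth models exist; this one is shown only because it matches the "specified period of time in a specified environment" wording directly.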
✓ Project planning,
✓ Requirements management
✓ Development of formal specifications
✓ Structured design with use of information hiding and
encapsulation
✓ Design and code reuse.
✓ Inspections and reviews.
✓ Product and process measures
✓ Education and training of software professionals.
✓ Development and application of CASE tools
✓ Use of effective testing techniques.
✓ Integration of testing activities into the entire life cycle.
Process, in the software engineering domain, is the set of
methods, practices, standards, documents, activities, policies,
and procedures that software engineers use to develop and
maintain a software system and its associated artifacts, such
as project and test plans, design documents, code, and
manuals.
✓ CMM (Capability Maturity Model)
✓ SPICE (Software Process Improvement and Capability Determination)
✓ BOOTSTRAP
✓ TMM (Testing Maturity Model)
CMM (Capability Maturity Model)
• Most software engineers would agree that testing is a vital
component of a quality software process and is one of the
most challenging and costly activities carried out during
software development and maintenance.
• CMM (Capability Maturity Model) The term "maturity"
relates to the degree of formality and optimization of
processes, from ad hoc practices, to formally defined
steps, to managed result metrics, to active optimization of
the processes.
• In 2006 the Software Engineering Institute at Carnegie
Mellon University developed the Capability Maturity Model
Integration, which has largely superseded the CMM.
SPICE
• Software Process Improvement and Capability
Determination (SPICE) is an international framework to
assess software development processes. SPICE was
developed jointly by International Organization for
Standardization (ISO) and the International
Electrotechnical Commission (IEC). SPICE is specified in
ISO/IEC 15504.
BOOTSTRAP
• The BOOTSTRAP methodology for software process
assessment and improvement was initially developed by
taking the original SEI model as a starting point and
extending it with features based on the guidelines from ISO
9000 quality standards and ESA (European Space Agency)
process model standards.
Testing Maturity Model (TMM)
• Testing Maturity Model (TMM) is a framework to determine the
maturity of the software testing process. The main reason for
using a TMM is to determine maturity and provide targets or
goals for improving the software testing process to achieve
progress.
 The software development process has been
described as a series of phases, procedures, and
steps that result in the production of a software
product.
Testing itself is related to two other processes:
Verification and Validation
✓ Verification:
 "Are we building the product right?”
The software should conform to its specification.
Defn: Verification is the process of evaluating a
software system or component to determine whether
the products of a given development phase satisfy the
conditions imposed at the start of that phase.
✓ Validation:
 "Are we building the right product?”
The software should do what the user really requires.
Defn: Validation is the process of evaluating a software system or
component during, or at the end of, the development cycle in
order to determine whether it satisfies specified requirements.
✓ Testing
Defn 1: Testing is a group of procedures carried out to
evaluate some aspect of a piece of software.
Defn 2: Testing is a process used for revealing defects in
software, and for establishing that the software has attained a
specified degree of quality with respect to selected attributes.
✓ Debugging: Debugging, or fault localization, is the process of
locating the fault or defect, repairing the code, and retesting
the code.
✓ Error: An error is a mistake, misconception, or
misunderstanding on the part of a software developer.
✓ Fault: A fault (defect) is introduced into the software as the
result of an error. It is an anomaly in the software that
may cause it to behave incorrectly, and not according to
its specification.
✓ Failure: A failure is the inability of a software system or
component to perform its required functions within
specified performance requirements.
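The error → fault → failure chain can be illustrated with a small hypothetical example (the function and spec below are invented for illustration): a developer's misconception (error) produces an off-by-one comparison in the code (fault), which becomes observable as wrong behavior at runtime (failure).

```python
def is_passing(score: int) -> bool:
    """Hypothetical spec: a score of 60 or above passes.
    The developer misread the spec (error) and wrote '>' instead
    of '>=' -- an off-by-one fault in the code."""
    return score > 60  # fault: should be score >= 60

# The fault stays dormant for most inputs...
print(is_passing(75))  # True  -- correct, but only by luck
# ...and surfaces as a failure exactly at the boundary:
print(is_passing(60))  # False -- observable failure: the spec says True
```

Note that the fault exists in the code permanently, but a failure occurs only when an input exercises it, which is why boundary inputs are such productive test cases.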
✓ Test Case: A test case in a practical sense is a test-related item
which contains the following information:
▪ A set of test inputs. These are data items received from an
external source by the code under test. The external source
can be hardware, software, or human.
▪ Execution conditions. These are conditions required for
running the test, for example, a certain state of a database, or
a configuration of a hardware device.
▪ Expected outputs. These are the specified results to
be produced by the code under test.
✓ Test set: A test set is a group of related test cases, or a group of
related test cases and test procedures.
A group of related tests that are associated with a database, and
are usually run together, is sometimes called a test suite.
✓ Test Oracle: A test oracle is a document, or piece of software
that allows testers to determine whether a test has been
passed or failed.
✓ Test Bed: A test bed is an environment that contains all the
hardware and software needed to test a software component or
a software system.
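The three parts of a test case (test inputs, execution conditions, expected outputs) map naturally onto code. A minimal sketch using Python's unittest, where the function under test is invented for illustration and the assertion plays the role of the test oracle:

```python
import unittest

def withdraw(balance, amount):
    """Code under test (illustrative): returns the new balance,
    or raises ValueError on insufficient funds."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawTests(unittest.TestCase):  # a test set / suite
    def setUp(self):
        self.balance = 100  # execution condition: required starting state

    def test_normal_withdrawal(self):
        # test input: 30; expected output: 70 (the oracle)
        self.assertEqual(withdraw(self.balance, 30), 70)

    def test_overdraw_rejected(self):
        # invalid input condition (see Principle 5 below)
        with self.assertRaises(ValueError):
            withdraw(self.balance, 150)
```

Running `python -m unittest <file>` executes the suite; the test framework plus any stubbed hardware or database state would together form the test bed.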
✓ Software Quality:
▪ Quality relates to the degree to which a system, system
component, or process meets specified requirements.
▪ Quality relates to the degree to which a system, system
component, or process meets customer or user needs, or
expectations.
✓ Metric : A metric is a quantitative measure of the degree to
which a system, system component, or process possesses a
given attribute.
▪ Product metric (Software size, LOC)
▪ Process metric (Costs, Time)
✓ Quality Metric : A quality metric is a quantitative
measurement of the degree to which an item possesses a
given quality attribute.
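As an illustration of a quantitative product metric, defect density (defects per thousand lines of code, KLOC) is widely used; the numbers below are made up for the example:

```python
def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / (lines_of_code / 1000)

# e.g., 12 defects found in an 8,000-line component:
print(defect_density(12, 8000))  # 1.5 defects per KLOC
```

A process metric, by contrast, would measure the activity rather than the artifact, e.g. average time from defect report to fix.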
▪ Quality Attributes:
• Correctness
• Reliability
• Usability
• Integrity
• Portability
• Maintainability
• Interoperability
✓ Software Quality Assurance Group (SQAG): a team of
people with the necessary training and skills to ensure that
all necessary actions are taken during the development
process so that the resulting software conforms to
established technical requirements.
✓ Review : A review is a group meeting whose purpose is
to evaluate a software artifact or a set of software
artifacts.
A principle can be defined as:
1. A general or fundamental law, doctrine, or assumption;
2. A rule or code of conduct;
3. The laws or facts of nature underlying the working of an
artificial device.
✓ Principle 1. Testing is the process of exercising a software
component using a selected set of test cases with the intent of
(i) revealing defects, and (ii) evaluating quality.
✓ Principle 2. When the test objective is to detect defects, then
a good test case is one that has a high probability of revealing
a yet undetected defect(s).
✓ Principle 3. Test results should be inspected meticulously.
✓ Principle 4. A test case must contain the expected output or
result.
✓ Principle 5. Test cases should be developed for both valid and
invalid input conditions.
✓ Principle 6. The probability of the existence of additional
defects in a software component is proportional to the
number of defects already detected in that component
✓ Principle 7. Testing should be carried out by a group that is
independent of the development group.
✓ Principle 8. Tests must be repeatable and reusable.
✓ Principle 9. Testing should be planned.
✓ Principle 10. Testing activities should be integrated into the
software life cycle.
✓ Principle 11. Testing is a creative and challenging task.
✓ Reveal defects
✓ Find weak points
✓ Inconsistent behavior
✓ Circumstances where the software does not work as
expected.
✓ Cooperating with code developers
✓ Work along with requirements engineers
✓ Work with designers to plan for integration and unit test
✓ Test managers will need to cooperate with project
managers to develop reasonable test plans
• Education: The software engineer did not have the proper
educational background to prepare the software artifact. She
did not understand how to do something. For example, a
software engineer who did not understand the precedence
order of operators in a particular programming language could
inject a defect in an equation that uses the operators for a
calculation.
• Communication: The software engineer was not informed
about something by a colleague. For example, if engineer 1
and engineer 2 are working on interfacing modules, and
engineer 1 does not inform engineer 2 that no error-checking
code will appear in the interfacing module he is developing,
engineer 2 might make an incorrect assumption about the
presence/absence of an error check, and a defect will result.
• Oversight: The software engineer omitted to do something.
For example, a software engineer might omit an initialization
statement.
• Transcription: The software engineer knows what to do, but
makes a mistake in doing it. A simple example is a variable
name being misspelled when entering the code.
• Process: The process used by the software engineer
misdirected her actions. For example, a development process
that did not allow sufficient time for a detailed specification to
be developed and reviewed could lead to specification
defects.
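The Education example above (a misunderstood operator precedence order) is easy to reproduce in code; this hypothetical snippet is not from the text:

```python
# Spec (illustrative): the average of a and b is (a + b) / 2.

def average_wrong(a, b):
    return a + b / 2   # precedence fault: parsed as a + (b / 2)

def average_right(a, b):
    return (a + b) / 2

print(average_wrong(4, 6))   # 7.0 -- defect injected by the precedence error
print(average_right(4, 6))   # 5.0 -- matches the spec
```

A Transcription error would look different in kind but similar in effect, e.g. typing `avarage` for `average` and accidentally referencing the wrong variable.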
Our goal as testers is to discover these defects preferably before
the software is in operation.

One of the ways we do this is by designing test cases that have a
high probability of revealing defects. How do we develop these
test cases? One approach is to think of software testing as an
experimental activity.
The results of the test experiment are analysed to determine
whether the software has behaved correctly.
✓ formulate hypotheses;
✓ design test cases;
✓ design test procedures;
✓ assemble test sets;
✓ select the testing levels (unit, integration, etc.)
appropriate for the tests;
✓ evaluate the results of the tests.
► A successful testing experiment will prove the
hypothesis is true, that is, the hypothesized defect was
present. Then the software can be repaired (treated).
► Fault model: A fault (defect) model is a link between the error
made (e.g., a missing requirement, a misunderstood design
element, a typographical error), and the fault/defect in the
software.
✓ Requirement/Specification Defect Classes
✓ Design Defect Classes
✓ Coding Defect Classes
✓ Testing Defect Classes
 Defects can be classified in many ways. It is important for an
organization to adopt a single classification scheme and apply
it to all projects. No matter which classification scheme is
selected, some defects will fit into more than one class or
category.
 The defect types and frequency of occurrence should be used
to guide test planning, and test design. Execution-based testing
strategies should be selected that have the strongest possibility
of detecting particular types of defects. It is important that
tests for new and modified software be designed to detect the
most frequently occurring defects.
Requirement/Specification Defect Classes
Defects injected in early phases can persist and be very difficult to
remove in later phases. Since many requirements documents are
written using a natural language representation, there are very often
occurrences of ambiguous, contradictory, unclear, redundant, and
imprecise requirements.
Some specific requirements/specification defects are:
1. Functional Description Defects
The overall description of what the product does, and
how it should behave (inputs/outputs), is incorrect, ambiguous,
and/or incomplete.
2. Feature Defects
Features may be described as distinguishing characteristics of a
software component or system. Features refer to functional
aspects of the software that map to functional requirements as
described by the users and clients. Features also map to quality
requirements such as performance and reliability.
3. Feature Interaction Defects
These are due to an incorrect description of how the features
should interact. For example, suppose one feature of a software
system supports adding a new customer to a customer database.
4. Interface Description Defects
These are defects that occur in the description of how the target
software is to interface with external software, hardware, and
users. For detecting many functional description defects, black box
testing techniques, which are based on functional specifications
of the software, offer the best approach.
Random testing and error guessing are also useful for detecting
these types of defects. Black box–based tests can be planned at
the unit, integration, system, and acceptance levels to detect
requirements/specification defects.
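A black-box test derives its cases purely from the functional specification, without looking at the code. A sketch using equivalence classes and boundary values (the spec and function below are invented for illustration):

```python
# Hypothetical spec: ship_cost(weight) charges 5 for 0 < weight <= 1 kg,
# 10 for 1 < weight <= 5 kg, and rejects any other weight.

def ship_cost(weight):
    if weight <= 0 or weight > 5:
        raise ValueError("weight out of range")
    return 5 if weight <= 1 else 10

# Test inputs chosen from the spec alone: one representative per valid
# equivalence class, plus the boundary values of each class.
for w, expected in [(0.5, 5), (1, 5), (1.01, 10), (5, 10)]:
    assert ship_cost(w) == expected

# Invalid equivalence classes (tested per Principle 5 above):
for w in (0, -2, 5.1):
    try:
        ship_cost(w)
        assert False, "should have raised"
    except ValueError:
        pass

print("all black-box checks passed")
```

The same input selection would be reused unchanged if the implementation were rewritten, which is exactly the appeal of specification-based tests.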
Design Defects
This covers defects in the design of algorithms, control, logic, data
elements, module interface descriptions, and external software/
hardware/ user interface descriptions. When describing these
defects we assume that the detailed design description for the
software modules is at the pseudo code level with processing steps,
data structures, input/output parameters, and major control
structures defined. If module design is not described in such detail
then many of the defects types described here may be moved into
the coding defects class.
Design Defects
1. Algorithmic and Processing Defects
2. Control, Logic, and Sequence Defects
3. Data Defects
4. Module Interface Description Defects
5. Functional Description Defects
6. External Interface Description Defects
Coding Defects
Coding defects are derived from errors in implementing the
code. Coding defects classes are closely related to design defect
classes especially if pseudo code has been used for detailed
design. Some coding defects come from a failure to understand
programming language constructs, and miscommunication with
the designers.
Testing Defects
1. Test Harness Defects
In order to test software, especially at the unit and integration
levels, auxiliary code must be developed. This is called the test
harness or scaffolding code. The test harness code should be
carefully designed, implemented, and tested, since it is a work
product and much of this code can be reused when new
releases of the software are developed.
2. Test Case Design and Test Procedure Defects
These would encompass incorrect, incomplete, missing,
inappropriate test cases, and test procedures. These defects are
again best detected in test plan reviews.
▪ Suppose one feature of a software system supports adding a
new customer to a customer database.
▪ This feature interacts with another feature that categorizes
the new customer.
▪ The classification feature impacts on where the storage
algorithm places the new customer in the database, and also
affects another feature that periodically supports sending
advertising information to customers in a specific category.
▪ Algorithmic error: omission of error condition checks such as
division by zero
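An omitted division-by-zero check of this kind, and its guarded fix, can be sketched as follows (the functions are hypothetical):

```python
def mean_defective(values):
    """Algorithmic defect: no check for an empty list, so this
    raises ZeroDivisionError on mean_defective([])."""
    return sum(values) / len(values)

def mean_guarded(values):
    """Same computation with the error-condition check restored."""
    if not values:
        raise ValueError("cannot average an empty list")
    return sum(values) / len(values)

print(mean_guarded([2, 4, 6]))  # 4.0
```

Note the guarded version still fails on empty input, but with a deliberate, documented error rather than an unplanned crash, which is the point of the check.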
1. Extreme: money is deducted from an account but cash is not dispensed.
2. Critical: in an email service provider like Yahoo or Gmail, after
typing the correct username and password, instead of logging in,
the system crashes or throws an error message; this defect is
classified as critical because it makes the whole application unusable.
3. Important: in an email service provider like Yahoo or Gmail, you
are not allowed to add more than one recipient in the CC section,
i.e. add multiple users.
4. Minor: in an email service provider like Yahoo or Gmail, there is
an option called “Terms and Conditions” containing multiple links
about the terms and conditions of the website; when one of those
links is not working, it is classified as minor severity.
5. Cosmetic: spelling mistakes or misalignment on the page.
❑ The COIN Problem
✓ Pre condition
✓ Post condition
✓ Functional description
✓ Interface description
✓ Code defect
✓ Data defect
✓ Algorithmic and processing defect
✓ External Interface defect
✓ Control, logic, and sequence defects
✓ Algorithmic and processing defects
✓ Data Flow defects
✓ Data Defects
✓ External Hardware, Software Interface Defects
✓ Code Documentation Defects
✓ The Stakeholder Axiom
✓ The Test Basis
✓ The Test Oracle Axiom
✓ Fallibility
✓ The Scope Management Axiom
✓ The Coverage Axiom
✓ The Delivery Axiom
✓ The Environment Axiom
✓ The Event Axiom
✓ The Prioritization Axiom
✓ The Execution Sequencing Axiom
✓ The Design Axiom
✓ The Repeat-Test Axiom
✓ Good enough
✓ Never finished
✓ Value
✓ encompasses the technical activities and tasks that, when
applied, constitute quality testing practices
✓ involves commitment and ability to perform activities and
tasks related to improving testing capability
✓ defined as a cooperating, or supporting, view