
Software quality

• Meets customer expectations
• Meets customer requirements
• Reasonable cost to purchase
• Timely release
SQA & SQC
• The monitoring and measuring of the strength of the development process is called software quality assurance (SQA).
• The validation of the final product before its release to the customer is called software quality control (SQC).
Software development life cycle

• ANALYSIS PHASE
• DESIGN PHASE
• CODING PHASE
• SYSTEM TESTING
• MAINTENANCE
Analysis Phase
• BRS:
The business requirement specification defines the requirements of the customer for the new software to be developed.
• SRS:
The software requirement specification defines the functional requirements to be developed and the system requirements to be used.
Reviews in Analysis Phase
• Reviews are conducted on the documents for completeness and correctness.
• Reviews ask:
Are they the right requirements?
Are they complete requirements?
Are they achievable requirements?
Are they reasonable requirements?
Are they testable requirements?
Design Phase
• HLD: The high level design document defines the overall architecture of the system. It is also called the architectural design or external design.
Example: LOGIN → MAILING / CHATTING → LOGOUT
• LLD: The low level design document defines the internal structure of every module or functionality.
Example: USER → login → database check; if valid, go to Next Page; if invalid, return to login.
Reviews in Design
 Are they understandable design ?
 Are they meet right requirements ?
 Are they complete designs ?
 Are they follow able design (easy to
convert to programs) ?
 Are they handle errors ?
Coding Phase
• In the coding phase the software requirements are converted into programs in some programming language.
• Here, each program is tested using White Box Testing.
White Box testing
• It is a program level testing technique. In this technique the responsible people verify the internal structure of the corresponding program.
• White Box testing is also called open box testing, glass box testing or clear box testing.
White Box testing techniques:
• Unit testing
• Basis path testing
• Control structure testing
• Program technique testing
• Mutation testing
Basis path testing
• Verifies that the program executes without any runtime errors.
• Approach (see the sketch below):
Write the program with respect to the LLD.
Draw a flow graph for that program.
Calculate the cyclomatic complexity.
Run the program more than once so that all executable paths are covered.
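
A minimal sketch of the idea, assuming a hypothetical routine classify() whose flow graph has two decision nodes, so its cyclomatic complexity is 2 + 1 = 3 (equivalently E - N + 2) and three basis paths must be exercised:

    #include <stdio.h>

    /* Hypothetical routine for illustration: two decisions -> complexity 3.
     * Basis paths: (1) n < 0, (2) n >= 0 and even, (3) n >= 0 and odd. */
    int classify(int n) {
        if (n < 0)          /* decision 1 */
            return -1;      /* path 1 */
        if (n % 2 == 0)     /* decision 2 */
            return 0;       /* path 2 */
        return 1;           /* path 3 */
    }

    int main(void) {
        /* Run the program enough times to cover every basis path. */
        printf("%d %d %d\n", classify(-5), classify(4), classify(7));
        return 0;
    }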
Control Structure testing
• Concentrates on the correctness of the program.
• Tests whether the program takes the correct input and returns the correct output (see the sketch below).
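
A small sketch of this kind of check, assuming a hypothetical routine sum_upto() whose loop control structure is exercised with known inputs and expected outputs:

    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical routine under test. */
    int sum_upto(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++)   /* loop control structure being exercised */
            total += i;
        return total;
    }

    int main(void) {
        /* Correct input -> correct output checks at the loop's boundaries. */
        assert(sum_upto(0) == 0);   /* loop body never runs   */
        assert(sum_upto(1) == 1);   /* loop body runs once    */
        assert(sum_upto(5) == 15);  /* loop body runs n times */
        printf("control structure checks passed\n");
        return 0;
    }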
Program technique testing
• Concentrates on the speed of a program.
• If the execution speed is not reasonable, the programmers restructure the program without disturbing its functionality.
• Ex: swapping two values with a temporary variable (c = a; a = b; b = c) versus without one (a = a + b; b = a - b; a = a - b), as sketched below.
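
A runnable sketch of the example above, showing the two equivalent swap techniques side by side (the function names are illustrative only):

    #include <stdio.h>

    /* Swap using a temporary variable: c = a; a = b; b = c */
    void swap_with_temp(int *a, int *b) {
        int c = *a;
        *a = *b;
        *b = c;
    }

    /* Restructured swap without a temporary variable:
     * a = a + b; b = a - b; a = a - b
     * Same functionality, different internal technique. */
    void swap_without_temp(int *a, int *b) {
        *a = *a + *b;
        *b = *a - *b;
        *a = *a - *b;
    }

    int main(void) {
        int x = 3, y = 7;
        swap_with_temp(&x, &y);
        printf("%d %d\n", x, y);     /* 7 3 */
        swap_without_temp(&x, &y);
        printf("%d %d\n", x, y);     /* 3 7 */
        return 0;
    }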
Mutation testing
• During this test the corresponding programmers estimate the completeness and correctness of the testing done on a program.
• Approach: run the existing tests on the original program (they pass), then make a small change (mutation) to the program and run the same tests again. If the tests still pass, the tests are incomplete; if the tests fail, the testing is complete enough to catch the change (see the sketch below).
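
A minimal sketch of the idea, using a hypothetical add() routine and a deliberately planted mutant; a weak test lets the mutant survive, while a stronger test detects it:

    #include <stdio.h>

    /* Original program. */
    int add(int a, int b) {
        return a + b;
    }

    /* Hypothetical mutant: '+' changed to '*'. */
    int add_mutant(int a, int b) {
        return a * b;
    }

    int main(void) {
        /* Incomplete test: passes on both the original and the mutant,
         * so this test set would NOT detect the change. */
        printf("weak test:   original %s, mutant %s\n",
               add(2, 2) == 4 ? "passed" : "passed? no: failed",
               add_mutant(2, 2) == 4 ? "passed" : "failed");

        /* Complete test: passes on the original but fails on the mutant,
         * which is what mutation testing expects from a good test set. */
        printf("strong test: original %s, mutant %s\n",
               add(2, 3) == 5 ? "passed" : "failed",
               add_mutant(2, 3) == 5 ? "passed" : "failed");
        return 0;
    }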
Integration testing
• After the development and unit testing of the dependent programs is complete, the programmers integrate them.
• After the integration, the programmers verify the interconnection of those programs in any one of the 4 ways:
Top down approach
• The interconnection of the main program with some of the sub programs is called the top down approach.
• Programmers use temporary programs, called stubs, in place of sub programs that are still under construction.
Example: main → sub1, and main → stub (standing in for sub2, which is under construction); see the sketch below.
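
A small sketch of top down integration, assuming sub1 is finished while sub2 is still under construction, so a stub stands in for it:

    #include <stdio.h>

    /* Completed sub program. */
    void sub1(void) {
        printf("sub1: real implementation\n");
    }

    /* Stub: a temporary stand-in for sub2, which is under construction. */
    void sub2_stub(void) {
        printf("sub2: stub, returns a canned response\n");
    }

    /* Top down integration: the real main program is wired to sub1 and to
     * the stub, so the main-to-sub interconnections can be tested early. */
    int main(void) {
        sub1();
        sub2_stub();
        return 0;
    }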
Bottom up approach
• The interconnection of the internal sub programs without using the main program is called the bottom up approach.
• In this approach programmers use a temporary program in place of the main program, which is still under construction. This temporary program is also called a driver or calling program.
Example: driver → sub1 → sub2 (the real main program is not yet available); see the sketch below.
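
A small sketch of bottom up integration, assuming sub1 and sub2 are finished while the real main program is not, so a driver does the calling:

    #include <stdio.h>

    /* Completed lower-level sub programs. */
    void sub2(void) {
        printf("sub2: real implementation\n");
    }

    void sub1(void) {
        printf("sub1: real implementation, calls sub2\n");
        sub2();
    }

    /* Driver: a temporary calling program used while the real main program
     * is under construction, so sub1 and sub2 can be integrated and tested
     * first. */
    int main(void) {
        sub1();   /* bottom up integration starts from the lowest modules */
        return 0;
    }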
Hybrid approach
• This approach is the combination of both the top down approach and the bottom up approach.
• It is also called the sandwich approach.
Example: a driver stands in for the main program at the top, while stubs stand in for sub programs that are still under construction at the bottom.
System approach
• In this approach the programmers interconnect the programs only after the development and unit testing of all programs is complete.
• This is also known as the Big-bang approach.
• The final integration of all the programs is called the Build or AUT (application under test).
System testing
• After the completion of integration testing, a separate testing team receives the software build from the development team.
• This separate testing team follows a set of Black Box testing techniques to validate that software build.
• System testing is classified into three categories:
• Usability testing
• Functional testing
• Non-functional testing
• Usability testing:
In general the separate testing team starts test execution with usability testing. During this test the team concentrates on the user friendliness of the software build screens.
Usability testing consists of two sub tests:
• User interface testing
• Manual support testing
User interface testing and manual support testing
• UI testing: testing every screen of the software build in the following areas:
Ease of use (understandability)
Look and feel (attractiveness)
Speed of interface (short navigations)
• Manual support testing:
It is also known as help document testing. During this test, the separate testing team concentrates on the correctness and completeness of the help document or user manual.
Functional testing
• It is a mandatory testing level. During this test the testing team concentrates on the customer requirements in terms of functionality.
• During this test the separate testing team applies the below sub tests on the software build:
• Functionality testing
• Sanitation testing
Functionality testing
• During this test the separate testing team concentrates on the correctness of every functionality with respect to the customer requirements.
• In this test the testing team follows the below coverages (see the sketch after this list):
GUI or behavioral coverage (changes in the properties of objects in screens)
Error handling coverage (preventing wrong operations)
Input domain coverage (taking correct input or not)
Manipulations coverage (returning correct output or not)
Back-end coverage (impact of front-end operations on the back-end database or not)
Order of functionalities coverage
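
A minimal sketch of input domain, error handling and manipulations coverage, using a hypothetical check_age() functionality invented only for illustration:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical functionality under test: an age check. */
    const char *check_age(int age) {
        if (age < 0 || age > 120)      /* error handling: reject wrong input  */
            return "error";
        return (age >= 18) ? "allowed" : "denied";   /* manipulation: output rule */
    }

    static void expect(const char *label, const char *got, const char *want) {
        printf("%-16s %-8s (expected %-8s) -> %s\n",
               label, got, want, strcmp(got, want) == 0 ? "passed" : "failed");
    }

    int main(void) {
        /* Input domain coverage: valid, boundary and invalid values. */
        expect("valid adult",    check_age(30),  "allowed");
        expect("boundary value", check_age(18),  "allowed");
        expect("valid minor",    check_age(17),  "denied");
        /* Error handling coverage: wrong operations must be prevented. */
        expect("negative age",   check_age(-1),  "error");
        expect("impossible age", check_age(999), "error");
        return 0;
    }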
• Sanitation testing: it is also known as garbage testing.
• During this test the testing team identifies extra functionalities present in the software build with respect to the customer requirements.
Non functional testing
• Recovery testing: It is also known as reliability testing. During this test the testing team validates whether the software build changes from an abnormal state back to a normal state or not.
Abnormal state → normal state
• Compatibility testing: it is also known as portability testing. During this test the testing team validates whether our software build runs on the customer's expected platforms or not.
Platform means the operating system, compilers, browsers and other system software.
• Configuration testing: it is also known as hardware compatibility testing. During this test the testing team validates whether our software build supports hardware devices of different technologies or not.
Example: different printer technologies, different network technologies, different network topologies.
• Inter system testing: it is also known as end to end testing. During this test, the testing team validates whether our software build can co-exist with other software to share common resources or not.
Example: account d/b ↔ login d/b (shared resources between applications).
• Load testing: the execution of our software build in the customer's expected configured environment, under the customer's expected load, to estimate performance is called load testing or scale testing. The load or scale means the number of concurrent users accessing our application build. A sketch follows.
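
A rough sketch of the idea, simulating an assumed load of 50 concurrent users with threads and timing how long the build takes to serve them (simulated_user() and the figure of 50 are assumptions, not part of the original; compile with -lpthread):

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define USERS 50   /* assumed customer expected load: concurrent users */

    /* Hypothetical stand-in for one user's request against the build. */
    static void *simulated_user(void *arg) {
        (void)arg;
        usleep(100 * 1000);   /* pretend the request takes about 100 ms */
        return NULL;
    }

    int main(void) {
        pthread_t users[USERS];
        time_t start = time(NULL);

        /* Apply the expected load: all users access the build concurrently. */
        for (int i = 0; i < USERS; i++)
            pthread_create(&users[i], NULL, simulated_user, NULL);
        for (int i = 0; i < USERS; i++)
            pthread_join(users[i], NULL);

        /* Estimate performance: elapsed time under the expected load. */
        printf("%d concurrent users served in about %ld second(s)\n",
               USERS, (long)(time(NULL) - start));
        return 0;
    }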
• Stress testing: The execution of our software build in the customer's expected configured environment, under various levels of load, to estimate reliability is called stress testing.
• Data volume testing: during this test the testing team estimates the peak limit of data handled by our software build.
Example: front end → A/C d/b, 2 GB (max)
• Parallel testing: it is also known as comparative testing or competitive testing. During this test the testing team compares our software build with a previous version of the same software, or with competitive software in the market, to estimate competitiveness.
This testing is only applicable to software products.
User acceptance testing
• After the completion of system testing and the resulting modifications, the project management concentrates on user acceptance testing to collect feedback. It is done in one of two ways:
• Alpha testing: by a real customer, at the development site; suitable for applications.
• Beta testing: by a model customer, at the model customer's site; suitable for products.
Release and maintenance
• After completion of user acceptance testing and the resulting modifications, the project management concentrates on forming a release team. This release team consists of a few programmers, a few test engineers and a few hardware engineers.
• Port testing: the release team conducts port testing at the customer site. During this test, the release team observes the below factors:
• Compact installation
• Overall functionality
• Input devices handling (keyboard, mouse, ...)
• Output devices handling (monitor, printer, ...)
• Secondary storage devices handling (floppy drive, CD-ROM)
• O/S error handling
• Co-existence with other software
• After the completion of port testing, the release team provides the required training sessions to the customer site people and then returns to the organization.
• Test software changes: During the utilization of the software, the customer site people send change requests (CR) to our organization.
• To receive these change requests, the organization establishes a special team consisting of a few programmers, a few test engineers, and project manager category persons. This team is called the "change control board".
TESTING PHASE                RESPONSIBILITY            TESTING TECHNIQUE
Reviews in analysis          Business analyst          Walkthrough, inspection, peer review
Reviews in design            Designer                  -------DO-------
Unit testing                 Programmer                White box testing
Integration testing          Programmer                Top down, bottom up, hybrid, system approaches
System testing               Test engineer             Black box testing
User acceptance testing      Real or model customer    Alpha testing, beta testing
Release                      Release team              Port testing
Test software changes        Change control board      Regression testing
(in maintenance)
Testing terminologies
• Build: A final integration of all the programs is called the build or AUT (application under test).
• Test strategy: An approach to be followed by the testing team during testing.
• Test plan: A schedule to be followed by the testing team during testing.
• Test case: A test condition to be applied on the build.
• Test log: The result of a test case in terms of passed or failed.
• Error, defect, bug: A mistake in coding is called an error. An error detected by a tester during testing is called a defect or issue. A defect or issue accepted by the programmer to resolve is called a bug.
• Summary reports: Define work progress. Ex: daily reports, weekly reports, monthly reports.
• Sanity testing (smoke testing, tester acceptance testing, build verification testing): After receiving the initial build from the development team, the test engineers first estimate the stability of the build. This preliminary check-up is called sanity testing.
• Regression testing: the re-execution of selected tests on a modified build, to estimate the completeness and correctness of the modification, is called regression testing.
