
Testing process

Test initiation → Test planning → Test design → Test execution → Test reporting → Test closure
Test initiation
 System testing starts with test initiation. In this stage,
project-manager-level people select the reasonable tests
to be applied.
 After selecting the reasonable tests, the project manager
prepares the test strategy document. This document is also
known as the "test methodology".

Flow: SRS (input) → test initiation → test strategy document and risks (outputs)
Test strategy document

The test strategy document consists of the below components:
 Scope and objective: the purpose of testing in this project.
 Business issues: budget allocation for testing.
 Approach: the selected list of reasonable tests with respect to the
requirements in the project, the scope of those requirements and the
risks involved in the project.
 Roles and responsibilities: the names of the jobs in the testing
team and their responsibilities.
 Communication and status reporting: the required negotiation
between every two jobs in the testing team.
 Testing measurements and metrics: a list of quality, management and
capability measurements.
 Change and configuration management: maintaining deliverables for
future reference, e.g. test plan, test cases, test log, defect reports and
other summary reports.
 Test automation and testing tools: the scope for test automation in the
project and the testing tools available in the organization.
 Defect reporting and tracking: the required negotiation between the
testing team and the development team in defect reporting.
 Risks and assumptions: a list of analyzed risks and the solutions to
overcome them.
 Training plan: the number of training sessions required for the testing
team to understand the project requirements.
Test factors
 Authorization: the software allows valid users and prevents invalid users.
 Access control: the authority of valid users to use specific functionality.
 Audit trail: maintaining metadata about user operations (internal data).
 Data integrity: taking inputs of the correct type and size.
 Correctness: returning correct outputs.
 Continuity of processing: integration of internal functionalities.
 Coupling: co-existence with other software to share common resources.
 Ease of use: user-friendly screens.
 Ease of operation: installation, uninstallation, downloading, etc.
 Portability: running on different platforms.
 Performance: speed of processing.
 Reliability: recovery from abnormal situations.
 Service levels: the order of functionalities served to the customer.
 Maintainability: serviceability to the customer over a long time.
 Methodology: whether the testing team follows standards while testing.
Responsible person                 Responsibility
C.E.O                              Software quality
Project manager / Team manager     Test factors
Test lead                          Testing techniques
Testing team                       Test cases

Test factors vs testing techniques

 Authorization: security testing
 Access control: security testing
 Audit trail: functionality testing
 Data integrity: functionality testing
 Correctness: functionality testing
 Continuity of processing: integration testing (by developers)
 Coupling: inter-system testing
 Ease of use: user interface testing, manual support testing
 Ease of operation: installation testing
 Portability: compatibility testing, configuration testing
 Performance: load testing, stress testing, data volume testing
 Reliability: recovery testing (for a single user), stress testing (more than one user)
 Service levels: functionality testing
 Methodology: compliance testing (whether our teams are following company standards or not)

Worked example from the slide: 15 test factors - 4 (not applicable to the
project requirements) = 11; + 2 (from the scope of those requirements) = 13;
- 4 (dropped due to risks) = 9 finalized.
Test planning

 After the selection of the reasonable tests to be applied,
the project manager or team manager releases the
test strategy document with all details to the test
lead.
 The test lead then concentrates on test plan
preparation. In this stage the test lead prepares
one system test plan and multiple detailed test plans.
Flow: test strategy, development documents and development plan (inputs) →
team formation → identify risks → prepare test plan(s) → review test plan(s) →
test plan(s) (output)
 Team formation: the test planning process starts with
testing team formation. In this phase, the test lead depends
on the below factors:
Project size
Availability of test engineers
Test duration
Availability of test environment resources
 Identify risks: after completing reasonable testing team
formation, the test lead concentrates on risks at the
team level.
Risks:
Lack of knowledge of the domain in the testing team
Lack of time
Lack of resources
Lack of documentation
Delay in delivery
Lack of development process rigor
Lack of communication
 Prepare test plans: after team formation and
risk analysis, the test lead prepares the test plan
documents.
The test plan states what to test, how to test,
who is to test, and when to test.
Test plan document

 Test plan ID: the identification number or name.
 Introduction: about the project.

What to test:
 Test items: the names of modules, functionalities or services.
 Features to be tested: the names of the modules selected for testing.
 Features not to be tested: the names of the remaining, already tested, modules.

How to test:
 Approach: the selected list of testing techniques with respect to the test
strategy (selected by the PM).
 Test environment: the hardware and software required to apply the selected
tests to the specified features.
 Entry criteria:
Prepare complete and correct test cases.
Establish the test environment.
Receive a stable build from the developers.
 Suspension criteria:
The test environment is not supporting testing.
A show-stopper defect occurred.
Too many defects are pending (quality gap).
 Exit criteria:
All modules are tested.
The test duration is met.
All major bugs are resolved.
 Test deliverables: the names of the test documents to be prepared by the test
engineers, e.g. test scenarios, test case documents, test logs, defect reports,
summary reports.

Who is to test:
 Staff and training needs: the names of the selected test engineers and the
training sessions required for them.
 Responsibilities: the mapping between test engineers and their responsible
testing areas.

When to test:
 Schedule: dates and times.

 Risks and assumptions: a list of previously analyzed risks and assumptions.
 Approvals: the signatures of the test lead and project manager.
(A minimal sketch of these fields as a record follows below.)
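As a rough illustration, the plan fields above can be held in one structured
record. A minimal sketch in Python; all names, modules and dates are
hypothetical placeholders, not values from any real plan:

# A minimal sketch of a test plan record mirroring the fields above.
# All values are hypothetical placeholders.
test_plan = {
    "test_plan_id": "TP_OnlineBanking_01",
    "introduction": "System testing of the online banking project.",
    "test_items": ["login", "funds_transfer", "reports"],
    "features_to_be_tested": ["login", "funds_transfer"],
    "features_not_to_be_tested": ["reports"],
    "approach": ["functionality testing", "load testing", "security testing"],
    "test_environment": {"hardware": "staging server", "software": "browser, DB"},
    "entry_criteria": ["test cases ready", "environment established",
                       "stable build received"],
    "suspension_criteria": ["environment not supporting", "show-stopper defect",
                            "too many pending defects (quality gap)"],
    "exit_criteria": ["all modules tested", "test duration met",
                      "all major bugs resolved"],
    "test_deliverables": ["test scenarios", "test case documents", "test logs",
                          "defect reports", "summary reports"],
    "responsibilities": {"login": "engineer_1", "funds_transfer": "engineer_2"},
    "schedule": {"start": "2024-01-10", "end": "2024-02-28"},
    "approvals": ["test lead", "project manager"],
}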
 Review test plan: after completion of test plan document
preparation, the test lead conducts a review meeting to
check completeness and correctness:
Requirements-oriented review
Testing-techniques-oriented review
Risks-oriented review
After the review, training sessions are conducted for all
the test engineers, and all are responsible for
understanding the complete project requirements.
Test design
 After completing the required number of training
sessions, the responsible test engineers concentrate
on test case selection.
 Every test case defines a unique test condition to validate
the software build in terms of usability, functional and
non-functional aspects.
 There are three methods of preparing test cases:
 Functional and system specification based test case design
 Use case based test case design
 Application or prototype based test case design
Functional and system specification
based test case design
 The test engineers prepare the maximum number of test cases
based on the functional and system specifications in the
SRS.
Flow: BRS → SRS → HLD/LLD → coding (build); test cases are prepared
from the SRS and run against the build.
As the above flow shows, test engineers prepare test
cases from the SRS through the below approach:
1. Collect the responsible functional and system specifications,
including dependencies.
2. Select one specification from the collected list.
3. Study that specification in terms of base state, inputs,
outputs, normal flow, end state, alternative flows and
exceptions.
4. Prepare test case titles/test scenarios with respect to the
studied information.
5. Review the test case titles for completeness and correctness.
6. Prepare the test case documents.
7. Go to step 2 until all responsible specifications have been studied.
Test case documentation
 After completing the reasonable test scenarios or titles,
the test engineers document those test cases. In this
phase the test engineers follow a standard format:
1. Test case ID: a unique number or name for future reference
(Tc_module_name_date_number).
2. Test case name: the title of the test case.
3. Feature: the name of the corresponding module.
4. Test suite ID: the name of the test batch of which this case is a
member.
5. Priority: the importance of the test case in terms of
functionality:
P0  basic functionality test cases
P1  general functionality test cases
P2  cosmetic functionality test cases
6. Test effort: in person-hours (on average, 20 minutes to execute
one test case).
7. Test environment: the hardware and software required to execute
this case.
8. Test duration: the date and time to execute this case on the build.
9. Test setup/pre-condition: the necessary tasks to do before
starting execution of this case on the build.
10. Test procedure / data matrix:
Test procedure:
Step No. | Action | I/p required | Expected | Actual | Result | Defect ID | Comments
Data matrix:
                 ECP (type)            BVA (size/range)
Input object     Valid | Invalid       Min | Max
11. Test case pass or fail criteria: the final result of the
test case after execution.

 If a test case covers an object, test engineers
prepare a data matrix. If a test case covers an
operation, then test engineers prepare a test procedure.
E.g. verify PIN number (data matrix);
verify login operation (test procedure).
(A minimal sketch of this record format follows below.)
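For illustration, the standard format above maps onto a small record type.
A minimal sketch, assuming the Tc_module_date_number naming convention from
field 1; the field names and values are illustrative, not a mandated schema:

from dataclasses import dataclass, field
from datetime import date

def make_test_case_id(module: str, number: int) -> str:
    # Follows the Tc_<module>_<date>_<number> convention from field 1.
    return f"Tc_{module}_{date.today():%Y%m%d}_{number:03d}"

@dataclass
class TestCase:
    test_case_id: str
    name: str                    # title of the test case
    feature: str                 # corresponding module
    test_suite_id: str           # batch this case belongs to
    priority: str                # "P0" basic, "P1" general, "P2" cosmetic
    effort_minutes: int = 20     # average execution effort per case
    environment: str = ""        # required hardware and software
    precondition: str = ""       # setup tasks before execution
    steps: list = field(default_factory=list)   # test procedure rows

tc = TestCase(
    test_case_id=make_test_case_id("login", 1),
    name="Verify login operation",
    feature="login",
    test_suite_id="TS_login",
    priority="P0",
    steps=[("1", "enter a valid user id and password", "home page appears")],
)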
Review test cases
 After completing the selection of all reasonable test cases,
the test lead conducts a final review for the completeness
and correctness of those test cases. In this review the test lead
depends on the below factors:
Requirements-oriented test case review
Testing-techniques-oriented test case review

 After the completion of the reviews and the resulting modifications,
the test team concentrates on test execution.
Test execution

 Initial meeting and version control:
After the completion of test design and reviews,
the test team conducts a formal meeting with the
development team. In this meeting the
development team and testing team
concentrate on:
Build release process
Build version control
Defect tracking system
Flow: the development environment places builds and development documents
into a configuration repository on a server; the testing team downloads
builds from that repository into the test environment; for application
projects only, the accepted build later reaches the customer site over the
Internet.
 As the above model shows, the testing people download the
build from the configuration repository on the server, with
permission.
 In this repository the development people maintain both the old
build coding and the modified build coding. To distinguish the
old build from a modified build, the development people assign a
unique version number to each build. For this build version
control the development team uses version control tools.
 The development team sends a release note to the testing team
for every modified build. This release note provides
information about the changes in the build.
Levels of Test Execution

DEVELOPMENT                        TESTING
Initial build              →      Level-0 (sanity)
Stable build               →      Level-1 (comprehensive or real testing)
Fixing                     ←      Defect report
Resolving: modified build  →      Level-2 (regression)
                                  Level-3 (final regression)
Levels of test execution vs test cases
 Level-0  P0 test cases (basic functionality)
 Level-1  all P0, P1 & P2 test cases
 Level-2  selected P0, P1 & P2 test cases with
respect to the modified build
 Level-3  selected P0, P1 & P2 test cases with
respect to bug density
P0 priority test cases indicate functionality testing.
P1 priority test cases indicate non-functionality testing.
P2 priority test cases indicate usability testing.
 Level-0 (sanity) on the initial build.
 Level-1 (comprehensive) on the stable build.
 Level-2 (regression) on the modified build.
 Level-3 (final regression) on the master build (which is ready to release
to the customer).
 User acceptance testing on the golden build (which is ready for release
and accepted by the customer).
(A lookup-table sketch of this mapping follows below.)
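As one way to make the mapping concrete, the levels and their test case
priorities can be written as a lookup table. A minimal sketch; the level
labels are paraphrased from the slides:

# Which test case priorities run at each level of execution (from the slides).
# Levels 2 and 3 run *selected* cases, chosen w.r.t. the release note or bug density.
LEVEL_TO_PRIORITIES = {
    "level-0 sanity (initial build)":           {"P0"},
    "level-1 comprehensive (stable build)":     {"P0", "P1", "P2"},
    "level-2 regression (modified build)":      {"P0", "P1", "P2"},
    "level-3 final regression (master build)":  {"P0", "P1", "P2"},
}

def cases_for_level(level: str, test_cases):
    # Filter a pool of test case records down to the priorities for a level.
    wanted = LEVEL_TO_PRIORITIES[level]
    return [tc for tc in test_cases if tc.priority in wanted]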
Level-0 (sanity testing)
 After the testing environment is established with the required software and
hardware, the testing team conducts a sanity test to estimate the
"testability" of the initial software build. If the initial build is not
testable, then the testing team rejects that build and waits to receive a
stable build from the developers.
 If the build is testable, without any functionality missing, then the
testing team concentrates on level-1 (real or comprehensive testing).
 In level-0 sanity testing, the testing team estimates testability
with the help of the below factors:
 understandability, simplicity, operability, observability, consistency,
maintainability, automatability, controllability.
 The coverage of the above factors on the build estimates its testability.
 Level-0 sanity testing is also known as smoke testing, testability
testing, tester acceptance testing, or build verification testing.
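To make "estimating testability" concrete: a sanity suite is usually a
handful of shallow checks that the build installs, launches and exposes its
core screens. A minimal, hypothetical sketch; the build object and its
methods are placeholders, not a real API:

# Hypothetical sanity (smoke) checks on an initial build. Each check is shallow:
# it asks "is this area present and operable?", not "is it correct in depth?".
# The `build` object and its methods are placeholders, not a real API.
def sanity_test(build) -> bool:
    checks = [
        ("build installs",        lambda: build.install()),
        ("application launches",  lambda: build.launch()),
        ("login screen operable", lambda: build.open_screen("login")),
        ("core menus reachable",  lambda: build.open_screen("main_menu")),
    ]
    for name, check in checks:
        try:
            check()
        except Exception:
            print(f"sanity failed: {name} -> reject build, wait for a stable build")
            return False
    print("build is testable -> proceed to level-1 comprehensive testing")
    return True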
Level-1 (comprehensive testing)
 In this stage the testers arrange the dependent tests as batches.
Every test batch is also known as a "test suite", "test set", "test belt"
or "test chain".
 After completing batch creation, the test engineers execute the
cases one by one in every batch, either manually or through test
automation.
 During test execution, whether manual or automated, test engineers
prepare a test log document. This document consists of three types
of entries:
 Passed: all expected values in the test case are equal to the actual values
of the build.
 Failed: any one expected value is not equal to the actual value.
 Blocked: test case execution is postponed due to incorrect parent
functionality.
Flow: arrange test cases in batches → execute (manually or with automation);
each case moves through passed, failed, partial pass/fail with warning,
blocked, queued or skipped states until it is closed.
 The final status of every test case is passed,
skipped or closed.
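The three entry types follow directly from comparing expected and actual
values, with "blocked" recorded when a parent functionality has already
failed. A minimal sketch with hypothetical entries:

def log_status(expected, actual, parent_failed: bool = False) -> str:
    # Classify one test log entry as the slide describes.
    if parent_failed:
        return "blocked"   # execution postponed: parent functionality is broken
    return "passed" if expected == actual else "failed"

# Hypothetical example entries in a test log:
test_log = [
    ("Tc_login_001",    log_status("home page shown", "home page shown")),    # passed
    ("Tc_login_002",    log_status("error message",   "application crash")),  # failed
    ("Tc_transfer_001", log_status(None, None, parent_failed=True)),          # blocked
]
print(test_log)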
Level-2 (regression testing)
 During level-1 comprehensive testing, the testing team reports
mismatches to the development team as defect reports. After accepting those
defects, the developers make changes to the build coding and then
release the modified build along with a release note. The testing team
studies that release note and then plans regression testing on the build to
ensure the completeness and correctness of the modifications.
 In level-2 regression testing, the test engineers re-execute previously
executed tests on the modified build to verify the modifications. The
selection of test cases to be re-executed depends on the release note.
 Selection by defect severity w.r.t. the release note of the modified build
(see the sketch after this list):
High: all P0, all P1 and carefully selected P2 test cases.
Medium: all P0, carefully selected P1 and some P2 test cases.
Low: some P0, some P1 and some P2 test cases.
If there are sudden changes in the customer requirements, then the testers
execute P0, P1 and selected P2 test cases.
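The severity rules above amount to a selection function over the test case
pool. A minimal sketch, assuming the tester's "carefully selected" and
"some" cases are supplied as an externally chosen subset:

def select_regression_cases(severity: str, cases, hand_picked=None):
    # Pick the cases to re-execute on a modified build, per the rules above.
    # `hand_picked` stands in for the tester's "carefully selected"/"some"
    # cases; choosing them is a judgment call, not something code can decide.
    hand_picked = hand_picked or set()
    if severity == "high":      # all P0, all P1, carefully selected P2
        return [c for c in cases
                if c.priority in ("P0", "P1") or c.test_case_id in hand_picked]
    if severity == "medium":    # all P0, carefully selected P1, some P2
        return [c for c in cases
                if c.priority == "P0" or c.test_case_id in hand_picked]
    # low severity: some P0, some P1, some P2 -- entirely hand-picked
    return [c for c in cases if c.test_case_id in hand_picked]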
Test reporting
 During level-1 and level-2 test execution, test engineers
report mismatches to the development team. In these reports, test
engineers follow a defect report format like the one below:
1. Defect ID: a unique number or name.
2. Description: a summary of the defect, including its type.
3. Build version ID: the version number of the build in which the test
engineer detected the defect.
4. Feature: the name of the module or functionality in which the test
engineer detected the defect.
5. Test case name: the name of the failed test case whose execution
revealed the defect.
6. Reproducible: yes/no
Yes  the defect appears every time in test execution.
No  the defect appears rarely in test execution.
7. If yes, attach the test procedure.
8. If no, attach the test procedure and snapshots.
9. Severity: the seriousness of the defect in terms of functionality.
High/show stopper: mandatory to resolve; testing cannot continue
without it being resolved.
Medium/critical: mandatory to resolve, but testing can continue.
Low/non-critical: may or may not be resolved; testing can continue.
10. Priority: the importance of resolving the defect w.r.t. the customer
(high/medium/low).
11. Status: new/reopen
New: reported for the first time.
Reopen: re-reported.
12. Detected by: the name of the test engineer.
13. Detected on: the date of defect detection and reporting.
14. Assigned to: the responsible person at the development site who
receives the defect.
15. Suggested fix (optional): a suggestion to the developers on how to
accept and resolve the defect.
(A minimal record sketch of this format follows below.)
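The format above is again a record type. A minimal sketch with hypothetical
values:

from dataclasses import dataclass
from typing import Optional

@dataclass
class DefectReport:
    defect_id: str
    description: str            # summary, including defect type
    build_version_id: str       # build in which the defect was detected
    feature: str                # module where it was found
    test_case_name: str         # failed test case that exposed it
    reproducible: bool          # True: appears every run; False: appears rarely
    severity: str               # "high" / "medium" / "low"
    priority: str               # importance of resolving it w.r.t. the customer
    status: str = "new"         # "new" on first report, "reopen" on re-report
    detected_by: str = ""
    detected_on: str = ""
    assigned_to: str = ""
    suggested_fix: Optional[str] = None   # optional hint to developers

bug = DefectReport(
    defect_id="D_0042",
    description="funds transfer rejects valid amounts (functional)",
    build_version_id="build_1.3",
    feature="funds_transfer",
    test_case_name="Verify transfer with valid amount",
    reproducible=True,
    severity="high",
    priority="high",
    detected_by="test engineer",
    detected_on="2024-02-01",
    assigned_to="team lead",
)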
Defect submission process

Large-scale organizations:
test engineer → test lead → team manager → PM → team lead → programmer

Small and medium-scale organizations:
test engineers → test lead → team lead → programmers (with the PM above both)
Defect life cycle

States: new, open, rejected, deferred, reopen, closed.

 Possibilities:
New → Open → Closed
New → Rejected → Closed
New → Open → Reopen → ... → Closed
New → Rejected → Reopen → ... → Closed
New → Deferred
Defect age: the time gap between defect
reporting and defect closing or deferring.
Defect density: the average number of defects
detected in one module or function.
Defect resolution type: after receiving a defect
report from the testing team, the development people
conduct a review and then send a reply to the testers;
this reply is called the "defect resolution type".
(A minimal sketch of the life cycle and defect age follows below.)
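The life cycle is a small state machine, and defect age is a date
difference. A minimal sketch of both; the transition table is one reading of
the diagram and possibilities above:

from datetime import date

# Allowed status transitions: one reading of the defect life cycle above.
TRANSITIONS = {
    "new":      {"open", "rejected", "deferred"},
    "open":     {"closed"},
    "rejected": {"closed"},
    "closed":   {"reopen"},            # tester re-reports if the fix did not hold
    "reopen":   {"open", "rejected"},  # cycles until the defect is finally closed
    "deferred": set(),                 # postponed; no further transitions
}

def move(status: str, new_status: str) -> str:
    # Reject transitions the life cycle does not allow.
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status

def defect_age(reported: date, closed_or_deferred: date) -> int:
    # Defect age: the gap between reporting and closing/deferring, in days.
    return (closed_or_deferred - reported).days

status = move(move("new", "open"), "closed")           # new -> open -> closed
print(status)                                          # closed
print(defect_age(date(2024, 2, 1), date(2024, 2, 9)))  # 8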
Test closure
 After the completion of all reasonable cycles of
test execution and defect closing, the test lead
conducts a review meeting to estimate the
completeness and correctness of the testing process.

 In this review meeting the testing team
concentrates on three factors:
A) Coverage analysis:
 Requirements-oriented coverage
 Testing-technique-oriented coverage
B) Defect density calculation (see the sketch after this list):
E.g.:   Module    Share of defects
        A         20%
        B         20%
        C         40%  (needs final regression)
        D         20%
                  100%
C) Analysis of deferred defects:
whether the deferred defects are genuinely postponable or not.
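The defect density figures under B) are per-module shares of the total
defect count. A minimal sketch that reproduces the example percentages and
flags the high-density module for final regression; the raw counts are
hypothetical:

def defect_density(defects_per_module: dict) -> dict:
    # Share of total defects found in each module, as in the example above.
    total = sum(defects_per_module.values())
    return {m: 100 * n / total for m, n in defects_per_module.items()}

counts = {"A": 10, "B": 10, "C": 20, "D": 10}  # hypothetical raw defect counts
density = defect_density(counts)               # {'A': 20.0, 'B': 20.0, 'C': 40.0, 'D': 20.0}
needs_final_regression = [m for m, pct in density.items() if pct >= 40]
print(needs_final_regression)                  # ['C'] -> target of level-3 testing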
After the completion of this review, the testing team
conducts level-3 final regression or postmortem
testing on the high bug density modules in the build.

Life cycle of regression testing:
select high-defect-density requirements → effort estimation →
plan regression → regression testing → test reporting
User acceptance testing: after the completion of
final regression testing and the resulting modifications,
the project management releases the master build
to collect feedback from real customers or model
customers. There are two ways to collect feedback:
alpha testing and beta testing.
Sign-off: after completion of user acceptance testing
and the resulting modifications, the test lead conducts a
sign-off review. In this review the test lead
gathers all the testing documents prepared by the test
engineers.
The combination of all these testing documents is called the
final test summary report (FTSR):
Test Strategy doc.
Test plan
Test case titles/scenarios
Test automation programs (if any)
Test logs
Defect reports
Release notes of modified build
Summary reports
Requirement traceability matrix
