

MASTER TEST PLAN

FOR

<PROJECT NAME>

Prepared by: ______________ Date: ______________

Reviewed by: ______________ Date: ______________


REVISION HISTORY

Version | Date | Description | Reason for change


Table of Contents

A Introduction
  A.1 Overview
  A.2 Acronyms
B Test Methodology
  B.1 Test Objectives
C Test Strategy
  C.1 Summary
  C.2 Project Test Approach
    C.2.1 W Model
    C.2.2 Test Approach Overview
    C.2.3 Test Planning
    C.2.4 Knowledge Transfer
    C.2.5 Test Preparation
    C.2.6 Internal Review
    C.2.7 Client QA Review (Optional)
    C.2.8 Test Data Preparation
    C.2.9 Test Execution and Administration
  C.3 Build Process
  C.4 Unit Test Level
    C.4.1 Purpose and Objective
    C.4.2 Unit Test Approach Description
      C.4.2.1 Plan
      C.4.2.2 Prepare
      C.4.2.3 Execute
    C.4.3 Entry/Exit Criteria, Completion Deliverables and Suspension
  C.5 Integration/Functional Test Level
    C.5.1 Purpose and Objective
    C.5.2 Integration/Functional Test Approach Description
      C.5.2.1 Plan
      C.5.2.2 Prepare
      C.5.2.3 Execute
      C.5.2.4 Report
    C.5.3 Entry/Exit Criteria, Completion Deliverables and Suspension
  C.6 User Acceptance Test Level (UAT)
    C.6.1 Purpose and Objective
    C.6.2 User Acceptance Test Approach Description
      C.6.2.1 Plan
      C.6.2.2 Prepare
      C.6.2.3 Execute
      C.6.2.4 Report
    C.6.3 Entry/Exit Criteria, Completion Deliverables and Suspension
D Test Management
  D.1 Test Problem Log Management
    D.1.1 Defect Resolution Process
    D.1.2 Escalation Procedures
  D.2 Test Results Review
    D.2.1 Reporting Test Progress
Test Tools
  D.3 Test Tools List
E Test Team Requirement
  E.1 Roles and Responsibility
Appendix G
  Project Glossary


A Introduction
A.1 Overview
The Master Test Plan (MTP) defines and documents the processes and controls to be used for the project/ODC. The plan provides sufficient checks and cross-checks during the project to ensure that the final deliverable(s) meet the documented requirements. The MTP describes the scope of testing, the responsibilities of participants, and the administration of testing for the project/product.

The MTP defines the high-level testing approach and procedures for each test level applicable in the ODC/project. The types of testing conducted within the ODC may be:
• Unit Testing

• Functional Testing

• Regression Testing

• User Acceptance Testing

• Any other

This MTP also addresses Test Management, types of tests, Test Tools, and test team requirements.

Note: The types of testing defined here are only examples. They should be updated as per contractual requirements.

The Detail Test Plan (DTP) document further defines the test approach for each level of testing and describes the specifics of the testing environments. The project/work-pack overview and the scope of the project/work-pack with regard to testing are covered in the DTP. The DTP also includes the test cases and test procedures for test execution, the entry and exit criteria, and the roles and responsibilities of the test team. The Detail Test Plan is a separate document and is to be created for each project/work-pack (executed from the ODC) for which testing needs to be carried out.

A.2 Acronyms
MTP Master Test Plan
DTP Detail Test Plan
ODC Offshore Development Center
HLD High Level Design
QA Quality Assurance
UTC Unit Test Cases
FTC Functional Test Cases
UT Unit Testing
RTM Requirement Traceability Matrix
UAT User Acceptance Testing


B Test Methodology
B.1 Test Objectives
Testing is defined as the systematic execution of an application, its components and procedures with the intent of
finding defects. Testing also demonstrates that the system works from a functional, performance and hardware
perspective, and that the interfaces between systems function correctly.

The objectives of testing are:


 The primary goal of testing is to show that the software meets its specifications and to ensure that products and services are based on the needs of specific user groups
 To decrease the number of faults identified after promotion to the Live/Production environment, providing a more stable production environment for the business to work in
 To provide the development team with an extra layer of protection through an independent testing resource in a stable, clean testing environment
 To create a set of automated regression test scripts to test high-risk functionality and high-volume transactions
 To control, monitor and continuously improve the test process
 To distribute test analysis data to identify areas of improvement, enabling proficient testing and delivery of a high-quality product


C Test Strategy
C.1 Summary
The testing strategy defines the objectives of all test stages and the techniques that apply. The testing strategy also
forms the basis for the creation of a standardized documentation set, and facilitates communication of the test process
and its implications outside of the test discipline. Any test support tools introduced should be aligned with, and in
support of, the test strategy. The section below provides a summary of the overall test approach and the specifics of the approach to each level of testing.

C.2 Project Test Approach

C.2.1 W Model
Testing is least costly and most effective if it is performed throughout the whole life cycle, in parallel with every stage of
development. This strand of testing in parallel with development is represented in the W model shown below:

<Insert W model diagram here>

The W model is a natural evolution of the V model of testing. The V model illustrates the layered and phased nature of software testing, but lists only dynamic test stages such as unit and system testing. The W model supports testing of all deliverables at every stage of the development life cycle: it promotes the idea that every project deliverable generated by a development activity should have an associated validation activity. Where it differs from the V model is that it promotes both static testing (through inspections/reviews) of early document and code deliverables and dynamic test stages of software deliverables.

The value of the W-model approach is that it focuses attention on all project deliverables. By matching the deliverables with product risks, the types of testing to be incorporated into the test strategy can be identified at an early stage.

The importance of intermediate control points is especially noticeable for deliverables that take considerably longer to produce. The control points allow corrective actions to be performed during the process rather than at the end, when the deliverable is ready. Once the deliverable is produced, the effort required to perform corrective action to remove non-conformities could turn out to be enormous.

C.2.2 Test Approach Overview


The overall testing approach for the project is derived from the Testing Methodology and the W model described above and consists of five basic steps:

1. Test Planning
2. Knowledge Transfer
3. Test Preparation
4. Test Execution
5. Test Reporting

These steps are followed at the overall project/ODC level.

C.2.3 Test Planning


The test planning approach begins by documenting, in the Detail Test Plan, all the major components of the project that are to be tested based upon the business/functional requirements. The components are defined in the Business Requirements or Software Requirement Specifications document and are planned in detail and documented in the Detail Test Plans.

Planning itself is a process. Performing each step of the planning process ensures that the plan is built systematically and completely. The detail test plan, along with the associated test scripts/test cases and test schedule, documents these steps to ensure that the plan is executable by those who will be testing and accepted by those who need to approve it.

The first step in the planning process is to create the detail test plan, complete the Knowledge Transfer (KT), and document the functions to be tested. These functions are mapped to the Business Requirements and/or the Functional Requirements to ensure complete and testable requirements.

C.2.4 Knowledge Transfer


Knowledge Transfer (KT) is an integral part of QA activity and happens once the SRS/HLD are finalized. This process helps QA understand the system functionality and identify the functional areas to be tested. After QA reviews the SRS or HLD documents, a KT session is conducted by the development team through discussions so that both the Testing and Development teams have a common understanding of the system to be developed and tested. Additionally, the two teams can agree on functionality that is critical or complex and hence requires additional time and focus during testing.


C.2.5 Test Preparation


The purpose of test preparation is to prepare the test facilities, tools, techniques, test plans, test cases, test data, and test procedures for test execution. The testing phase starts alongside the development phase: the testing team studies the SRS or HLD and starts creating test cases while the developers carry out development activities.

The test preparation approach is:

 First, develop a Master Test Plan to identify the test levels (e.g., Unit, Regression, Functional and UAT) and the strategy and approach for the testing identified.
 Next, Detail Test Plans (DTPs) are created to identify the major project components and the various types of testing to be conducted. Each DTP addresses in detail:
1. What is in and out of testing scope
2. What is to be tested
3. Where testing is to occur
4. When testing is scheduled
5. Who is testing
6. How it is tested
 The detail test plan and test scripts/test cases are prepared by the Quality Assurance Team.

Test scripts/test cases are prepared for each of the functions to be tested and are mapped to the business requirements/functional specifications. The RTM should be updated with the test scripts/cases to ensure that all requirements are covered by them. Test scripts/test cases are created for execution at all levels of testing identified in the DTP. Test cases are created using a template agreed at the commencement of the project.
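
As an illustrative sketch only (the requirement and test case IDs below are hypothetical, and no particular RTM tool is prescribed by this plan), the RTM mapping and its coverage check can be thought of as follows:

    # Illustrative sketch: an RTM as a mapping from requirement IDs to the
    # test cases that cover them. All IDs are hypothetical placeholders.
    rtm = {
        "REQ-001": ["TC-001", "TC-002"],
        "REQ-002": ["TC-003"],
        "REQ-003": [],  # no test case mapped yet: a coverage gap
    }

    def uncovered_requirements(rtm):
        """Return requirement IDs that no test script/case covers."""
        return [req for req, cases in rtm.items() if not cases]

    print(uncovered_requirements(rtm))  # ['REQ-003'] -> update RTM before execution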

C.2.6 Internal Review


Next is the internal review of test cases by the development team. This ensures that there is no functional gap and that all functionality is covered. It also ensures that both the Development and Testing teams have a common understanding of the testing to be done and the tests to be carried out, thereby reducing the number of ‘Invalid Defects’ (defects the testing team considers valid but the development team disagrees with). The test cases are also reviewed by the QA lead, which ensures the quality of the Test Cases. Any defects found in the Test Cases are logged using QR028 and then reviewed by the QA team so that the changes can be implemented.

C.2.7 Client QA Review (Optional)


Once the Test Cases have been reviewed internally, they are delivered to the client for comments/suggestions, which are then incorporated into the Test Cases before execution.

C.2.8 Test Data Preparation


Note: Mention specific Test Data preparations and when data is required before testing can commence.


C.2.9 Test Execution and Administration


During Test Execution, the test cases created are executed in the respective test environments. The first round of Unit Testing is conducted by the development team in the development environment. The second round of unit testing and any other identified testing are conducted by the Testing Team. UAT is conducted by Onsite QA/client or business users.

The test administration and progress tracking approach is:


 Manual or automated test cases and test results will be documented in the test report. A test schedule is prepared for monitoring progress during test execution; it gives management an overview of the progress of the testing phase, deviations from plan, resource utilization, etc.
 After the tester logs a defect in the defect tracking tool with a detailed description of how to replicate it, the Team Lead will evaluate and prioritize the problems reported.
 Developers will update the defect tracking tool upon closure of problems. If there is an issue with the validity of a reported defect, the development team can update the same in the defect tracking system accordingly.
 Once a problem is resolved, the tester will verify the fix and close the problem in the defect tracking tool.
 A test progress report will be generated every week, or as needed and agreed in the Detail Test Plan.
 Separate meetings will be held for UAT.

C.3 Build Process

Note: Explain the Build Process for the testing to be done. Explain the location of the code, the test bed, etc., and who will create the build, when, and how often.

C.4 Unit Test Level

C.4.1 Purpose and Objective


Unit testing is performed by the developer and is the first step in testing a new or modified program. The purpose of Unit Testing is to ensure that each individual program functions as required by the associated detailed design. Unit testing is the lowest level of testing within the application. The first round of Unit Testing is performed by the developer responsible for the component; the second round is performed by the Testing Team. Defects discovered in Unit Testing will be recorded in the defect tracking tool.

C.4.2 Unit Test Approach Description

C.4.2.1 Plan
White Box Testing is the most common testing technique used by programmers at the Unit test level. In the White Box
test approach, the programmers have an inside view of the system. Their concern is focused on “how it is done” NOT
“what is done”. White Box testing is logic oriented and the focus is on the execution of all possible paths of control flow
through the program. White box testing consists of testing paths, branch by branch, to produce predictable results.


White box testing techniques to be followed are:


 Statement coverage
 Decision coverage
 Condition coverage
 Decision/Condition coverage
 Multiple Condition coverage
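
As an illustration only (a hypothetical function, not part of this template), the difference between statement coverage and decision coverage can be seen on a small example: a single test that takes the if branch executes every statement, but full decision coverage also requires a test that skips it.

    # Hypothetical example: classify(5) alone executes every statement
    # (full statement coverage), but never skips the "if" branch, so
    # decision (branch) coverage also needs a non-positive input.
    def classify(a):
        result = "non-positive"
        if a > 0:
            result = "positive"
        return result

    assert classify(5) == "positive"       # achieves statement coverage
    assert classify(-1) == "non-positive"  # needed for full decision coverage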

C.4.2.2 Prepare
Unit test cases prepared by testing team members are reviewed by the development team. While writing UTCs, developers should select a combination of white box testing techniques appropriate for the application. An overly exhaustive combination of these techniques leads to an unmanageable number of test cases, so the number of UTCs is based on function points as well as on the complexity of the application. The RTM is updated to map UTCs to the requirements from the SRS/HLD, to ensure that all the scenarios and functionality in the requirements are covered by the application and the UTCs.

C.4.2.3 Execute
Unit Test is the initial check of a program(s) in the development environment. Defects discovered in Unit Testing are logged in the defect tracking system, corrected by the programmer, and retested. These defects are later quantified and used to evaluate the quality of the product. When the exit criteria and exit deliverables (see Entry/Exit Criteria below) have been met, the programs are integrated together and migrated to the next level of testing.

C.4.3 Entry/Exit Criteria, Completion Deliverables and Suspension

Table 2: Unit Test Entry/Exit Criteria, Completion Deliverables and Suspension Criteria

Entry Criteria:
 Completed Design Document
 Source Code Complete
 Unit Test Cases

Exit Criteria:
 Source code has been reviewed, errors corrected, and compilation completed without errors

Completion Deliverables:
 Source code with compile/link options
 Updated design documentation
 Updated Requirement Traceability Matrix
 Unit Test Report

Suspension Criteria:
 Design change/flaw

C.5 Integration/Functional Test Level

C.5.1 Purpose and Objective


Integration testing is performed by the Testing Team in the test environment. The purpose of Integration Test is to
ensure that all integrated programs within a package function as expected and in accordance with the design.


Functional Tests demonstrate that the system functions as designed. The focus is on the functionality of the system as seen externally. The key to success in Functional Test is the existence of well-defined and documented design and business requirements or Software Requirement Specifications. The main purpose of this testing is to verify that the software meets its specification.

C.5.2 Integration/Functional Test Approach Description

C.5.2.1 Plan
A testing technique known as Black Box Testing is commonly planned for this level of testing. Black Box Testing is focused on “what is done” as opposed to the White Box Testing focus of “how it is done”. Functional test analysts should plan to utilize Black Box testing techniques such as Equivalence Partitioning, Boundary Value Analysis, Comparison Testing, and Error Processing. Functional testing will consist of both automated and manual testing based on project requirements. It is suggested that Regression testing (if required) be done through automated test scripts.
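
As a sketch only (the quantity field and its 1–100 range are hypothetical, not taken from this plan), Boundary Value Analysis picks test values at and just around each boundary of a valid input range:

    # Hypothetical Boundary Value Analysis for a field valid in [1, 100]:
    # test just below, at, and just above each boundary.
    def is_valid_quantity(q):
        return 1 <= q <= 100

    cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
    for value, expected in cases.items():
        assert is_valid_quantity(value) == expected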

C.5.2.2 Prepare
A Functional Test Plan will be created by the Testing Team with inputs from the Development Team. The test plan will incorporate the test strategy as defined in this document and the testing techniques identified above. Once the SRS has been reviewed, testers start FTC creation in parallel with the development phase. A gap analysis is prepared and the RTM is updated to map FTCs to the corresponding requirements. After FTC review, all functional gaps and inaccurate functional specifications are addressed and the FTCs are updated accordingly.

C.5.2.3 Execute
Functional Test demonstrates that the system performs functionally and interacts correctly within a simulated production environment, without any adverse effect on other systems or interfaces. Defects identified are logged in a defect tracking tool along with the severity of each defect.

C.5.2.4 Report
The time taken to write and execute the FTCs is tracked. Problem logs are formally recorded and tracked; the process for recording and tracking them is described in the Test Management section of this document. When the exit criteria and exit deliverables have been met, functionally tested and integrated code is promoted to the Acceptance environment.

C.5.3 Entry/Exit Criteria, Completion Deliverables and Suspension

Table 4: Integration/Functional Test Entry/Exit Criteria, Completion Deliverables, Suspension Criteria

Entry Criteria:
 Results of unit testing show that the code performs as defined in the design at the unit level
 Software migrated to the system test environment
 The Functional/Integration Test Cases are available

Exit Criteria:
 All test cases executed
 All reported defects have been closed and verified

Completion Deliverables:
 Source code with compile/link options
 Actual test results
 System Test Summary Report
 System Test Sign-off
 Gap Analysis report to trace out gaps in the development based on the requirements document
 Updated Requirement Traceability Matrix

Suspension Criteria:
 Major outages or equipment failure
 Change in project scope
 Test environment not available
 Large number of high-severity or cutover-critical problem logs opened in a short time period with no trend towards improvement
 Unacceptable turn-around time
 Gaps in functional areas in requirements and developed code

C.6 User Acceptance Test Level (UAT)

C.6.1 Purpose and Objective


User Acceptance Testing ensures that the end users are satisfied that the system functions as required and that all functional gaps between the developed code and the requirements are addressed before the code is deployed in the production environment. That is, it ensures that the design captured all requirements.

C.6.2 User Acceptance Test Approach Description

C.6.2.1 Plan
Customer personnel with diverse job functions and methodologies use the system as they would in their specific daily work environment. Customers simulate live operation of the business system, including all business situations that might reasonably occur. Testing will focus on verifying that the system meets the business requirements and performs as expected.

C.6.2.2 Prepare
A UAT Detail Test Plan will be created by the UAT Test Team. Test cases are either made available by us to the Client QA team or created by them, and are provided to the system/business users for testing.

C.6.2.3 Execute
Testing is demonstrative rather than destructive, i.e., focused on demonstrating that the system works, not on debugging. Destructive, comprehensive testing and debugging have already occurred during Unit, Integration and Functional testing. The test cases are executed and the test results reviewed and documented. Any defects will be reported to the UAT Test Coordinator by the customer(s) for handling; these are logged and categorized by severity level.


C.6.2.4 Report
The users log defects in a defect tracking tool and assign them to the Development Team Lead/Project Manager, who then assigns them to the appropriate development team member for fixing. The process for recording defects is described in the Test Management section of this document. When defects are corrected, the business user who logged the defect is notified and the correction is re-introduced into the testing cycle for validation by the customer(s). Once the User Acceptance Test exit criteria and exit deliverables have been met, customer sign-off is obtained.

C.6.3 Entry/Exit Criteria, Completion Deliverables and Suspension

Table 5: UAT Entry/Exit Criteria, Completion Deliverables, Suspension Criteria

Entry Criteria:
 Acceptance Test Plan and test cases are available
 Fully integrated and functionally tested code is available

Acceptance/Exit Criteria:
 All test cases have been executed
 All reported problems have been closed

Completion Deliverables:
 Source code with compile/link options
 Actual test results
 Test Summary Report
 Customer sign-offs

Suspension Criteria:
 Major outages or equipment failure
 Change in project scope
 Inaccurate functional specs
 Unavailability of resources
 Test environment not available
 Large number of high-severity or cutover-critical problem logs opened in a short time period with no trend towards improvement
 Unacceptable turn-around time

D Test Management
It is the role of test management to ensure that new or modified service products meet the business requirements for which they have been developed or enhanced. The purpose of test management is to ensure that an efficient, effective and economic testing strategy is both devised and applied, as described above. Test management is also concerned with test resource and test environment management, as covered in other sections of this document.

Key elements of test management include:


 Test organization – the set-up and management of a suitable test organizational structure and explicit role definition. The project framework under which the testing activities will be carried out is reviewed, high-level test phase plans are prepared, and resource schedules are considered. Test organization also involves the determination of configuration standards and the definition of the test environment.
 Test planning – the requirements definition and design specifications facilitate the identification of major test items, which may necessitate updates to the test strategy. A detailed test plan and schedule is prepared, with key test responsibilities indicated.
 Test specifications – required for all levels of testing and covering all categories of test. The required outcome of each test must be known before the test is attempted.
 Unit, integration, functional testing – configuration items are verified against the appropriate specifications and in accordance with the test plan. The test environment should also be under configuration control, and test data and results stored for future evaluation.


 Test monitoring and assessment – ongoing monitoring and assessment of the integrity of the development and construction. The status of the configuration items should be reviewed against the phase plans and test progress reports prepared, providing some assurance of the verification and validation activities.
 Product assurance – the decision to negotiate the acceptance testing program and the release and commissioning of the service product is subject to the ‘product assurance’ role being satisfied with the outcome of the verification activities. Product assurance may oversee some of the test activity and may participate in process reviews.

Most test management activities are covered in other parts of this document. This section focuses mainly on the problem logging and reporting aspects of test management.

D.1 Test Problem Log Management


All problems discovered during the various levels of testing will be documented, tracked and reported until they are resolved. A great deal of coordination is required between the test coordinators and the development team to ensure that logged defects are fixed and re-tested correctly in a timely and efficient manner.

Following are the steps for documenting test results:


 When a test case fails or a problem is identified, a problem log is created in the defect tracking tool. All defects are assigned to the Team Lead/Project Manager, who then assigns them to the concerned team member.
 The responsible programmer corrects the defect and re-tests the fix.
 Notification of the fix is sent to the person who initiated the problem log by updating the status of the problem log in the defect tracking tool.
 If the retest is successful, the status is updated and the problem log is closed.
 If the retest is unsuccessful, or if another defect has been identified, the problem log status is updated and the description is updated with the new findings. It is then returned to the programmer for correction.

A logged defect is in exactly one status at any given time. The defect statuses are:
 New (newly logged defect)
 Resolved (resolved by the developer)
 Closed (closed by the tester after retesting)
 Re-Opened (re-opened by the tester after retesting)

A logged defect can fall under one of the following resolutions:

 Open (defect assigned to a developer who has not started working on it)
 Fixed (defect fixed by the developer)
 Invalid (defect considered invalid by the developer)
 Duplicate (duplicate of another defect logged earlier)
 Won’t Fix (defect that is out of scope or has unclear functionality)
 Works For Me (defect that cannot be replicated by the development team)
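
For illustration only (a minimal sketch assuming a generic defect tracker; the actual fields depend on the defect tracking tool agreed with the client), the statuses and resolutions above can be modeled as two enumerations:

    from enum import Enum

    # Illustrative model of the defect lifecycle described above.
    class Status(Enum):
        NEW = "New"              # newly logged defect
        RESOLVED = "Resolved"    # resolved by developer
        CLOSED = "Closed"        # closed by tester after retesting
        REOPENED = "Re-Opened"   # re-opened by tester after retesting

    class Resolution(Enum):
        OPEN = "Open"                  # assigned, work not started
        FIXED = "Fixed"                # fixed by developer
        INVALID = "Invalid"            # not a valid defect per developer
        DUPLICATE = "Duplicate"        # duplicate of an earlier defect
        WONT_FIX = "Won't Fix"         # out of scope or unclear functionality
        WORKS_FOR_ME = "Works For Me"  # not replicable by development team

    # Example: a tester verifies a fix and closes the defect.
    status, resolution = Status.RESOLVED, Resolution.FIXED
    if status is Status.RESOLVED and resolution is Resolution.FIXED:
        status = Status.CLOSED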

D.1.1 Defect Resolution Process

Note: Document the Defect Resolution process here. Name the defect tracking tool to be used, and describe the defect categories and resolution statuses. Mention how defects will be tracked to closure once identified by the testers.

D.1.2 Escalation Procedures


Note: Explain the escalation procedure here, along with the time frame for each escalation.


D.2 Test Results Review

D.2.1 Reporting Test Progress


The status of problems discovered and outstanding must be tracked and reported regularly. A defect tracking tool will be used to automate the process and provide the information required for the reports. (The defect tracking tool is identified together with the client.)

The following reports will be provided by the Quality Assurance Team:

 Weekly Test Status (Schedule) Report
 Weekly Tracker (Defect) Reports
 Defect Categorization by ‘Type of Defect’ and ‘Severity’ Reports (e.g., the total number of Functional, Technical, Syntactic and Deployment defects, plus the total number of Blocker, Critical, Major, Normal, Minor and Trivial defects of each type)
 Defect Categorization by Functional Area Reports (such as the functional areas with the most defects)
 Test Evaluation Report (QR082 as defined in the QMS)
 Testing Project Metrics (QR083 as defined in the QMS)
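
As an illustrative sketch only (the defect records below are hypothetical; the actual report formats QR082/QR083 are defined in the QMS), the categorization reports above can be derived from the defect log by simple counting:

    from collections import Counter

    # Hypothetical defect log entries as (type, severity) pairs.
    defects = [
        ("Functional", "Critical"),
        ("Functional", "Minor"),
        ("Deployment", "Major"),
        ("Syntactic", "Trivial"),
        ("Functional", "Major"),
    ]

    by_type = Counter(d_type for d_type, _ in defects)
    by_severity = Counter(severity for _, severity in defects)

    print(by_type)      # Counter({'Functional': 3, 'Deployment': 1, 'Syntactic': 1})
    print(by_severity)  # Counter({'Major': 2, 'Critical': 1, 'Minor': 1, 'Trivial': 1})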


Test Tools
D.3 Test Tools List

Test Tools

Test Tool | Purpose/Use | Planned Date of Acquisition for New Test Tool

E Test Team Requirement


E.1 Roles and Responsibility
Note: Mention the various QA roles along with their responsibilities in this section.


Appendix G
Project Glossary

Term | Definition
