
UNIT-01

Software Testing
Software testing can be stated as the process of verifying and validating whether a software or application is bug-
free, meets the technical requirements as guided by its design and development, and meets the user requirements
effectively and efficiently by handling all the exceptional and boundary cases.

Software Testing is a method to check whether the actual software product matches expected requirements and to
ensure that the software product is defect-free. It involves executing software/system components using manual or
automated tools to evaluate one or more properties of interest. The purpose of software testing is to identify errors,
gaps, or missing requirements in contrast to the actual requirements.

What are different types of software testing?

Software Testing can be broadly classified into two types:

1. Manual Testing: Manual testing includes testing software manually, i.e., without using any automation tool or
any script. In this type, the tester takes over the role of an end-user and tests the software to identify any
unexpected behavior or bug. There are different stages for manual testing such as unit testing, integration
testing, system testing, and user acceptance testing. Testers use test plans, test cases, or test scenarios to test
software to ensure the completeness of testing. Manual testing also includes exploratory testing, as testers
explore the software to identify errors in it.
2. Automation Testing: Automation testing, also known as Test Automation, is when the tester writes
scripts and uses separate software to test the product. This process automates a manual process.
Automation testing is used to quickly and repeatedly re-run the test scenarios that were performed manually in
manual testing. Apart from regression testing, automation testing is also used to test the application from a load,
performance, and stress point of view. It increases test coverage, improves accuracy, and saves time and
money when compared to manual testing.

Types Of Software Testing Techniques

There are two main categories of software testing techniques:

1. Static Testing Techniques are testing techniques used to find defects in an application under test
without executing the code. Static testing is done to catch errors at an early stage of the development cycle,
thus reducing the cost of fixing them.

2. Dynamic Testing Techniques are testing techniques used to test the dynamic behaviour of the
application under test, that is, by executing the code base. The main purpose of dynamic testing is to test
the application with dynamic inputs, some of which may be allowed as per the requirements (positive testing) and
some of which are not (negative testing).

What are different levels of software testing?

Software testing can be broadly classified into four levels:

1. Unit Testing: A level of the software testing process where individual units/components of a software/system are
tested. The purpose is to validate that each unit of the software performs as designed (a minimal sketch follows this list).
2. Integration Testing: A level of the software testing process where individual units are combined and tested as a
group. The purpose of this level of testing is to expose faults in the interaction between integrated units.
3. System Testing: A level of the software testing process where a complete, integrated system/software is tested.
The purpose of this test is to evaluate the system’s compliance with the specified requirements.
4. Acceptance Testing: A level of the software testing process where a system is tested for acceptability. The
purpose of this test is to evaluate the system’s compliance with the business requirements and assess whether it
is acceptable for delivery.
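
To make the unit-testing level concrete, here is a minimal sketch in Python using the standard unittest module. The apply_discount function and its expected behaviour are hypothetical, invented purely for illustration.

```python
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # A unit test validates one unit (here, one function) in isolation.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # Exceptional and boundary cases are part of unit-level validation.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

The same pattern scales up the levels: integration tests would exercise apply_discount together with the components that call it, rather than in isolation.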

What is Software Testing Methodology?

Software Testing Methodology is defined as strategies and testing types used to certify that the Application Under
Test meets client expectations. Test Methodologies include functional and non-functional testing to validate the AUT.
Examples of Testing Methodologies are Unit Testing, Integration Testing, System Testing, Performance Testing etc.
Each testing methodology has a defined test objective, test strategy, and deliverables.

Software Testing Life Cycle (STLC)


The Software Testing Life Cycle (STLC) is a systematic approach to testing a software application to ensure that it
meets the requirements and is free of defects. It is a process that follows a series of steps or phases, and each phase
has specific objectives and deliverables. The STLC is used to ensure that the software is of high quality, reliable, and
meets the needs of the end-users.

The Software Testing Life Cycle (STLC) is a sequence of specific activities conducted during the testing process to ensure
software quality goals are met. STLC involves both verification and validation activities. Contrary to popular belief,
software testing is not just a single, isolated activity; it consists of a series of activities carried out
methodically to help certify your software product.

1. Requirement Phase Testing: Requirement Phase Testing, also known as Requirement Analysis, is the phase in which the test team
studies the requirements from a testing point of view to identify testable requirements, and the QA team may
interact with various stakeholders to understand the requirements in detail. Requirements can be either functional
or non-functional. Automation feasibility for the testing project is also assessed in this stage.

Activities in Requirement Phase Testing


 Identify types of tests to be performed.
 Gather details about testing priorities and focus.
 Prepare Requirement Traceability Matrix (RTM).
 Identify test environment details where testing is supposed to be carried out.
 Automation feasibility analysis (if required).

2. Test Planning in STLC: Test Planning in STLC is a phase in which a Senior QA manager determines the test plan
strategy along with efforts and cost estimates for the project. Moreover, the resources, test environment, test
limitations and the testing schedule are also determined. The Test Plan gets prepared and finalized in the same
phase.
Test Planning Activities
 Preparation of test plan/strategy document for various types of testing
 Test tool selection
 Test effort estimation
 Resource planning and determining roles and responsibilities.
 Training requirement

3. Test Case Development Phase: The Test Case Development Phase involves the creation, verification, and rework
of test cases and test scripts after the test plan is ready. Initially, the test data is identified, then created and
reviewed, and then reworked based on the preconditions. The QA team then starts developing test cases for
individual units.
Test Case Development Activities
 Create test cases, automation scripts (if applicable)
 Review and baseline test cases and scripts
 Create test data (If Test Environment is available)

4. Test Environment Setup: Test Environment Setup decides the software and hardware conditions under which a
work product is tested. It is one of the critical aspects of the testing process and can be done in parallel with the
Test Case Development Phase. The test team may not be involved in this activity if the development team provides
the test environment, but the test team is still required to do a readiness check (smoke testing) of the given
environment.
Test Environment Setup Activities
 Understand the required architecture, environment set-up and prepare hardware and software
requirement list for the Test Environment.
 Setup test Environment and test data
 Perform smoke test on the build
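
As an illustration of the readiness check above, here is a hedged sketch of a smoke test that pings a deployed build's health endpoint before full test execution begins. The URL and the pass criterion are hypothetical, invented for illustration.

```python
import sys
import urllib.request

# Hypothetical health endpoint of the build deployed to the test environment.
HEALTH_URL = "http://test-env.example.com/health"

def smoke_test():
    """Return True if the build responds, i.e. the environment is ready."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Connection errors and HTTP errors both mean the build is not ready.
        return False

if __name__ == "__main__":
    # A failing smoke test blocks the Test Execution Phase.
    sys.exit(0 if smoke_test() else 1)
```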

5. Test Execution Phase: The Test Execution Phase is carried out by the testers, in which the software build is
tested based on the test plans and test cases prepared. The process consists of test script execution, test script
maintenance, and bug reporting. If bugs are found, they are reported back to the development team for correction,
and retesting is then performed.
Test Execution Activities
 Execute tests as per plan
 Document test results, and log defects for failed cases
 Map defects to test cases in RTM
 Retest the Defect fixes
 Track the defects to closure
6. Test Cycle Closure: The Test Cycle Closure phase marks the completion of test execution and involves several activities such as
test completion reporting and the collection of test completion metrics and test results. Testing team members meet,
discuss, and analyze testing artifacts to identify strategies to be implemented in the future, taking lessons
from the current test cycle. The idea is to remove process bottlenecks for future test cycles.
Test Cycle Closure Activities
 Evaluate cycle completion criteria based on time, test coverage, cost, software, critical business objectives, and quality
 Prepare test metrics based on the above parameters.
 Document the learning out of the project
 Prepare Test closure report
 Qualitative and quantitative reporting of quality of the work product to the customer.
 Test result analysis to find out the defect distribution by type and severity.

UNIT-02
Dynamic Testing
Dynamic Testing is a software testing method used to test the dynamic behaviour of software code. The main
purpose of dynamic testing is to test software behaviour with dynamic variables, or variables which are not constant,
and to find weak areas in the software's runtime environment. The code must be executed in order to test the dynamic
behaviour.

Dynamic testing is a type of software testing that involves executing the software and evaluating its behaviour during
runtime. It is often associated with functional testing, as it focuses on testing the software's functionality and how it behaves
under different inputs and conditions.

During dynamic testing, the software is run and tested against a set of predefined test cases. These test cases are
designed to cover a range of inputs and use cases, and they are used to check the software’s behavior and output in
response to different inputs. This can include testing the software’s user interface, functional requirements, and
performance.

Dynamic testing can be performed manually or through the use of automated testing tools. Automated testing tools
can run test cases repeatedly and quickly, making it easier to identify and fix any issues that are found.

Types of Dynamic Testing

Dynamic Testing is classified into two categories

 White Box Testing


 Black Box Testing

White Box Testing


White Box Testing is a software testing method in which the internal structure/design is known to the tester. The
main aim of white box testing is to check how the system performs based on the code. It is mainly performed by
developers or white box testers who have programming knowledge.

White Box Testing is a testing technique in which software’s internal structure, design, and coding are tested to verify
input-output flow and improve design, usability, and security. In white box testing, code is visible to testers, so it is
also called Clear box testing, Open box testing, Transparent box testing, Code-based testing, and Glass box testing.
White box testing is performed in two steps:

1. The tester should understand the code well.
2. The tester should write some code for test cases and execute them.

Types of White Box Testing

 Unit Testing: It is often the first type of testing done on an application. Unit testing is performed on each unit
or block of code as it is developed, essentially by the programmer. As a software
developer, you develop a few lines of code, a single function, or an object and test it to make sure it works
before continuing. Unit testing helps identify a majority of bugs early in the software development lifecycle,
and bugs identified at this stage are cheaper and easier to fix.

 Testing for Memory Leaks: Memory leaks are a leading cause of slower-running applications. A QA specialist
who is experienced at detecting memory leaks is essential in cases where you have a slow-running software
application.

White Box Testing Techniques

 Statement Coverage: This technique requires every possible statement in the code to be tested at least once
during the testing process.
 Branch Coverage: This technique checks every possible path (if-else and other conditional branches) of a
software application.
 Condition Coverage: In this technique, all individual conditions must be covered, as shown in the following
example (a sketch follows this list):
1. READ X, Y
2. IF(X == 0 || Y == 0)
 Multiple Condition Coverage: In this technique, all possible combinations of the outcomes of the
conditions are tested at least once.
 Basis Path Testing: In this technique, a control flow graph is made from the code or flowchart, and then the
cyclomatic complexity V(G) = E - N + 2 (edges minus nodes plus two, for a connected graph) is calculated, which
defines the number of independent paths, so that a minimal number of test cases can be designed, one for each
independent path.
 Loop Testing: Loops are widely used and fundamental to many algorithms, hence their testing is
very important.
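
To make condition coverage concrete, the pseudocode above (IF(X == 0 || Y == 0)) can be exercised so that each individual condition evaluates to both true and false. A minimal Python sketch, with hypothetical inputs chosen for that purpose:

```python
def is_axis_point(x, y):
    # The decision from the pseudocode: IF (X == 0 || Y == 0)
    return x == 0 or y == 0

# Condition coverage: each atomic condition takes both outcomes.
#   (x=0, y=5) -> x==0 True,  y==0 False
#   (x=3, y=0) -> x==0 False, y==0 True
#   (x=3, y=5) -> x==0 False, y==0 False (also exercises the false branch)
cases = [((0, 5), True), ((3, 0), True), ((3, 5), False)]
for (x, y), expected in cases:
    assert is_axis_point(x, y) == expected
print("condition coverage cases passed")
```

Multiple condition coverage would additionally require the fourth combination, (x=0, y=0), so that every combination of the two conditions is tested at least once.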

Features of white box testing:

 Code coverage analysis: White box testing helps to analyse the code coverage of an application, which helps to
identify the areas of the code that are not being tested.
 Access to the source code: White box testing requires access to the application’s source code, which makes it
possible to test individual functions, methods, and modules.
 Knowledge of programming languages: Testers performing white box testing must have knowledge of
programming languages like Java, C++, Python, and PHP to understand the code structure and write tests.
 Identifying logical errors: White box testing helps to identify logical errors in the code, such as infinite loops or
incorrect conditional statements.
 Integration testing: White box testing is useful for integration testing, as it allows testers to verify that the
different components of an application are working together as expected.
 Unit testing: White box testing is also used for unit testing, which involves testing individual units of code to
ensure that they are working correctly.
 Optimization of code: White box testing can help to optimize the code by identifying any performance issues,
redundant code, or other areas that can be improved.
 Security testing: White box testing can also be used for security testing, as it allows testers to identify any
vulnerabilities in the application’s code.
Tools required for White box testing:

 PyUnit
 Sqlmap
 Nmap
 Parasoft Jtest
 Nunit
 VeraUnit
 CppUnit
 Bugzilla
 Fiddler
 JSUnit.net
 OpenGrok
 Wireshark
 HP Fortify
 CSUnit

Black Box Testing


Black Box Testing is a method of testing in which the internal structure/code/design is NOT known to the tester. The
main aim of this testing is to verify the functionality of the system under test; this type of testing requires the
complete test suite to be executed and is mainly performed by testers, with no need for any programming
knowledge.

Black box testing is a technique of software testing which examines the functionality of software without peering into
its internal structure or coding. The primary source of black box testing is a specification of requirements that is
stated by the customer.

Black box testing is further classified into two types:

 Functional Testing
 Non-Functional Testing
1. Functional Testing: Functional testing is performed to verify that all the features developed are according to the
functional specifications. It is performed by executing the functional test cases written by the QA team; in the
functional testing phase, the system is tested by providing input, verifying the output, and comparing the actual
results with the expected results.
There are different Levels of Functional Testing out of which the most important are

 Unit Testing: Generally, a unit is a small piece of code which is testable. Unit testing is performed on individual
units of software and is carried out by developers.
 Integration Testing: Integration testing is performed after unit testing by combining all the individual testable
units, and is carried out either by developers or testers.
 System Testing: System testing is performed to ensure that the system performs as per the
requirements. It is generally performed by testers when the complete system is ready and
the build or code is released to the QA team.
 Acceptance Testing: Acceptance testing is performed to verify whether the system has met the business
requirements and is ready for use or deployment, and is generally performed by the end users.

2. Non-Functional Testing: Non-functional testing is a testing technique which does not focus on functional
aspects and mainly concentrates on the non-functional attributes of the system, such as memory leaks,
performance, or robustness. Non-functional testing is performed at all test levels.
There are many Non-Functional Testing Techniques out of which the most important are

 Performance Testing: Performance testing is performed to check whether the response time of the system
is normal as per the requirements under the desired network load.
 Recovery Testing: Recovery testing is a method to verify how well a system is able to recover from
crashes and hardware failures.
 Compatibility Testing: Compatibility testing is performed to verify how the system behaves across different
environments.
 Security Testing: Security testing is performed to verify the robustness of the application, i.e., to ensure that
only authorized users/roles are accessing the system.
 Usability Testing: Usability testing is a method to verify the usability of the system by the end users and
how comfortable the users are with the system.

How to do Black Box Testing in Software Engineering

 Initially, the requirements and specifications of the system are examined.


 Tester chooses valid inputs (positive test scenario) to check whether SUT processes them correctly. Also,
some invalid inputs (negative test scenario) are chosen to verify that the SUT is able to detect them.
 Tester determines expected outputs for all those inputs.
 Software tester constructs test cases with the selected inputs.
 The test cases are executed.
 Software tester compares the actual outputs with the expected outputs.
 Any defects found are fixed and retested.

Black Box Testing Techniques

 Equivalence Class Testing: It is used to minimize the number of possible test cases to an optimum level while
maintaining reasonable test coverage.
 Boundary Value Testing: Boundary value testing focuses on the values at boundaries. This technique
determines whether a certain range of values is acceptable to the system or not. It is very useful in
reducing the number of test cases and is most suitable for systems where the input lies within certain
ranges (a sketch follows this list).
 Decision Table Testing: A decision table puts causes and their effects in a matrix. There is a unique
combination in each column.
 Compatibility Testing: The test case results depend not only on the product but also on the
infrastructure delivering the functionality. When the infrastructure parameters are changed, the system is still
expected to work properly.
 Cause-Effect Graphing: This technique establishes a relationship between logical inputs called causes and the
corresponding actions called effects. The causes and effects are represented using Boolean graphs.
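
A hedged sketch of boundary value testing, for a hypothetical input field that accepts ages from 18 to 60 inclusive (the range and the validator are invented for illustration):

```python
def is_valid_age(age):
    """Hypothetical requirement: age must be between 18 and 60 inclusive."""
    return 18 <= age <= 60

# Boundary value testing picks values at and around each boundary,
# rather than sampling the whole input range.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary
    19: True,   # just above the lower boundary
    59: True,   # just below the upper boundary
    60: True,   # upper boundary
    61: False,  # just above the upper boundary
}
for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected
print("boundary cases passed")
```

Six test cases cover the points where off-by-one defects are most likely, instead of dozens of values spread across the range.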

Features of black box testing:

 Independent testing: Black box testing is performed by testers who are not involved in the development of
the application, which helps to ensure that testing is unbiased and impartial.
 Testing from a user’s perspective: Black box testing is conducted from the perspective of an end user, which
helps to ensure that the application meets user requirements and is easy to use.
 No knowledge of internal code: Testers performing black box testing do not have access to the application’s
internal code, which allows them to focus on testing the application’s external behaviour and functionality.
 Requirements-based testing: Black box testing is typically based on the application’s requirements, which
helps to ensure that the application meets the required specifications.
 Different testing techniques: Black box testing can be performed using various testing techniques, such as
functional testing, usability testing, acceptance testing, and regression testing.
 Easy to automate: Black box testing is easy to automate using various automation tools, which helps to
reduce the overall testing time and effort.
 Scalability: Black box testing can be scaled up or down depending on the size and complexity of the
application being tested.
 Limited knowledge of application: Testers performing black box testing have limited knowledge of the
application being tested, which helps to ensure that testing is more representative of how the end users will
interact with the application.

| Black Box Testing | White Box Testing |
| --- | --- |
| It is a way of software testing in which the internal structure, program, or code is hidden and nothing is known about it. | It is a way of testing the software in which the tester has knowledge about the internal structure, code, or program of the software. |
| Implementation of code is not needed for black box testing. | Code implementation is necessary for white box testing. |
| It is mostly done by software testers. | It is mostly done by software developers. |
| No knowledge of implementation is needed. | Knowledge of implementation is required. |
| It can be referred to as outer or external software testing. | It is the inner or internal software testing. |
| It is a functional test of the software. | It is a structural test of the software. |
| This testing can be initiated from the requirement specifications document. | This type of testing is started after a detailed design document. |
| No knowledge of programming is required. | It is mandatory to have knowledge of programming. |
| It is the behaviour testing of the software. | It is the logic testing of the software. |
| It is applicable to the higher levels of software testing. | It is generally applicable to the lower levels of software testing. |
| It is also called closed testing. | It is also called clear box testing. |
| It is the least time-consuming. | It is the most time-consuming. |
| It is not suitable or preferred for algorithm testing. | It is suitable for algorithm testing. |
| It can be done by trial-and-error methods. | Data domains along with inner or internal boundaries can be better tested. |
| It is less exhaustive compared to white box testing. | It is comparatively more exhaustive than black box testing. |

Validation activities
Validation activities are an essential part of the software development and testing process. They are performed to
ensure that a software system meets its intended requirements and functions correctly. Validation activities
encompass various testing stages, each focusing on different aspects of the software's development lifecycle.

 Unit Validation:
1. Unit validation, also known as unit testing, aims to validate the smallest individual components or units
of code, such as functions or methods, in isolation.
2. It focuses on checking whether each unit of code behaves as expected and meets its specified
requirements.
3. Developers typically write test cases to evaluate each unit's functionality, input validation, and error
handling.
 Integration Validation:
1. Integration validation, or integration testing, verifies the interactions between different units/modules
when they are combined.
2. It ensures that units, when integrated, work together harmoniously and that data and control flow
between them is correct.
3. Test cases are designed to check the interfaces, data exchanges, and communication between integrated
components.
 Function Validation:
1. Function validation, also known as functional testing, assesses the software's functionality against its
specified requirements.
2. It examines the software's behavior under various scenarios to verify that it performs the functions it is
supposed to.
3. Test cases are created to cover different functions or features of the software, and their outcomes are
compared to expected results.
 System Validation:
1. System validation, or system testing, evaluates the entire software system as a whole.
2. It ensures that all integrated components and functions work together as expected and that the system
meets its intended purpose.
3. End-to-end testing is performed to simulate real-world scenarios and validate the system's overall
performance, reliability, and compliance with requirements.
 Acceptance Testing:
1. Acceptance testing is the final phase of validation and is typically carried out by users or stakeholders.
2. It determines whether the software meets user expectations and business requirements, ensuring it is
ready for production use.
3. User acceptance testing (UAT) is a common approach where real users perform tests based on their use
cases to confirm that the system meets their needs.

Regression Testing
Regression testing is a software testing practice that aims to ensure that changes or enhancements made to an
existing software system do not introduce new defects or negatively impact the existing functionality. It involves re-
executing a subset of test cases on the modified code to verify that the changes haven't caused unintended side
effects or regressions. Regression testing is a critical part of the software development lifecycle to maintain and
improve software quality.

Progressive vs. Regressive Regression Testing:

 Progressive Regression Testing: In progressive regression testing, new test cases are added to the test suite
as new features or functionality are developed. This ensures that the existing functionality remains intact
while new features are tested.
 Regressive Regression Testing: In regressive regression testing, the focus is on re-running existing test cases
after code changes or updates to verify that the changes haven't adversely affected the existing functionality.
This is the more common form of regression testing.

Regression Testing Produces Quality Software:

Regression testing contributes to the production of quality software by:

 Preventing Regressions: It helps catch and fix defects early, preventing the reintroduction of previously resolved
issues when changes are made.
 Stabilizing Code: It ensures that the software remains stable and reliable as it evolves, reducing the risk of
production failures.
 Ensuring Consistency: It confirms that new features or bug fixes do not negatively impact existing, working
functionality, ensuring a consistent user experience.

Regression Testability:

Regression testability refers to the ease with which a software application can be tested for regressions. A highly
testable application is one where it is straightforward to create and execute test cases efficiently. Good code
organization, modularity, and proper documentation contribute to higher testability.

Objectives of Regression Testing:

The primary objectives of regression testing are:

 Verify Stability: Ensure that existing functionality remains stable after code changes.
 Detect Regressions: Detect any new defects introduced by changes.
 Validate Fixes: Confirm that previously identified defects have been fixed.
 Assure Compatibility: Ensure that new code is compatible with the existing system.
 Support Continuous Integration: Enable automated testing in the continuous integration and continuous delivery
(CI/CD) pipeline.

Regression Testing Types:

There are several types of regression testing:

 Selective Regression Testing: Focuses on executing a subset of test cases that are likely to be affected by recent
changes, minimizing testing effort (a sketch follows this list).
 Complete Regression Testing: Involves running the entire suite of test cases, ensuring comprehensive coverage
but requiring more time and resources.
 Partial Regression Testing: Executes a portion of the test suite, typically targeting specific modules or areas
affected by changes.

Regression Testing Techniques:

Various techniques can be used for regression testing:

 Re-running Manual Test Cases: Manually re-executing test cases that cover critical functionality to validate
correctness.
 Test Automation: Automating test cases using tools and frameworks to efficiently run a large number of tests
and catch regressions.
 Continuous Integration (CI): Integrating regression tests into the CI pipeline to automatically run tests whenever
code changes are pushed, providing rapid feedback.
 Test Case Prioritization: Prioritizing test cases based on risk and impact to focus testing efforts on critical areas.
 Test Data Management: Managing and maintaining test data sets to ensure they remain relevant and effective
for regression testing.
 Version Control: Using version control systems to track code changes and correlate them with test results to
identify regressions.
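
As an illustration of selective regression testing, here is a hedged sketch that picks only the test cases mapped to the modules changed in the current build. The module-to-test mapping and the changed-module list are hypothetical.

```python
# Hypothetical mapping of modules to the regression tests that cover them.
TEST_MAP = {
    "login": ["test_login_valid", "test_login_invalid"],
    "cart": ["test_add_item", "test_remove_item"],
    "checkout": ["test_payment", "test_order_confirmation"],
}

def select_regression_tests(changed_modules):
    """Selective regression: run only the tests covering changed modules."""
    selected = []
    for module in changed_modules:
        selected.extend(TEST_MAP.get(module, []))
    return selected

# e.g. the current change set touched only the cart module
print(select_regression_tests(["cart"]))
# -> ['test_add_item', 'test_remove_item']
```

In practice the mapping would come from a traceability matrix or coverage data rather than a hand-written dictionary.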

UNIT-03
Test Management Process in Software Testing
Test Management is a process of managing the testing activities in order to ensure high quality and high-end testing
of the software application. The method consists of organizing, controlling, ensuring traceability and visibility of the
testing process in order to deliver a high-quality software application. It ensures that the software testing process
runs as expected.

Test Management Process :


It is a software process that manages all software testing activities from start to end. This management process
provides planning, controlling, tracking, and monitoring facilities throughout the whole test cycle. The process
includes several activities such as test planning, test case design, and test execution. It also gives an initial plan and
discipline specifications for the software testing process.

The test management process has two main parts:

Planning :

 Risk analysis
 Test Estimation
 Test planning

Execution :

 Testing Activity
 Issue Management
 Test report and evaluation

Test Planning

Test planning is a crucial phase in test management that involves defining the strategy and approach for conducting
testing activities on a software application. It is a comprehensive process that outlines the objectives, scope,
resources, schedule, and deliverables for testing. Effective test planning ensures that testing efforts are well-
organized, efficient, and aligned with project goals. Here are the key components and steps involved in test planning:

1. Objectives and Scope:


a. Define the testing objectives, including what needs to be verified and validated.
b. Specify the scope of testing, outlining what will be covered and any excluded areas.
2. Test Strategy and Approach:
a. Develop a test strategy document that describes the overall approach to testing.
b. Outline the testing methodology, techniques, and approaches to be used.
3. Resource Allocation and Schedule:
a. Allocate resources, including personnel, tools, and testing environments.
b. Create a detailed test schedule with milestones, accounting for dependencies and contingencies.
4. Risk Assessment and Mitigation:
a. Identify potential risks and challenges related to testing.
b. Develop a risk mitigation plan to address identified risks and minimize their impact on testing.
5. Test Deliverables and Communication:
a. Define the test deliverables, such as test plans, test cases, and reports.
b. Establish a communication plan to ensure effective collaboration and reporting with stakeholders
throughout the testing process.

Test Design Specification:

The Test Design Specification (TDS) is a document that outlines the detailed design of individual test cases, test
scenarios, and test scripts. It provides a clear and structured description of how testing will be performed for a
particular software feature, function, or requirement.
TDS documents are used by testers to execute test cases accurately and by developers to understand the specific test
requirements and expected results for debugging.

Components: A TDS typically includes the following components:

 Test Case ID: A unique identifier for each test case.


 Test Case Description: A detailed description of what the test case is intended to test.
 Test Preconditions: Any specific conditions or prerequisites that must be met before executing the test case.
 Test Input Data: The data or inputs that will be used during the test.
 Test Steps: A step-by-step sequence of actions to be performed during the test.
 Expected Results: The expected outcomes, including pass/fail criteria.
 Test Data Setup: Information on how test data will be prepared or configured.
 Dependencies: Any dependencies on other tests or components.
 Test Environment: Details about the test environment, including hardware, software, and configurations.
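
As an illustration, the TDS components above could be captured as a structured record. A hedged sketch with invented values:

```python
# Hypothetical test case record following the TDS components listed above.
test_case = {
    "id": "TC-001",
    "description": "Verify login with valid credentials",
    "preconditions": ["User account exists", "Login page is reachable"],
    "input_data": {"username": "testuser", "password": "secret"},
    "steps": [
        "Open the login page",
        "Enter username and password",
        "Click the Login button",
    ],
    "expected_results": "User is redirected to the dashboard",
    "dependencies": [],
    "environment": "Chrome on Windows (hypothetical)",
}
```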

Test Process Specification:

The Test Process Specification (TPS) is a document that outlines the procedures and guidelines governing the entire
testing process within an organization or project. It provides a structured framework for how testing activities are to
be carried out and managed.

TPS documents are used to ensure that testing is conducted consistently across the organization, following
established best practices and guidelines. It also helps maintain quality and consistency in testing processes and
documentation.

Components: A TPS typically includes the following components:

 Testing Standards and Policies: Establishing testing standards, guidelines, and policies that ensure
consistency and best practices in testing across the organization.
 Testing Lifecycle: Defining the phases and stages of the testing process, from test planning and design to
execution and reporting.
 Testing Roles and Responsibilities: Clearly defining the roles and responsibilities of individuals involved in
testing, including testers, test managers, and other stakeholders.
 Testing Tools and Infrastructure: Identifying the testing tools, frameworks, and infrastructure to be used,
along with their configuration and maintenance requirements.
 Defect Management: Describing the process for capturing, reporting, prioritizing, and tracking defects
throughout the testing lifecycle.
 Test Execution Procedures: Detailing how test cases are to be executed, including test environment setup,
data preparation, and execution steps.
 Test Reporting and Documentation: Specifying the format and content of test reports, including test
summary reports, defect reports, and traceability matrices.
 Change Control: Defining the process for managing changes to test plans, test cases, and test data as the
project evolves.
 Continuous Improvement: Outlining processes for reviewing and improving the testing process based on
lessons learned and feedback from testing activities.

Software Metrics

Software metrics are quantitative measurements or indicators that provide insight into various aspects of software
development, quality, and project management. These measurements help software professionals assess the
performance, efficiency, and quality of software processes and products. Software metrics are used to make
informed decisions, improve processes, and manage software projects effectively.

Need for Software Metrics:

The need for software metrics arises from several important factors:
1. Quality Assessment: Metrics help in evaluating the quality and reliability of software, identifying defects, and
improving software quality.
2. Performance Monitoring: Metrics enable tracking progress, identifying bottlenecks, and ensuring that the
project is on schedule and within budget.
3. Decision-Making: Metrics provide objective data that aids in decision-making, resource allocation, and project
planning.
4. Continuous Improvement: Metrics support process improvement initiatives by identifying areas for
enhancement and measuring the effectiveness of changes.
5. Benchmarking: Metrics facilitate comparison with industry benchmarks and best practices, helping organizations
stay competitive.

Classification of Software Metrics:

1. Product Metrics: These metrics measure the attributes and characteristics of the software product itself, such as
code complexity, size, coverage, and defect density.
2. Process Metrics: Process metrics focus on evaluating the efficiency and effectiveness of the software
development process, including aspects like effort estimation accuracy, cycle time, and defect injection rate.
3. Project Metrics: Project metrics assess the performance and progress of the software project, including cost and
schedule variances, resource utilization, and adherence to project schedules.
4. Quality Metrics: Quality metrics gauge the quality and reliability of the software product, considering factors like
defect density, failure rate, and customer satisfaction.
5. Size Metrics: Size metrics quantify the size of the software product, often using measures like lines of code (LOC),
function points (FP), or cyclomatic complexity.
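
As a worked example of the product and quality metrics mentioned above, defect density is commonly computed as defects found per thousand lines of code (KLOC). A minimal sketch with invented numbers:

```python
def defect_density(defects_found, lines_of_code):
    """Defect density = defects per KLOC (thousand lines of code)."""
    return defects_found / (lines_of_code / 1000)

# Hypothetical project: 45 defects found in a 30,000-LOC module.
print(defect_density(45, 30_000))  # -> 1.5 defects per KLOC
```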

Test Suite Management

Efficient test suite management is crucial for ensuring effective testing in software development. It involves
organizing, prioritizing, and maintaining test suites to maximize test coverage and minimize testing time and
resources. Key components:

1. Organization:
a. Categorize test cases based on factors like functionality, priority, complexity, and dependencies.
b. Use test case management tools to store and manage test cases efficiently.
2. Maintenance:
a. Regularly update and review test cases to keep them aligned with changing requirements.
b. Remove obsolete or redundant test cases.
3. Execution:
a. Schedule and execute test cases as part of the testing process.
b. Automate repetitive and regression test cases for efficiency.

Test Suite Prioritization:

Test suite prioritization is the process of ordering or ranking test cases based on their importance or relevance to
achieve specific testing objectives. Different types of prioritization include:

1. Functional Prioritization:
a. Prioritize test cases based on the critical functionality of the software. Test the most critical features first.
2. Risk-Based Prioritization:
a. Assess and prioritize test cases based on perceived risks, focusing on high-risk areas first.
3. Requirement-Based Prioritization:
a. Prioritize test cases according to the importance of meeting specific requirements or user stories.
4. Code Coverage-Based Prioritization:
a. Prioritize test cases based on the percentage of code coverage they achieve. Start with high-impact code
areas.
Prioritization Techniques:

Several techniques can be used to prioritize test cases:

1. MoSCoW Method:
a. Categorize requirements or features as Must-haves, Should-haves, Could-haves, and Won't-haves.
Prioritize testing accordingly.
2. Pairwise Testing:
a. Focus on testing combinations of parameters that are most likely to uncover defects efficiently.
3. Equivalence Partitioning:
a. Group test cases based on equivalence classes, and prioritize testing the most critical classes.
4. Risk Assessment:
a. Collaboratively assess and assign risk levels to different features or requirements, and prioritize
accordingly.
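
A hedged sketch of the risk assessment technique above: each test case is given an invented risk score (here likelihood x impact), and the suite is ordered so the riskiest cases run first.

```python
# Hypothetical test cases with risk assessed as likelihood x impact (1-5 each).
test_cases = [
    {"name": "test_payment", "likelihood": 4, "impact": 5},
    {"name": "test_profile_photo", "likelihood": 2, "impact": 1},
    {"name": "test_login", "likelihood": 3, "impact": 5},
]

def risk_score(tc):
    return tc["likelihood"] * tc["impact"]

# Order the suite so the highest-risk test cases execute first.
prioritized = sorted(test_cases, key=risk_score, reverse=True)
print([tc["name"] for tc in prioritized])
# -> ['test_payment', 'test_login', 'test_profile_photo']
```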

Measuring Effectiveness:

To measure the effectiveness of test suite prioritization, consider the following metrics:

1. Defect Detection Rate:


a. Measure how many critical defects were detected by the prioritized test suite.
2. Testing Time Saved:
a. Compare the time required to execute the entire test suite versus the prioritized suite.
3. Code Coverage Improvement:
a. Determine if the prioritization approach leads to improved code coverage of critical code areas.
4. Release Quality:
a. Assess the quality of the software released based on the defects identified in the prioritized test suite.
5. Feedback from Stakeholders:
a. Collect feedback from stakeholders to evaluate whether the prioritization aligns with their expectations
and project goals.

UNIT-04
Automation Testing
Automated testing is the application of software tools to automate a human-driven manual process of reviewing and
validating a software product.

Automated Testing is a technique where the tester writes scripts and uses suitable software or an automation
tool to test the software. It is the automation of a manual process and allows repetitive tasks to be executed
without the intervention of a manual tester.

 It is used to automate testing tasks that are difficult to perform manually.
 Automation tests can be run at any time of the day as they use scripted sequences to examine the software.
 Automation tests can also enter test data, compare the expected results with the actual results, and generate
detailed test reports.
 The goal of automation tests is to reduce the number of test cases to be executed manually, not to eliminate
manual testing.
 It is possible to record the test suite and replay it when required.

Types of Automation Testing

Below are the different types of automation testing:


1. Unit testing: Unit testing is a phase in software testing to test the smallest piece of code known as a unit that can
be logically isolated from the code. It is carried out during the development of the application.
2. Integration testing: Integration testing is a phase in software testing in which individual software
components are combined and tested as a group. It is carried out to check the compatibility of the components
with the specified functional requirements.
3. Smoke testing: Smoke testing is a type of software testing that determines whether the built software is stable or
not. It is the preliminary check of the software before its release in the market.
4. Performance testing: Performance testing is a type of software testing that is carried out to determine how the
system performs in terms of stability and responsiveness under a particular load.
5. Regression testing: Regression testing is a type of software testing that confirms that previously developed
software still works fine after the change and that the change has not adversely affected existing features.
6. Security testing: Security testing is a type of software testing that uncovers the risks, and vulnerabilities in the
security mechanism of the software application. It helps an organization to identify the loopholes in the security
mechanism and take corrective measures to rectify the security gaps.
7. Acceptance testing: Acceptance testing is the last phase of software testing that is performed after the system
testing. It helps to determine to what degree the application meets end users’ approval.
8. API testing: API testing is a type of software testing that validates the Application Programming Interface(API)
and checks the functionality, security, and reliability of the programming interface.
9. UI Testing: UI testing is a type of software testing that helps testers ensure that all the fields, buttons, and other
items on the screen function as desired.

Need for automated testing

Automated testing covers testing that is impractical to perform manually, such as re-running every test again
and again after implementing any new feature or test case, so that new features do not break existing functionality.

1. Efficiency: Automated testing is faster and more efficient than manual testing. It can execute a large number of
test cases in a shorter time, reducing testing cycle duration.
2. Repeatability: Automated tests can be executed repeatedly, ensuring consistent testing and reliable results. This
is vital for regression testing after code changes.
3. Coverage: Automated tests can cover a wide range of scenarios, ensuring comprehensive test coverage, which
may be impractical with manual testing.
4. Accuracy: Automated tests eliminate human errors and provide precise results, reducing the chances of false
positives or negatives.
5. Cost-Effectiveness: While there is an initial investment in creating automated test scripts, in the long run, it saves
time and resources compared to manual testing.
6. Continuous Integration: Automated tests can be integrated into the development process, facilitating continuous
integration and continuous delivery (CI/CD), ensuring that code changes do not introduce defects.
7. Regression/Recursive Testing: It's particularly useful for regression testing, ensuring that new updates or
features do not break existing functionality.
8. Parallel Testing: Automation allows for parallel testing on various configurations, browsers, and devices,
enhancing compatibility testing.
9. Data-Driven Testing: Automation supports data-driven testing, where the same test script can be executed with
multiple sets of test data (a sketch follows this list).
10. Improved Product Quality: Automated testing helps identify defects early, resulting in higher software quality
and a better user experience.
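
As an illustration of the data-driven testing mentioned in point 9, here is a hedged sketch using pytest's parametrize marker, where one test script runs once per data set. The function under test is hypothetical.

```python
import pytest

def add(a, b):
    # Hypothetical unit under test.
    return a + b

# Data-driven testing: the same test executes once per data row.
@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (0, 0, 0),
    (-5, 5, 0),
])
def test_add(a, b, expected):
    assert add(a, b) == expected
```

Adding a new scenario means adding a data row, not writing a new test.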

Manual Testing vs Automated Testing

Below are some of the differences between manual testing and automated testing:
| Parameters | Manual Testing | Automated Testing |
| --- | --- | --- |
| Reliability | Manual testing is not accurate at all times due to human error, so it is less reliable. | Since it is performed by third-party tools and/or scripts, it is more reliable. |
| Investment | Heavy investment in human resources. | Investment in tools rather than human resources. |
| Time efficiency | Manual testing is time-consuming due to human intervention, where test cases are generated manually. | Automation testing saves time because tool-driven execution is faster than manual testing. |
| Programming knowledge | There is no need for programming knowledge to write the test cases. | Programming knowledge is needed to write test cases. |
| Regression testing | Test cases executed the first time may not catch regression bugs caused by frequently changing requirements. | When there are changes in the code, regression testing is done to catch the bugs introduced by those changes. |

Guidelines

1. Understand Your Requirements: Clearly define your testing needs, including the types of testing (functional,
performance, security) and the technology stack (web, mobile, desktop) you'll be working with. Understanding
your requirements is essential for tool selection.
2. Evaluate Compatibility: Ensure that the testing tool is compatible with your application's environment. It should
support the operating systems, browsers, and mobile devices you intend to test on.
3. Ease of Use: Look for tools with user-friendly interfaces and clear documentation. A tool that is easy to learn and
use will save time and resources.
4. Community and Support: Consider tools with active user communities and good technical support. These
resources can be invaluable for troubleshooting and gaining insights.
5. Integration Capabilities: Check if the tool can seamlessly integrate with your existing development and testing
ecosystem, including CI/CD pipelines and bug tracking systems.
6. Cost and ROI: Assess the total cost of ownership, which includes licensing fees, training, maintenance, and
infrastructure requirements. Evaluate whether the tool offers a favorable return on investment.
7. Customization and Extensibility: Look for tools that allow customization and scripting to adapt to your unique
testing needs. The ability to extend the tool's functionality is beneficial.
8. Vendor Reputation: Research the vendor's reputation and track record. Choose tools from reputable vendors
with a history of providing quality software.
9. Scalability: Ensure that the tool can scale with your project's requirements. Check if it supports parallel testing
for efficient execution.
10. Security: Consider the security features of the tool, especially if you're dealing with sensitive data or applications.
Ensure it complies with security best practices.
11. Feedback and Reviews: Read reviews and gather feedback from other testing professionals who have used the
tool. Their insights can help you make an informed decision.

Testing Tools
WinRunner

WinRunner was a popular automated functional GUI testing tool developed by Mercury Interactive. This tool allowed
users to record and play back user interface (UI) interactions as test scripts. It was widely used for functionality
testing of software applications.

Features:

 Recording and Playback: Users could record their interactions with a software application, and WinRunner would
generate test scripts that could be played back to replicate those interactions automatically.
 Functional Testing: It was used to verify an application's functions and features against expected behaviour.
 Support for Multiple Technologies: It supported technologies such as Java, VC++, Delphi, D2K, VB, and more,
making it versatile for testing various types of applications.
 Test Script Development: Users could develop and customize test scripts to suit their specific testing
requirements.
 Automation: WinRunner automated manual testing processes, saving time and effort in the testing phase.
 Integration: It could integrate with other testing tools and frameworks, allowing for a comprehensive testing
environment.

QTP (QuickTest Professional)

QTP stands for QuickTest Professional, a product of Hewlett Packard (HP). This tool helps testers perform
automated functional testing seamlessly, without monitoring, once script development is complete.

 Automated Testing: Testers can create and run automated test scripts to verify the functionality of software
applications.
 Graphical User Interface (GUI) Testing: QTP is primarily used to automate functional testing of an application's
graphical user interface.
 Scripting Language: QTP uses VBScript as its scripting language, making it accessible to testers with basic
programming skills.
 Object Recognition: It uses a technology called the "Object Repository" to identify and interact with various UI
elements, providing an easy and intuitive way to create test scripts.
 Test Automation Framework: QTP supports test automation frameworks, allowing testers to create a structured
and maintainable test suite.
 Integration: QTP/UFT can integrate with other testing tools and environments, making it versatile for different
testing scenarios.
 Regression Testing: Existing test cases can be rerun to ensure that new code changes have not introduced defects.
 Report Generation: QTP generates detailed reports after test execution, helping testers analyze test results and
identify issues.

LoadRunner

LoadRunner is a performance testing tool developed by Micro Focus. It is widely used for assessing the performance,
scalability, and reliability of software applications. LoadRunner enables testers to simulate real-world user
interactions with an application under various conditions to evaluate its performance.

 Performance Testing: LoadRunner specializes in performance testing, which includes load testing, stress testing,
and scalability testing. It helps assess how an application performs under different workloads.
 Components: LoadRunner consists of multiple components, including VuGen (Virtual User Generator) for script
creation, Controller for test management, and Analysis for result analysis.
 Scripting Language: VuGen uses a scripting language called Virtual User Script (VUScript) to create test scripts
that simulate user interactions with the application.
 Protocols: LoadRunner supports various communication protocols, making it versatile for testing different types
of applications, including web, mobile, and legacy systems.
 Scenarios: Testers can create test scenarios in the Controller, specifying the number of virtual users, test
duration, and workload distribution.
 Result Analysis: The Analysis component provides detailed reports and graphs to help testers analyze
performance data and identify bottlenecks.
 Realistic Load Generation: LoadRunner can generate load from multiple locations and simulate real user
behaviors, such as think time between actions.
 Community and Learning Resources: There are various tutorials, courses, and communities that provide
guidance on using LoadRunner effectively.

IBM Rational Functional Tester (RFT)

IBM Rational Functional Tester (RFT) is a software testing tool developed by IBM. It is designed to automate the
testing of various software applications, including web-based, .NET, Java, terminal emulator-based, SAP, and Siebel
applications. RFT is primarily used for functional and regression testing. Here are key aspects of IBM Rational
Functional Tester:

 Functional Testing: RFT is primarily used for functional testing, where it verifies that a software application
performs its functions correctly according to its specifications.
 Regression Testing: It is valuable for automating regression testing to ensure that new code changes do not break
existing functionality.
 Automated Test Scripting: Test scripts in RFT are created using various scripting languages, including Java and
Visual Basic .NET. This allows testers to write customized test scripts to meet specific testing requirements.
 Integration with Test Data: RFT can be integrated with various data sources to use test data dynamically in test
scripts.
 Object Recognition: RFT uses object recognition technology to identify and interact with various elements in the
application's user interface, making it easier to automate test cases.
 Reporting and Analysis: The tool provides reporting and analysis features to review test results, identify issues,
and track testing progress.
 Test Playback: RFT allows the playback of automated test scripts on multiple environments and browsers, making
it suitable for cross-browser testing.
 Integration with IBM Quality Management Tools: It can be integrated with other IBM software testing and
quality management tools for end-to-end testing and quality assurance processes.
 Extensive Documentation: IBM provides comprehensive documentation and tutorials to help users get started
and make the most of RFT.

Selenium

Selenium is an open-source, automated testing framework widely used for web application testing. It allows
developers and testers to automate web-based tasks and validate web applications across various browsers and
platforms. Here are some key points about Selenium:

 Open-Source: Selenium is an open-source project, making it freely available for anyone to use and contribute to
its development.
 Browser Automation: Selenium enables the automation of tasks within web browsers, allowing for testing,
scraping data, and interacting with web applications.
 Cross-Browser Testing: It supports testing across multiple web browsers such as Chrome, Firefox, Safari, and
Internet Explorer, ensuring that web applications work consistently across different environments.
 Programming Language Support: Selenium supports multiple programming languages, including Java, Python,
C#, and others, providing flexibility for testers to choose the language they are most comfortable with.
 Test Automation: Selenium is often used for functional and regression testing. Test cases are created using
Selenium scripts, which simulate user interactions with a web application.
 Integration: Selenium can be integrated with various testing frameworks, build tools, and continuous integration
systems for efficient test automation.
 Community Support: It has a large and active user community, which means that users can find extensive
documentation, tutorials, and support from the community.
 Advantages and Limitations: Selenium offers advantages like flexibility, cross-browser compatibility, and cost-
effectiveness. However, it may require significant setup and maintenance, and automating complex scenarios can
be challenging.
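
To ground the points above, here is a minimal Selenium sketch in Python (Selenium 4 style). It assumes a Chrome browser with a matching driver is available; the page URL, element locators, and expected title are hypothetical, invented for illustration.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a matching ChromeDriver is available
try:
    driver.get("https://example.com/login")  # hypothetical page
    # Simulate the user interactions the test case describes.
    driver.find_element(By.NAME, "username").send_keys("testuser")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # Black-box style check: compare actual behaviour with the expectation.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```

The same script can be pointed at different browsers (Firefox, Edge) by swapping the driver, which is the basis of Selenium's cross-browser testing.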

UNIT-05
Object Oriented Testing in Software Testing
Object-Oriented Testing (OOT) is a testing methodology specifically designed for object-oriented programming. In
object-oriented software development, the basic units are objects, which encapsulate data and behaviors. OOT
focuses on testing these objects and their interactions to ensure the quality and reliability of object-oriented
software.

It is a technique that is based on the hierarchy of classes and well-defined objects. Here, an object is defined as an
entity, or an instance of a class, that is used to store data and send and receive messages, and a class can be defined as
a group of objects that have common properties.

Object-oriented testing can be performed at different levels to detect issues.

 At the algorithmic level, each individual module (method) of a class is tested. At the class level, every class is then
tested as an individual entity.
 Once class-level testing is done, cluster-level testing is performed. Cluster-level testing integrates individual
classes; its main purpose is to verify the interconnections between classes and how well they carry out inter-class
interactions. Cluster-level testing can therefore be viewed as integration testing of classes.
 Once cluster-level testing is complete, system-level testing begins. At this level, the integration between clusters is
verified. In addition, at every level, regression testing is a must after each new release.

Object-Oriented Testing Levels/Techniques

Fault-based testing: Fault-based testing focuses on the customer specification, the code, or both. Test cases are
designed to identify plausible faults and flush them all out. This technique finds defects such as incorrect
specifications and interface errors.

Scenario-based testing: This technique is useful for detecting issues caused by wrong specifications and improper
interactions among classes. Incorrect interactions lead to incorrect output, which can at times cause segments of
the system to malfunction. Scenario-based testing focuses on how the end user will perform a task in a specific
environment.

Class testing based on method testing: This can be considered the simplest and most common approach to
object-oriented testing. Each method of a class performs a cohesive function, so each method can be exercised
individually during testing, as the sketch below illustrates.
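
A minimal sketch of method-based class testing, using Python's built-in unittest module and a hypothetical Stack class; each test targets one cohesive method.

```python
import unittest

class Stack:  # hypothetical class under test
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

class StackMethodTests(unittest.TestCase):
    # One test per method: each method is a cohesive unit of behaviour.
    def test_new_stack_is_empty(self):
        self.assertTrue(Stack().is_empty())

    def test_push_adds_item(self):
        s = Stack()
        s.push(1)
        self.assertFalse(s.is_empty())

    def test_pop_returns_last_pushed(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

if __name__ == "__main__":
    unittest.main()
```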

Object-Oriented Testing vs. Conventional Testing:

1. Object-oriented testing emphasizes performing isolated testing on particular objects or classes; conventional
testing emphasizes testing a software system's functionality against predetermined criteria or requirements.
2. It verifies the behaviour of each object or class in the system; conventional testing verifies the behaviour of the
entire software system.
3. It tests the interactions between objects or classes; conventional testing tests the interactions between software
components or modules.
4. It uses mock objects to simulate the behaviour of dependent objects or classes; conventional testing does not
use mock objects.
5. It can be more time-consuming than conventional testing; conventional testing can be faster.
6. It requires a thorough understanding of the system's design and implementation; conventional testing requires a
thorough understanding of the system's requirements and specifications.
7. It involves testing at multiple levels, including unit, integration, and system testing; conventional testing involves
unit, integration, system, and acceptance testing.
8. It can be more complex than conventional testing; conventional testing can be simpler.
9. It focuses on testing the individual behaviour of objects or classes; conventional testing focuses on the overall
behaviour of the software system.
10. It can detect defects or issues that may not be detected by conventional testing; conventional testing may not
detect all types of software defects or issues.
11. It can be more effective in testing complex object-oriented systems; conventional testing may be less effective
for such systems.
12. It involves creating test cases that simulate the scenarios or inputs an object or class might encounter in the
real world; conventional testing involves creating test cases that cover all possible scenarios or requirements of the
software.
13. It requires the ability to create effective test cases that cover all possible scenarios; conventional testing
requires the ability to write test cases that cover all predetermined requirements or specifications.
14. It may require the use of specialized testing tools and frameworks; conventional testing may not require
specialized tools and frameworks.
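
Point 4 above mentions mock objects. As a minimal sketch of the idea, the hypothetical OrderService below depends on a payment gateway; the test replaces that dependency with a Mock from Python's standard library so the class can be verified in isolation.

```python
import unittest
from unittest.mock import Mock

class OrderService:  # hypothetical class under test
    def __init__(self, gateway):
        self.gateway = gateway  # dependent object, injected

    def place(self, amount):
        return "OK" if self.gateway.charge(amount) else "DECLINED"

class OrderServiceTests(unittest.TestCase):
    def test_successful_charge_places_order(self):
        gateway = Mock()                    # mock stands in for the real dependency
        gateway.charge.return_value = True  # simulate a successful charge
        self.assertEqual(OrderService(gateway).place(100), "OK")
        gateway.charge.assert_called_once_with(100)  # verify the interaction itself

if __name__ == "__main__":
    unittest.main()
```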

Software Testing – Web Based Testing


Web testing, or website testing, is checking a web application or website for potential bugs before it is made live
and accessible to the general public. Web testing checks the functionality, usability, security, compatibility, and
performance of the web application or website.

During this stage, issues such as web application security, the functioning of the site, its accessibility to disabled
as well as regular users, and its ability to handle traffic are checked.
Testing a web application has much in common with testing any other kind of application: functionality,
configuration, compatibility, and so on. It also involves analysing web-specific faults, as distinct from general
software faults. Web applications must be tested on different browsers and platforms so that we can identify the
areas that need special focus while testing a web application.

Types of Web Testing:

 Static Website Testing: A static website is a type of website in which the content shown is exactly the same as it
is stored on the server. Such a website may have a polished UI but has no dynamic features that a user or visitor
can use. In static testing, we generally focus on testing the UI, as it is the most important part of a static website.
 Dynamic Website Testing: A dynamic website consists of both a frontend (the UI) and a backend (a database,
etc.). This type of website is updated or changed regularly as per users' requirements. It involves many
functionalities: what a button does when it is pressed, whether error messages are shown properly at the
appropriate time, and so on.
 E-Commerce Website Testing: An e-commerce website is difficult to maintain, as it consists of many pages and
functionalities. In this testing, the tester or developer checks things like whether the shopping cart works as per
the requirements, whether user registration and login work properly, and, most importantly, whether a user can
successfully complete a payment and whether the website is secure (a minimal add-to-cart sketch follows this
list).
 Mobile-Based Web Testing: In this testing, the developer or tester checks the website's compatibility on different
devices, particularly mobile devices, because many users open websites on their phones.
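
As a sketch of the e-commerce checks mentioned above, the snippet below uses Selenium to add a product to the cart and verify the cart badge. The URL and element locators are hypothetical placeholders.

```python
# Hypothetical add-to-cart check for an e-commerce site.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://shop.example.com/product/42")      # hypothetical product page
    driver.find_element(By.ID, "add-to-cart").click()      # hypothetical button id
    count = driver.find_element(By.ID, "cart-count").text  # hypothetical cart badge
    assert count == "1", f"expected cart count 1, got {count!r}"
finally:
    driver.quit()
```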

Steps in Web Testing

 App Functionality: In web-based testing, we have to check the specified functionality, features, and operational
behavior of a web application to ensure they correspond to its specifications.
 Usability: While testing usability, developers face issues with scalability and interactivity. Since varying numbers
of users will be using the website, it is the developers' responsibility to form a group that tests the application
across different browsers using different hardware.
 Browser Compatibility: To check that the website behaves the same in different browsers, we test whether the
content on the website is displayed correctly across all of them.
 Security: Security plays an important role in every website that is available on the internet. As part of security
testing, testers check that unauthorized access to secure pages is not permitted and that files restricted to
certain users cannot be downloaded without proper access (a minimal sketch of such a check follows this list).
 Load Issues: We perform this testing to check the behavior of the system under a specific load, so that we can
measure some important transactions; the load on the database, the application server, etc. is also monitored.
 Storage and Database: Testing the storage or the database of any web application is also an important
component, and we must make sure that the database is properly tested. We test things like finding errors while
executing DB queries, checking the response time of each query, and verifying whether the data retrieved from
the database is correctly shown on the website.
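
A minimal sketch of the unauthorized-access check referenced above, using Python's requests library; the protected URL is a hypothetical placeholder.

```python
# Request a protected page without logging in and expect a denial,
# never a 200 response containing the protected content.
import requests

PROTECTED_URL = "https://example.com/admin"  # hypothetical secured page

resp = requests.get(PROTECTED_URL, allow_redirects=False)
assert resp.status_code in (301, 302, 401, 403), (
    f"protected page returned {resp.status_code} without credentials"
)
```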

Data Warehouse Testing


Data warehouse testing is the process of building and executing comprehensive test cases to ensure that data in a
warehouse has integrity and is reliable, accurate, and consistent with the organization’s data framework. This process
is crucial for modern businesses because of the increasing emphasis on data analytics: complex business insights
are derived on the assumption that the underlying data is trustworthy.
Extract, Transform, and Load (ETL) is the common process used to load data from source systems to the data
warehouse. Data is extracted from the source, transformed to match the target schema, and loaded into the data
warehouse.

ETL testing ensures that the transformation of data from source to warehouse is accurate. It also involves verifying
data at each point between the source and destination.

ETL testing is performed in five stages:

 Identifying data sources and requirements.
 Data acquisition.
 Implementing business logic and dimensional modeling.
 Building and populating data.
 Building reports.
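
As a minimal sketch of source-to-target verification, the snippet below reconciles a row count and an amount checksum between a hypothetical source table and the corresponding warehouse fact table, using SQLite purely for illustration; the table and column names are assumptions.

```python
import sqlite3

src = sqlite3.connect("source.db")     # hypothetical source system
dwh = sqlite3.connect("warehouse.db")  # hypothetical data warehouse

def count_and_sum(conn, table, amount_col):
    # Row count plus a simple additive checksum over one numeric column.
    cur = conn.execute(f"SELECT COUNT(*), COALESCE(SUM({amount_col}), 0) FROM {table}")
    return cur.fetchone()

src_count, src_sum = count_and_sum(src, "orders", "amount")
dwh_count, dwh_sum = count_and_sum(dwh, "fact_orders", "amount")

assert src_count == dwh_count, f"row count mismatch: {src_count} vs {dwh_count}"
assert src_sum == dwh_sum, f"amount checksum mismatch: {src_sum} vs {dwh_sum}"
```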

There are three basic levels of testing performed on a data warehouse, which are as follows:

1. Unit Testing –
This type of testing is performed at the developer’s end. In unit testing, each unit/component of a module is
tested separately. Every module of the data warehouse, i.e. program, SQL script, procedure, or Unix shell script, is
validated and tested.
2. Integration Testing –
In this type of testing, the individual units/modules of the application are brought together, or combined, and
then tested against a number of inputs. It is performed to detect faults in the integrated modules and to check
whether the various components perform well after integration.
3. System Testing –
System testing is the form of testing that validates and tests the whole data warehouse application. This type of
testing is performed by the technical testing team. It is conducted after the developers’ team performs unit
testing, and its main purpose is to check whether the entire system works correctly as a whole.

Challenges of data warehouse testing are :

 Data selection from multiple sources, and the analysis that follows, poses a great challenge.
 Because of the volume and complexity of the data, certain testing strategies are time-consuming.
 ETL testing requires Hive SQL skills, so it poses challenges for testers who have limited SQL skills.
 Redundant data in a data warehouse.
 Inconsistent and inaccurate reports.

Challenges in Web App Testing


Ensuring cross-browser compatibility: One of the major challenges in web application testing is achieving cross-
browser compatibility. Web applications need to function correctly across different web browsers such as Chrome,
Firefox, Safari, and Microsoft Edge. When performing cross-browser testing, QA testers may identify bugs related to
layout inconsistencies, broken functionality, or JavaScript errors specific to certain browsers.
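
One common way to attack this is to run the same test body once per browser. A minimal sketch with pytest and Selenium, assuming Chrome and Firefox drivers are available locally and a hypothetical application URL:

```python
import pytest
from selenium import webdriver

@pytest.fixture(params=["chrome", "firefox"])
def driver(request):
    # The same fixture yields a different browser on each parametrized run.
    drv = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
    yield drv
    drv.quit()

def test_homepage_title(driver):
    driver.get("https://example.com")  # hypothetical application URL
    assert "Example" in driver.title   # identical expectation in every browser
```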

Responsiveness: Ensuring that a web application is responsive and functions correctly on various devices and screen
sizes, including desktops, tablets, and mobile phones.

Dealing with dynamic content: Web applications often retrieve data from a server or database and display it
dynamically on the user interface. In some cases, bugs may occur when the displayed data is inconsistent or not in
sync with the actual data. This can happen due to caching issues, incorrect data retrieval or manipulation, or
improper handling of real-time updates.
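
A common mitigation when testing dynamic content is to wait explicitly for the data to arrive rather than asserting immediately. A minimal Selenium sketch, with a hypothetical page and element id:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/dashboard")  # hypothetical dynamic page
    # Fail only if the server-rendered data never appears within 10 seconds.
    cell = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "latest-order"))  # hypothetical id
    )
    assert cell.text.strip(), "dynamic data was never rendered"
finally:
    driver.quit()
```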
Performance and scalability testing: Web applications need to be responsive and perform well under different user
loads. QA testers can uncover bugs related to slow page loading times, high server response times, memory leaks, or
inefficient database queries that affect the overall performance of the application.
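
A minimal load-test sketch using Locust, an open-source Python load-testing tool; the host and endpoints are hypothetical. Run with `locust -f loadtest.py`.

```python
from locust import HttpUser, task, between

class SiteUser(HttpUser):
    host = "https://example.com"  # hypothetical target application
    wait_time = between(1, 3)     # simulated think time between requests

    @task
    def browse_home(self):
        self.client.get("/")            # response times are recorded per endpoint

    @task
    def view_product(self):
        self.client.get("/product/42")  # hypothetical endpoint
```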

Security testing: Web applications are susceptible to various security risks. QA engineers play a crucial role in
identifying potential security vulnerabilities such as SQL injection, cross-site scripting (XSS), insecure direct object
references, or insufficient access controls.
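
As a sketch of a basic injection probe, the snippet below submits a classic SQL-injection payload to a hypothetical search endpoint and asserts that the application neither crashes nor leaks a database error:

```python
import requests

payload = "' OR '1'='1"  # classic SQL-injection probe string
resp = requests.get("https://example.com/search", params={"q": payload})

assert resp.status_code != 500, "server error suggests unsanitized input"
assert "sql syntax" not in resp.text.lower(), "database error message leaked to user"
```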

Test data management: Test data management involves providing the necessary data for testing scenarios. A bug in
this area could occur if the test data provided is incomplete, inaccurate, or inconsistent.

Communication and collaboration: Without proper communication and collaboration among the stakeholders,
developers, and software testers, there is a higher chance of misinterpreting the requirements. This can lead to
problems where the implemented functionality does not match the intended behavior or specifications.

Scalability: Testing for the ability to scale up as the user base grows. Scalability challenges can arise when a web
application experiences sudden traffic spikes.

Load Balancing: Ensuring that the load is distributed evenly across servers and that load balancing configurations
work as intended.

Accessibility: Testing for web accessibility to ensure that the application is usable by individuals with disabilities.

UNIT-06
McCall’s Quality Factors and Criteria
McCall's Quality Model, developed by Jim McCall, identifies 11 quality factors and associated criteria for assessing
software quality. These factors are categorized into three main perspectives: Product Operation, Product Revision,
and Product Transition.

This model classifies all software requirements into 11 software quality factors. The 11 factors are grouped into three
categories – product operation, product revision, and product transition factors.

Quality Factors: The higher-level quality attributes, which can be assessed directly, are called quality factors. These
attributes are external attributes. The attributes at this level are given more importance by users and managers.

Quality Criteria: The lower, second-level quality attributes, which can be assessed either subjectively or objectively,
are called quality criteria. These attributes are internal attributes. Each quality factor has several second-level
quality attributes, or quality criteria.
 Product operation factors − Correctness, Reliability, Efficiency, Integrity, Usability.
 Product revision factors − Maintainability, Flexibility, Testability.
 Product transition factors − Portability, Reusability, Interoperability.

Product Operation:

1. Correctness: This factor assesses whether the software performs its intended functions accurately.
2. Reliability: Reliability evaluates the software's ability to function without failure for an extended period.
3. Efficiency: Efficiency measures the software's performance in terms of resource utilization and execution speed.
4. Integrity: Integrity focuses on data security, ensuring that data remains secure and unaltered.
5. Usability: Usability evaluates the software's user-friendliness and ease of use.

Product Revision:

6. Maintainability: Maintainability assesses how easy it is to make modifications or improvements to the software.
7. Flexibility: Flexibility assesses the software's adaptability to changes in requirements.
8. Testability: Testability evaluates the ease of testing the software.

Product Transition:

9. Portability: Portability evaluates the software's ability to run on various platforms or environments.
10. Reusability: Reusability assesses how readily modules of the software can be reused to build new products.
11. Interoperability: Interoperability assesses the software's compatibility and ability to work with other systems.

These quality factors and criteria are essential for software quality management. Evaluating a software product based
on these factors helps ensure that it meets the desired quality standards.

ISO 9126 Quality Characteristics


ISO/IEC 9126 is an international standard that defines software quality characteristics, which are essential for
assessing and ensuring the quality of software. The standard was proposed to ensure the quality of all
software-intensive products, including safety-critical systems in which lives may be put in jeopardy if the software
fails. It categorizes software quality into six main characteristics:

1. Functionality: This characteristic assesses whether the software meets its intended functionality and whether it
provides the features and capabilities expected by users.
A set of attributes that bear on the existence of a set of functions that satisfy stated and implied needs.
 Suitability
 Accuracy
 Interoperability
 Security
 Functionality Compliance

2. Reliability: Reliability evaluates the software's ability to perform consistently and reliably, without failures or
errors, over time.
A set of attributes that bear on the capability of software to maintain its level of performance under stated
conditions for a stated period of time.
 Maturity
 Fault Tolerance
 Recoverability
 Reliability Compliance

3. Usability: Usability focuses on the user-friendliness and ease of use of the software. It assesses how well users
can interact with and navigate the software.
A set of attributes that bear on the effort needed for use by a stated or implied set of users.
 Understandability
 Learnability
 Operability
 Attractiveness
 Usability Compliance

4. Efficiency: Efficiency measures the software's performance in terms of resource utilization, speed, and
responsiveness. It evaluates how efficiently the software uses system resources.
A set of attributes that bear on the relationship between the level of performance of the software and the amount
of resources used, under stated conditions.
 Time Behavior
 Resource Utilization
 Efficiency Compliance

5. Maintainability: Maintainability assesses how easily the software can be modified, improved, or maintained. It
looks at the software's code and architecture to determine its adaptability to changes.
A set of attributes that bear on the effort needed to make specified modifications.
 Analyzability
 Changeability
 Stability
 Testability
 Maintainability Compliance

6. Portability: Portability evaluates the software's ability to run on different platforms and environments without
major modifications.
A set of attributes that bear on the ability of software to be transferred from one environment to another.
 Adaptability
 Installability
 Co-existence
 Replaceability
 Portability Compliance

Quality in Use Model:

In addition to the six product quality characteristics above, ISO 9126 also identifies four quality-in-use
characteristics:
 Effectiveness
 Productivity
 Safety
 Satisfaction

ISO 9000:2000 and Software Quality Management


ISO 9000:2000 is a standard that primarily focuses on quality management systems (QMS) and not specifically on
software quality management. However, it lays the foundation for establishing effective quality management
practices that can be applied to various industries, including software development. ISO 9000:2000 encourages
organizations to document their quality management processes and continuously improve them.

Software quality management ensures that the required level of quality is achieved by introducing improvements
into the product development process. It aims to develop a quality culture within the team, where quality is seen
as everyone's responsibility.

The modern view of quality associates a software product with several quality attributes, such as the following:

1. Portability: A software product is said to be portable if it can readily be made to work in various operating
system environments, on multiple machines, and with other software products.
2. Usability: A software product has better usability if various categories of users can easily invoke the functions of
the product.
3. Reusability: A software product has excellent reusability if different modules of the product can quickly be
reused to develop new products.
4. Correctness: A software product is correct if various requirements as specified in the SRS document have been
correctly implemented.
5. Maintainability: A software product is maintainable if bugs can be easily corrected as and when they show up,
new tasks can be easily added to the product, and the functionalities of the product can be easily modified, etc.

In the context of software quality management, ISO 9000:2000 can be used as a framework to:

1. Document Processes: It encourages organizations to document their software development and testing
processes. This documentation ensures that processes are defined and followed consistently.
2. Quality Assurance: ISO 9000:2000 promotes quality assurance practices, which are essential for software quality
management. This includes setting quality objectives, monitoring processes, and conducting regular audits.
3. Continuous Improvement: The standard emphasizes the importance of continuous improvement. Software
development organizations can use this principle to enhance their development and testing processes
continually.
