Manual Testing Q&A for Interview
A test report is a document that summarizes the results of testing activities, providing
stakeholders with an overview of the testing process, its outcomes, and any issues
found. A comprehensive test report usually contains the following information:
1. Test Summary: A brief overview of the testing project, including the testing scope,
objectives, and timelines.
2. Test Environment: Details about the testing environment, such as hardware, software,
and network configurations used for testing.
3. Test Cases: A list of test cases executed, including their IDs, descriptions, and
expected results.
4. Test Results: The actual results of each test case, indicating whether they passed,
failed, or were blocked.
5. Defects/Issues: A list of defects or issues found during testing, including their
descriptions, severity, and priority.
6. Defect Metrics: Statistics about the defects found, such as the number of defects,
defect density, and defect distribution by type or severity.
7. Test Coverage: An analysis of the testing coverage, including the percentage of
requirements or code covered by the tests.
8. Testing Effort: A breakdown of the testing effort, including the time spent on testing,
the number of testers involved, and the testing resources utilized.
9. Conclusion and Recommendations: A summary of the testing outcomes, highlighting
any critical issues, and providing recommendations for improvement.
10. Appendices: Additional information, such as test data, test scripts, or detailed defect
reports, which can be referenced as needed.
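Defect density (item 6 above) is usually reported as defects per unit of code size; a minimal sketch of the calculation in Python, with hypothetical numbers:

```python
# Defect density = defects found / size of the code under test (in KLOC,
# thousands of lines of code). Both figures below are hypothetical.
defects_found = 24
size_kloc = 12.0  # 12,000 lines of code

defect_density = defects_found / size_kloc
print(f"{defect_density} defects per KLOC")  # 2.0 defects per KLOC
```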
By producing a comprehensive test report, teams give stakeholders the visibility they
need to assess software quality, make informed release decisions, and improve future
testing efforts.
Primary Key:
A primary key is a column (or combination of columns) that uniquely identifies each
record in a table; it cannot contain duplicate or null values.
Example:
In a Users table, the UserID column can be the primary key, ensuring that each user has
a unique identifier.
UserID  Username  Email
1       JohnDoe   johndoe@example.com
2       JaneDoe   janedoe@example.com
3       BobSmith  bobsmith@example.com
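A runnable sketch of this Users table, using SQLite through Python's sqlite3 module (the Username and Email column names are assumed from the sample rows):

```python
import sqlite3

# In-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Users (
        UserID   INTEGER PRIMARY KEY,  -- unique identifier for each user
        Username TEXT,
        Email    TEXT
    )
""")
conn.executemany(
    "INSERT INTO Users VALUES (?, ?, ?)",
    [(1, "JohnDoe", "johndoe@example.com"),
     (2, "JaneDoe", "janedoe@example.com"),
     (3, "BobSmith", "bobsmith@example.com")],
)

# The primary key guarantees uniqueness: a duplicate UserID is rejected.
try:
    conn.execute("INSERT INTO Users VALUES (1, 'Duplicate', 'dup@example.com')")
except sqlite3.IntegrityError as err:
    print("Rejected:", err)
```

Probing the constraint with an invalid insert, as above, is a quick data-integrity check a tester can run directly against the database.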
Foreign Key:
A foreign key is a field in a table that references the primary key of another table.
It establishes a relationship between two tables, allowing us to link data across tables.
In testing, foreign keys are important because they help us to:
Verify data relationships and consistency
Test data integrity and referential integrity
Identify and test complex business rules and workflows
Example:
In an Orders table, the UserID column can be a foreign key that references
the UserID primary key in the Users table, establishing a relationship between users and
their orders.
OrderID  UserID  OrderDate
1        1       2022-01-01
2        1       2022-01-15
3        2       2022-02-01
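The Users-Orders relationship above can be sketched the same way; with foreign key enforcement enabled, SQLite rejects an order that references a non-existent user (the Orders column names are assumed from the sample rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when enabled
conn.execute("CREATE TABLE Users (UserID INTEGER PRIMARY KEY, Username TEXT)")
conn.execute("""
    CREATE TABLE Orders (
        OrderID   INTEGER PRIMARY KEY,
        UserID    INTEGER REFERENCES Users(UserID),  -- foreign key to Users
        OrderDate TEXT
    )
""")
conn.executemany("INSERT INTO Users VALUES (?, ?)", [(1, "JohnDoe"), (2, "JaneDoe")])
conn.executemany(
    "INSERT INTO Orders VALUES (?, ?, ?)",
    [(1, 1, "2022-01-01"), (2, 1, "2022-01-15"), (3, 2, "2022-02-01")],
)

# Referential integrity: an order for user 99, who does not exist, is rejected.
try:
    conn.execute("INSERT INTO Orders VALUES (4, 99, '2022-03-01')")
except sqlite3.IntegrityError as err:
    print("Rejected:", err)
```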
Testing Implications:
As a tester, I need to ensure that primary and foreign keys are properly defined and
enforced in the database.
I should test data integrity and consistency by verifying that primary keys are unique
and foreign keys reference valid primary keys.
I should also test business rules and workflows that rely on primary and foreign key
relationships.
By understanding primary and foreign keys, I can design more effective tests to ensure
that the database is functioning correctly and that the application is using the database
correctly.
ORDER BY:
The ORDER BY clause sorts the result set of a query by one or more columns, in
ascending (ASC, the default) or descending (DESC) order.
Example:
Suppose we have a Customers table with the following rows:
CustomerID  Name           Age
1           John Smith     25
2           Jane Doe       30
3           Bob Brown      20
4           Alice Johnson  35
If we want to retrieve the customers in alphabetical order by name, we can use the
following ORDER BY query:

SELECT * FROM Customers ORDER BY Name ASC;

This returns the rows sorted by Name:

CustomerID  Name           Age
4           Alice Johnson  35
3           Bob Brown      20
2           Jane Doe       30
1           John Smith     25
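The same sort can be exercised end to end; a sketch with Python's sqlite3, checking that ordering by Name returns the customers alphabetically:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT, Age INTEGER)")
conn.executemany(
    "INSERT INTO Customers VALUES (?, ?, ?)",
    [(1, "John Smith", 25), (2, "Jane Doe", 30),
     (3, "Bob Brown", 20), (4, "Alice Johnson", 35)],
)

# ORDER BY sorts the result set; ASC (the default) gives alphabetical order.
rows = conn.execute(
    "SELECT CustomerID, Name, Age FROM Customers ORDER BY Name ASC"
).fetchall()
for row in rows:
    print(row)  # (4, 'Alice Johnson', 35) first, (1, 'John Smith', 25) last
```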
SELECT Query:
Example:
Suppose we have a Customers table with the following
columns: CustomerID, Name, Email, and Phone. To retrieve all customers with their names
and emails, we can use the following SELECT query:

SELECT Name, Email FROM Customers;
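A runnable sketch of this SELECT, with one sample row assumed for the data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT, Email TEXT, Phone TEXT)"
)
conn.execute("INSERT INTO Customers VALUES (1, 'John Smith', 'john@example.com', '555-0101')")

# Listing columns explicitly returns only Name and Email, not the whole row.
rows = conn.execute("SELECT Name, Email FROM Customers").fetchall()
print(rows)  # [('John Smith', 'john@example.com')]
```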
UPDATE Query:
Example:
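A typical UPDATE against this Customers table; the scenario (changing a customer's phone number) and the sample values are assumed for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT, Email TEXT, Phone TEXT)"
)
conn.execute("INSERT INTO Customers VALUES (1, 'John Smith', 'john@example.com', '555-0101')")

# UPDATE changes existing rows; the WHERE clause limits which rows are affected.
conn.execute("UPDATE Customers SET Phone = '555-0202' WHERE CustomerID = 1")
phone = conn.execute("SELECT Phone FROM Customers WHERE CustomerID = 1").fetchone()[0]
print(phone)  # 555-0202
```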
DELETE Query:
Example:
Suppose we want to delete a customer with CustomerID 2. We can use the following
DELETE query:

DELETE FROM Customers WHERE CustomerID = 2;
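The DELETE can be verified the same way; a sketch checking that only customer 2 is removed (sample data assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT)")
conn.executemany(
    "INSERT INTO Customers VALUES (?, ?)",
    [(1, "John Smith"), (2, "Jane Doe"), (3, "Bob Brown")],
)

# DELETE removes only the rows matched by the WHERE clause.
conn.execute("DELETE FROM Customers WHERE CustomerID = 2")
remaining = [r[0] for r in conn.execute("SELECT CustomerID FROM Customers ORDER BY CustomerID")]
print(remaining)  # [1, 3]
```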
Alias:
An alias is a temporary name given to a table or column within a query, used to shorten
references and improve readability.
Example:
Suppose we have two tables: Orders and Customers. We want to retrieve the customer
names and order dates. We can use aliases to simplify the query:

SELECT c.Name, o.OrderDate
FROM Customers c
JOIN Orders o ON c.CustomerID = o.CustomerID;

In this example, c is an alias for the Customers table, and o is an alias for
the Orders table.
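A runnable sketch of the aliased join, with sample data assumed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT)")
conn.execute("CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY, CustomerID INTEGER, OrderDate TEXT)")
conn.execute("INSERT INTO Customers VALUES (1, 'John Smith')")
conn.execute("INSERT INTO Orders VALUES (1, 1, '2022-01-01')")

# c and o are aliases: shorthand names that keep the qualified columns readable.
rows = conn.execute("""
    SELECT c.Name, o.OrderDate
    FROM Customers c
    JOIN Orders o ON c.CustomerID = o.CustomerID
""").fetchall()
print(rows)  # [('John Smith', '2022-01-01')]
```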
Join:
A join is a way to combine data from two or more tables based on a common column.
There are several types of joins, including:
INNER JOIN: Returns only the rows that have matching values in both tables.
LEFT JOIN: Returns all the rows from the left table and the matching rows from the right
table.
RIGHT JOIN: Returns all the rows from the right table and the matching rows from the
left table.
FULL OUTER JOIN: Returns all the rows from both tables, with null values in the
columns where there are no matches.
In testing, joins are important because they help us to:
Retrieve related data from multiple tables
Validate data relationships and consistency
Test complex business logic and rules
Example:
Suppose we have two tables: Orders and Customers. We want to retrieve the customer
names and order dates. We can use an INNER JOIN to combine the data:

SELECT Customers.Name, Orders.OrderDate
FROM Customers
INNER JOIN Orders ON Customers.CustomerID = Orders.CustomerID;

In this example, the INNER JOIN combines the Customers and Orders tables based
on the CustomerID column.
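The difference between INNER and LEFT JOIN shows up as soon as one side has no match; a sketch where one customer has no orders (sample data assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT)")
conn.execute("CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY, CustomerID INTEGER, OrderDate TEXT)")
conn.executemany("INSERT INTO Customers VALUES (?, ?)", [(1, "John"), (2, "Jane")])
conn.execute("INSERT INTO Orders VALUES (1, 1, '2022-01-01')")  # Jane has no orders

# INNER JOIN keeps only customers with a matching order.
inner = conn.execute("""
    SELECT Customers.Name, Orders.OrderDate
    FROM Customers
    INNER JOIN Orders ON Customers.CustomerID = Orders.CustomerID
    ORDER BY Customers.CustomerID
""").fetchall()

# LEFT JOIN keeps every customer, filling the missing order columns with NULL.
left = conn.execute("""
    SELECT Customers.Name, Orders.OrderDate
    FROM Customers
    LEFT JOIN Orders ON Customers.CustomerID = Orders.CustomerID
    ORDER BY Customers.CustomerID
""").fetchall()

print(inner)  # [('John', '2022-01-01')]
print(left)   # [('John', '2022-01-01'), ('Jane', None)]
```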
Web Testing:
Web testing involves testing a website or web application to ensure it meets the
required standards and works as expected.
It focuses on the client-side of the application, which includes the user interface, user
experience, and functionality.
Web testing typically involves testing:
User interface and user experience
Browser compatibility and responsiveness
Performance and scalability
Security and accessibility
Functionality and workflow
Example:
Suppose we're testing an e-commerce website. Web testing would involve testing the
website's user interface, ensuring that the layout and design are consistent across
different browsers and devices, and verifying that the website's functionality, such as
checkout and payment processing, works correctly.
Application Testing:
Application testing covers the entire application, including the client-side user interface,
the server-side logic and APIs, and the database, verifying end-to-end functionality,
business logic, and data integrity.
Example:
Suppose we're testing a mobile banking application. Application testing would involve
testing the entire application, including the client-side user interface, the server-side
API, and the database. We would verify that the application's business logic, such as
transaction processing and account management, works correctly, and that the data is
consistent and secure.
Key Differences:
Scope: Web testing focuses on the client-side, while application testing focuses on the
entire application.
Depth: Web testing typically involves testing the surface-level functionality, while
application testing involves testing the underlying business logic and data integrity.
Complexity: Application testing is often more complex and involves testing multiple
components and systems.
Two-Tier Architecture:
A two-tier architecture consists of a client tier and a server tier: the client tier handles
user interaction and presentation, and the server tier manages data storage.
Example:
Suppose we're testing a simple banking application that allows users to view their
account balances. In a two-tier architecture, the client tier would be the user interface,
and the server tier would be the database that stores the account information.
Three-Tier Architecture:
A three-tier architecture consists of a client tier, an application tier, and a server tier.
The client tier is responsible for user interaction and presentation, the application tier
manages business logic and processing, and the server tier manages data storage.
The client tier communicates with the application tier, which in turn communicates with
the server tier.
Three-tier architecture is often used for more complex applications with a larger user
base.
Example:
Suppose we're testing an e-commerce application that allows users to browse products,
add them to a cart, and checkout. In a three-tier architecture, the client tier would be the
user interface, the application tier would manage the business logic of adding products
to the cart and processing payments, and the server tier would store the product
information and order data.
N-Tier Architecture:
An n-tier architecture extends the three-tier model to any number of tiers, each
responsible for a distinct function, so that large systems can be scaled, maintained, and
tested independently.
Example:
Suppose we're testing a complex enterprise application that involves multiple systems,
such as customer relationship management, inventory management, and order
processing. In an n-tier architecture, there could be multiple application tiers, each
managing a different aspect of the application, such as customer data, inventory levels,
and order fulfillment.
9. How to make sure test cases meet the
requirements?
Understanding the Requirements:
The first step is to thoroughly understand the requirements of the application or feature
being tested.
I review the requirements documents, such as the Business Requirements Document
(BRD), Functional Requirements Document (FRD), or User Stories, to ensure I have a
clear understanding of what needs to be tested.
Creating Test Cases:
Once I have a good understanding of the requirements, I create test cases that cover
each requirement.
I ensure that each test case is specific, measurable, achievable, relevant, and time-
bound (SMART).
I also ensure that each test case has a clear objective, preconditions, steps, expected
results, and any necessary data or inputs.
Review and Refinement:
After creating the test cases, I review them to ensure they meet the requirements.
I refine the test cases based on feedback from the development team, product owners,
or other stakeholders.
I also ensure that the test cases are prioritized based on risk, complexity, and business
value.
Traceability Matrix:
To ensure that each requirement is covered by at least one test case, I create a
traceability matrix.
The traceability matrix maps each requirement to one or more test cases, ensuring that
every requirement is tested.
I regularly update the traceability matrix as new requirements are added or changed.
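At its simplest, a traceability matrix is a mapping from requirement IDs to test case IDs; a minimal sketch in Python (all IDs are hypothetical) that flags uncovered requirements:

```python
# Hypothetical requirement and test case IDs, for illustration only.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # a gap: no test case covers this requirement yet
}

# Every requirement should map to at least one test case.
uncovered = [req for req, cases in traceability.items() if not cases]
print("Uncovered requirements:", uncovered)  # ['REQ-003']
```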
Continuous Improvement:
Finally, I continuously review and refine the test cases to ensure they remain relevant
and effective.
I update the test cases based on changes to the requirements, and ensure that the test
cases are aligned with the evolving needs of the application or feature.
By following these steps, I can ensure that the test cases meet the requirements, and
that the application or feature is thoroughly tested to meet the needs of the end-users.
10. How do you maintain traceability between test
cases and defects?
I use test management tools such as TestRail, PractiTest, or TestLink to store and
manage test cases, test suites, and test runs.
These tools allow me to create a unique identifier for each test case, which helps in
tracking and tracing defects.
I use defect tracking tools such as JIRA, TFS, or Bugzilla to report and track defects.
These tools allow me to create a unique identifier for each defect, which helps in linking
it to the relevant test case.
Bi-Directional Traceability:
I maintain bi-directional traceability between test cases and defects, meaning that I can
easily navigate from a test case to the related defects and vice versa.
This helps me to quickly identify the test cases that are affected by a defect, and to
ensure that the defect is properly tested once it's fixed.
I maintain a record of test case execution history, including the test runs, test results,
and defects reported.
This helps me to track the defects that were reported during a specific test run, and to
ensure that the defects are properly re-tested once they're fixed.
I update the defect status in the defect tracking tool as the defect is fixed, verified, and
closed.
I also update the test case status in the test management tool to reflect the changes in
the defect status.
I generate reports and metrics to track the defect density, defect leakage, and test
effectiveness.
These reports help me to identify areas for improvement in the testing process, and to
optimize the testing efforts to reduce defects.
I collaborate with the development team to ensure that the defects are properly fixed
and verified.
I provide them with detailed information about the defects, including the test case ID,
test case description, and the steps to reproduce the defect.
By maintaining traceability between test cases and defects, I can ensure that defects
are properly reported, tracked, and fixed, and that the testing process is efficient and
effective.
11. What are defect management and test
management tools?
Defect Management Tool:
A defect management tool is a software application that helps track, manage, and
resolve defects or bugs found during the testing process.
It allows testers to report, track, and manage defects in a centralized manner, ensuring
that defects are properly documented, assigned, and resolved.
Some popular defect management tools include:
JIRA
TFS (Team Foundation Server)
Bugzilla
Mantis
Trac
Test Management Tool:
A test management tool is a software application that helps plan, organize, and execute
testing activities.
It allows testers to create, manage, and execute test cases, test suites, and test runs,
and to track test results and defects.
Some popular test management tools include:
TestRail
PractiTest
TestLink
Zephyr
HP ALM (Application Lifecycle Management)
By using defect management and test management tools, I can streamline the testing
process, improve testing efficiency, and ensure that defects are properly reported and
resolved.
12. What are the elements of a test case?
A well-written test case typically includes the following elements:
Test Case ID: A unique identifier for the test case, used to track and reference the test
case.
Test Case Description: A brief summary of the test case, including the purpose and
objective of the test.
Preconditions: The conditions that must be met before executing the test case, such
as specific software versions or configurations.
Steps: A series of numbered steps that outline the actions to be performed during the
test, including:
Actions: The specific actions to be taken, such as clicking a button or entering data.
Expected Results: The expected outcome or behavior of the system after each action.
Expected Result: The overall expected result of the test case, including any specific
outputs or behaviors.
Test Data: Any data required to execute the test case, such as input values or sample
data.
Priority: The level of importance assigned to the test case, indicating how critical it is to
the overall testing effort.
Risk: An assessment of the risk associated with the test case, including the potential
impact of failure.
Test Environment: The specific environment in which the test case should be
executed, including hardware, software, and network configurations.
Dependencies: Any dependencies or relationships between this test case and other
test cases or requirements.
Comments: Any additional notes or comments related to the test case, such as
assumptions or limitations.
Optional Elements:
Test Case Type: The type of test case, such as functional, regression, or usability.
Requirements Traceability: A link to the specific requirement or user story being
tested.
Automated Test Script: A reference to an automated test script that corresponds to
the test case.
Test Case Status: The current status of the test case, such as pass, fail, or blocked.
By including these elements, a test case provides a clear and concise outline of the
testing process, ensuring that testing is thorough, consistent, and repeatable.
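The elements above map naturally onto a structured record; a sketch of one test case as a Python dictionary, with all values hypothetical:

```python
# One test case captured as a structured record; field names follow the
# elements listed above, and all values are hypothetical.
test_case = {
    "id": "TC-101",
    "description": "Verify login with valid credentials",
    "preconditions": ["User account exists", "Application is reachable"],
    "steps": [
        {"action": "Open the login page", "expected": "Login form is displayed"},
        {"action": "Enter valid credentials and submit", "expected": "Dashboard loads"},
    ],
    "expected_result": "User is logged in and sees the dashboard",
    "test_data": {"username": "johndoe"},
    "priority": "High",
    "environment": "Chrome on Windows 11",
    "status": "Not Run",  # becomes pass / fail / blocked after execution
}

# Each step pairs an action with its expected result, keeping execution repeatable.
assert all({"action", "expected"} <= set(step) for step in test_case["steps"])
```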
13. What is the difference between severity and
priority?
Severity:
Severity refers to the impact or potential impact of a defect on the application or system.
It measures the degree of damage or loss that a defect could cause, in terms of data,
functionality, or user experience.
Severity is usually categorized into levels, such as:
Critical: Defect causes a complete system failure or significant data loss.
High: Defect causes significant functionality impairment or data corruption.
Medium: Defect causes some functionality impairment or minor data issues.
Low: Defect is cosmetic or has minimal impact on functionality.
Priority:
Priority refers to the urgency with which a defect should be fixed, based on business
requirements, customer needs, and project timelines. Like severity, it is usually
categorized into levels such as High, Medium, and Low.
Key differences:
Severity focuses on the potential impact of a defect, while priority focuses on the
urgency of fixing the defect.
Severity is usually determined by the testing team, based on their technical expertise
and understanding of the application or system, while priority is typically set by the
product owner, project manager, or business stakeholders, based on business
requirements, customer needs, and project timelines.
In some cases, the two are decided jointly by the testing team and the product
owner/project manager, taking into account both technical and business considerations.
By understanding the difference between severity and priority, we can ensure that
defects are properly categorized and addressed in a timely and effective manner.
14. What are entry and exit criteria?
Entry Criteria:
Entry criteria define the conditions that must be met before a testing phase or activity
can begin.
They ensure that the testing team has a clear understanding of what is required to start
testing, and that all necessary prerequisites have been met.
Examples of entry criteria include:
Completion of previous testing phases or activities
Availability of required test environments, tools, and resources
Receipt of necessary test data, inputs, or documentation
Completion of defect fixes or changes to the application or system
Approval from stakeholders or project managers to proceed with testing
Exit Criteria:
Exit criteria define the conditions that must be met before a testing phase or activity can
be considered complete.
They ensure that the testing team has a clear understanding of what is required to
conclude testing, and that all necessary objectives have been met.
Examples of exit criteria include:
Completion of all planned test cases or test scripts
Achievement of desired test coverage or quality metrics
Resolution of all critical or high-priority defects
Obtaining stakeholder or project manager approval to conclude testing
Meeting specific testing deadlines or milestones
Importance of Entry and Exit Criteria:
Entry and exit criteria are important because:
They provide a clear understanding of the testing scope, objectives, and timelines
They ensure that testing is thorough, consistent, and repeatable
They help to identify and mitigate risks, and ensure that testing is properly planned and
executed
They facilitate communication and collaboration among team members, stakeholders,
and project managers
They enable the testing team to measure progress, track defects, and report testing
results
By establishing clear entry and exit criteria, we can ensure that testing is conducted in a
structured and controlled manner, and that we meet our testing objectives and deliver
high-quality results.