
1.

Document a guideline for the QA improvement process


a. Objective
b. Activities
c. Test Cases writing steps
d. Other details

2. Give a training session for the process


3. Start practicing the process

Phases to rollout
1. Define the process
2. Give training about the process
3. Participate in sprint planning meeting to help QA
a. How to actively participate in understanding the sprint deliverable items
b. How to define the scope and identify the impact areas
c. How to manage test planning and execution, as outlined below:

Create a test plan

Create test cases against each PBI

Assign test cases to the test plan

Assign test cases to a test set
Add test steps to each test case
Add testing preconditions against each PBI

Create test executions under the test plan based on the number of test cycles run on the environment:
test execution 1
test execution 2
test execution 3
...

Assign test sets/test cases to the test execution


Mark the test execution in progress

Run the test cases by clicking on the Run button

Start the timer
Perform the test steps
Did the test steps pass?
If yes:
    mark the test step passed
    stop the test
If no:
    mark the test step failed
    add a defect
    log the bug and set the AWS environment field
    mark the test case failed
    stop the test
End
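To make the structure of the flow above easier to follow, here is a minimal Python sketch of the test-management hierarchy it describes (test plan, test cases, test sets, and test executions). The class and field names are illustrative assumptions only, not the API of any specific tool.

```python
# Minimal sketch of the test-management hierarchy described above.
# Names are illustrative only; they do not correspond to any tool's actual API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestStep:
    action: str
    expected_result: str
    status: str = "Not Run"           # Passed / Failed / Not Run

@dataclass
class TestCase:
    case_id: str
    pbi_id: str                        # the PBI this case verifies
    preconditions: List[str] = field(default_factory=list)
    steps: List[TestStep] = field(default_factory=list)

@dataclass
class TestSet:
    name: str
    cases: List[TestCase] = field(default_factory=list)

@dataclass
class TestExecution:
    name: str                          # e.g. "Test execution 1"
    cases: List[TestCase] = field(default_factory=list)
    status: str = "To Do"              # To Do / In Progress / Done

@dataclass
class TestPlan:
    sprint: str
    cases: List[TestCase] = field(default_factory=list)
    executions: List[TestExecution] = field(default_factory=list)

# One execution is created per test cycle run on the environment, and
# test cases/sets are assigned to it before the run starts.
plan = TestPlan(sprint="Sprint 12")
case = TestCase(case_id="TC-001", pbi_id="PBI-101",
                preconditions=["User account exists on the QA environment"],
                steps=[TestStep("Open the login page", "Login form is displayed")])
plan.cases.append(case)
execution = TestExecution(name="Test execution 1", cases=[case])
plan.executions.append(execution)
```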
We have to finalize a simple yet inclusive DoD that can be used as a standard for all
types of projects we undertake at DPL. Our deadline is Fri, Sep 23.

Here is a simple DoD that we can start our discussion from:

• Acceptance criteria met
• Code reviewed
• Regression tests passed
• Functional tests passed
• Non-functional tests passed
• No critical bugs
Definition Of Done

I have added a few items to the above list:

1. Acceptance criteria met


2. Unit test cases defined/attached
3. Unit test cases passed
4. Separate code branch has been created for a code change
5. Code reviewed as per the code guideline document
6. Regression tests passed
7. Functional tests passed
8. Non-functional tests passed
9. UI/UX reviewed by design team as per the design guideline document
10. No critical bugs
11. Required artifacts have been attached (feature documentation, sprint/release notes, etc.)
12. Product Owner has accepted the increment (user story, feature, etc.)
13. Increment has been deployed to UAT
14. UAT signed off
15. Increment has been deployed to Production
16. Code has been merged into the production branch
DPL Definition of Done
Shared on 26-Sep-22

DPL Rebels: Welcome back from the weekend.

I hope you are charged up for the new week ahead. Let's start with the Definition of Done (DoD). Yeah, I
know it doesn't sound that exciting, but trust me, with a good DoD you will be able to accomplish
more in the same amount of time.

But first, let's hear what the Scrum Guide has to say about the DoD:

• The Definition of Done is a formal description of the state of the Increment when it meets the
quality measures required for the product.
• If a Product Backlog item does not meet the Definition of Done, it cannot be released or even
presented at the Sprint Review.
• If the Definition of Done for an increment is part of the standards of the organization, all
Scrum Teams must follow it as a minimum.

To fulfil these requirements, some of the POs and SMs volunteered to put together a standard DoD for
DPL. This DoD was then reviewed and agreed upon by all POs and SMs. Now it is being presented to you to
incorporate into your projects.

Standard DoD for DPL:

1. Acceptance criteria met


2. Unit tests passed
3. Regression tests passed
4. Functional tests passed
5. Non-functional tests passed
6. No showstopper bugs
7. Code reviewed and checked-in
8. Accepted by PO

PO & SM - Please review this DoD with your team and get it incorporated in your projects within this
week (e.g. each story must only be marked done if all these items are checked). If a project requires
more stringent rules, then please go ahead, and add those requirements to the list. You must not,
however, remove any item from the standard DoD.
QA Improvement - Path to Success
What is QA?

• The product should do what it is expected to do

• Meet the acceptance criteria

• Fulfil the client requirements described in a PBI

• How can we estimate the scope of testing?

• How do we know/see the impact?

• Where is a developed item used?

• How do we control the missing elements?

• Engage with clients in requirement meetings

• Knowledge?

• Try to cover the client's requirements before development starts

• WHAT IS THE PROBLEM?

• WHY ARE WE FACING THIS PROBLEM? WHAT IS THE ROOT CAUSE?

• Time constraints for QA?

• Engagement of QA with the PO.


Client Expectations

1. Working Software

1. New Feature Should Work

2. Existing Features Should Work

2. Minimal Time to Perform UAT

3. No Surprises

4. No Bugs
Execution: Package Delivery

1. Backlog Management

• Detail Oriented + DoD

2. Quality Assurance

• Processes

• Manual Testing: Write test cases

• Automation: Automate test cases

• CI/CD: Create pipeline from day 1

• Continuous Improvement (Feedback)

3. Production Support

Talha has some reports he used to send to C-level and executive stakeholders.

The PO is responsible for submitting team allocation data to the finance department in the last week of the month so
that the finance team can submit the invoice on time. Here is the template.

https://dplit-my.sharepoint.com/:x:/p/waleed_r/EbBwJRrK01VEksFm2MwfbfMBh6DYqjvigNigJMzqWbTNjw?e=eJjeIz
Target: ZERO bugs should be reported on UAT

- Ensure deployment on UAT

- Environment configuration in a QA document

- Use a mature deployment method to ensure successful deployment on UAT via CI/CD

- Monitor and control deployment on the QA environment

- DB schema/data

- Compare the QA/UAT schema (a comparison sketch follows at the end of this list)

- Compare configuration data on QA/UAT

- Web/App deployment

- Same code deployment on QA/UAT

- Test execution against the scope

- Identify the scope of a PBI

- Find the impact -> dependency of one item on another

- List use cases of a PBI

- How to identify use cases? From the requirements (PBI, acceptance criteria)

- Product Knowledge -100%

- Challenges

- Developer enforces QA for restricted QA

- Information access restriction

- 90% (3)

- 70%-90% (1)

- 50% - 70% (2)

- < 50% (1)


Test Cases???

1. What is a test case? What will be the output for a given input -> the acceptance criteria of a
user story

1. Test Scenario: Single line

2. Test Cases: Detailed with multiple cases

1. Environment/Precondition

2. Input value

3. Output value/message

4. Where to check it

2. How to write?

3. Where to write?

4. How to manage?

5. How to ensure execution?

6. Test report

7. How to accept the reason for a reported bug and improvements

8. Difference between bug and observation

9. Manual testing techniques


How to write a Test Case?

1. Step 1: Test Case ID

2. Step 2: Test Description

3. Step 3: Assumptions and Pre-Conditions

4. Step 4: Test Data

5. Step 5: Steps to be Executed

6. Step 6: Expected Result – Where to check it

7. Step 7: Actual Result and Post-Conditions

8. Step 8: Pass/Fail
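To make the eight fields above concrete, here is a small illustrative example of a single filled-in test case, expressed as a Python dictionary. The IDs and values are hypothetical sample data, not taken from any real project.

```python
# Illustrative example of one test case filled in against the eight fields above.
# All values are hypothetical and only show the shape of a well-formed test case.
test_case = {
    "Test Case ID": "TC-LOGIN-001",
    "Test Description": "Verify that a registered user can log in with valid credentials",
    "Assumptions and Pre-Conditions": [
        "User account exists",
        "Application is reachable on the QA environment",
    ],
    "Test Data": {"username": "qa.user@example.com", "password": "<valid password>"},
    "Steps to be Executed": [
        "Open the login page",
        "Enter the username and password",
        "Click the Login button",
    ],
    "Expected Result": "User is redirected to the dashboard (check the page title and user name)",
    "Actual Result and Post-Conditions": "",   # filled in during execution
    "Pass/Fail": "",                           # filled in during execution
}
```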
Challenges to writing Test Cases?

1. Time constraint

2. Didn’t plan on time

3. Unplanned items to test during a sprint

4. Tool is taking time to create test cases

5. Resource Shortage/Need Multiple resources

6. Not taking/defining action items after backlog refinement/grooming

7. Change in scope during sprint execution

8. Time Constraint:

9. Raise the challenge with stakeholders during a sprint

10. QA should define time to fulfill their needs

11. QA input is required to plan a sprint to meet quality

12. QA should be a part of solutioning sessions

13. Shift left strategy

14. Didn’t plan on time

15. Need to execute all sprint events on a scheduled time

16. Unplanned items to test during a sprint

17. How to share an estimate of production defects?

18. Lifecycle of production defects

19. Production defect reported

20. Bug Triage – Investigate L1 – [Product Team – PO, PA]

21. Is this really a defect or not?

22. What is its severity?

23. RCA – Root Cause Analysis – [QA]

24. Plan its fix – [PO, QA, DEV], Final approval Customer

25. DEV

26. QA

27. Rollout – HotFix –

28. On Production Branch


29. Merge the current sprint branch for QA

30. Tool is taking time to create test cases

31. Resource Shortage

32. Test cases should be reviewed by someone else

33. Not taking/defining action items after backlog refinement/grooming

34. Participate in backlog grooming sessions and define use cases at the end of the
meeting

35. Change in scope during sprint execution

36. Need to spend time on the last deployment for which no dedicated time is
allocated

37. Need to allocate time for it

How to write a Test Case?

1. Step 1: Test Case ID

2. Step 2: Test Description

3. Step 3: Assumptions and Pre-Conditions

4. Step 4: Test Data

5. Step 5: Steps to be Executed

6. Step 6: Actual Result and Post-Conditions

7. Step 7: Pass/Fail
How to write a Test Case? Challenges

1. Time constraint

1. Load

1. Resource shortage

2. Over commitment from QA

1. Better plan before commitment

3. Stretched sprint goals

2. Tool

1. Need to use the same tool in DPL


How to write a Test Case?

1. Step 1: Test Case ID - Test cases should all bear unique IDs to represent them. In
most cases, following a convention for this naming ID helps with organization,
clarity, and understanding.

2. Step 2: Test Description - This description should detail what unit, feature, or
function is being tested or what is being verified.

3. Step 3: Assumptions and Pre-Conditions - This entails any conditions to be met


before test case execution. One example would be requiring a valid Outlook
account for login.

4. Step 4: Test Data - This relates to the variables and their values in the test case.
In the example of an email login, it would be the username and password for the
account.

5. Step 5: Steps to be Executed - These should be easily repeatable steps as
executed from the end user's perspective. For instance, a test case for logging
into an email server might include these steps (see the sketch after this list).

6. Step 6: Expected Result - This indicates the result expected after the test case
step execution. Upon entering the right login information, the expected result
would be a successful login.

7. Step 7: Actual Result and Post-Conditions - As compared to the expected result,


we can determine the status of the test case. In the case of the email login, the
user would either be successfully logged in or not. The post-condition is what
happens as a result of the step execution such as being redirected to the email
inbox.

8. Step 8: Pass/Fail – Mark the test case as PASS or FAIL based on whether the actual result matches the expected result.
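Below is a hedged, pytest-style sketch of the email login example referenced in Step 5, showing how the pre-condition, test data, steps, and expected result map onto an automated check. The EmailClient class is a hypothetical stand-in for the real system under test.

```python
# Hedged pytest-style sketch of the email login test case described above.
# EmailClient and its login method are hypothetical stand-ins for the real system.
import pytest

class EmailClient:
    VALID = {"qa.user@example.com": "correct-password"}

    def login(self, username, password):
        """Return True when the credentials match a known account."""
        return self.VALID.get(username) == password

@pytest.fixture
def client():
    # Pre-condition (Step 3): a valid account exists and the client is reachable.
    return EmailClient()

def test_login_with_valid_credentials(client):
    # Test data (Step 4)
    username, password = "qa.user@example.com", "correct-password"
    # Steps to be executed (Step 5)
    logged_in = client.login(username, password)
    # Expected result (Step 6): the login succeeds.
    assert logged_in is True
```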


• The QA team will create a test plan during sprint planning.
• The QA team will add test cases against the PBIs during and after the planning
meeting. This step will be completed within the first two days of the sprint.
• Add preconditions against the test cases.
• Steps will be added against each test case.
• Test cases will be assigned to the test plan.
• A new test set will be created if there is no existing test set available for the
feature.
• Test executions will be created under the test plan based on the number of test
cycles run on the environment.
• QA will assign test cases and test sets to the test execution.
• QA team members will mark the test execution in progress.
• Run test cases by clicking on the Run button.
• Start the timer.
• Perform the test steps.
• If the test passes:
    • Mark the test case as passed.
    • Stop the test.
    • Process completed.
• If the test fails:
    • Mark the test case as failed.
    • Add a defect.
    • Log the bug, set the AWS environment field, and assign the sprint and the team.
    • Assign it back to the developer.
    • Process completed.
• For each iteration in the environment, new test executions will be created.
• Record the test case in Katalon for automation purposes if the test case passes.
• Assign it to the Dev team for refactoring.
• The PBI will be assigned to the PO for UAT approval.
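The pass/fail branching in the workflow above can be summarised in a short Python sketch. It is only an illustration of the decision flow; the step dictionaries, the defect-logging stub, and the AWS-QA environment name are assumptions, not the behaviour of the actual test tool.

```python
# Illustrative sketch of the run/pass/fail branching described above.
def perform_step(step: dict) -> bool:
    """Placeholder for the manual or automated check of a single test step."""
    return step.get("actual_result") == step.get("expected_result")

def log_defect(case_id: str, step: dict, environment: str = "AWS-QA") -> None:
    """Placeholder for logging the bug and setting the AWS environment field."""
    print(f"Defect on {environment}: {case_id} failed at step '{step['action']}'")

def run_test_case(case_id: str, steps: list[dict]) -> str:
    """Mark the case failed at the first failing step, otherwise passed."""
    for step in steps:
        if perform_step(step):
            step["status"] = "Passed"
        else:
            step["status"] = "Failed"
            log_defect(case_id, step)   # then assign the PBI back to the developer
            return "Failed"
    return "Passed"                      # candidate for recording in Katalon

steps = [{"action": "Open login page", "expected_result": "Form shown", "actual_result": "Form shown"}]
print(run_test_case("TC-001", steps))    # -> Passed
```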
• Test Cases:

• Review the PBI to create its test cases

• Ensure that acceptance criteria are defined for the PBI

• If acceptance criteria are not defined:

• Talk to the PO to define them; otherwise the test cases will miss the
requirement/scope

• Ensure the test cases cover all acceptance criteria

• Follow all 7 steps which we discussed earlier

• Attach/link each test case to the relevant PBI

• Perform the test steps.

• Verify the test results as per the defined expected result

• If the test passes:

• Mark the test case as passed.

• Stop the test.

• Add/update relevant information to the test case and PBI

• Process completed.

• If the test fails:

• Mark the test case as failed.

• Add a defect with steps to reproduce, supporting images, videos, content, etc.

• Log the bug and assign the sprint and the team.

• Add/update relevant information to the test case and PBI

• Assign the PBI back to the developer

• Process completed.

The PBI will be assigned to the PO for UAT approval.
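One way to make the "ensure the test cases cover all acceptance criteria" check mechanical is sketched below. The PBI criteria and the test-case links are hypothetical sample data used only to illustrate the idea.

```python
# Hedged sketch of an acceptance-criteria coverage check for a PBI.
def uncovered_criteria(acceptance_criteria: list[str],
                       test_cases: dict[str, list[str]]) -> list[str]:
    """Return acceptance criteria not linked to any test case."""
    covered = {c for criteria in test_cases.values() for c in criteria}
    return [c for c in acceptance_criteria if c not in covered]

pbi_criteria = ["AC1: valid login succeeds", "AC2: invalid login shows an error"]
cases = {"TC-001": ["AC1: valid login succeeds"]}          # TC-002 not written yet
print(uncovered_criteria(pbi_criteria, cases))              # -> ['AC2: invalid login shows an error']
```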


How to Identify the Number of Test Cases to be written?

Real Scenarios from the existing project?


Different Types of Test Cases

Test cases can measure many different aspects of code. The steps involved may also be intended to
induce a Fail result, as opposed to a positive expected result, such as when a user inputs the wrong
password on a login screen.
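As a minimal sketch of such a negative test, assuming a hypothetical check_login function standing in for the real login service:

```python
# Negative-test sketch: the expected outcome is that the login is rejected.
def check_login(username: str, password: str) -> bool:
    valid = {"qa.user@example.com": "correct-password"}   # illustrative data
    return valid.get(username) == password

def test_login_fails_with_wrong_password():
    # The "Fail" path of the login is the desired outcome of this test case.
    assert check_login("qa.user@example.com", "wrong-password") is False
```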

Some common test case examples would be the following:


Performance Characterization – Breakpoint of our application(s)

A messaging system can be judged on its performance in four aspects—scalability, availability, latency,
and throughput. These factors are often at odds with each other, and the architect often needs to figure
out which aspect to compromise in order to improve the others:

• Scalability: This is how the system is able to handle increases in load without noticeable
degradation of the other two factors, latency or availability. Here, a load can mean things such as
the number of topics, consumers, producers, messages/sec, or average message size.

• Availability: In a distributed system, a variety of problems can occur at a unit level (servers,
disks, networks, and so on). The system's availability is a measure of how resilient the system is
to these failures so that it is available to end users.

• Latency: This is how much time it takes for a message to get to a consumer from a producer.

• Throughput: This is how many messages can be processed per second by the messaging system.

A classic tradeoff is between latency and throughput. To optimize throughput, we can batch messages
and process them together. But this has a very negative effect on latency.
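A back-of-the-envelope Python sketch of this latency/throughput trade-off is shown below. The per-batch and per-message costs are made-up numbers used only to illustrate how larger batches raise throughput while worsening worst-case latency.

```python
# Illustrative numbers only: fixed cost per batch vs. marginal cost per message.
PER_BATCH_OVERHEAD_MS = 5.0    # cost of sending one batch, regardless of size
PER_MESSAGE_COST_MS = 0.1      # marginal cost of each message in the batch

def batch_metrics(batch_size: int):
    batch_time_ms = PER_BATCH_OVERHEAD_MS + PER_MESSAGE_COST_MS * batch_size
    throughput_msgs_per_s = batch_size / (batch_time_ms / 1000.0)
    # A message may wait for the whole batch before it is sent,
    # so worst-case latency grows with the batch size.
    worst_case_latency_ms = batch_time_ms
    return throughput_msgs_per_s, worst_case_latency_ms

for size in (1, 10, 100, 1000):
    tput, lat = batch_metrics(size)
    print(f"batch={size:5d}  throughput~{tput:10.0f} msg/s  worst-case latency~{lat:7.1f} ms")
```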
A. Production Defect Fixing Process (Severity 3):
1. LOGGING Process: Support Coordinator TAT: Same day
i. Issue received via any communication channel (email/call/text/client
portal/MS Teams/onsite discussion, etc.)
ii. A support engineer will review the issue and request the support coordinator
to open a call ID for the reported issue with the correct severity, category, and
subject
iii. The support coordinator will assign a Call-ID to a support engineer for the L2
investigation

2. INVESTIGATION Process: Support Engineer TAT: 36 Hours, not exceeding 4 Working Days
i. A support engineer will investigate the reported issue; the investigation must include the
following
1. Problem definition
a. Actual problem statement rather than a client’s reported
subject/summary
2. Problem Nature
a. Verify whether the reported issue is new or has reoccurred after a
recent or previous fix was provided
3. Impact:
a. How the reported issue is impacting the system
4. History
a. Did the issue occur in the past? When, and what solution was
provided, etc.?
5. Trigger event:
a. The first occurrence of the issue
b. What was the last change after which the issue started occurring?
6. Scope
a. Which schemes, transactions, channels, issuing, acquiring, on-
us, off-us are impacted/triggered/involved in the issue
7. Impacted modules
a. Which modules of the application are impacted due to the issue
8. Frequency/Recency/Volume:
a. What is the frequency, recency, and volume of the issue?
9. Supporting logs:
a. End-to-end log extract of the reported transaction from all logs,
starting from where the transaction lands in Iris to where Iris
responds back to the requester
b. End-to-end log extract of a good case of the reported
transaction from all logs, starting from where the transaction
lands in Iris to where Iris responds back to the requester
10. Support information:
a. Transaction log record extract of the identified issue
b. Transaction log record extract of the good case
c. All impacted/dependent configurations, packages, and record
extracts from production
11. Support documents
a. Supporting documents that need to be referred to for the reported
issue, including but not limited to DB packages, table extracts,
binaries, etc.
12. A support engineer will submit an L2 investigation via Tracker with
any one of the following outcomes
a. Root cause identified: this is the point that causes the problem
b. Root cause not identified: if no clue that causes the problem is
identified via any log/database, then the support
engineer must discuss the investigation with his immediate
supervisor. The immediate supervisor must review the
investigation and guide the support engineer in further
investigation
ii. Fill in the L2 investigation in the Tracker Call-ID and attach all information
iii. Assign the Call-ID to the development team on Development DD Required
status

3. INVESTIGATION REVIEW Process: Development TAT: Max. 4 Hours


i. A lead or a senior resource must review the L2 investigation.
ii. If more information is required from the L2 investigation or production, then
update the information in Tracker and assign the Call-ID to the support engineer
iii. Else, review the code and
1. Identify the required code level change
2. Propose a fix with the scope and impact areas
3. Break down the solution into smaller tasks and assign them to a developer
via a tool
4. Create a new code branch on the latest production code on TFS
5. Define development timelines and align developer to fix it
6. Update/Define development DD in Tracker
7. Assign a Call-ID to QA to understand the issue and review the scope
on QA DD Required status
8. Notify development DD to support coordinator to get QA DD
9. Give a basic understanding of the proposed fix to the developer
10. Assign a Call-ID to the developer to start development

4. QA SCOPE REVIEW Process: QA TAT: Max. 4 Hours


i. Understand the reported issue from the L2 investigation
ii. Understand the proposed solution, scope, and impacted areas from the L3
investigation
iii. Update/Define QA DD in Tracker
iv. Assign Call-ID to development on Awaiting Development Start status
5. DEVELOPMENT Process: Development TAT: 46 Hours minus INVESTIGATION
REVIEW time
i. Get an understanding of all information required for the proposed fix from
the lead and assign the Call-ID to himself on Development In Progress status
ii. Prepare unit cases and get them reviewed by the lead
iii. Check out code from the Call-ID branch of TFS onto the development VM
iv. Make the code-level change to fix the issue. This also includes any change in DB or
configuration
v. Perform all unit test cases
vi. Save log chunks of the reported issue after the fix
vii. Check in the code into the respective branch on TFS

6. DEVELOPMENT REVIEW Process: Development TAT: - Hours minus INVESTIGATION REVIEW and DEVELOPMENT time
i. Get a review of the updated code by the lead
ii. If any change is required in the code, then continue the DEVELOPMENT process
iii. If no change is required, then add/update the development package artifacts on
Tracker, including
1. Instruction manual
2. Unit test case sheet
3. Successful log chunks
4. TFS Changeset ID
5. Development package detail (A table used to share development
package information for QA. It must include pre-requisite for this Call-
ID)
iv. Assign a Call-ID to QA on Awaiting QA Start status

7. QA Process: QA TAT: 36 Hours minus QA SCOPE REVIEW time


i. Understand the fix provided by the development and reassign the Call-ID to
himself on QA In Progress status
ii. Prepare all test cases and get them reviewed by the lead
iii. Perform all test cases
iv. If any observation is identified, then update the observations in Tracker and assign
the Call-ID to the developer on Development In Progress
v. If no observation is identified, then reassign the Call-ID to himself on Reviewed by
QA status and get it reviewed by the lead
vi. If any change in test cases, then update Call-ID status on QA In Progress and
continue QA Process
vii. If no change in test cases, then start packaging which includes
1. Update instruction manual including a list of artifacts and package
detail with sum and size
2. Compile all binaries, DB packages, etc.
3. All test cases
4. Successful log chunks of all test cases
5. Attach all artifacts in Tracker
6. Provide sign-off on QA package to notify all stakeholders (QA,
Development, Support)
7. Release QA verified package via Tracker and assign Call-ID to support
on Level 1 status to release the package to the client

8. PACKAGE RELEASE Process:


i. The support coordinator will check and ensure all artifacts are included in the
provided package
ii. If any information is found missing, then assign the Call-ID to QA for the required
information on Awaiting QA Start status
iii. If no information is required, then release the package to the client and
update the Call-ID status to Patch Dispatched To Client

9. CODE MERGING Process:


i. This process will start just after QA signs off on the package
ii. The development lead will merge the Call-ID branch code into the TFS
production

10. Perform RCA of the code bug and take the necessary actions to prevent its recurrence

• GET A REVIEW/UNDERSTANDING OF THE NEW TRACKER VERSION SO EXISTING
PROCESSES CAN BE UPDATED AS PER NEW TRACKER FUNCTIONALITY
• I HAVE USED/TAGGED EACH PROCESS STEP WITH EXISTING TRACKER
STATUSES
• Under the audit step, get monthly feedback from developers about challenges they faced
during development
