
Manual Testing - Full Course

Module-1: Software Testing Concepts


 What is Software? Types of Software
 What is Software Testing?
 What is Software Quality?
 Project vs Product
 Why do we need testing
 Error vs Bug vs Failure
 Why does Software have bugs?
 SDLC (Software Development Life Cycle)
 Waterfall model
 Spiral model
 V-model
 White Box & Black Box Testing
 Static & Dynamic Testing
 Verification & Validation
 QA vs QC vs QE
 Levels of Software Testing
 System Testing Types
 GUI Testing
 Functional & Non-Functional Testing
 Re-Testing & Regression Testing
 Sanity & Smoke Testing
 Exploratory Testing
 Adhoc Testing & Monkey Testing
 Positive & Negative Testing
 End-to-end testing
 Globalization & Localization Testing
 Test Design Techniques
 STLC (Software Testing Life Cycle)
 Content of Test Plan
 Use Case, Test Scenario & Test Case
 Content of Test Case
 RTM
 Test Environment
 Test Execution
 Defect/Bug Reporting
 Defects/Bugs Classification
 Defect/Bug Severity & Priority
 Defect/Bug Life Cycle
 Test Closure
 Test Metrics
 QA/Tester Responsibilities
Module-2: Software Testing Project
 Project Introduction
 Understanding Functional Requirements from FRS
 Creating Test Scenarios
 Creating Test Cases
 Test Execution
 Bug Reporting & Tracking
 Test Sign Off

Module-3: Agile Testing

 What is Agile?
 What is Scrum? Scrum Team
 What is Sprint?
 What is User Story?
 How to give story points / How to estimate user story?
 What is the definition of Done / What is the definition of Ready?
 Different Sprint Activities
 Sprint Planning / Backlog Refinement / Sprint Review / Sprint Retrospective

Module-4: Jira Tool


 How to install and configure Jira tool?
 How to create an EPIC / User Stories in Jira
 Creating Sprints in Jira
 Sprint Life Cycle in Jira
 Backlogs in Jira
 Creating Bugs in Jira
 How to write Test Cases in Jira, using the Zephyr plugin
 Creating Test Cycles and Executing Test Cases in Jira
 What is Software? Types of Software
A Software is a collection of computer programs that helps us to perform a task.

Types of Software:
1) System software - ex: Device Drivers, Operating System, Servers, Utilities, etc.
2) Programming Software - ex: Compilers, Debuggers, Interpreters, etc.
3) Application Software - ex: Web Applications, Mobile Apps, Desktop Apps

 What is Software Testing?


Software Testing is a part of the software development process.
Software Testing is an activity to detect and identify defects in the software.
The objective of testing is to release a quality product to the client.

 What is Software Quality?


Software Quality parameters:
1) Bug Free
2) Delivered on time
3) Within budget
4) Meets requirements and expectations
5) Maintainable

 Project vs Product
Project = a software app developed for a specific customer, based on that customer's requirements
Product = a software app developed for multiple customers, based on market requirements.

 Why do we need testing


We need testing to release a quality product to the customer.
 Error vs Bug vs Failure
Error = a human mistake, an incorrect human action in software development
Bug = Defect = a difference between actual result and expected result (a deviation from the expected
behaviour)
Failure = a deviation identified by the end user while working with the software

Why Software has bugs:


1) Miscommunication or no communication
2) Software complexity
3) Changing Requirements
4) Lack of skilled testers

 SDLC (Software Development Life Cycle)

SDLC is a process used by the software industry to design, develop and test software.

3 P (pillars) theory:
1) P - People
2) P - Process (SDLC)
3) P - Product
 Waterfall model

A. Requirements Analysis - collect the requirements from the customer and create a set of documents
(SRS - Software Requirements Specifications document)
INPUT - customer requirements
OUTPUT - SRS

B. System Design - based on the requirements document, the designers will prepare the HLD (High Level
Design document) and the LLD (Low Level Design document)
INPUT - SRS
OUTPUT - HLD & LLD

C. Implementation - based on the design documents, the developers will implement the software
INPUT: HLD & LLD
OUTPUT: Code (the developed software)

D. Testing - the tester will test the software


INPUT: The software itself
OUTPUT: Test cases, test plan, test result

E. Deployment - the installation of the software in the customer's environment

F. Maintenance - ongoing support once the customer starts using the software

Advantages:
1) Good quality of the product
2) Since requirement changes are not allowed, the chances of finding bugs are lower
3) The initial investment is low, since testers are hired at later stages
4) Preferred for small projects where the requirements are frozen.

Disadvantages:
1) Requirement changes are not allowed
2) A defect in the requirements will be carried into the later phases
3) The total investment is higher, because reworking defects found later takes time
4) Testing starts only after coding is complete

 Spiral model
The spiral model is an iterative model and overcomes the drawbacks of the waterfall model.
It is used whenever there is a dependency between the modules.
In every cycle a new piece of software will be released to the customer.
The software will be released in multiple versions -> version control model.

Advantages:
1) Testing is done in every cycle, before going to the next one
2) Customer will get to use the software for every module
3) Requirements changes are allowed after every cycle before going to the next one

Disadvantages:
1) Requirement changes are not allowed in the middle of a cycle
2) Every spiral model cycle looks like a waterfall model
3) There is no testing in the requirement & design phases
 V-model
BRS = Business Requirements Specifications = consists of the complete scope of the project, the
performance and usability requirements, the purpose of the product, the functions and features of the
product, the intended users and the scope of the work.

CRS = Customer Requirements Specifications = is a planning document normally used for computer
systems support planning purposes. It identifies in some depth the issues the customers have now and
foresee for the next period, the issues the computing support group foresees, and any potential business
climate changes in the works.

URS = User Requirements Specifications = is a planning document that specifies what the software or
system needs to do. It is written from the point of view of the end user and does not need to be technical
or complicated.

BRS / CRS / URS documents are created by the BUSINESS UNIT (REQUIREMENT PHASE), not by Devs and Testers. To be
understood by the Dev and Testing teams, they are transformed into the SRS (which contains technical terms and parameters).
BRS / CRS / URS -----> User Acceptance Testing

SRS = Software Requirements Specifications = is a document that describes what the software will do and
how it will be expected to perform. It also describes the functionality the product needs in order to fulfil
the needs of all stakeholders (business, users).
SRS is made by PROJECT MANAGERS (REQUIREMENT PHASE).
SRS -----> System Testing

HLD = High Level Design Documents = explains the architecture that would be used to develop a system.
The architecture diagram provides an overview of an entire system, identifying the main components that
would be developed for the product and their interfaces. It includes the description of the following parts:
System architecture, Database design, Brief mention of all the platforms, systems, services, and
processes the product would depend on, Brief description of relationships between the modules and
system features.

LLD = Low Level Design Documents = is a component-level design process that follows a step-by-step
refinement process. It provides the details and definitions for the actual logic for every system component.
It is based on HLD but digs deeper, going into the separate modules and features for every program in
order to document their specifications.

DLD = Detailed Level Design Documents = the most detailed technical document, which describes user
stories, error processing algorithms, state transitions, logical sequences, etc. It describes the interaction of
every low-level process with each other.
High-Level Design (HLD) vs Low-Level Design (LLD):

- HLD is a macro-level (system) design; it allows seeing the "Big Picture." LLD is a micro-level (detailed) design; it allows seeing the minutiae.
- HLD covers the overall architecture of the application, the network design and the relationships between the various system modules and functions. LLD gives a detailed description of each and every module mentioned in the high-level design (HLD).
- HLD target audience: clients, designers, reviewers, management team members, program and solution teams. LLD target audience: designer teams, operation teams and implementers.
- HLD main idea: understand the flow across the various system entities (for example, with several integrated solutions, HLD describes how everything works as a single organism). LLD main idea: provide the information needed for building the product, its configuration and troubleshooting (for example, LLD describes each solution in detail, like a single organ in a body).
- HLD goal: convert business goals and requirements into a high-level solution. LLD goal: convert the high-level solution into a detailed solution ready for building.
- Chronologically, HLD is created before any other technical documentation; LLD is created after the high-level design document.
- HLD creator: Solutions Architect. LLD creators: Designers & Developers together with the PM.
- HLD input data: software requirement specifications (SRS). LLD input data: the reviewed and authorized high-level design document (HLD).
- HLD results: database design, functional design and review record. LLD results: program specification and unit test plan.

HLD / LLD / DLD are made by DESIGNERS (DESIGN PHASE).


HLD / LLD / DLD -----> Integration Testing

Static testing techniques (used for testing all documents):


1) Review
- conducted on documents to ensure correctness and completeness

2) Walk-through
- an informal review, not pre-planned, done whenever required
3) Inspection
- a more formal review, conducted in a scheduled meeting (typically 3-8 participants) arranged via email

Verification - checks whether we are building the PRODUCT RIGHT (the work products conform to the requirements)

Dynamic testing techniques (used for testing the software):


1) Unit testing (white box - testing the code)
2) Integration testing (white box - testing the code)
3) System testing (black box - testing the software)
4) User acceptance testing (black box - testing the software)

Validation - checks whether we are building the RIGHT PRODUCT (the software meets the customer's actual needs)

Advantages:
1) Testing is involved in each and every phase.

Disadvantages:
1) More documentation
2) Initial investment is more

 QA vs QC vs QE
QA is:
- process related
- focus on building in quality
- preventing the defects
- is process oriented
- is involved in every phase of the SDLC

QC is:
- the actual testing of the software
- focus on testing for quality
- is detecting the defects
- is product oriented
- is involved only in the testing phase of the SDLC

QE is:
- automating testing (Quality Engineering)

3 P (pillars) theory:
1) P - People (QC)
2) P - Process (QA)
3) P - Product

 Levels of Software Testing


1) Unit testing:
- tests a single component / module of the software
- is a white box testing technique
- is conducted by developers
- ex: basis path testing, control structure testing, mutation testing, loop coverage, condition coverage

2) Integration testing
- is performed between 2 or more modules
- focuses on checking data communication between multiple modules
- is a white box testing technique
- is conducted by developers
- ex:
A) Incremental integration testing:
- incrementally adding the modules and testing the data flow between them
- 2 approaches: Top-Down (the integrated module is a child of the previous parent module) & Bottom-Up (the integrated module is a parent of the previous child module)
B) Non-incremental integration testing:
- adding all the modules in a single shot and testing the data flow between them
- drawbacks: missing data between modules and difficulty understanding the root cause of a defect

3) System testing
- testing the overall functionality of the application against the client requirements
- it is a black box testing technique
- is conducted by the testing team
- is conducted after unit and integration testing phases
- it focuses on:

A) User Interface Testing (GUI)


B) Functional Testing
C) Non-Functional Testing
D) Usability testing
4) User Acceptance Testing
- after system testing, users conduct 2 types of UAT:

A) Alpha Testing - users conduct testing in the development environment
B) Beta Testing - users conduct testing in their own environment

 System Testing Types


1) GUI Testing:
- is the process of testing the user interface of an app (“look and feel”)
- it includes all the graphical elements such as menus, check boxes, buttons, colors, fonts, sizes, icons, content and images
- it is front-end, non-functional testing, using the wireframes (design phase) as input.

2) Usability Testing:
- it tests the ease of use of the application

3) Functional Testing:
- it tests the behaviour of the application
- it is focused on the customer requirements

- types:
A) Object properties testing (disabled or not, focused or not, visible or not, etc.)

B) Database Testing (Gray box Testing - both UI and database; a minimal DML check sketch follows after this list)


- DML (Data Manipulation Language): insert, update, delete, select
- table level validation: column type, column length, number of columns
- relation between the tables: normalization, functions, procedures, triggers, indexes, views, etc.

C) Error handling: the tester verifies the error messages shown while performing incorrect actions on the
application (readable, in user-understandable language)

D) Calculation / Manipulation Testing: tester should test calculations

E) Links Existence & Links Execution: the tester verifies that links are placed correctly and that they navigate to the proper page
Types:
- Internal Link - points to the same page/site
- External Link - points to another page/site
- Broken Link - the target page is planned for future implementation; clicking should keep the user on the same page

F) Cookies & Sessions: definitions


Cookies - temporary files created by the browser while browsing pages on the Internet (applicable to web applications only) - CLIENT SIDE
Session - a time slot created on the back end for the user's activity in the front-end app, which expires after some idle time - SERVER SIDE
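
For the database testing item above, the sketch below shows roughly how a DML check might be automated. It is a minimal illustration only, assuming an in-memory SQLite database and a hypothetical "customers" table; in a real project the tester would connect to the application's actual database and use the column definitions from the design documents.

```python
# Minimal database-testing sketch (assumptions: SQLite, hypothetical "customers" table).
# Verifies that a record inserted through DML is stored with the expected values.
import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for the application's database
cur = conn.cursor()

# Table-level validation inputs (column names/types) would normally come from the design docs.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")

# DML: insert, then select to validate the data actually stored.
cur.execute("INSERT INTO customers (name, city) VALUES (?, ?)", ("Alice", "Berlin"))
conn.commit()

row = cur.execute("SELECT name, city FROM customers WHERE name = ?", ("Alice",)).fetchone()
assert row == ("Alice", "Berlin"), f"Unexpected data in table: {row}"

# Simple table-level validation: number of columns matches the design.
columns = [c[1] for c in cur.execute("PRAGMA table_info(customers)")]
assert len(columns) == 3, f"Unexpected number of columns: {columns}"
print("DML and table-level checks passed")
conn.close()
```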

4) Non - Functional Testing


- it is focused on customer expectations

- types:
A) Performance Testing: the speed of the application (how well it responds) - a minimal load-check sketch follows after this list:
Load: gradually increasing the load (multiple users) on the app and checking the speed
Stress: suddenly increasing/decreasing the load (multiple users) on the app and checking the speed
Volume: checking how much data the application is able to handle

B) Security Testing: check how secure is the application


Authentication: check whether the user is valid or not
Authorization / Access Control: if the user is valid, check what kind of actions are permitted (permissions)

C) Recovery Testing: check if the application has a recovery mechanism in case of power outage, internet
loss, etc.

D) Compatibility Testing:
Forward compatibility - upgrade mechanism
Backward compatibility - downgrade mechanism
Hardware compatibility - verify the app can be installed on different hardware platforms
Configuration Testing - verify the performance of the software against different combinations of hardware

E) Installation Testing - verify the installation screens are clear and easy to understand
- also check the uninstall process

F) Sanitation Testing / Garbage Testing: check if there are extra features not mentioned in the
requirements (a bug is raised in order to remove the extra feature)
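
The load-check sketch promised under Performance Testing is below. It is only a rough illustration, assuming the third-party "requests" library and a hypothetical URL "https://example.com/"; real load testing would normally use a dedicated tool (JMeter, Locust, etc.).

```python
# Minimal load-check sketch (assumptions: requests installed, hypothetical app URL).
# Sends N concurrent requests and reports average/worst response time.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com/"   # hypothetical application under test
USERS = 10                     # simulated concurrent users

def timed_request(_):
    start = time.perf_counter()
    response = requests.get(URL, timeout=10)
    return response.status_code, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(timed_request, range(USERS)))

durations = [d for _, d in results]
print("all status codes OK:", all(code == 200 for code, _ in results))
print(f"avg response time: {sum(durations) / len(durations):.2f}s, worst: {max(durations):.2f}s")
```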

Web elements testing examples:

Edit Box
A) Functional:
- check if user can enter a value
- check if user can clear the value
- check if user is able to enter any type of characters (ECP)
- verify the maximum length of text that the edit box can hold (BVA)
B) GUI:
- check if text entered is visible
- check if edit box is enabled
- check if edit box is visible
- check if edit box position is as per requirements
- check if the edit box size is as per requirement
- check if text size/font/color in the edit box is as per requirement
- check if edit box gives any hint/placeholder for user regarding what to enter
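
These checks are normally performed manually, but purely as an illustration, a few of the edit box checks above could be automated with Selenium roughly as sketched below. The page URL, the element id "username" and the 20-character limit are assumptions, and a Chrome WebDriver is assumed to be installed.

```python
# Minimal Selenium sketch for a few edit box checks (hypothetical page and element id).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")        # hypothetical page under test
box = driver.find_element(By.ID, "username")   # hypothetical edit box id

assert box.is_displayed(), "Edit box is not visible"                   # GUI: visible
assert box.is_enabled(), "Edit box is not enabled"                     # GUI: enabled
assert box.get_attribute("placeholder"), "No hint/placeholder shown"   # GUI: hint text

box.send_keys("test_user")                     # Functional: user can enter a value
assert box.get_attribute("value") == "test_user"
box.clear()                                    # Functional: user can clear the value
assert box.get_attribute("value") == ""

box.send_keys("x" * 30)                        # BVA: try to exceed an assumed 20-char limit
assert len(box.get_attribute("value")) <= 20, "Edit box accepts more than the allowed length"

driver.quit()
```

The same pattern (locate the element, then assert on its properties and behaviour) applies to the links, buttons, checkboxes and other elements listed below.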

Links
A) Functional:
- check if user can click the link
- verify if user is navigated to the right location after link is clicked
- verify if user is navigated to location in a new page or in the same tab, as per requirement
B) GUI:
- check if link is visible
- check if link is enabled
- check if link position is as per requirement
- check if mouse hovering shows the link name/URL at the bottom left of the browser
- check if the link text size/font/color is as per requirement
- check if on mouse hover the cursor changes from tip to hand
- check if on mouse hover the link color changes
- check if the color of the link text changes to show that it was previously clicked

Button
A) Functional:
- verify user can click the button
- verify if user is able to perform the required function when the button is clicked
B) GUI:
- check if button is visible
- check if button size/shape/color are as per requirement
- check if button position is as per requirement
- verify text present in the button is visible
- verify text present in the button is positioned as per requirement
- verify text present in the button has the size/font/color as per requirement
- check if on mouse hover the cursor changes from tip to hand
- check if on mouse hover the button color changes as per requirement

Text box
A) Functional:
- check if user can enter a value
- check if user can clear the value
- check if user is able to enter any type of characters (ECP)
- verify the maximum length of text that the edit box can hold (BVA)
B) GUI:
- check if text entered is visible
- check if text box is enabled
- check if text box is visible
- check if text box position is as per requirements
- check if the text box size is as per requirement
- check if text size/font/color in the text box is as per requirement
- check if text box gives any hint/placeholder for user regarding what to enter

Check box
A) Functional:
- check if the checkbox can be selected or not
- check if multiple checkboxes can be selected or not, as per requirement
- check if the checkbox can be selected both by mouse and by keyboard
- check if selecting a checkbox marks the specific element as selected
- check that when a selection is made and submit is pressed, the user is redirected to the option chosen
- check that the selection is correctly recorded in the database or used for browser redirection
- verify that different checked/unchecked checkbox combinations are handled correctly in the redirection
B) GUI:
- check if checkbox is visible or not
- check if checkbox is enabled or not
- verify if checkbox size/color is as per requirements
- check if the checkbox label is correctly aligned in page
- check if the initial focus is on the first checkbox if multiple
- verify that multiple checkboxes are aligned properly
- verify if selection control is inactive when the page loads

Search box
A) Functional:
- check if user is able to enter required text in the search box
- verify whether search box is giving any auto suggestions for text to be searched
- verify if user is allowed to use any characters in the search box (ECP)
- check if user can clear or delete text written in the search box
- check if user is able to select any specific auto search result populated in the search box
- verify the maximum length of text that the search box can hold (BVA)
B) GUI:
- check if search box is visible or not
- check if search box is enabled or not
- verify if search box size/color is as per requirements
- check the font/size/color of the text entered in the search box is as per requirements
- check if text entered in the search box is visible
- check if search box gives any hint/placeholder for user regarding what to search in search box

Calendar
A) Functional:
- check if the header of the calendar displays current date first (default date)
- check if clicking on the next and previous changes current month/year
- check if user can select a date/month/year or not
- check if months have the correct number of days (28, 29, 30 or 31)
- if the calendar has a date field, check if the field allows only digits (ECP)
- if the calendar has a date field, check if the field allows only day values between 1 and 31 (BVA)
- if calendar has a date field, ensure the date format is as per requirements or interchangeable
B) GUI:
- check if the calendar window is displayed and active when is invoked by pressing the calendar icon
- check if the size/position of the calendar is as per requirements
- check if calendar is visible or not
- check the calendar appearance on different screen sizes
- check if dates are highlighted when hovering the mouse over them
- check if the cursor changes from tip to hand on mouse hover

Radio Button
A) Functional:
- check if the radio button is selectable using both mouse and keyboard
- check whether the user is able to select multiple radio buttons at the same time (should not be possible)
- check when user selects no radio button and clicks submit if there is an error message displayed
- check when radio button is selected and submit is pressed if user is redirected on the right page
- check when radio button is selected if the database updates with the selection
- check that the default selection of the radio button is as per requirement
B) GUI:
- check if radio button is visible
- check if radio button is presented on the web page as per design document
- check if radio button shape/size/color is as per design document
- check if label text is present for radio button or not
- check the alignment on the page for the radio button
- if multiple radio buttons available, check if the displaying order is respected as per requirements

Drop down box


A) Functional:
- check if drop down is opened by clicking both the drop down and the drop down arrow
- check the maximum characters that can be displayed in the filter text area (BVA)
- check if filter collapses when clicking outside the area if already expanded
- check if the selected filter value is displayed in the filter list
- check if filter values are matching with the database (if they are brought from the data base)
- check if filtered results are matching the selected filter
- check if user can clear the selected filter value
- check the maximum number of values in the drop down with scroll bar
- check if the list is scrollable
- check if the user is not able to type text in the box
- check if filtered values are properly aligned
- check the loading time of the drop down list
B) GUI:
- check if drop down is visible or not
- check if drop down is clickable or not
- check if filtered value is highlighted on hover or selection
- check if filtered value character length is not longer than the visible space allocated
List box
A) Functional:
- check that user can select multiple values from the list
- check that user can select and deselect values from the list
- check that the value changes after the user deselects a value and selects another value from the list
- check that the number of values in the list matches the number in the SRS
B) GUI:
- check if list box is visible or not
- check if list box elements are selectable or not
- check that all values in the list are scrollable
- check if the color changes when the user hovers the mouse over a value
- verify if user is able to scroll through the values using arrow keys

Web table /HTML table


A) Functional:
- verify user can interact with the table
- verify that when the table content changes, the values in the corresponding database are updated
B) GUI:
- verify that the table is visible or not
- verify that the table dimensions are as per specifications
- verify if table content is highlighted when mouse hovers or clicks

 Re-Testing & Regression Testing


1) Regression Testing:
- testing conducted on a modified build to make sure there is no impact on the existing functionality due to
changes such as adding, deleting or modifying features
- types:
A) Unit Regression Testing - testing only the changes done by developer
B) Regional Regression Testing - testing the modified module along with impacted modules (Impact
Analysis Meeting, with Dev & QA to identify impacted modules)
C) Full Regression Testing - testing both main feature impacted and remaining part of the application

2) Re-Testing:
- it is used to check whether the defects which were found and reported in the earlier build are fixed in the
current build
- the tester tests the bug fix alone (as opposed to regression testing)
- the tester closes the bug if it is resolved, otherwise reopens it and sends it back to the developer

 Sanity & Smoke Testing


- smoke testing is conducted as soon as you get the build from the developer, to ensure the installation is
possible and the build is stable
- once the build is stable, sanity testing is performed to test the main functionalities (positive testing)
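
If the smoke checks are automated, one common approach is to tag them so they can run first on every new build. A minimal pytest sketch is shown below; the URLs and checks are hypothetical, and the "requests" library is assumed to be installed.

```python
# Minimal smoke-suite sketch using pytest markers (hypothetical checks).
# Register the marker in pytest.ini:  [pytest]  markers = smoke: basic build-stability checks
import pytest
import requests

BASE_URL = "https://example.com"   # hypothetical application URL

@pytest.mark.smoke
def test_application_is_reachable():
    # Smoke: the build is deployed and the home page responds.
    assert requests.get(BASE_URL, timeout=10).status_code == 200

@pytest.mark.smoke
def test_login_page_loads():
    # Smoke: the most critical entry point is available.
    assert requests.get(f"{BASE_URL}/login", timeout=10).status_code == 200

# Run only the smoke suite on a fresh build with:  pytest -m smoke
```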

 Exploratory Testing
 Adhoc Testing & Monkey Testing
 Positive & Negative Testing
- testing the application with valid inputs is called Positive Testing
- it checks if an application behaves as expected with valid inputs

- testing the application with invalid inputs is called Negative Testing


- it checks if an application behaves as expected with invalid inputs

 End-to-end testing
- testing the overall functionalities of the system including the data integration among all the modules
 Globalization & Localization Testing
1) Globalization Testing (global compatibility):
- performed to ensure the system or software app can run in any cultural or local environment
- different aspects of the software are tested to ensure that it supports every language and different
attributes
- it tests that different currency formats, mobile phone number formats and address formats are supported
by the app

2) Localization Testing (local compatibility):


- performed to check system or software for a specific geographical and cultural environment
- checks if localized product only supports the specific kind of language and is usable in a specific region
- it checks if a specific currency format, specific mobile phone number formats and specific address formats
are working or not
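
As an example of what globalization/localization checks compare against, the third-party Babel library (assumed to be installed) can produce the expected locale-specific currency and date formats, which the tester then compares with what the application actually displays. This is only a sketch of the expected-value side of the check.

```python
# Sketch of expected locale-specific formats using Babel (assumption: `pip install babel`).
from datetime import date

from babel.dates import format_date
from babel.numbers import format_currency

amount, today = 1234.56, date(2024, 3, 1)

# Globalization: the app should handle all of these formats.
for locale in ("en_US", "de_DE", "fr_FR", "ja_JP"):
    print(locale, format_currency(amount, "USD", locale=locale), format_date(today, locale=locale))

# Localization: a build localized for Germany should show only the de_DE formats.
expected_de = format_currency(amount, "EUR", locale="de_DE")   # e.g. "1.234,56 €"
print("expected German currency format:", expected_de)
```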

 Test Design Techniques


- types:
A) Equivalence Class Partitioning (ECP) - classify/divide/partition data in multiple classes
B) Boundary Value Analysis (BVA) - establish boundaries (min-1, min, min+1, max-1,max,max+1)
C) Decision Table based testing - used if we have more conditions and corresponding actions (condition
and action)
D) State Transition - when changes in input are changing the state of the app
E) Error Guessing - used to find bugs in an app based on tester’s prior experience

- it is used to prepare data for testing (valid and invalid data)
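
As a worked example of ECP and BVA, assume a hypothetical age field that accepts values from 18 to 60: ECP gives one invalid class below 18, one valid class 18-60 and one invalid class above 60, while BVA adds the boundary values 17, 18, 19, 59, 60 and 61. A minimal pytest sketch of the resulting data set follows; the validation function is a stand-in for the application's own logic.

```python
# ECP/BVA data sketch for a hypothetical age field with valid range 18-60.
import pytest

def is_valid_age(age):           # stand-in for the application's validation logic
    return 18 <= age <= 60

# ECP: one representative per partition; BVA: min-1, min, min+1, max-1, max, max+1.
@pytest.mark.parametrize("age, expected", [
    (5, False),    # ECP: invalid partition below the range
    (30, True),    # ECP: valid partition inside the range
    (75, False),   # ECP: invalid partition above the range
    (17, False), (18, True), (19, True),    # BVA around the minimum
    (59, True), (60, True), (61, False),    # BVA around the maximum
])
def test_age_validation(age, expected):
    assert is_valid_age(age) is expected
```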


 STLC (Software Testing Life Cycle)
STLC Phases:

1) Requirement Analysis
2) Test Planning
3) Test Design
4) Test execution
5) Bug Reporting & Tracking
6) Test Closure

 Content of Test Plan

- a document that describes the test scope, test strategy, objectives, schedule, deliverables and resources
required to perform testing for a software product
- contents:
A) Overview
B) Scope - Inclusions, Test Environments, Exclusions
C) Test Strategy
D) Defect Reporting Procedure
E) Roles / Responsibilities
F) Test Schedule
G) Test Deliverables
H) Pricing
I) Entry and Exit Criteria
J) Suspension and Resumption Criteria
K) Tools
L) Risk and Mitigation
M) Approvals
 Use Case, Test Scenario & Test Case

1) Use Case: (created by the manager / test lead / business manager, as part of the FRS)
- use case describes the requirements
- contains THREE items:
A) Actor - is the user, which can be a single person or a group of people, interacting with a process
B) Action - what the actor does to reach the final outcome
C) Goal/Outcome - is the successful user outcome

2) Test Scenario: (created by testers)


- a possible area to be tested (What to test?)

3) Test Case: (created by tester)


- step by step actions to be performed to validate functionality of app (How to test?)
- test case contains steps, expected result & actual result
Test suite - a group of test cases (Sanity Test Suite, Regression Test Suite, GUI Test Suite, etc.)
 Content of Test Case

- a set of step-by-step actions to validate a particular feature or functionality of a software application
- it contains:

A) Test Case ID
B) Test Case Title
C) Description
D) Preconditions
E) Priority
- P0 - smoke and sanity test cases (very important)
- P1 - regressions test cases
- P2 - functional test cases (major flows of the app)
- P3 - UI testing
F) Requirement ID
G) Steps/Actions
H) Expected Result
I) Actual Result
J) Test data

 RTM (REQUIREMENT TRACEABILITY MATRIX)

- it describes the mapping of requirements to the test cases

- its purpose is to ensure that every requirement is covered by test cases, so that no functionality is missed while testing
- RTM parameters:
A) Requirement ID
B) Requirement Description
C) Test Case ID’s
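
A minimal sketch of the idea behind an RTM follows: map each requirement ID to its test case IDs and flag any requirement with no test case. The requirement and test case IDs shown are hypothetical.

```python
# Minimal RTM sketch (hypothetical requirement and test case IDs).
rtm = {
    "REQ-001": {"description": "User can log in",         "test_cases": ["TC-001", "TC-002"]},
    "REQ-002": {"description": "User can reset password", "test_cases": ["TC-003"]},
    "REQ-003": {"description": "User can log out",        "test_cases": []},   # not yet covered
}

# The purpose of the RTM: every requirement must be traceable to at least one test case.
uncovered = [req_id for req_id, entry in rtm.items() if not entry["test_cases"]]
print("Requirements without test coverage:", uncovered or "none")
```
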
 Test Environment

- is a platform specially built for test case execution on the software product
- it is created by integrating the required software and hardware along with proper network configurations
- test environment simulates production/real time environment
- another name for test environment: TEST BED.

 Test Execution
 Defect/Bug Reporting

- any mismatch between the actual and expected functionality found in an application is called a Defect/Bug/Issue.
- it must be reproduced at least 3 times in 2 environments
- it is reported through templates or using tools (ClearQuest, DevTrack, Jira, Quality Center, Bugzilla, etc.)

 Defects/Bugs Classification
 Defect/Bug Severity & Priority
1) Defect Severity:
- describes the seriousness of a defect and how much impact it has on the business workflow
- categorized in 4 classes:
A) Blocker - indicates nothing can proceed further
Ex1: Application crashed
Ex2: Login not working

B) Critical - indicates that main functionality is not working and business workflow is broken
Ex1: Funds transfer not working in net banking
Ex2: Ordering product not working (e-commerce application)

C) Major - it causes some undesirable behaviour, but the app/feature is still functional
Ex1: After sending email there is no confirmation message
Ex2: After booking cab there is no confirmation

D) Minor - it won't cause any major breakdown of the system


Ex1: look and feel issues, spellings, grammar, alignments etc

2) Defect Priority:
- describes the importance of defect and states the order in which a defect should be fixed
- categorized in 3 classes:
A) P0 (High) - must be resolved immediately as it affects the system severely
B) P1 (Medium) - it can wait for a new version / build to be created
C) P2 (Low) - developer can fix it in later releases
 Defect/Bug Life Cycle

 Test Closure
 Test Metrics
 QA/Tester Responsibilities

 Understanding the requirements and functional specifications of the application


 Identifying required Test Scenarios
 Designing Test Cases to validate application
 Setting up Test Environment (Test Bed)
 Execute Test Cases to validate application
 Log Test results (passed/failed)
 Defect Reporting and Tracking
 Retest fixed defects of previous build
 Perform various types of testing in application
 Report to the Lead about the status of assigned tasks
 Participate in regular team meetings
 Creating automation scripts (optional) to avoid manually retesting passed test cases on future builds
 Provide recommendations on whether or not the software is ready for production
 Agile model / Agile methodology / Agile Process
- it is an iterative and incremental process
- there will be good communication between Customer, Business Analyst, Devs and Testers

Agile principles:

1) The customer does not need to wait a long time
2) We develop, test and release a piece of software to the customer with a small number of features
3) We can accept / accommodate requirement changes

Advantages:

1) Requirement changes are allowed at any stage of development, even in the middle of development
2) Releases are very fast
3) The customer has a short waiting time
4) Good communication within the team
5) Very easy model to adopt
5) Very easy model to adopt

Disadvantages:

1) Less focus on design and documentation since we deliver software very fast

 Scrum process
- a framework through which we build software product by following the agile principles

A) Scrum team:

1) Product owner:
- defines the features of the product (classified in the form of epics and user stories)
- prioritizes the features according to their market value
- adjusts the features and priorities every iteration, as needed
- accepts or rejects work results

2) Scrum Master:
- the main role is facilitating and driving the agile process
- he makes sure the team members are following the agile process
- organizes scrum meetings and other scrum ceremonies

3) Dev Team:
- develop the software product

4) QA Team:
- test the software product

B) Scrum terminology:

1) User story - a feature / module of the software (a smaller requirement)

2) Epic - a collection of user stories (a larger requirement)

3) Product Backlog - contains a list of stories - defined by the product owner

4) Sprint - a period / span of time to complete user stories, a.k.a iteration, decided by the product owner
and the team, usually 2-4 weeks

5) Sprint Planning Meeting - meeting conducted with the whole team, to decide what can be delivered in
the sprint and duration

6) Sprint Backlog - list of committed stories by Dev/QA for specific sprint (subset of Product Backlog)

7) Scrum Meeting - meeting conducted by the scrum master, everyday, 15 mins (a.k.a Stand-up Meeting)
- “what did you do yesterday, what will you do today, any impediment on the way”

8) Sprint Retrospective Meeting - conducted by the entire team, after sprint completion
- “what went right/wrong”

9) Story point - rough estimation of user stories, will be decided by Dev & QA in Fibonacci series form
- 0, 1, 1, 2, 3, 5, 8, 13, 21, ….
- the effort represented by 1 story point (e.g. 1 hour, 6 hours, 1 day) is agreed by the team

10) Burn down Chart - graph prepared by scrum master, which shows how much work remains in the
sprint (daily)
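
A minimal sketch of the arithmetic behind a burndown chart is shown below: subtract the story points completed each day from the sprint commitment and compare with the ideal straight-line burn. The sprint length and daily numbers are hypothetical.

```python
# Burndown-chart sketch (hypothetical 10-day sprint with 40 committed story points).
committed = 40
completed_per_day = [0, 5, 3, 4, 0, 6, 5, 4, 5, 6]   # points finished on each sprint day

remaining = committed
for day, done in enumerate(completed_per_day, start=1):
    remaining -= done
    ideal = committed - committed * day / len(completed_per_day)   # ideal straight-line burn
    print(f"day {day:2d}: remaining {remaining:2d} points (ideal {ideal:4.1f})")
```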
