Unit-2
Solution.
Sometimes, for an already existing application, we might not need a formal meeting and
someone guiding us through this document. We might have the necessary information
to do this by ourselves.
The formal format and a sample were shared with you all in the previous article. It does
not necessarily mean that all SRSs are going to be documented that way exactly.
Always, the form is secondary to the content.
Some teams will just choose to write a bulleted list, some teams will include use cases,
some teams will include sample screenshots (like the document we had) and some just
describe the details in paragraphs.
Step #5) SRS review also helps us understand whether there are any specific
prerequisites required for testing the software.
Step #6) As a byproduct, a list of queries is identified: places where some functionality
is difficult to understand, where more information needs to be incorporated into the
functional requirements, or where mistakes have been made in the SRS.
What do we need to get started?
The correct version of the SRS document
Clear instructions on who is going to work on what and how much time they
have.
A template to create Test Scenarios.
Other information, such as who to contact in case of a question or who to report
to in case of a documentation inconsistency.
Who would provide this information?
Team leads are generally responsible for providing all the items listed in the section
above. However, team members’ inputs are always important for the success of this
entire endeavor.
Team leads often ask: what kind of inputs? Wouldn’t it be better to assign a certain
module to someone interested in it than to a team member who is not? Wouldn’t it be
nicer to decide on the target date based on the team’s opinion than to thrust a decision
on them? Also, templates are important for the success of a project.
As a general rule, templates have a higher rate of efficiency when they are tailored to
the specific team’s convenience and comfort. It should, therefore, be noted that team
leads more than anything are team members. Getting your team onboard on the day-to-
day decisions is crucial for the smooth running of the project.
Also, how can we decide which one is the standard, and how can we say what is right
and what is not, if we did not create the rules to begin with?
That is what a template is: a set of guidelines and an agreed format for uniformity and
concurrence across the entire team.
How to create a template for QA Test Scenarios?
Templates don’t have to be complicated or inflexible.
All they need to be is an efficient mechanism for creating a useful testing
artifact. Something simple like the one we see below:
The header of this template contains the space required to capture basic information
about the project, the current document, and the referenced document.
The table below will let us create Test Scenarios. The columns included are:
Each row in the table captures one test scenario. For example, to test the login we
include these situations: correct username and password; correct username and wrong
password; correct password and wrong username. So, validating the login functionality
will result in 3 test cases.
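Those three situations can be sketched as a tiny data-driven check. This is a hypothetical sketch: the `isLoginValid` helper and the sample credentials are invented stand-ins, not part of OrangeHRM.

```java
// Toy sketch of the three login test scenarios described above.
// The credential values and isLoginValid() are hypothetical stand-ins
// for the real application's login check.
public class LoginScenarios {

    static final String VALID_USER = "admin";
    static final String VALID_PASS = "secret";

    // Stand-in for the application's real login validation.
    static boolean isLoginValid(String user, String pass) {
        return VALID_USER.equals(user) && VALID_PASS.equals(pass);
    }

    public static void main(String[] args) {
        // Scenario 1: correct username and password -> expect true
        System.out.println(isLoginValid("admin", "secret"));
        // Scenario 2: correct username, wrong password -> expect false
        System.out.println(isLoginValid("admin", "wrong"));
        // Scenario 3: correct password, wrong username -> expect false
        System.out.println(isLoginValid("nobody", "secret"));
    }
}
```

Each scenario becomes one test case, which is where the count of 3 comes from.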
Note: You can expand this template or remove the fields as you see fit.
For example, you can add “Reviewed by” in the header or remove the date of creation,
etc. Also in the table, you can include a field “Created by” to designate the tester
responsible for a certain test scenario or remove the “No. of Test cases” column. The
choice is yours. Go with what works best for the entire team.
Let us now review our Orange HRM SRS Document and create the Test Scenarios
Section 2.2: Hardware and Hosting- This section describes how the OrangeHRM
site is going to be hosted. Now, is this information important to us testers? The answer
is yes and no. Yes, because when we test we need to have an environment that is
similar to the real-time environment, and this section gives us an idea of how it needs
to be. No, because it is not a testable requirement- it is a kind of prerequisite for the
testing to happen.
Section 3: There is a login screen here and the details of the type of account we need
to have to enter the site. This is a testable requirement. So it needs to be a part of our
Test scenarios.
Please see the test scenarios document where test scenarios for a few sections of the
SRS have been added. For practice, please add the rest of the scenarios in a similar
manner. However, I am going right to section 4 of the document.
How to test them, and whether we need a specific setup or another team’s help to
validate them, are details we might not know at this point in time. But making them a
part of our testing scope is the first step to ensure that we do not miss them.
Sample Test Scenarios for the OrangeHRM Application:
=> Please refer to and download the Test Scenarios document for more information.
Test scope identification and a rough idea of how many test cases we might end
up having- and therefore how much time we need for documentation and,
eventually, execution.
#1) Test scenarios are not external deliverables (not shared with Business Analysts or
Dev teams) but are important for internal QA consumption. They are our first step
towards a 100% test coverage goal. Test scenarios once complete undergo a peer
review and once that is done, they are all consolidated.
For more details on how QA documents are reviewed, check out the article: How to
Perform Test Documentation Reviews in 6 Simple Steps.
#2) We could use a test management tool like HP ALM or qTest to create the test
scenarios. In practice, however, test scenario creation is a manual activity, and in my
opinion it is more convenient that way. Since this is step 1, we do not need to bring out
the big guns yet; simple Excel sheets are the best way to go about it.
The next step in this series is to work on creating test cases and get further into
the test design phase. Before that, we will also get into what test planning is and
where it fits into the entire QA project. As always, work with us for the best
results.
ANSWER
If requirements are not measurable, then they should be revised or rewritten to gain better
clarity. For example, user stories help in mitigating the gap between developers and the user
community in Agile methodology.
Usability:
Prioritize the important functions of the system based on usage patterns.
Frequently used functions should be tested for usability, as should complex and critical
functions. Be sure to create a requirement for this.
Reliability:
Reliability defines the trust in the system that is developed after using it for a period of time. It
defines the likelihood of the software working without failure for a given period of time.
The number of bugs in the code, hardware failures, and other problems can reduce the
reliability of the software.
Your goal should be a long MTBF (mean time between failures). It is defined as the average
period of time the system runs before failing.
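As a quick worked illustration, MTBF is simply the total operating time divided by the number of failures observed in that time; the figures below are invented.

```java
// MTBF (mean time between failures) = total operating time / number of failures.
// The 1000 hours and 4 failures below are invented example figures.
public class MtbfExample {

    static double mtbfHours(double totalOperatingHours, int failures) {
        return totalOperatingHours / failures;
    }

    public static void main(String[] args) {
        // 1000 hours of operation with 4 failures -> MTBF of 250 hours.
        System.out.println("MTBF = " + mtbfHours(1000.0, 4) + " hours");
    }
}
```

A longer MTBF under the same operating time means fewer failures, which is exactly the goal stated above.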
Create a requirement that data created in the system will be retained for a number of years
without the data being changed by the system.
It’s a good idea to also include requirements that make it easier to monitor system performance.
Performance:
What should system response times be, as measured from any point, under what circumstances?
Are there specific peak times when the load on the system will be unusually high?
Think of stress periods, for example, at the end of the month or in conjunction with payroll
disbursement.
Supportability:
The system needs to be cost-effective to maintain.
Maintainability requirements may cover diverse levels of documentation, such as system
documentation, as well as test documentation, e.g. which test cases and test plans will
accompany the system.
Conclusion:
The various attributes of Non-functional Requirements defined above are important to evaluate
the qualities of the software under development. ReQtest as a requirements gathering
and requirements management tool can help in implementing the various attributes of Non-
functional Requirements. It improves software’s usability, reliability, supportability, and
performance.
WHEN TO USE DECISION TREE
OVER RANDOM FOREST
Random forest is best when multiple pieces of data come from a complex data
set and must be analyzed to generate a final output. We effectively sacrifice
easy interpretability to determine the most recurring output when we weight
virtually limitless inputs against each other. Decision trees are best used when
working with simpler data sets due to easier interpretability and simpler
model training.
Decision trees may also lead to issues when using qualitative variables, those
that aren’t numerical in value but rather fit into categories, to make decisions.
Numbers may be assigned to qualitative variables for data analysis uses, but
qualitative data still has the potential to create a staggering number of
branches or may present unclear decision possibilities entirely.
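The trade-off above can be shown with a toy sketch: one hand-written decision tree is trivially interpretable, while a "forest" just majority-votes several such trees. Everything here (the feature thresholds and the three stub trees) is invented for illustration and is not a real trained model.

```java
// Toy contrast: a single readable decision tree vs. a majority vote
// over several trees (the core idea behind a random forest).
// All thresholds and trees below are invented for illustration.
public class TreeVsForest {

    // One decision tree: easy to read as nested conditions.
    static boolean tree1(double x, double y) {
        if (x > 0.5) {
            return y > 0.2;
        }
        return false;
    }

    static boolean tree2(double x, double y) { return x + y > 1.0; }

    static boolean tree3(double x, double y) { return y > 0.4; }

    // "Forest": majority vote of the three trees. Harder to interpret,
    // but less dependent on any single tree's split choices.
    static boolean forestVote(double x, double y) {
        int votes = 0;
        if (tree1(x, y)) votes++;
        if (tree2(x, y)) votes++;
        if (tree3(x, y)) votes++;
        return votes >= 2;
    }

    public static void main(String[] args) {
        System.out.println("single tree: " + tree1(0.7, 0.5));
        System.out.println("forest vote: " + forestVote(0.7, 0.5));
    }
}
```

Reading `tree1` tells you exactly why a point was classified; explaining a vote across many trees is what sacrifices easy interpretability.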
Sol. In the coding phase, the different modules specified in the design document are
coded according to their module specifications. The main goal of the coding phase is to
translate the design document into code in a high-level language and then to unit test
this code.
Good software development organizations want their programmers to adhere to a
well-defined, standard style of coding called coding standards. They usually make
their own coding standards and guidelines depending on what suits their organization best
and on the types of software they develop. It is very important for programmers
to maintain the coding standards; otherwise, the code will be rejected during code review.
Purpose of Having Coding Standards:
A coding standard gives a uniform appearance to the codes written by different
engineers.
It improves the readability and maintainability of the code and reduces its
complexity.
It helps in code reuse and makes it easier to detect errors.
It promotes sound programming practices and increases the efficiency of the
programmers.
Some of the coding standards are given below:
1. Limited use of globals:
These rules specify which types of data can be declared global and which
cannot.
3. Naming conventions for local variables, global variables, constants, and functions:
Local variables should be named using camel case lettering starting with a small
letter (e.g. localData), whereas global variable names should start with a capital
letter (e.g. GlobalData). Constant names should be formed using capital letters
only (e.g. CONSDATA).
It is better to avoid the use of digits in variable names.
Function names should be written in camel case starting with a small
letter.
The name of a function must describe the reason for using the function clearly
and briefly.
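The naming rules above might look like this in a short Java sketch; all the names are placeholders.

```java
// Illustration of the naming rules described above; all names are placeholders.
public class NamingConventions {

    // Constant: capital letters only.
    static final int CONSDATA = 100;

    // Global (class-level) variable: starts with a capital letter.
    static int GlobalData = 0;

    // Function name: camel case starting with a small letter, and it
    // describes the reason for using the function clearly and briefly.
    static int addToGlobalData(int amount) {
        // Local variable: camel case starting with a small letter.
        int localData = amount;
        GlobalData += localData;
        return GlobalData;
    }

    public static void main(String[] args) {
        System.out.println(addToGlobalData(CONSDATA));
    }
}
```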
4. Indentation:
Proper indentation is very important to increase the readability of the code. For
making the code readable, programmers should use White spaces properly. Some of
the spacing conventions are given below:
There must be a space after giving a comma between two function arguments.
Each nested block should be properly indented and spaced.
Proper Indentation should be there at the beginning and at the end of each block in
the program.
All braces should start from a new line, and the code following the end of a brace
should also start from a new line.
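The spacing and brace rules above can be illustrated in a short Java sketch; the `maxOfThree` function is a made-up example.

```java
// Illustrates the spacing conventions above: a space after each comma
// between function arguments, properly indented nested blocks, and
// braces starting on a new line (with following code on a new line too).
public class IndentationExample
{
    static int maxOfThree(int a, int b, int c)
    {
        int max = a;
        if (b > max)
        {
            max = b;
        }
        if (c > max)
        {
            max = c;
        }
        return max;
    }

    public static void main(String[] args)
    {
        // Note the space after each comma between the arguments.
        System.out.println(maxOfThree(3, 9, 5));
    }
}
```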
8. Code should be well documented:
The code should be properly commented so that it is easy to understand. Comments
about the statements increase the understandability of the code.
Software quality is defined as a field of study and practice that describes the desirable attributes of
software products. There are two main approaches to software quality: defect management and
quality attributes.
The ‘Three Dimensions of Quality’ model encourages thinking about quality from a human
perspective, focusing on customers and their relationship with products. What are the
dimensions and aspects which could affect this relationship?
The customer perception at the centre of the model is influenced by the three dimensions;
three Ds which can be important factors in many relationships, including those between a
person and a product:
DESIRABLE: The extent to which our needs and wishes are fulfilled. Are we getting
what we want? Is our experience a positive one?
DEPENDABLE: The extent to which we trust and feel that we can rely on a
product. Do we feel safe and protected? Is it there when we need it?
DURABLE: The extent to which a product’s value to us endures. If the product
changes, or our needs and desires change; do they still align?
Each of the dimensions is also the nucleus of a collection of quality aspects; some of the
many factors which can influence a person’s impression of a product:
Quality Aspects influencing how Dependable a product might be
Quality Aspects influencing how Durable the relationship with a product
might be
Following are the rules which need to be kept in mind while drawing a DFD (Data
Flow Diagram).
Data cannot flow between two entities –
Data flow must be from an entity to a process or from a process to an entity. There can
be multiple data flows between one entity and a process.
A process must have at least one input data flow and one output data flow –
Every process must have an input data flow to receive data and an output data flow
for the processed data.
A data store must have at least one input data flow and one output data flow –
Every data store must have an input data flow to store the data and an output data flow
for the retrieved data.
Every process in the system must be linked to at least one data store or another
process.
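The DFD rules above can be checked mechanically. Here is a minimal sketch in which flows are (source, target) pairs and node kinds are assumed to be tagged with "E:"/"P:" prefixes by the caller; the node names and the tiny diagram are invented.

```java
import java.util.List;

// Minimal sketch of checking two DFD rules over a list of flows:
// (1) data must not flow directly between two entities, and
// (2) every process needs at least one input and one output flow.
// The "E:"/"P:" tagging convention and the node names are invented.
public class DfdRules {

    record Flow(String from, String to) {}

    static boolean isEntity(String node) { return node.startsWith("E:"); }

    // Rule: no flow may connect two entities directly.
    static boolean noEntityToEntityFlow(List<Flow> flows) {
        return flows.stream()
                .noneMatch(f -> isEntity(f.from()) && isEntity(f.to()));
    }

    // Rule: the given process has at least one input and one output flow.
    static boolean processHasInAndOut(List<Flow> flows, String process) {
        boolean hasIn = flows.stream().anyMatch(f -> f.to().equals(process));
        boolean hasOut = flows.stream().anyMatch(f -> f.from().equals(process));
        return hasIn && hasOut;
    }

    public static void main(String[] args) {
        // Customer entity -> "Place Order" process -> Warehouse entity.
        List<Flow> flows = List.of(
                new Flow("E:Customer", "P:PlaceOrder"),
                new Flow("P:PlaceOrder", "E:Warehouse"));
        System.out.println(noEntityToEntityFlow(flows));
        System.out.println(processHasInAndOut(flows, "P:PlaceOrder"));
    }
}
```

Real DFD tools perform exactly this kind of validation when you draw a diagram.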
Software Quality Assurance (SQA) is a process that assures that all software
engineering processes, methods, activities, and work items are monitored and comply
with the defined standards. These defined standards could be one or a combination of
anything like ISO 9000, CMMI model, ISO15504, etc.
SQA incorporates all software development processes starting from defining
requirements to coding until release. Its prime goal is to ensure quality.
The plan identifies the SQA responsibilities of the team and lists the areas that need to
be reviewed and audited. It also identifies the SQA work products.
SQA Activities
Given below is the list of SQA activities:
#1) Creating an SQA Management Plan
Creating an SQA Management plan involves charting out a blueprint of how SQA will be
carried out in the project with respect to the engineering activities while ensuring that
you corral the right talent/team.
#2) Setting the Checkpoints
The SQA team sets up periodic quality checkpoints to ensure that product development
is on track and shaping up as expected.
Based on the information gathered, the software architects can prepare the project
estimation using techniques such as WBS (Work Breakdown Structure), SLOC (Source
Lines of Code), and FP (Function Point) estimation.
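As a rough illustration of size-based estimation, the basic COCOMO model (organic mode) converts code size into effort as Effort = 2.4 × (KLOC)^1.05 person-months. COCOMO is named here as a related well-known technique, not one the text prescribes, and the 10 KLOC project size below is an invented figure.

```java
// Basic COCOMO, organic mode: Effort = 2.4 * (KLOC)^1.05 person-months.
// The 10 KLOC project size is an invented example figure.
public class CocomoSketch {

    static double organicEffortPersonMonths(double kloc) {
        return 2.4 * Math.pow(kloc, 1.05);
    }

    public static void main(String[] args) {
        // A 10 KLOC project works out to roughly 26.9 person-months.
        System.out.printf("Estimated effort: %.1f person-months%n",
                organicEffortPersonMonths(10.0));
    }
}
```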
By validating the change requests, evaluating the nature of change, and controlling the
change effect, it is ensured that the software quality is maintained during the
development and maintenance phases.
For this purpose, we use software quality metrics that allow managers and developers
to observe the activities and proposed changes from the beginning till the end of SDLC
and initiate corrective action wherever required.
ISO 9000: Based on seven quality management principles that help organizations
ensure that their products or services are aligned with customer needs.
CMMI level: CMMI stands for Capability Maturity Model Integration. This model
originated in software engineering. It can be employed to direct process improvement
throughout a project, department, or entire organization.
5 CMMI levels and their characteristics are described in the below image:
An organization is appraised and awarded a maturity level rating (1-5) based on the
type of appraisal.
Test Maturity Model integration (TMMi): Based on CMMi, this model focuses on
maturity levels in software quality management and testing.
4. Error Collection and Analysis: Defect reporting, managing, and analysis to identify
problem areas and failure trends.
5. Metrics and Measurement: SQA employs a variety of checks and measures to gather
information about the effectiveness and quality of the product and processes.
6. Change Management: Actively advocate controlled change and provide strong
processes that limit unanticipated negative outcomes.
7. Vendor Management: Work with contractors and tool vendors to ensure collective
success.
8. Safety/Security Management: SQA is often tasked with exposing vulnerabilities and
bringing attention to them proactively.
9. Risk Management: Risk identification, analysis, and mitigation are spearheaded by
the SQA teams to aid in informed decision making.
10. Education: Continuous education to stay current with tools, standards, and industry
trends.
SQA Techniques
SQA Techniques include:
Auditing: Auditing is the inspection of the work products and their related information to
determine whether the set of standard processes was followed or not.
Reviewing: A meeting in which the software product is examined by both internal and
external stakeholders to seek their comments and approval.
Code Inspection: It is the most formal kind of review that does static testing to find bugs
and avoid defect seepage into the later stages. It is done by a trained mediator/peer and
is based on rules, checklists, entry and exit criteria. The reviewer should not be the
author of the code.
Design Inspection: Design inspection is done using a checklist that inspects the below
areas of software design:
General requirements and design
Functional and Interface specifications
Conventions
Requirement traceability
Structures and interfaces
Logic
Performance
Error handling and recovery
Testability, extensibility
Coupling and cohesion
Simulation: A simulation is a tool that models a real-life situation in order to virtually
examine the behaviour of the system under study. In cases when the real system cannot
be tested directly, simulators are great sandbox system alternatives.
Functional Testing: A QA technique that validates what the system does without
considering how it does it. This kind of black box testing mainly focuses on testing the
system specifications or features.
Standardization: Standardization plays a crucial role in quality assurance. This
decreases ambiguity and guesswork, thus ensuring quality.
Static Analysis: A software analysis that is done by an automated tool without
executing the program. Software metrics and reverse engineering are some popular
forms of static analysis. In modern teams, static code analysis tools such as SonarQube,
Veracode, etc. are used.
Walkthroughs: A software walkthrough or code walkthrough is a peer review in which
the developer guides the members of the development team through the product; they
raise queries, suggest alternatives, and make comments regarding possible errors,
standard violations, or any other issues.
Unit Testing: This is a White Box Testing technique where complete code coverage is
ensured by executing each independent path, branch, and condition at least once.
Stress Testing: This type of testing is done to check how robust a system is by testing it
under heavy load i.e. beyond normal conditions.
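The unit testing technique described above can be sketched as follows; `grade` is a made-up function with three branches, and the calls exercise each branch at least once, as white box testing requires.

```java
// A made-up function with three branches, plus calls that execute
// each independent branch at least once (white box branch coverage).
public class BranchCoverageExample {

    static String grade(int score) {
        if (score >= 70) {
            return "pass";
        } else if (score >= 50) {
            return "retake";
        } else {
            return "fail";
        }
    }

    public static void main(String[] args) {
        // One input per branch gives full branch coverage of grade().
        System.out.println(grade(80)); // pass
        System.out.println(grade(60)); // retake
        System.out.println(grade(40)); // fail
    }
}
```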
Conclusion
SQA is an umbrella activity that is intertwined throughout the software lifecycle.
Software quality assurance is critical for your software product or service to succeed in
the market and live up to the customer’s expectations.
Software Engineering Institute Capability
Maturity Model (SEICMM)
The Capability Maturity Model (CMM) is a procedure used to develop and refine an
organization's software development process.
CMM was developed and is promoted by the Software Engineering Institute (SEI), a
research and development center sponsored by the U.S. Department of Defense (DoD).
Methods of SEICMM
There are two methods of SEICMM: capability evaluation and software process
assessment.
SEI CMM categorizes software development organizations into the following five maturity
levels. The various levels of SEI CMM have been designed so that it is easy for an
organization to build up its quality system slowly, starting from scratch.
Level 1: Initial
Ad hoc activities characterize a software development organization at this level. Very
few or no processes are defined and followed. Since software production processes
are not defined, different engineers follow their own processes, and as a result
development efforts become chaotic. Therefore, it is also called the chaotic level.
Level 2: Repeatable
At this level, the fundamental project management practices like tracking cost and
schedule are established. Size and cost estimation methods, like function point analysis,
COCOMO, etc. are used.
Level 3: Defined
At this level, the methods for both management and development activities are defined
and documented. There is a common organization-wide understanding of operations,
roles, and responsibilities. The ways through defined, the process and product qualities
are not measured. ISO 9000 goals at achieving this level.
Level 4: Managed
At this level, the focus is on software metrics. Two kinds of metrics are collected.
Product metrics measure the features of the product being developed, such as its size,
reliability, time complexity, understandability, etc.
Process metrics follow the effectiveness of the process being used, such as average
defect correction time, productivity, the average number of defects found per hour
inspection, the average number of failures detected during testing per LOC, etc. The
software process and product quality are measured, and quantitative quality
requirements for the product are met. Various tools like Pareto charts, fishbone
diagrams, etc. are used to measure the product and process quality. The process metrics
are used to analyze if a project performed satisfactorily. Thus, the outcome of process
measurements is used to calculate project performance rather than improve the process.
Level 5: Optimizing
At this phase, process and product metrics are collected. Process and product
measurement data are evaluated for continuous process improvement.
Classes are user-defined data types that act as the blueprint for individual objects, attributes
and methods.
Objects are instances of a class created with specifically defined data. Objects can correspond
to real-world objects or abstract entities. When a class is first defined, the description is the
only object that is defined.
Methods are functions that are defined inside a class that describe the behaviors of an object.
Each method contained in class definitions starts with a reference to an instance object.
Additionally, the subroutines contained in an object are called instance methods. Programmers
use methods for reusability or keeping functionality encapsulated inside one object at a time.
Attributes are defined in the class template and represent the state of an object. Objects will
have data stored in the attributes field. Class attributes belong to the class itself.
Encapsulation. This principle states that all important information is contained inside an object
and only select information is exposed. The implementation and state of each object are
privately held inside a defined class. Other objects do not have access to this class or the
authority to make changes. They are only able to call a list of public functions or methods. This
characteristic of data hiding provides greater program security and avoids unintended data
corruption.
Abstraction. Objects only reveal internal mechanisms that are relevant for the use of other
objects, hiding any unnecessary implementation code. The derived class can have its
functionality extended. This concept can help developers more easily make additional changes
or additions over time.
Inheritance. Classes can reuse code from other classes. Relationships and subclasses between
objects can be assigned, enabling developers to reuse common logic while still maintaining a
unique hierarchy. This property of OOP forces a more thorough data analysis, reduces
development time and ensures a higher level of accuracy.
Polymorphism. Objects are designed to share behaviors and they can take on more than one
form. The program will determine which meaning or usage is necessary for each execution of
that object from a parent class, reducing the need to duplicate code. A child class is then
created, which extends the functionality of the parent class. Polymorphism allows different
types of objects to pass through the same interface.
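The four principles above can be sketched in one short Java example. The Shape/Circle/Square hierarchy is the classic textbook illustration, not from the text: encapsulation via a private field, abstraction via an abstract method, inheritance via subclassing, and polymorphism via the shared `area()` interface.

```java
// Sketch of the four OOP principles using the classic Shape example.
abstract class Shape {
    private final String name;          // Encapsulation: state is private.

    Shape(String name) { this.name = name; }

    String getName() { return name; }   // Only selected information is exposed.

    abstract double area();             // Abstraction: what, not how.
}

class Circle extends Shape {            // Inheritance: reuses Shape's code.
    private final double radius;
    Circle(double radius) { super("circle"); this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}

class Square extends Shape {
    private final double side;
    Square(double side) { super("square"); this.side = side; }
    @Override double area() { return side * side; }
}

public class OopPrinciples {
    public static void main(String[] args) {
        // Polymorphism: different objects pass through the same interface,
        // and the program picks the right area() for each object.
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        for (Shape s : shapes) {
            System.out.println(s.getName() + " area = " + s.area());
        }
    }
}
```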
While Simula is credited as being the first object-oriented programming language, many other
programming languages are used with OOP today. But some programming languages pair with OOP
better than others. For example, programming languages considered pure OOP languages treat
everything as objects. Other programming languages are designed primarily for OOP, but with some
procedural processes included.
Ruby
Scala
JADE
Emerald
Java
Python
C++
PHP
JavaScript
Reusability. Code can be reused through inheritance, meaning a team does not have to write
the same code multiple times.
Productivity. Programmers can construct new programs quicker through the use of multiple
libraries and reusable code.
Interface descriptions. Descriptions of external systems are simple, due to the message
passing techniques that are used for object communication.
Security. Using encapsulation and abstraction, complex code is hidden, software maintenance is
easier and internet protocols are protected.
Flexibility. Polymorphism enables a single function to adapt to the class it is placed in. Different
objects can also pass through the same interface.
Criticism of OOP
The object-oriented programming model has been criticized by developers for multiple reasons. The
largest concern is that OOP overemphasizes the data component of software development and does not
focus enough on computation or algorithms. Additionally, OOP code may be more complicated to write
and take longer to compile.
Functional programming. This includes languages such as Erlang and Scala, which are used for
telecommunications and fault tolerant systems.
Structured or modular programming. This includes languages such as PHP and C#.
Imperative programming. This alternative to OOP focuses on function rather than models and
includes C++ and Java.
Declarative programming. This programming method involves statements on what the task or
desired outcome is but not how to achieve it. Languages include Prolog and Lisp.
Logical programming. This method, which is based mostly in formal logic and uses languages
such as Prolog, contains a set of sentences that express facts or rules about a problem domain.
It focuses on tasks that can benefit from rule-based logical queries.
Most advanced programming languages enable developers to combine models, because they can be
used for different programming methods. For example, JavaScript can be used for OOP and functional
programming.