
Q. What Is An SRS Review? How Is It Conducted? [VIP]

Solution.

SRS is a document that is created by the development team in collaboration with Business Analysts and environment/data teams. Typically, this document, once finalized, will be shared with the QA team via a meeting where a detailed walkthrough is arranged.

Sometimes, for an already existing application, we might not need a formal meeting or someone to guide us through this document; we might already have the necessary information to do this ourselves.

SRS review is nothing but going through the functional requirements specification document and trying to understand what the target application is going to be like.

The formal format and a sample were shared with you all in the previous article. It does
not necessarily mean that all SRSs are going to be documented that way exactly.
Always, the form is secondary to the content.

Some teams will just choose to write a bulleted list, some teams will include use cases,
some teams will include sample screenshots (like the document we had) and some just
describe the details in paragraphs.

Pre-Steps To Software Requirements Specification Review
Step #1) Documents go through multiple revisions, so make sure we have the right
version of the referenced document, the SRS.
Step #2) Establish guidelines on what is expected at the end of the review from each
team member. In other words, decide on what deliverables are expected from this step
– typically, the output of this step is to identify the test scenarios. Test scenarios are
nothing but one line pointers of ‘what to test’ for certain functionality.
Step #3) Also establish guidelines on how this deliverable is to be presented, i.e., the template.
Step #4) Decide on whether each member of the team is going to work on the entire
document or divide it among themselves. It is highly recommended that everyone reads
everything because that will prevent knowledge concentration with certain team
members.
But in case of a huge project, with the SRS documents running close to 1000 pages, the
approach of breaking up the document module wise and assigning to individual team
members is most practical.

Step #5) SRS review also helps in better understanding if there are any specific
prerequisites required for the testing of the software.
Step #6) As a byproduct, a list of queries is compiled: places where some functionality is difficult to understand, where more information needs to be incorporated into the functional requirements, or where mistakes have been made in the SRS.
What do we need to get started?
 The correct version of the SRS document
 Clear instructions on who is going to work on what and how much time they have got.
 A template to create Test Scenarios.
 Other information, such as who to contact in case of a question or who to report a documentation inconsistency to.
Who would provide this information?
Team leads are generally responsible for providing all the items listed in the section
above. However, team members’ inputs are always important for the success of this
entire endeavor.

Team leads often ask: what kind of inputs? Wouldn’t it be better to assign a certain module to someone interested in it than to a team member who is not? Wouldn’t it be nicer to decide on the target date based on the team’s opinion than to thrust a decision on them? Also, for the success of a project, templates are important.

As a general rule, templates have a higher rate of efficiency when they are tailored to the specific team’s convenience and comfort. It should, therefore, be noted that team leads, more than anything, are team members. Getting your team onboard on the day-to-day decisions is crucial for the smooth running of the project.

Is A Template Required For Test Scenarios?


Why a template for test scenarios – isn’t it enough if we just make a list?
It sure is. However, software projects are not ‘one-man’ shows; they involve teamwork. Imagine a team of 4, where each member reviews one module of the software requirements specification. Team member 1 makes a list on a sheet of paper, team member 2 uses an Excel sheet, team member 3 uses Notepad, and team member 4 uses a Word doc. How do we consolidate all the work done for the project at the end of the day?

Also, how can we decide which one is the standard, and how can we say what is right and what is not, if we did not create the rules to begin with?

That is what a template is: a set of guidelines and an agreed format for uniformity and consistency across the entire team.
How to create a template for QA Test Scenarios?
Templates don’t have to be complicated or inflexible.
All they need to be is an efficient mechanism for creating a useful testing artifact, something simple like the one we see below:

The header of this template contains the space required to capture basic information
about the project, the current document, and the referenced document.

The table below will let us create Test Scenarios. The columns included are:

Column #1) Test Scenario ID

Every entity in our testing process has to be uniquely identifiable, so every test scenario has to be assigned an ID. The rules to follow while assigning this ID have to be defined.
For the sake of this article, we are going to follow this naming convention: TS (prefix that stands for Test Scenario), followed by ‘_’, the module name MI (My Info module of the Orange HRM project), followed by ‘_’, and then the subsection (for example, MIM for My Info Module, P for photograph, and so on), followed by a serial number. An example would be: “TS_MI_MIM_01”.
Column #2) Requirement
When we create a test scenario, it helps to be able to map it back to the section of the SRS document it came from. If the requirements have IDs, we can use those. If not, the section numbers or even the page numbers of the SRS document where we identified a testable requirement will do.
Column #3) Test Scenario description
A one-liner specifying ‘what to test’. We would also refer to it as the test objective.
Column #4) Importance
This is to give an idea of how important a certain functionality is for the AUT. Values like high, medium, and low can be assigned to this field. You could also choose a point system, like 1-5, with 5 being the most important and 1 the least important. Whatever values this field can take, they have to be pre-decided.
Column #5) No. of Test cases
A rough estimate of how many individual test cases we might end up writing for that one test scenario. For example, to test the login we include these situations: correct username and password; correct username and wrong password; correct password and wrong username. So, validating the login functionality will result in 3 test cases.
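To make the template concrete, here is a minimal sketch of how these five columns could be captured in a plain, Excel-compatible CSV file using Python. The file name, the second scenario row, and the requirement references are illustrative assumptions; only the column names and the TS_MI_MIM_01 example come from the text above.

import csv

# Columns mirror the template described above; the rows are illustrative only.
COLUMNS = ["Test Scenario ID", "Requirement", "Test Scenario Description",
           "Importance", "No. of Test Cases"]

scenarios = [
    # IDs follow the TS_<module>_<subsection>_<serial> convention from Column #1.
    ["TS_MI_MIM_01", "SRS section 3", "Validate login with valid/invalid credentials", "High", 3],
    ["TS_MI_P_01", "SRS section 3 (photograph)", "Validate uploading an employee photograph", "Medium", 2],
]

with open("test_scenarios.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(COLUMNS)      # header row of the template
    writer.writerows(scenarios)   # one row per test scenario

Keeping the scenarios in a simple sheet like this makes peer review and later consolidation straightforward.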
Note: You can expand this template or remove the fields as you see fit.

For example, you can add “Reviewed by” in the header or remove the date of creation,
etc. Also in the table, you can include a field “Created by” to designate the tester
responsible for a certain test scenario or remove the “No. of Test cases” column. The
choice is yours. Go with what works best for the entire team.
Let us now review our Orange HRM SRS Document and create the Test Scenarios

Section 1 is the purpose of the document. No testable requirements there.

Section 2.1: Project Overview- Audience- no testable requirements there either.

Section 2.2: Hardware and Hosting- This section is talking about how the Orange HRM
site is going to be hosted. Now, is this information important to us testers? The answer
is Yes and No. Yes, because when we test we need to have an environment that is
similar to the real-time environment.

This gives us an idea of how it needs to be. No, because it is not a testable requirement; it is rather a prerequisite for the testing to happen.

Section 3: There is a login screen here and the details of the type of account we need
to have to enter the site. This is a testable requirement. So it needs to be a part of our
Test scenarios.

Please see the test scenarios document where test scenarios for a few sections of the
SRS have been added. For practice, please add the rest of the scenarios in a similar
manner. However, I am going right to section 4 of the document.

Section 4: Aesthetic/HTML Requirements and Guidelines- This section best explains how some requirements might not make sense to the test team at the time of the SRS review, but the team should make a note of them as testable requirements all the same.

How to test them and if we need specific set up/any team’s help to validate it are details
we might not know at this point in time. But making them a part of our testing scope is
the first step to ensure that we do not miss them.

Sample Test Scenarios for OrangeHRM Application:

=> Please refer to and download the Test Scenarios document for more information.

Some Important Observations Regarding SRS Review

#1) No information is to be left uncovered.


#2) Perform feasibility analysis on whether a certain requirement is correct or
not and also if it can be tested or not.
#3) Unless separate performance/security or other specialized test teams exist,
it is our job to make sure that all non-functional requirements are taken
into consideration.
#4) Not all information is targeted at the testers, so it is important to understand
what to note and what not to.
#5) The importance and no. of test cases for a test scenario need not be accurate
and can be filled in with an approximate value or can be left empty.

To sum up, SRS review results in:

 Test Scenarios list


 Review Results – Documentation/Requirement errors found by statically going
through/verifying the SRS document
 A list of questions for better understanding, in case there are any
 A preliminary idea of what the test environment is supposed to be like

 Test scope identification and a rough idea of how many test cases we might end up having, and therefore how much time we need for documentation and eventually execution.

Important points to note:

#1) Test scenarios are not external deliverables (not shared with Business Analysts or
Dev teams) but are important for internal QA consumption. They are our first step
towards a 100% test coverage goal. Test scenarios once complete undergo a peer
review and once that is done, they are all consolidated.
For more details on how QA documents are reviewed, check out the article: How to
Perform Test Documentation Reviews in 6 Simple Steps.

#2) We could use a test management tool like HP ALM or qTest to create the test scenarios. However, in real projects, test scenario creation is a manual activity, and in my opinion it is more convenient done manually. Since this is step 1, we do not need to bring out the big guns yet. Simple Excel sheets are the best way to go about it.
The next step in this series is to work on creating test cases and get further into the test design phase. Before that, we will also look at what test planning is and where it fits into the entire QA project. As always, work with us for the best results.

QA Training Day 3: How to write an SRS document from scratch.


Please keep your questions and comments coming. We appreciate your
readership a ton!

Q. How do Non-functional Requirements add value to software development?

[VIP]

ANSWER

What are the Non-Functional Requirements?


Non-functional requirements are also called quality attributes of the software under development. They describe how the system should work. They are further divided into characteristics of the software such as performance, security, usability, and compatibility. In requirement gathering, the focus tends to be on functional requirements rather than non-functional requirements, so a gap exists between the two types of requirements.
Non-functional requirements add tremendous value to business analysis, yet they are commonly misunderstood. It is important for business stakeholders and clients to clearly explain the requirements and their expectations in measurable terms. If the non-functional requirements are not measurable, they should be revised or rewritten to gain better clarity. For example, user stories help in bridging the gap between developers and the user community in Agile methodology.

Understand Non-Functional Requirements:


Defining non-functional requirements: “Non-Functional Requirements (NFRs) refer to the
criteria that specify the quality of the operation of a system, as opposed to its behaviors, which
are known as its functional requirements. NFRs are the attributes of quality that contribute to the
system’s functionality.”
Some of the most typical non-functional requirements include performance, capacity, scalability,
availability, reliability, maintainability, recoverability, serviceability, security, data integrity,
manageability, and usability.
It is important to focus on getting non-functional requirements right so that the software runs
well and is sustainable over time. These contribute to the success of the software as much as the
functional requirements do so they should not be overlooked. This is because they can
significantly influence user experience.
An example of an NFR would be “how fast does a web page load?”. A page should load in under
3 seconds, so this NFR allows you to lay out certain rules that must be met.
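As a hedged illustration of how such a measurable NFR could be checked, here is a minimal Python sketch; it assumes the third-party requests package is installed and uses a placeholder URL, and it times only the server response, not full page rendering.

import time
import requests  # assumes the third-party 'requests' package is installed

PAGE_URL = "https://example.com/"   # placeholder URL for illustration
MAX_LOAD_SECONDS = 3.0              # the measurable threshold from the NFR above

start = time.perf_counter()
response = requests.get(PAGE_URL, timeout=10)
elapsed = time.perf_counter() - start

# Because the NFR is measurable, pass/fail is unambiguous.
print(f"Status {response.status_code}, responded in {elapsed:.2f}s")
assert elapsed < MAX_LOAD_SECONDS, f"NFR violated: {elapsed:.2f}s exceeds {MAX_LOAD_SECONDS}s"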

How do Non-functional Requirements add value in software development?
It is vital to define the non-functional requirements as they are critical to project success. Under-specifying non-functional requirements will lead to an inadequate system. Over-specifying will call the system’s viability and price into question.
So what exactly are we looking for here? Well, here are four examples of non-functional requirement groups: usability, reliability, performance, and supportability, as well as a few top tips on each one.

Usability:
Prioritize the important functions of the system based on usage patterns.
Frequently used functions should be tested for usability, as should complex and critical
functions. Be sure to create a requirement for this.

Reliability:
Reliability defines the trust in the system that is developed after using it for a period of time. It defines the likelihood of the software working without failure for a given time period.
The number of bugs in the code, hardware failures, and other problems can reduce the reliability of the software.
Your goal should be a long MTBF (mean time between failures). It is defined as the average period of time the system runs before failing.

Create a requirement that data created in the system will be retained for a number of years
without the data being changed by the system.
It’s a good idea to also include requirements that make it easier to monitor system performance.
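A minimal sketch of how MTBF could be computed and compared against a measurable reliability target follows; the uptime figures and the 200-hour threshold are invented purely for illustration.

# Hypothetical uptime records: hours the system ran before each observed failure.
uptime_hours_between_failures = [120.0, 340.5, 95.25, 410.0]

# MTBF = average period the system runs before failing.
mtbf = sum(uptime_hours_between_failures) / len(uptime_hours_between_failures)
print(f"MTBF: {mtbf:.1f} hours")

# A reliability requirement can then be expressed in measurable terms,
# e.g. "MTBF must exceed 200 hours over the observation window" (illustrative).
REQUIRED_MTBF_HOURS = 200
print("Requirement met" if mtbf > REQUIRED_MTBF_HOURS else "Requirement not met")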

Performance:
What should system response times be, as measured from any point, under what circumstances?
Are there specific peak times when the load on the system will be unusually high?
Think of stress periods, for example, at the end of the month or in conjunction with payroll
disbursement.
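As one possible way to probe such peak periods, here is a minimal Python sketch that fires a burst of concurrent requests and reports the slowest response. The endpoint, user count, and threshold are illustrative assumptions, and the requests package is assumed to be installed.

import time
from concurrent.futures import ThreadPoolExecutor
import requests  # assumes the third-party 'requests' package is installed

ENDPOINT = "https://example.com/api/payroll"  # placeholder endpoint
CONCURRENT_USERS = 20                         # illustrative peak-load level
MAX_RESPONSE_SECONDS = 2.0                    # illustrative response-time target

def timed_request(_):
    # Measure how long one request takes under concurrent load.
    start = time.perf_counter()
    requests.get(ENDPOINT, timeout=10)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    timings = list(pool.map(timed_request, range(CONCURRENT_USERS)))

worst = max(timings)
print(f"Slowest of {CONCURRENT_USERS} concurrent requests: {worst:.2f}s")
print("Within target" if worst <= MAX_RESPONSE_SECONDS else "Exceeds target under peak load")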

Supportability:
The system needs to be cost-effective to maintain.
Maintainability requirements may cover diverse levels of documentation, such as system
documentation, as well as test documentation, e.g. which test cases and test plans will
accompany the system.

Conclusion:
The various attributes of Non-functional Requirements defined above are important to evaluate
the qualities of the software under development. ReQtest as a requirements gathering
and requirements management tool can help in implementing the various attributes of Non-
functional Requirements. It improves software’s usability, reliability, supportability, and
performance.

Q. What Is a Decision Tree Used For?


[VIP]
Decision trees allow us to break down information into multiple variables to arrive at a single best decision for a problem.

DECISION TREE COMPONENTS


 A singular node, or “decision,” connecting two or more distinct arcs —
decision branches — that present potential options.
 An event sequence comes next and is represented as a circular “chance
node” that points out potential events that may result from a decision.
 Finally, we call the costs and benefits associated with each branch of a
decision tree “consequences.” The endpoint of a tree is represented by a
triangle, or bar, known as a terminal.
Decision trees must contain all possibilities clearly outlined in a structured
manner in order to be effective, but they must also present multiple
possibilities for data scientists to make collaborative decisions and optimize
business growth.

Decision Trees vs. Random Forest: What’s the Difference?
Random forest algorithms differ from decision trees in their ability to form
several decisions in order to reach a final majority decision.

Decision trees incorporate multiple variables to determine potential outcomes that ultimately allow us to make a single, best decision. Random forest algorithms go a step further and do not rely on a single decision.
Instead, they assemble randomized decisions based on several decisions made
beforehand, thereby basing the final decision on a majority opinion. A random
forest is essentially the outputs of multiple decision trees weighed against each
other to present a single outcome through continuous decision-making. That
said, random forest doesn't necessarily determine the best solution, but
instead introduces more diversity to create a smoother prediction based on the
outcome with the greatest possibility.

WHEN TO USE DECISION TREE
OVER RANDOM FOREST
Random forest is best when multiple pieces of data come from a complex data
set and must be analyzed to generate a final output. We effectively sacrifice
easy interpretability to determine the most recurring output when we weight
virtually limitless inputs against each other. Decision trees are best used when
working with simpler data sets due to easier interpretability and simpler
model training.
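A minimal sketch of the contrast, assuming scikit-learn is available; the iris dataset and the hyperparameters are only illustrative.

# Compare a single decision tree against a random forest (majority vote of many trees).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# One tree: a single chain of decisions, easy to interpret.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A forest: many randomized trees whose majority vote forms the final prediction.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("Decision tree accuracy:", tree.score(X_test, y_test))
print("Random forest accuracy:", forest.score(X_test, y_test))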

What Are the Disadvantages of a Decision Tree?

The main disadvantages of decision trees lie in their tendency to quickly become complicated and overloaded with information.

Decision trees are used to determine logical solutions to complex problems but are ineffective without containing all possible outcomes of a decision. Accordingly, decision trees have a tendency to become loaded with several branches containing many variables, often branching off into separate outcomes entirely. This can lead to an overwhelming amount of data and more confusion than clarity when making decisions.

Decision trees may also lead to issues when using qualitative variables, those
that aren’t numerical in value but rather fit into categories, to make decisions.
Numbers may be assigned to qualitative variables for data analysis uses, but
qualitative data still has the potential to create a staggering number of
branches or may present unclear decision possibilities entirely.

Q. WHAT ARE Coding Standards and Guidelines?

Sol. Different modules specified in the design document are coded in the Coding phase according to the module specification. The main goal of the coding phase is to translate the design document prepared after the design phase into code in a high-level language and then to unit test this code.
Good software development organizations want their programmers to adhere to some well-defined and standard style of coding called coding standards. They usually make their own coding standards and guidelines depending on what suits their organization best and based on the types of software they develop. It is very important for the programmers to maintain the coding standards, otherwise the code will be rejected during code review.
Purpose of Having Coding Standards:
 A coding standard gives a uniform appearance to the code written by different engineers.
 It improves the readability and maintainability of the code and reduces its complexity.
 It helps in code reuse and helps to detect errors easily.
 It promotes sound programming practices and increases the efficiency of the programmers.
Some of the coding standards are given below:
1. Limited use of globals:
These rules specify which types of data can be declared global and which cannot.

2. Standard headers for different modules:

For better understanding and maintenance of the code, the headers of different modules should follow some standard format and information. The header format used in various companies must contain the following:
 Name of the module
 Date of module creation
 Author of the module
 Modification history
 Synopsis of the module about what the module does
 Different functions supported in the module along with their input output
parameters
 Global variables accessed or modified by the module
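A minimal sketch of such a standard module header, written here as a Python module docstring; the module name, dates, author, and function are illustrative, and the fields simply mirror the bullet list above.

"""
Module      : payroll_calculator          (illustrative name)
Created on  : 2024-01-15
Author      : A. Developer
Modification history:
    2024-02-01  A. Developer  Added overtime handling
Synopsis    : Computes monthly net pay for an employee.
Functions   :
    compute_net_pay(gross: float, deductions: float) -> float
Global variables accessed/modified : TAX_RATE (read only)
"""

TAX_RATE = 0.2  # global constant read by this module (illustrative value)

def compute_net_pay(gross: float, deductions: float) -> float:
    """Return net pay after tax and deductions."""
    return gross * (1 - TAX_RATE) - deductions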

3. Naming conventions for local variables, global variables, constants and functions:
Some of the naming conventions are given below:
 Meaningful and understandable variable names help anyone understand the reason for using them.

 Local variables should be named using camel case lettering starting with a small letter (e.g. localData), whereas global variable names should start with a capital letter (e.g. GlobalData). Constant names should be formed using capital letters only (e.g. CONSDATA).
 It is better to avoid the use of digits in variable names.
 Function names should be written in camel case starting with a small letter.
 The name of a function must describe the reason for using the function clearly and briefly.
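A minimal Python sketch following the conventions listed above (note that these mirror the text, not Python's own PEP 8 style); all names are illustrative.

# Constant: capital letters only.
CONSMAXRETRIES = 3

# Global variable: starts with a capital letter.
GlobalData = {}

def fetchEmployeeRecord(employeeId):
    # Function name in camel case, starting with a small letter, describing its purpose.
    localData = GlobalData.get(employeeId)  # local variable in camel case, small first letter
    return localData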

4. Indentation:
Proper indentation is very important to increase the readability of the code. For making the code readable, programmers should use white space properly. Some of the spacing conventions are given below:
 There must be a space after a comma between two function arguments.
 Each nested block should be properly indented and spaced.
 Proper indentation should be there at the beginning and at the end of each block in the program.
 All braces should start from a new line, and the code following the end of the braces should also start from a new line.

5. Error return values and exception handling conventions:

All functions that encounter an error condition should return either a 0 or a 1 to simplify debugging.
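A minimal sketch of such a uniform error-return convention in Python; the function and file path are illustrative, and in idiomatic Python an exception-based convention would usually be preferred.

def appendRecord(record, filePath):
    # Convention from the text: return 0 on success, 1 on any error condition.
    try:
        with open(filePath, "a", encoding="utf-8") as fh:
            fh.write(str(record) + "\n")
        return 0   # success
    except OSError:
        return 1   # error: callers can rely on the same return value everywhere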
Coding guidelines, on the other hand, give some general suggestions regarding the coding style to be followed to improve the understandability and readability of the code. Some of the coding guidelines are given below:

6. Avoid using a coding style that is too difficult to understand:

Code should be easily understandable. Complex code makes maintenance and debugging difficult and expensive.

7. Avoid using an identifier for multiple purposes:

Each variable should be given a descriptive and meaningful name indicating the reason behind using it. This is not possible if an identifier is used for multiple purposes, which can confuse the reader. Moreover, it makes future enhancements more difficult.

8. Code should be well documented:
The code should be properly commented so that it is easy to understand. Comments regarding the statements increase the understandability of the code.

9. Length of functions should not be very large:

Lengthy functions are very difficult to understand. That’s why functions should be small enough to carry out a small piece of work, and lengthy functions should be broken into smaller ones, each completing a small task.

10. Try not to use GOTO statements:

The GOTO statement makes the program unstructured, reduces the understandability of the program, and makes debugging difficult.

Advantages of Coding Guidelines:


 Coding guidelines increase the efficiency of the software and reduce the development time.
 Coding guidelines help in detecting errors in the early phases, so they help to reduce the extra cost incurred by the software project.
 If coding guidelines are maintained properly, the code becomes more readable and understandable, which reduces its complexity.
 They reduce the hidden costs of developing the software.

Q. WHAT IS SOFTWARE QUALITY? [VIP]


Ans.

Software quality is defined as a field of study and practice that describes the desirable attributes of
software products. There are two main approaches to software quality: defect management and
quality attributes.

Q. What are the Three Dimensions of Quality?

The ‘Three Dimensions of Quality’ model encourages thinking about quality from a human
perspective, focusing on customers and their relationship with products. What are the
dimensions and aspects which could affect this relationship?

The customer perception at the centre of the model is influenced by the three dimensions;
three Ds which can be important factors in many relationships, including those between a
person and a product:

 DESIRABLE: The extent to which our needs and wishes are fulfilled. Are we getting
what we want? Is our experience a positive one?
 DEPENDABLE: The extent to which we trust and feel that we can rely on a
product. Do we feel safe and protected? Is it there when we need it?
 DURABLE: The extent to which a product’s value to us endures. If the product
changes, or our needs and desires change; do they still align?

Each of the dimensions is also the nucleus of a collection of quality aspects; some of the
many factors which can influence a person’s impression of a product:

Quality Aspects influencing how Desirable a product might be

Quality Aspects influencing how Dependable a product might be

Quality Aspects influencing how Durable the relationship with a product
might be

Q. Rules for Data Flow Diagram [vip]

Following are the rules which need to be kept in mind while drawing a DFD (Data Flow Diagram).
 Data can not flow between two entities –
Data flow must be from an entity to a process or from a process to an entity. There can be multiple data flows between one entity and a process.

 Data can not flow between two data stores –
Data flow must be from a data store to a process or from a process to a data store. Data can flow from one data store to many processes.

 Data can not flow directly from an entity to a data store –
Data flow from an entity must be processed by a process before going to a data store, and vice versa.

 A process must have at least one input data flow and one output data flow –
Every process must have an input data flow to process the data and an output data flow for the processed data.

 A data store must have at least one input data flow and one output data flow –
Every data store must have an input data flow to store the data and an output data flow for the retrieved data.

 Two data flows can not cross each other.

 All the processes in the system must be linked to at least one data store or another process.

Q. What is Software Quality Assurance? [VIP]

Software Quality Assurance (SQA) is a process that assures that all software
engineering processes, methods, activities, and work items are monitored and comply
with the defined standards. These defined standards could be one or a combination of
anything like ISO 9000, CMMI model, ISO15504, etc.
SQA incorporates all software development processes starting from defining
requirements to coding until release. Its prime goal is to ensure quality.

Software Quality Assurance Plan


Abbreviated as SQAP, the Software Quality Assurance Plan comprises the procedures, techniques, and tools that are employed to make sure that a product or service aligns with the requirements defined in the SRS (Software Requirement Specification).

The plan identifies the SQA responsibilities of the team and lists the areas that need to
be reviewed and audited. It also identifies the SQA work products.

The SQA plan document consists of the following sections:


1. Purpose
2. Reference
3. Software configuration management
4. Problem reporting and corrective action
5. Tools, technologies, and methodologies
6. Code control
7. Records: Collection, maintenance, and retention
8. Testing methodology

SQA Activities
Given below is the list of SQA activities:
#1) Creating an SQA Management Plan
Creating an SQA Management plan involves charting out a blueprint of how SQA will be
carried out in the project with respect to the engineering activities while ensuring that
you corral the right talent/team.

#2) Setting the Checkpoints
The SQA team sets up periodic quality checkpoints to ensure that product development
is on track and shaping up as expected.

#3) Support/Participate in the Software Engineering team’s requirement gathering


Participate in the software engineering process to gather high-quality specifications. For
gathering information, a designer may use techniques such as interviews and FAST
(Functional Analysis System Technique).

Based on the information gathered, the software architects can prepare the project
estimation using techniques such as WBS (Work Breakdown Structure), SLOC (Source Lines of Code), and FP (Function Point) estimation.

#4) Conduct Formal Technical Reviews


An FTR is traditionally used to evaluate the quality and design of the prototype. In this
process, a meeting is conducted with the technical staff to discuss the quality
requirements of the software and the design quality of the prototype. This activity helps
in detecting errors in the early phase of SDLC and reduces rework effort later.

#5) Formulate a Multi-Testing Strategy


The multi-testing strategy employs different types of testing so that the software product
can be tested well from all angles to ensure better quality.

#6) Enforcing Process Adherence


This activity involves coming up with processes and getting cross-functional teams to
buy in on adhering to set-up systems.

This activity is a blend of two sub-activities:


 Process Evaluation: This ensures that the set standards for the project are followed
correctly. Periodically, the process is evaluated to make sure it is working as intended
and if any adjustments need to be made.
 Process Monitoring: Process-related metrics are collected in this step at a designated
time interval and interpreted to understand if the process is maturing as we expect it to.
#7) Controlling Change
This step is essential to ensure that the changes we make are controlled and informed.
Several manual and automated tools are employed to make this happen.

By validating the change requests, evaluating the nature of change, and controlling the
change effect, it is ensured that the software quality is maintained during the
development and maintenance phases.

#8) Measure Change Impact


The QA team actively participates in determining the impact of changes that are brought
about by defect fixing or infrastructure changes, etc. This step has to consider the entire
system and business processes to ensure there are no unexpected side effects.

For this purpose, we use software quality metrics that allow managers and developers
to observe the activities and proposed changes from the beginning till the end of SDLC
and initiate corrective action wherever required.

#9) Performing SQA Audits


The SQA audit inspects the actual SDLC process followed vs. the established
guidelines that were proposed. This is to validate the correctness of the planning and strategic process vs. the actual results. This activity could also expose any non-compliance issues.

#10) Maintaining Records and Reports


It is crucial to keep the necessary documentation related to SQA and share the required
SQA information with the stakeholders. Test results, audit results, review reports,
change request documentation, etc. should be kept current for analysis and historical
reference.

#11) Manage Good Relations


The strength of the QA team lies in its ability to maintain harmony with various cross-functional teams. QA vs. developer conflicts should be kept at a minimum, and we should look at everyone working towards the common goal of a quality product. No one is superior or inferior to anyone else; we are all one team.

Software Quality Assurance Standards


Software development life cycle and particularly, SQA may require conformance to
quality standards such as:

ISO 9000: Based on seven quality management principles that help organizations
ensure that their products or services are aligned with customer needs.


CMMI level: CMMI stands for Capability Maturity Model Integration. This model
originated in software engineering. It can be employed to direct process improvement
throughout a project, department, or entire organization.


An organization is appraised and awarded a maturity level rating (1-5) based on the
type of appraisal.

Test Maturity Model integration (TMMi): Based on CMMi, this model focuses on
maturity levels in software quality management and testing.


As an organization moves to a higher maturity level, it achieves a higher capability for producing high-quality products with fewer defects and more closely meets the business requirements.

Elements of Software Quality Assurance


Below are 10 essential elements of SQA which are listed for your reference:
1. Software Engineering Standards: SQA teams are critical to ensure that we adhere to
the above standards for software engineering teams.
2. Technical Reviews and Audits: Active and passive verification/validation techniques at
every SDLC stage.
3. Software Testing for Quality Control: Testing the software to identify bugs.

4. Error Collection and Analysis: Defect reporting, managing, and analysis to identify
problem areas and failure trends.
5. Metrics and Measurement: SQA employs a variety of checks and measures to gather
information about the effectiveness and quality of the product and processes.
6. Change Management: Actively advocate controlled change and provide strong
processes that limit unanticipated negative outcomes.
7. Vendor Management: Work with contractors and tool vendors to ensure collective
success.
8. Safety/Security Management: SQA is often tasked with exposing vulnerabilities and
bringing attention to them proactively.
9. Risk Management: Risk identification, analysis, and risk mitigation are spearheaded by the SQA teams to aid in informed decision making.
10. Education: Continuous education to stay current with tools, standards, and industry trends.
SQA Techniques
SQA Techniques include:
 Auditing: Auditing is the inspection of the work products and their related information to determine whether a set of standard processes was followed or not.
 Reviewing: A meeting in which the software product is examined by both internal and
external stakeholders to seek their comments and approval.
 Code Inspection: It is the most formal kind of review that does static testing to find bugs and avoid defect seepage into the later stages. It is done by a trained moderator/peer and is based on rules, checklists, and entry and exit criteria. The reviewer should not be the author of the code.
 Design Inspection: Design inspection is done using a checklist that inspects the below
areas of software design:
 General requirements and design
 Functional and Interface specifications
 Conventions
 Requirement traceability
 Structures and interfaces
 Logic
 Performance
 Error handling and recovery
 Testability, extensibility
 Coupling and cohesion
 Simulation: A simulation is a tool that models a real-life situation in order to virtually
examine the behaviour of the system under study. In cases when the real system cannot
be tested directly, simulators are great sandbox system alternatives.
 Functional Testing: It is a QA technique that validates what the system does without
considering how it does it. Black Box testing mainly focuses on testing the system
specifications or features.
 Standardization: Standardization plays a crucial role in quality assurance. This
decreases ambiguity and guesswork, thus ensuring quality.
 Static Analysis: It is a software analysis that is done by an automated tool without executing the program. Software metrics and reverse engineering are some popular forms of static analysis. In newer teams, static code analysis tools such as SonarQube, Veracode, etc. are used.
 Walkthroughs: A software walkthrough or code walkthrough is a peer review where the developer guides the members of the development team through the product so they can raise queries, suggest alternatives, and make comments regarding possible errors, standard violations, or any other issues.
 Unit Testing: This is a White Box Testing technique where complete code coverage is ensured by executing each independent path, branch, and condition at least once (a minimal sketch follows this list).
 Stress Testing: This type of testing is done to check how robust a system is by testing it
under heavy load i.e. beyond normal conditions.
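Here is the minimal unit-testing sketch referred to in the list above: a function with two branches and an error path, plus tests that execute each of them at least once. The function and values are illustrative.

import unittest

def classify_age(age):
    # Two branches plus an error path: all must be executed for full branch coverage.
    if age < 0:
        raise ValueError("age cannot be negative")
    return "minor" if age < 18 else "adult"

class TestClassifyAge(unittest.TestCase):
    def test_minor_branch(self):
        self.assertEqual(classify_age(10), "minor")

    def test_adult_branch(self):
        self.assertEqual(classify_age(30), "adult")

    def test_error_path(self):
        with self.assertRaises(ValueError):
            classify_age(-1)

if __name__ == "__main__":
    unittest.main()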
Conclusion
SQA is an umbrella activity that is intertwined throughout the software lifecycle.
Software quality assurance is critical for your software product or service to succeed in
the market and live up to the customer’s expectations.

Software Engineering Institute Capability
Maturity Model (SEICMM)
The Capability Maturity Model (CMM) is a procedure used to develop and refine an
organization's software development process.

The model defines a five-level evolutionary path of increasingly organized and consistently more mature processes.

CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DoD).

Capability Maturity Model is used as a benchmark to measure the maturity of an organization's software process.

Methods of SEICMM
There are two methods of SEICMM:

Capability Evaluation: Capability evaluation provides a way to assess the software process capability of an organization. The results of a capability evaluation indicate the likely contractor performance if the contractor is awarded the work. Therefore, the results of the software process capability assessment can be used to select a contractor.

Software Process Assessment: Software process assessment is used by an organization to improve its process capability. Thus, this type of evaluation is for purely internal use.

SEI CMM categorized software development industries into the following five maturity levels. The various levels of SEI CMM have been designed so that it is easy for an organization to slowly build its quality system starting from scratch.
Level 1: Initial
Ad hoc activities characterize a software development organization at this level. Very few or no processes are described and followed. Since software production processes are not defined, different engineers follow their own processes and, as a result, development efforts become chaotic. Therefore, it is also called the chaotic level.

Level 2: Repeatable
At this level, the fundamental project management practices like tracking cost and
schedule are established. Size and cost estimation methods, like function point analysis,
COCOMO, etc. are used.

Level 3: Defined
At this level, the methods for both management and development activities are defined and documented. There is a common organization-wide understanding of operations, roles, and responsibilities. However, although the processes are defined, the process and product qualities are not measured. ISO 9000 aims at achieving this level.

Level 4: Managed
At this level, the focus is on software metrics. Two kinds of metrics are collected.

Product metrics measure the characteristics of the product being developed, such as its size, reliability, time complexity, understandability, etc.

Process metrics reflect the effectiveness of the process being used, such as the average defect correction time, productivity, the average number of defects found per hour of inspection, the average number of failures detected during testing per LOC, etc. The software process and product quality are measured, and quantitative quality requirements for the product are met. Various tools like Pareto charts, fishbone diagrams, etc. are used to measure the product and process quality. The process metrics are used to analyze whether a project performed satisfactorily. Thus, the outcome of process measurements is used to evaluate project performance rather than to improve the process.

Level 5: Optimizing
At this level, process and product metrics are collected. Process and product measurement data are evaluated for continuous process improvement.

Key Process Areas (KPA) of a software organization


Except for SEI CMM level 1, each maturity level is characterized by several Key Process Areas (KPAs) that identify the areas an organization should focus on to improve its software process to the next level. Each level has its own focus and corresponding key process areas.
SEI CMM provides a series of key areas on which to focus to take an organization from
one level of maturity to the next. Thus, it provides a method for gradual quality
improvement over various stages. Each step has been carefully designed such that one
step enhances the capability already built up.
What is the structure of object-oriented programming?

The structure, or building blocks, of object-oriented programming include the following:

 Classes are user-defined data types that act as the blueprint for individual objects, attributes
and methods.

 Objects are instances of a class created with specifically defined data. Objects can correspond to real-world objects or an abstract entity. When a class is defined initially, the description is the only object that is defined.

 Methods are functions that are defined inside a class that describe the behaviors of an object.
Each method contained in class definitions starts with a reference to an instance object.
Additionally, the subroutines contained in an object are called instance methods. Programmers
use methods for reusability or keeping functionality encapsulated inside one object at a time.

 Attributes are defined in the class template and represent the state of an object. Objects will
have data stored in the attributes field. Class attributes belong to the class itself.
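A minimal Python sketch mapping these four building blocks onto code; the Employee class and its values are illustrative.

class Employee:                       # class: the blueprint for objects
    company = "OrangeHRM Inc."        # class attribute shared by all instances (illustrative)

    def __init__(self, name, salary):
        self.name = name              # instance attributes hold the object's state
        self.salary = salary

    def annual_salary(self):          # method: behavior defined inside the class
        return self.salary * 12

alice = Employee("Alice", 5000)       # object: an instance created with specific data
print(alice.annual_salary())          # prints 60000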


What are the main principles of OOP?

Object-oriented programming is based on the following principles:

 Encapsulation. This principle states that all important information is contained inside an object
and only select information is exposed. The implementation and state of each object are
privately held inside a defined class. Other objects do not have access to this class or the
authority to make changes. They are only able to call a list of public functions or methods. This
characteristic of data hiding provides greater program security and avoids unintended data
corruption.
 Abstraction. Objects only reveal internal mechanisms that are relevant for the use of other
objects, hiding any unnecessary implementation code. The derived class can have its
functionality extended. This concept can help developers more easily make additional changes
or additions over time.

 Inheritance. Classes can reuse code from other classes. Relationships and subclasses between
objects can be assigned, enabling developers to reuse common logic while still maintaining a
unique hierarchy. This property of OOP forces a more thorough data analysis, reduces
development time and ensures a higher level of accuracy.

 Polymorphism. Objects are designed to share behaviors and they can take on more than one
form. The program will determine which meaning or usage is necessary for each execution of
that object from a parent class, reducing the need to duplicate code. A child class is then
created, which extends the functionality of the parent class. Polymorphism allows different
types of objects to pass through the same interface.
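A minimal Python sketch touching all four principles; the Account classes are illustrative.

class Account:
    def __init__(self, owner, balance):
        self.owner = owner
        self.__balance = balance          # encapsulation: state is kept private

    def deposit(self, amount):            # abstraction: callers use a simple public method,
        if amount > 0:                    # not the internal balance bookkeeping
            self.__balance += amount

    def balance(self):
        return self.__balance

    def describe(self):
        return f"Account({self.owner})"

class SavingsAccount(Account):            # inheritance: reuses Account's code
    def describe(self):                   # polymorphism: same method name, specialized behavior
        return f"SavingsAccount({self.owner})"

accounts = [Account("Ana", 100), SavingsAccount("Ben", 200)]
for acc in accounts:
    print(acc.describe())                 # each object responds through the same interface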

What are examples of object-oriented programming languages?

While Simula is credited as being the first object-oriented programming language, many other
programming languages are used with OOP today. But some programming languages pair with OOP
better than others. For example, programming languages considered pure OOP languages treat
everything as objects. Other programming languages are designed primarily for OOP, but with some
procedural processes included.

For example, popular pure OOP languages include:

 Ruby

 Scala

 JADE

 Emerald

Programming languages designed primarily for OOP include:

 Java

 Python

 C++

Other programming languages that pair with OOP include:

 Visual Basic .NET

 PHP
 JavaScript

What are the benefits of OOP?

Benefits of OOP include:

 Modularity. Encapsulation enables objects to be self-contained, making troubleshooting and collaborative development easier.

 Reusability. Code can be reused through inheritance, meaning a team does not have to write
the same code multiple times.

 Productivity. Programmers can construct new programs quicker through the use of multiple
libraries and reusable code.

 Easily upgradable and scalable. Programmers can implement system functionalities independently.

 Interface descriptions. Descriptions of external systems are simple, due to the message passing techniques that are used for object communication.

 Security. Using encapsulation and abstraction, complex code is hidden, software maintenance is
easier and internet protocols are protected.

 Flexibility. Polymorphism enables a single function to adapt to the class it is placed in. Different
objects can also pass through the same interface.

Criticism of OOP

The object-oriented programming model has been criticized by developers for multiple reasons. The
largest concern is that OOP overemphasizes the data component of software development and does not
focus enough on computation or algorithms. Additionally, OOP code may be more complicated to write
and take longer to compile.

Alternative methods to OOP include:

 Functional programming. This includes languages such as Erlang and Scala, which are used for
telecommunications and fault tolerant systems.

 Structured or modular programming. This includes languages such as PHP and C#.

 Imperative programming. This alternative to OOP focuses on function rather than models and
includes C++ and Java.

 Declarative programming. This programming method involves statements on what the task or
desired outcome is but not how to achieve it. Languages include Prolog and Lisp.
 Logical programming. This method, which is based mostly in formal logic and uses languages
such as Prolog, contains a set of sentences that express facts or rules about a problem domain.
It focuses on tasks that can benefit from rule-based logical queries.

Most advanced programming languages enable developers to combine models, because they can be
used for different programming methods. For example, JavaScript can be used for OOP and functional
programming.
