
UNIT-III

Software Design: Principles, Abstraction, Modularity,

Once the requirements document for the software to be developed is available, the software design phase begins. While the requirement specification activity deals entirely with the problem domain, design is the first phase of transforming the problem into a solution. In the design phase, the customer and business requirements and technical considerations all come together to formulate a product or a system.
The design process comprises a set of principles, concepts and practices, which
allow a software engineer to model the system or product that is to be built. This
model, known as design model, is assessed for quality and reviewed before a
code is generated and tests are conducted. The design model provides details
about software data structures, architecture, interfaces and components which are
required to implement the system. This chapter discusses the design elements
that are required to develop a software design model. It also discusses the design
patterns and various software design notations used to represent a software
design.

Abstraction

Abstraction is a powerful design tool that allows software designers to consider components at an abstract level while neglecting the implementation details of those components. IEEE defines abstraction as ‘a view of a problem that extracts the essential information relevant to a particular purpose and ignores the remainder of the information.’ The concept of abstraction can be used in two ways: as a process and as an entity. As a process, it refers to a mechanism of hiding irrelevant details and representing only the essential features of an item so that one can focus on one important thing at a time. As an entity, it refers to a model or view of an item.
Details of the data elements are not visible to the users of data. Data Abstraction
forms the basis for Object Oriented design approaches.

There are three commonly used abstraction mechanisms in software design, namely, functional abstraction, data abstraction and control abstraction. All these mechanisms allow us to control the complexity of the design process by proceeding from the abstract design model to the concrete design model in a systematic manner.

 Functional abstraction: This involves the use of parameterized subprograms. Functional abstraction can be generalized as collections of subprograms referred to as ‘groups’. Within these groups there exist routines which may be visible or hidden. Visible routines can be used within the containing group as well as within other groups, whereas hidden routines are hidden from other groups and can be used only within the containing group.

 Data abstraction: This involves specifying the data that describes a data object. For example, the data object window encompasses a set of attributes (window type, window dimension) that describe the window object clearly. In this abstraction mechanism, representation and manipulation details are ignored (see the sketch after this list).

 Control abstraction: This states the desired effect without stating the exact mechanism of control. For example, the if and while statements in programming languages (like C and C++) are abstractions of machine-code implementations, which involve conditional instructions. At the architectural design level, this abstraction mechanism permits specification of sequential subprograms and exception handlers without concern for the exact details of implementation.
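
As an illustrative sketch of data abstraction, the hypothetical Window class below (invented for this example, not taken from any particular library) exposes operations while hiding its internal representation from callers:

class Window:
    def __init__(self, window_type, width, height):
        # Internal representation hidden behind the interface.
        self._attrs = {"type": window_type, "width": width, "height": height}

    def resize(self, width, height):
        # Manipulation details are invisible to callers.
        self._attrs["width"] = width
        self._attrs["height"] = height

    def dimensions(self):
        return (self._attrs["width"], self._attrs["height"])

w = Window("dialog", 400, 300)
w.resize(640, 480)
print(w.dimensions())  # (640, 480)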

Modularity
Modularity refers to the division of software into separate modules which are differently named and addressed and are integrated later to obtain the completely functional software. It is the only property that allows a program to be intellectually manageable. Single large programs are difficult to understand and read due to the large number of reference variables, control paths, global variables, etc.

The desirable properties of a modular system are:

o Each module is a well-defined system that can be used with other applications.
o Each module has a single specified objective.
o Modules can be separately compiled and saved in a library.
o Modules should be easier to use than to build.
o Modules are simpler from outside than inside.

Software Architecture,

Software architecture is, simply, the organization of a system. This organization includes all components, how they interact with each other, the environment in which they operate, and the principles used to design the software. In many cases, it can also include the evolution of the software into the future.

Software architecture is designed with a specific mission or missions in mind. That mission has to be accomplished without hindering the missions of other tools or devices. The behavior and structure of the software impact significant decisions, so they need to be appropriately rendered and built for the best possible results.

The software architecture of a system depicts the system’s organization or structure, and provides
an explanation of how it behaves. A system represents the collection of components that accomplish
a specific function or set of functions. In other words, the software architecture provides a sturdy
foundation on which software can be built. A series of architecture decisions and trade-offs impact
quality, performance, maintainability, and overall success of the system. Failing to consider
common problems and long-term consequences can put your system at risk. There are multiple
high-level architecture patterns and principles commonly used in modern systems. These are often
referred to as architectural styles. The architecture of a software system is rarely limited to a single architectural style; instead, a combination of styles often makes up the complete system.

Cohesion and Coupling,

Coupling: Coupling is the measure of the degree of interdependence between modules. Good software will have low coupling.
Types of Coupling:
 Data Coupling: If the dependency between the modules is based on the fact that they communicate by passing only data, then the modules are said to be data coupled. In data coupling, the components are independent of each other and communicate through data. Module communications don’t contain tramp data (see the sketch after this list). Example: a customer billing system.
 Stamp Coupling: In stamp coupling, a complete data structure is passed from one module to another. Therefore, it involves tramp data. It may be necessary due to efficiency factors; this choice is made by an insightful designer, not a lazy programmer.
 Control Coupling: If the modules communicate by passing control
information, then they are said to be control coupled. It can be bad if
parameters indicate completely different behavior and good if
parameters allow factoring and reuse of functionality. Example- sort
function that takes comparison function as an argument.
 External Coupling: In external coupling, the modules depend on other
modules, external to the software being developed or to a particular type
of hardware. Ex- protocol, external file, device format, etc.
 Common Coupling: The modules have shared data such as global
data structures. The changes in global data mean tracing back to all
modules which access that data to evaluate the effect of the change. So
it has got disadvantages like difficulty in reusing modules, reduced
ability to control data accesses, and reduced maintainability.
 Content Coupling: In a content coupling, one module can modify the
data of another module, or control flow is passed from one module to
the other module. This is the worst form of coupling and should be
avoided.
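
As a minimal Python sketch (the function names are illustrative, not from the text above), the contrast between data coupling and control coupling can look like this:

# Data coupling: the caller passes only the data the module needs.
def compute_bill(units_consumed, rate):
    return units_consumed * rate

# Control coupling: a passed-in key/comparison function controls
# how the callee behaves, as in a sort routine.
def sort_records(records, key_func):
    return sorted(records, key=key_func)

bill = compute_bill(120, 5.50)
ordered = sort_records([{"name": "B"}, {"name": "A"}], key_func=lambda r: r["name"])
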
Cohesion: Cohesion is a measure of the degree to which the elements of
the module are functionally related. It is the degree to which all elements
directed towards performing a single task are contained in the component.
Basically, cohesion is the internal glue that keeps the module together. A
good software design will have high cohesion.

Types of Cohesion:
 Functional Cohesion: Every essential element for a single computation
is contained in the component. A functional cohesion performs the task
and functions. It is an ideal situation.
 Sequential Cohesion: An element outputs some data that becomes the
input for other element, i.e., data flow between the parts. It occurs
naturally in functional programming languages.
 Communicational Cohesion: Two elements operate on the same input
data or contribute towards the same output data. Example- update
record in the database and send it to the printer.
 Procedural Cohesion: Elements of procedural cohesion ensure the
order of execution. Actions are still weakly connected and unlikely to be
reusable. Ex- calculate student GPA, print student record, calculate
cumulative GPA, print cumulative GPA.
 Temporal Cohesion: The elements are related by timing. In a module with temporal cohesion, all the tasks must be executed in the same time span, for example, code that initializes all the parts of the system at startup. Lots of different activities occur, all at one time.
 Logical Cohesion: The elements are logically related and not
functionally. Ex- A component reads inputs from tape, disk, and
network. All the code for these functions is in the same component.
Operations are related, but the functions are significantly different.
 Coincidental Cohesion: The elements are unrelated. The elements have no conceptual relationship other than their location in the source code. It is accidental and the worst form of cohesion (see the sketch after this list). Example: printing the next line and reversing the characters of a string in a single component.
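
As a hedged sketch (both functions are invented for illustration), functional cohesion versus coincidental cohesion can be contrasted as follows:

# Functional cohesion: every statement contributes to one task (computing a GPA).
def compute_gpa(grade_points, credits):
    return sum(g * c for g, c in zip(grade_points, credits)) / sum(credits)

# Coincidental cohesion: unrelated actions grouped only by location in the source.
def misc_utilities(line, text):
    print(line)        # prints the next line ...
    return text[::-1]  # ... and reverses a string -- no conceptual relationship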

Refactoring,

Refactoring is the process of restructuring code, while not changing its original
functionality. The goal of refactoring is to improve internal code by making
many small changes without altering the code's external behavior.

Computer programmers and software developers refactor code to improve the design, structure and implementation of software. Refactoring improves code readability and reduces complexity. Refactoring can also help software developers find bugs or vulnerabilities hidden in their software.

The refactoring process features many small changes to a program's source code. One approach to refactoring, for example, is to improve the structure of the source code at one point and then extend the same changes systematically to all applicable references throughout the program. The thought process is that all the small, behavior-preserving changes to a body of code have a cumulative effect: together they improve the code's internal structure while leaving its external behavior unmodified.

Martin Fowler, considered the father of refactoring, consolidated many best practices from across the software development industry into a specific list of refactorings and described methods to implement them in his book Refactoring: Improving the Design of Existing Code.
What is the purpose of refactoring?
Refactoring improves code by making it:

 More efficient by addressing dependencies and complexities.

 More maintainable or reusable by increasing efficiency and readability.

 Cleaner so it is easier to read and understand.

 Easier for software developers to find and fix bugs or vulnerabilities in the
code.
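
A minimal before/after sketch (the functions are hypothetical) of one such small, behavior-preserving change, an "extract function" refactoring:

# Before: summation logic is tangled with the tax calculation.
def total_with_tax(prices):
    total = 0
    for p in prices:
        total += p
    return total * 1.18

# After extracting a helper: same external behavior, clearer structure.
def subtotal(prices):
    return sum(prices)

def total_with_tax_refactored(prices):
    return subtotal(prices) * 1.18

assert total_with_tax([100, 200]) == total_with_tax_refactored([100, 200])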
Structured Analysis,

Structured Analysis is a development method that allows the analyst to understand the system and its activities in a logical way. It is a systematic approach which uses graphical tools to analyze and refine the objectives of an existing system and to develop a new system specification which can be easily understood by the user.
It has following attributes −
 It is graphical: it specifies the presentation of the application.
 It divides the processes so that it gives a clear picture of the system flow.
 It is logical rather than physical, i.e., the elements of the system do not depend on a vendor or hardware.
 It is an approach that works from high-level overviews down to lower-level details.

Structured Analysis Tools


During Structured Analysis, various tools and techniques are used for system
development. They are −

 Data Flow Diagrams
 Data Dictionary
 Decision Trees
 Decision Tables
 Structured English
 Pseudocode
Data Flow Diagrams (DFD) or Bubble Chart
It is a technique developed by Larry Constantine to express the requirements of a system in a graphical form.
 It shows the flow of data between various functions of system and specifies
how the current system is implemented.
 It is an initial stage of design phase that functionally divides the requirement
specifications down to the lowest level of detail.
 Its graphical nature makes it a good communication tool between user and
analyst or analyst and system designer.
 It gives an overview of what data a system processes, what transformations
are performed, what data are stored, what results are produced and where
they flow.
Basic Elements of DFD
DFD is easy to understand and quite effective when the required design is not
clear and the user wants a notational language for communication. However, it
requires a large number of iterations for obtaining the most accurate and complete
solution.
The following table shows the symbols used in designing a DFD and their significance −

Symbol − Meaning
Square − Source or destination of data
Arrow − Data flow
Circle − Process transforming data flow
Open Rectangle − Data store

Types of DFD
DFDs are of two types: Physical DFD and Logical DFD. The following points differentiate a physical DFD from a logical DFD.
 A physical DFD is implementation dependent and shows which functions are performed; a logical DFD is implementation independent and focuses only on the flow of data between processes.
 A physical DFD provides low-level details of hardware, software, files, and people; a logical DFD explains the events of the system and the data required by each event.
 A physical DFD depicts how the current system operates and how the system will be implemented; a logical DFD shows how the business operates, not how the system can be implemented.
Context Diagram
A context diagram helps in understanding the entire system through one DFD which gives an overview of the system. It starts by showing the major processes with little detail and then goes on to give more details of the processes using the top-down approach.
The context diagram of mess management is shown below.
Data Dictionary
A data dictionary is a structured repository of data elements in the system. It stores the descriptions of all DFD data elements, that is, the details and definitions of data flows, data stores, data stored in data stores, and the processes.
A data dictionary improves the communication between the analyst and the user. It
plays an important role in building a database. Most DBMSs have a data dictionary
as a standard feature. For example, refer the following table −

Sr.No. − Data Name − Description − No. of Characters
1 − ISBN − ISBN Number − 10
2 − TITLE − Title − 60
3 − SUB − Book Subjects − 80
4 − ANAME − Author Name − 15

Decision Trees
Decision trees are a method for defining complex relationships by describing decisions and avoiding problems in communication. A decision tree is a diagram that shows alternative actions and conditions within a horizontal tree framework. Thus, it depicts which conditions to consider first, second, and so on. Decision trees depict the relationship of each condition and their permissible actions. A square node indicates an action and a circle indicates a condition. They force analysts to consider the sequence of decisions and to identify the actual decision that must be made.

The major limitation of a decision tree is that its format lacks information about what other combinations of conditions can be tested. It is a single representation of the relationships between conditions and actions.
For example, refer the following decision tree −

Decision Tables
Decision tables are a method of describing complex logical relationships in a precise manner which is easily understandable.
 They are useful in situations where the resulting actions depend on the occurrence of one or several combinations of independent conditions.
 A decision table is a matrix containing rows and columns for defining a problem and the actions.
Components of a Decision Table
 Condition Stub − It is in the upper left quadrant which lists all the condition
to be checked.
 Action Stub − It is in the lower left quadrant which outlines all the action to
be carried out to meet such condition.
 Condition Entry − It is in upper right quadrant which provides answers to
questions asked in condition stub quadrant.
 Action Entry − It is in lower right quadrant which indicates the appropriate
action resulting from the answers to the conditions in the condition entry
quadrant.
The entries in a decision table are given by decision rules, which define the relationships between combinations of conditions and courses of action. In the rules section:

 Y shows the existence of a condition.
 N represents a condition which is not satisfied.
 A blank against an action states that it is to be ignored.
 X (or a check mark) against an action states that it is to be carried out.
For example, refer the following table −

CONDITIONS                        Rule 1   Rule 2   Rule 3   Rule 4

Advance payment made                Y        N        N        N

Purchase amount >= Rs 10,000/-      -        Y        Y        N

Regular customer                    -        Y        N        -

ACTIONS

Give 5% discount                    X        X        -        -

Give no discount                    -        -        X        X

Structured English
Structured English is derived from structured programming languages and gives a more understandable and precise description of a process. It is based on procedural logic that uses constructs and imperative sentences designed to perform an operation for an action.
 It is best used when sequences and loops in a program must be considered
and the problem needs sequences of actions with decisions.
 It does not have strict syntax rule. It expresses all logic in terms of sequential
decision structures and iterations.
For example, see the following sequence of actions −
if customer pays advance
then
    Give 5% Discount
else
    if purchase amount >= 10,000
    then
        if the customer is a regular customer
        then Give 5% Discount
        else No Discount
        end if
    else No Discount
    end if
end if

Pseudocode
Pseudocode does not conform to any programming language and expresses logic in plain English.
 It may specify the physical programming logic without actual coding during and after the physical design.
 It is used in conjunction with structured programming.
 It replaces the flowcharts of a program.
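
For example, an illustrative pseudocode fragment for the discount logic used earlier might read:

READ advance_payment, purchase_amount, regular_customer
IF advance payment is made THEN
    SET discount TO 5%
ELSE IF purchase_amount >= 10,000 AND customer is regular THEN
    SET discount TO 5%
ELSE
    SET discount TO 0
END IF
PRINT discount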

Evolution of Object Models.

Object Modeling Technique (OMT) is a real-world-based modeling approach for software modeling and design. It was developed primarily as a method to develop object-oriented systems and to support object-oriented programming. It describes the static structure of the system. The Object Modeling Technique is easy to draw and use. It is used in many applications like telecommunications, transportation, compilers, etc., and in many real-world problems. OMT is one of the most popular object-oriented development techniques in use today. OMT was developed by James Rumbaugh.

Purpose of Object Modeling Technique:

 To test physical entities before constructing them.
 To make communication with customers easier.
 To present information in an alternative way, i.e., visualization.
 To reduce the complexity of software.
 To solve real-world problems.
Object Modeling Technique’s Models:
There are three main types of models that have been proposed by OMT:
1. Object Model:

The Object Model encompasses the principles of abstraction, encapsulation, modularity, hierarchy, typing, concurrency and persistence. The Object Model basically emphasizes the object and the class. The main concepts of the Object Model are classes and their associations with attributes. Predefined relationships in the object model are aggregation and generalization (multiple inheritance).
2. Dynamic Model:

The Dynamic Model involves states, events and state diagrams (transition diagrams) of the model. The main concepts of the Dynamic Model are states, transitions between states, and events to trigger the transitions. Predefined relationships in the dynamic model are aggregation (concurrency) and generalization.
3. Functional Model:

The Functional Model focuses on how data flows, where data is stored, and the different processes. The main concepts of the Functional Model are data, data flow, data stores, processes and actors. The Functional Model in OMT describes the whole set of processes and actions with the help of data flow diagrams (DFDs).

Unified Modeling Language: Introduction,

Unified Modeling Language (UML) is a general-purpose modelling language. The main aim of UML is to define a standard way to visualize the way a system has been designed. It is quite similar to blueprints used in other fields of engineering. UML is not a programming language; rather, it is a visual language. We use UML diagrams to portray the behavior and structure of a system. UML helps software engineers, businessmen and system architects with modelling, design and analysis. The Object Management Group (OMG) adopted the Unified Modelling Language as a standard in 1997, and it has been managed by OMG ever since. The International Organization for Standardization (ISO) published UML as an approved standard in 2005. UML has been revised over the years and is reviewed periodically.
Views and Diagrams,

There are five different views that the UML aims to visualize through different
modeling diagrams. These five views are:

1. User's View
2. Structural Views
3. Behavioral Views
4. Environmental View
5. Implementation View

Now, these views just provide the thinking methodology and the expectations (that people have formed of the software) of different groups of people. However, we still need to document the system requirements diagrammatically, and this is done through 9 different types of UML diagrams. These diagrams are divided into different categories, and these categories are nothing other than the views mentioned earlier.

The different software diagrams according to the views they implement are:

1) User's view
This contains the diagrams in which the user's part of interaction with the software
is defined. No internal working of the software is defined in this model. The
diagrams contained in this view are:

 Use case Diagram

2) Structural view
In the structural view, only the structure of the model is explained. This gives an
estimate of what the software consists of. However, internal working is still not
defined in this model. The diagram that this view includes are:

 Class Diagrams
 Object Diagrams

3) Behavioral view
The behavioral view contains the diagrams which explain the behavior of the
software. These diagrams are:

 Sequence Diagram
 Collaboration Diagram
 State chart Diagram
 Activity Diagram

4) Environmental view
The environmental view contains the diagram which explains the after deployment
behavior of the software model. This diagram usually explains the user interactions
and software effects on the system. The diagrams that the environmental model
contain are:

 Deployment diagram

5) Implementation view
The implementation view consists of the diagrams which represent the implementation part of the software. This view is related to the developer's view. It is the only view in which the internal workflow of the software is defined. The diagram that this view contains is as follows:

 Component Diagram

UML Tools (VISIO)

Visio is a diagramming software that is part of the Microsoft family. It is helpful in drawing building plans, floor charts, data flow diagrams, process flow diagrams, business process models, swimlane diagrams, and many more.

Features:

o It connects diagrams and flowcharts to real-time data.
o Since it is platform-independent, it can be accessed from anywhere.

Download link: https://products.office.com/en-in/visio/flowchart-software

Lucidchart,

Lucidchart is a tool with which users draw diagrams and charts. It provides a collaborative platform for drawing diagrams. It is an online tool and a paid service, usable with a single sign-up, and it is user-friendly. It contains all the shapes needed for drawing any UML diagram and also provides basic templates for all diagrams, so any diagram can be drawn easily with the help of those templates. It also helps business people plan their events, so it is very helpful for data visualization, diagramming, etc. It offers a free trial option but allows only a few frames and shapes for free-trial users: free users may use only 60 shapes per diagram. It does not support enterprise browsers. This tool is helpful not only for drawing UML diagrams but also for any project design and project management.
Gliffy.

Gliffy was founded by Chris Kohlhardt and Clint Dickson in 2005. Gliffy is an online diagramming tool that allows any team to share ideas visually. The main advantage of Gliffy is its drag-and-drop feature, which makes drawing easy: we can effortlessly drag and drop shapes and use the available templates, which eases the process of drawing diagrams. Using it we can draw wireframes for apps and design projects. In Gliffy we can draw diagrams with ease, share documents with anyone we want, collaborate instantly, and import and export documents easily. Another advantage of Gliffy is that it is cloud-based and does not require users to download it. The supported export formats in Gliffy are PNG, JPG, SVG, etc. We can also choose the size, such as fit to max size, screen size, medium, large, or extra-large.
UNIT-IV

Software Testing: Software Testing

Software testing is a process of identifying the correctness of software by considering all its attributes (reliability, scalability, portability, reusability, usability) and evaluating the execution of software components to find software bugs, errors or defects.

Functional and Non- Functional Testing:

Functional Testing:

Functional testing is a type of software testing in which the system is tested against the functional requirements and specifications. Functional testing ensures that the requirements or specifications are properly satisfied by the application. This type of testing is particularly concerned with the result of processing. It focuses on simulating actual system usage but does not make assumptions about system structure.

It is basically defined as a type of testing which verifies that each function of the software application works in conformance with the requirements and specification. This testing is not concerned with the source code of the application. Each functionality of the software application is tested by providing appropriate test input, capturing the actual output, and comparing the actual output with the expected output.
Non-functional Testing:

Non-functional testing is a type of software testing that is performed to verify the non-functional requirements of the application. It verifies whether the behaviour of the system is as per the requirement or not. It tests all the aspects which are not tested in functional testing.

Non-functional testing is defined as a type of software testing to check the non-functional aspects of a software application. It is designed to test the readiness of a system per non-functional parameters which are never addressed by functional testing. Non-functional testing is as important as functional testing.
Below is the difference between functional and non-functional testing:

1. Functional testing verifies the operations and actions of an application; non-functional testing verifies the behavior of an application.
2. Functional testing is based on the requirements of the customer; non-functional testing is based on the expectations of the customer.
3. Functional testing helps to enhance the behavior of the application; non-functional testing helps to improve its performance.
4. Functional testing is easy to execute manually; non-functional testing is hard to execute manually.
5. Functional testing tests what the product does; non-functional testing describes how the product works.
6. Functional testing is based on the business requirements; non-functional testing is based on the performance requirements.

White Box and Black Box Testing,

1. Black Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. Only the external design and structure are tested.

2. White Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester. The implementation and impact of the code are tested.

Differences between Black Box Testing vs White Box Testing:

1. Black box testing is a way of software testing in which the internal structure, program or code is hidden and nothing is known about it; white box testing is a way of testing in which the tester has knowledge of the internal structure, code or program of the software.
2. Implementation of code is not needed for black box testing; code implementation is necessary for white box testing.
3. Black box testing is mostly done by software testers; white box testing is mostly done by software developers.
4. Black box testing requires no knowledge of implementation; white box testing requires knowledge of implementation.
5. Black box testing can be referred to as outer or external software testing; white box testing is the inner or internal software testing.
6. Black box testing is a functional test of the software; white box testing is a structural test of the software.
7. Black box testing can be initiated from the requirement specification document; white box testing is started after the detailed design document.
8. No knowledge of programming is required for black box testing; for white box testing it is mandatory to have knowledge of programming.
9. Black box testing is behavior testing of the software; white box testing is logic testing of the software.
10. Black box testing is applicable to the higher levels of software testing; white box testing is generally applicable to the lower levels.
11. Black box testing is also called closed-box testing; white box testing is also called clear-box testing.
12. Black box testing is the least time consuming; white box testing is the most time consuming.
13. Black box testing is not suitable or preferred for algorithm testing; white box testing is suitable for algorithm testing.
14. Black box testing can be done by trial-and-error methods; in white box testing, data domains along with inner or internal boundaries can be better tested.

Static Testing Strategies: Static,

Static testing is a verification process used to test the application without executing its code, and it is a cost-effective process. To avoid errors, we perform static testing in the initial stage of development, because it is easier there to identify the sources of errors, and they can be fixed easily. In other words, static testing can be done manually or with the help of tools to improve the quality of the application by finding errors at an early stage of development; this is also called the verification process.

Structural testing

Structural testing, also known as glass box testing or white box testing, is an approach where the tests are derived from knowledge of the software's structure or internal implementation. Other names for structural testing include clear box testing, open box testing, logic-driven testing and path-driven testing.

Structural Testing Techniques:


 Statement Coverage - This technique is aimed at exercising all
programming statements with minimal tests.
 Branch Coverage - This technique is running a series of tests to ensure that
all branches are tested at least once.
 Path Coverage - This technique corresponds to testing all possible paths
which means that each statement and branch are covered.

Desk Checking,
Desk checking is the process of manually reviewing the source
code of a program. It involves reading through the functions within
the code and manually testing them, often with
multiple input values. Developers may desk check their code
before releasing a software program to make sure
the algorithms are functioning efficiently and correctly.
The term "desk checking" refers to the manual approach of
reviewing source code (sitting at a desk), rather than running it
through a debugger or another automated process. In some
cases, a programmer may even use a pencil and paper to record
the process and output of functions within a program. For
example, the developer may track the value of one or
more variables in a function from beginning to end. Manually
going through the code line-by-line may help a programmer catch
improper logic or inefficiencies that a software debugger would
not.
Code Walk Through,

Code walkthroughs are a form of peer review. The author of the code
typically leads a meeting attended by other members of the team.

A few members of the team are usually given a copy of the code a couple
of days before the meeting. The author presents the document under
review while the attending members run test cases over the code by hand.
Team members may also ask the author any relevant questions.

All attending members share and discuss their own findings with the
author.

The purpose
Code walkthrough is an informal analysis technique that aims to identify
any defects within the code.

This also allows all members of the team to gain an understanding of the
code, and thus enables cooperation and collaboration between members.

Advantages
 Incorporates multiple perspectives
 Fosters communication between the team
 Increases the knowledge of the entire team related to the code in
question

Disadvantages
 Areas where questions are not raised may go unnoticed
 Lack of diversity, as the author leads the meeting and everyone
merely verifies if what is being said matches the code
 Meetings can be time-consuming

Beta Testing,

Beta testing is a type of user acceptance testing and one of the most crucial forms of testing, performed before the release of the software. Beta testing is a type of field test performed at the end of the software testing life cycle. It can be considered external user acceptance testing. Real users perform this testing, and it is executed after alpha testing. In beta testing, the new version is released to a limited audience to check its accessibility, usability, functionality, and more.

o Beta testing is the last phase of the testing, which is carried out at the
client's or customer's site.

Stress Testing,

Stress testing is a non-functional testing technique that is performed as part of performance testing. During stress testing, the system is monitored after subjecting it to overload, to ensure that the system can sustain the stress. The recovery of the system from such a phase (after stress) is very critical, as such overload is highly likely to happen in a production environment.

Reasons for conducting Stress Testing:


 It allows the test team to monitor system performance during failures.
 To verify whether the system has saved the data before crashing or not.
 To verify whether the system prints meaningful error messages while crashing or prints random exceptions.
 To verify that unexpected failures do not cause security issues.

Code Inspection,

Code inspection is a type of static testing which aims at reviewing the software code and examining it for any errors. It helps reduce the ratio of defect multiplication and avoids later-stage error detection by simplifying all the initial error detection processes. This code inspection comes under the review process of any application.
How does it work?

 Moderator, Reader, Recorder, and Author are the key members of an inspection team.
 Related documents are provided to the inspection team, which then plans the inspection meeting and coordinates with the inspection team members.
 If the inspection team is not aware of the project, the author provides an overview of the project and the code to the inspection team members.
 Each inspector then performs the code inspection by following inspection checklists.
 After completion of the code inspection, a meeting is conducted with all team members to analyze the reviewed code.

Purpose of code inspection:


1. It checks for any error that is present in the software code.
2. It identifies any required process improvement.
3. It checks whether the coding standard is followed or not.
4. It involves peer examination of codes.
5. It documents the defects in the software code.

Code Coverage,

Code coverage is a measure used in software testing. It describes the degree to which the
source code of a program has been tested. It is a form of testing that inspects the code directly
and is therefore a form of white box testing.[1] In time, the use of code coverage has been
extended to the field of digital hardware, the contemporary design methodology of which relies
on hardware description languages (HDLs). Code coverage was among the first methods
invented for systematic software testing.

Basic coverage criteria


There are a number of coverage criteria, the main ones being:

 Function coverage - Has each function (or subroutine) in the program been called?
 Statement coverage - Has each node in the program been executed?
 Decision coverage (not the same as branch coverage - see Position Paper CAST10.) -
Has every edge in the program been executed? For instance, have the requirements of
each branch of each control structure (such as in IF and CASE statements) been met as
well as not met?
 Condition coverage (or predicate coverage) - Has each boolean sub-expression
evaluated both to true and false? This does not necessarily imply decision coverage.
 Condition/decision coverage - Both decision and condition coverage should be satisfied.
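
As a small illustrative sketch (the function is invented for this purpose), the difference between statement coverage and decision coverage:

def absolute(x):
    if x < 0:
        x = -x
    return x

# absolute(-5) alone executes every statement (full statement coverage),
# but decision coverage also needs absolute(3), where the if-condition
# evaluates to false and the branch is not taken.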

Code Complexity,

The complexity of a given piece of code has to do with just how complicated and
unwieldy it is for a developer to understand. This idea makes intuitive sense. For
example, the picture at the top of the article is way more complex than, say, a
picture of a single rose on a white background.

In particular, the codebase's size can make it difficult for anyone to wrap their
heads around exactly what's going on. This becomes even more of a problem
when the code begins to have multiple paths through conditionals, has several
dependencies, or is coupled with other parts of the codebase.

Just think about the following two example functions:

a)

def squared(x):
    x2 = x * x
    return x2

b)

def squared_conditional(lst):
    result = []
    for x in lst:
        if x > 3:
            x2 = x * x
        else:
            x2 = x * x * x
        result.append(x2)
    return result

Just by looking at the two functions in a) and b), you can easily see that b) is
more complex because there's a lot more going on. In the second example, there
are far more lines, there's a for loop, and there's an if-else statement that makes it
more complex.

Statement Coverage,

Statement coverage is one of the most widely used software testing techniques. It comes under white box testing.

The statement coverage technique is used to design white box test cases. It involves executing all statements of the source code at least once, and it is used to calculate the number of statements executed out of the total statements present in the source code.

Statement coverage derives scenario of test cases under the white box testing
process which is based upon the structure of the code.

In white box testing, concentration of the tester is on the working of internal source
code and flow chart or flow graph of the code.
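
A hedged worked example (the function and counts are illustrative): statement coverage is the number of executed statements divided by the total number of statements.

def apply_discount(amount, is_member):
    discount = 0                  # statement 1
    if is_member:                 # statement 2
        discount = amount * 0.05  # statement 3
    return amount - discount      # statement 4

# apply_discount(100, True) executes 4 of 4 statements: 100% statement coverage.
# apply_discount(100, False) executes 3 of 4 statements: 75% statement coverage.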

Path Testing
Path Testing is a structural testing method based on the source code or algorithm
and NOT based on the specifications. It can be applied at different levels of
granularity.

Path Testing Assumptions:


 The Specifications are Accurate
 The Data is defined and accessed properly
 There are no defects that exist in the system other than those that affect
control flow

Path Testing Techniques:


 Control Flow Graph (CFG) - The Program is converted into Flow graphs by
representing the code into nodes, regions and edges.
 Decision to Decision path (D-D) - The CFG can be broken into various
Decision to Decision paths and then collapsed into individual nodes.
 Independent (basis) paths - Independent path is a path through a DD-path
graph which cannot be reproduced from other paths by other methods.

Condition Testing

Condition coverage is also known as predicate coverage; it requires that each Boolean sub-expression has been evaluated to both TRUE and FALSE.

Example
if ((A || B) && C)
{
<< Few Statements >>
}
else
{
<< Few Statements >>
}
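
As a sketch, condition coverage for the decision above can be achieved with two hypothetical test cases, since each atomic condition must take both truth values: Test 1 with A = true, B = false, C = true, and Test 2 with A = false, B = true, C = false. Across the two tests, A, B and C each evaluate to both TRUE and FALSE. Note that condition coverage does not by itself guarantee decision coverage in general, although here the whole decision happens to evaluate true in Test 1 and false in Test 2.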

Function Coverage,

Function coverage measures whether each function (or subroutine) in the program has been called at least once during testing, as listed among the basic coverage criteria above.

Cyclomatic Complexity,

Cyclomatic complexity is a source code complexity measurement that correlates with the number of coding errors. It is calculated by developing a control flow graph of the code and measures the number of linearly independent paths through a program module. The lower a program's cyclomatic complexity, the lower the risk in modifying it and the easier it is to understand. It can be represented using the formula:

Cyclomatic complexity = E - N + 2*P

where
E = number of edges in the flow graph,
N = number of nodes in the flow graph,
P = number of connected components (P = 1 for a single program or module).
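
As a worked sketch (the function is illustrative, and the counts refer to its flow graph):

def sign(x):
    if x >= 0:
        return "non-negative"
    return "negative"

# Flow graph: N = 4 nodes, E = 4 edges, P = 1 connected component.
# Cyclomatic complexity = E - N + 2*P = 4 - 4 + 2 = 2,
# matching the two linearly independent paths (condition true / false).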

Compatibility,

Compatibility testing is non-functional testing performed to ensure customer satisfaction. It determines whether your software application or product is proficient enough to run on different browsers, databases, hardware, operating systems, mobile devices, and networks.

The application could also be impacted by different versions, resolutions, internet speeds, configurations, etc. Hence it is important to test the application in all possible manners to reduce failures and avoid the embarrassment of bug leakage. As a non-functional test, compatibility testing endorses that the application runs properly across different browsers, versions, operating systems, and networks.

Compatibility tests should always be performed in a real environment instead of a virtual environment. Test the compatibility of the application with different browsers and operating systems to guarantee 100% coverage.

Types Of Software Compatibility Testing


 Browser compatibility testing
 Hardware
 Networks
 Mobile Devices
 Operating System
 Versions

Browsers
This is the most popular kind of compatibility testing. It checks the compatibility of the software application on different browsers like Chrome, Firefox, Internet Explorer, Safari, Opera, etc.
Hardware
It is to check the application/ software compatibility with the different hardware
configurations.

Network
It is to check the application in a different network like 3G, WIFI, etc.

Mobile Devices
It is to check if the application is compatible with mobile devices and their platforms
like android, iOS, windows, etc.

Operating Systems
It is to check if the application is compatible with different Operating Systems like
Windows, Linux, Mac, etc.

Versions
It is important to test software applications in different versions of the software. There
are two different types of version inspection.

Backward Compatibility Testing: testing of the application or software in old or previous versions. It is also known as being downward compatible.
Forward Compatibility Testing: testing of the application or software in new or upcoming versions. It is also known as being forward compatible.
Integration,

Integration testing is the second level of the software testing process and comes after unit testing. In this testing, units or individual components of the software are tested as a group. The focus of the integration testing level is to expose defects that arise at the time of interaction between integrated components or units.

Unit testing uses modules for testing purpose, and these modules are combined
and tested in integration testing. The Software is developed with a number of
software modules that are coded by different coders or programmers. The goal of
integration testing is to check the correctness of communication among all the
modules.
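
A minimal sketch using Python's unittest (the module and function names are hypothetical): two units are exercised together to check the correctness of their communication.

import unittest

# Two hypothetical units developed separately:
def compute_tax(amount):
    return round(amount * 0.18, 2)

def build_invoice(amount):
    tax = compute_tax(amount)  # integration point between the two units
    return {"amount": amount, "tax": tax, "total": amount + tax}

class InvoiceIntegrationTest(unittest.TestCase):
    def test_units_communicate_correctly(self):
        invoice = build_invoice(100.0)
        self.assertEqual(invoice["tax"], 18.0)
        self.assertEqual(invoice["total"], 118.0)

if __name__ == "__main__":
    unittest.main()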

Acceptability,

Acceptance testing is formal testing based on user requirements and function processing. It determines whether the software conforms to the specified requirements and user requirements or not. It is conducted as a kind of black box testing in which the required number of users test the acceptance level of the system. It is the fourth and last level of software testing.

User acceptance testing (UAT) is a type of testing which is done by the customer before accepting the final product. Generally, UAT is done by the customer (domain expert) for their satisfaction, to check whether the application works according to the given business scenarios and real-time scenarios.

In this, we concentrate only on those features and scenarios which are regularly
used by the customer or mostly user scenarios for the business or those scenarios
which are used daily by the end-user or the customer.

Testing Tools

Selenium,

Selenium WebDriver is a free, open-source framework that provides a common application programming interface (API) for browser automation. Ideally, modern web browsers should all render a web application in the same way. However, each browser has its own rendering engine and handles HTML a little differently, which is why testing is needed to ensure that an application performs consistently across browsers and devices. The same browser compatibility issues that affect web applications could also affect automated web tests. But automated tests that use the Selenium client API can run against any browser with a WebDriver-compliant driver, including Chrome, Safari, Internet Explorer, Microsoft Edge, and Firefox.

Selenium WebDriver can run on Windows, Linux and macOS platforms. Tests can be written for Selenium WebDriver in any of the programming languages supported by the Selenium project, including Java, C#, Ruby, Python, and JavaScript. These programming languages communicate with Selenium WebDriver by calling methods in the Selenium client API.
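
A minimal illustrative sketch in Python (assuming a ChromeDriver is installed and on the PATH; the URL and assertion are placeholders):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()            # the same test could target Firefox(), Edge(), ...
try:
    driver.get("https://example.com")  # placeholder URL
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert "Example" in heading.text   # a simple cross-browser functional check
finally:
    driver.quit()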
Ranorex,

Definition: Ranorex is a powerful tool for test automation. It is a GUI test automation framework used for testing web-based, desktop, and mobile applications. Ranorex does not have its own scripting language to automate applications; it uses standard programming languages such as VB.NET and C#.

Description: Ranorex can automate any desktop, web-based, or mobile application. It supports many technologies like Silverlight, .NET, WinForms, Java, SAP, WPF, HTML5, Flash, Flex, Windows Apps (native/hybrid), iOS, and Android.

Ranorex has a good Record, Replay and Edit User Performed Actions with the
Object-Based Capture/Replay Editor, which means that the Ranorex Recorder
offers a simple approach to create automated test steps for web, mobile
(native and WAP) and desktop applications.

watir

Watir (Web Application Testing in Ruby), pronounced as "water", is an open-source tool developed using Ruby which helps in automating a web application no matter which language the application is written in. The browsers supported are Internet Explorer, Firefox, Chrome, Safari, and Edge. Watir is available as a RubyGems gem for installation.

Features of Watir

Watir is rich in features, as discussed below −

Locating web elements − There are different ways you can locate web elements rendered inside the browser. The ones mostly used are id, class, tag name, custom attributes, label, etc.
Taking Screenshots − Watir allows you to take screenshot of the testing done as
and when required. This helps to keep track of the intermediate testing.
Page Performance − You can easily measure page performance using the
performance object which has properties like, performance.navigation,
performance.timing, performance.memory and performance.timeOrigin. These
details are obtained when you connect to the browser.
Page Objects − Page object in Watir will help us to reuse the code in the form of
classes. Using this feature, we can automate our app without having to duplicate
any code and also make it manageable.
Downloads − With Watir, it is easy to test file download for UI or website.
Alerts − Watir provides easy to use APIs to test alerts popup in your UI or website.
Headless Testing − Using headless testing, the details are obtained in the
command line without having to open the browser. This helps to execute UI test
cases at the command line.

Advantages of Using Watir


Watir offers the following advantages −
 Watir is an open source tool and very easy to use.
 Watir is developed in Ruby and any web application that works in a browser
can be easily automated using watir.
 All the latest browsers are supported in Watir making it easy for testing.
 Watir has inbuilt libraries to test page-performance, alerts, iframes test,
browser windows, take screenshots etc.

Disadvantages of Watir
Like any other software, Watir also has its limitations −
 Watir is supported only for Ruby test framework and it cannot be used with
any other testing frameworks.
 Mobile testing using Watir is not enhanced and desktop browsers are
mimicked to behave like mobile browsers instead of acting as real time
devices.
UNIT-V

Software Quality Assurance: SQA Concepts,

Software Quality Assurance (SQA) is simply a way to assure quality in the software. It is the set of activities which ensure that processes, procedures as well as standards are suitable for the project and implemented correctly. Software Quality Assurance is a process which works parallel to the development of software. It focuses on improving the process of development of software so that problems can be prevented before they become major issues. Software Quality Assurance is a kind of umbrella activity that is applied throughout the software process.
Software Quality Assurance has:
1. A quality management approach
2. Formal technical reviews
3. Multi testing strategy
4. Effective software engineering technology
5. Measurement and reporting mechanism

Garvin’s Quality Dimensions,

David Garvin suggests that quality should be considered from a multidimensional viewpoint that begins with an assessment of conformance and terminates with a transcendental (aesthetic) view. Although Garvin's eight dimensions of quality were not developed specifically for software, they can be applied when software quality is considered. The eight dimensions of product quality management can be used at a strategic level to analyze quality characteristics. The concept was defined by David A. Garvin, formerly the C. Roland Christensen Professor of Business Administration at Harvard Business School (died in 2017). Some of the dimensions are mutually reinforcing, whereas others are not; improvement in one may come at the expense of others. Understanding the trade-offs desired by customers among these dimensions can help build a competitive advantage.

McCall’s Quality Factors,

This model classifies all software requirements into 11 software quality factors. The
11 factors are grouped into three categories – product operation, product revision,
and product transition factors.
 Product operation factors − Correctness, Reliability, Efficiency, Integrity,
Usability.
 Product revision factors − Maintainability, Flexibility, Testability.
 Product transition factors − Portability, Reusability, Interoperability.

Product Operation Software Quality Factors


According to McCall’s model, product operation category includes five software
quality factors, which deal with the requirements that directly affect the daily
operation of the software. They are as follows −
Correctness
These requirements deal with the correctness of the output of the software system.
They include −
 Output mission
 The required accuracy of output that can be negatively affected by
inaccurate data or inaccurate calculations.
 The completeness of the output information, which can be affected by
incomplete data.
 The up-to-dateness of the information defined as the time between the event
and the response by the software system.
 The availability of the information.
 The standards for coding and documenting the software system.
Reliability
Reliability requirements deal with service failure. They determine the maximum
allowed failure rate of the software system, and can refer to the entire system or to
one or more of its separate functions.
Efficiency
It deals with the hardware resources needed to perform the different functions of
the software system. It includes processing capabilities (given in MHz), its storage
capacity (given in MB or GB) and the data communication capability (given in
MBPS or GBPS).
It also deals with the time between recharging of the system’s portable units, such
as, information system units located in portable computers, or meteorological units
placed outdoors.
Integrity
This factor deals with the software system security, that is, to prevent access to
unauthorized persons, also to distinguish between the group of people to be given
read as well as write permit.
Usability
Usability requirements deal with the staff resources needed to train a new
employee and to operate the software system.

Product Revision Quality Factors


According to McCall’s model, three software quality factors are included in the
product revision category. These factors are as follows −
Maintainability
This factor considers the efforts that will be needed by users and maintenance
personnel to identify the reasons for software failures, to correct the failures, and to
verify the success of the corrections.
Flexibility
This factor deals with the capabilities and efforts required to support adaptive
maintenance activities of the software. These include adapting the current software
to additional circumstances and customers without changing the software. This
factor’s requirements also support perfective maintenance activities, such as
changes and additions to the software in order to improve its service and to adapt it
to changes in the firm’s technical or commercial environment.
Testability
Testability requirements deal with the testing of the software system as well as with
its operation. It includes predefined intermediate results, log files, and also the
automatic diagnostics performed by the software system prior to starting the
system, to find out whether all components of the system are in working order and
to obtain a report about the detected faults. Another type of these requirements
deals with automatic diagnostic checks applied by the maintenance technicians to
detect the causes of software failures.

Product Transition Software Quality Factor


According to McCall’s model, three software quality factors are included in the
product transition category that deals with the adaptation of software to other
environments and its interaction with other software systems. These factors are as
follows −
Portability
Portability requirements tend to the adaptation of a software system to other environments consisting of different hardware, different operating systems, and so forth. It should be possible to continue using the same basic software in diverse situations.
Reusability
This factor deals with the use of software modules originally designed for one
project in a new software project currently being developed. They may also enable
future projects to make use of a given module or a group of modules of the
currently developed software. The reuse of software is expected to save
development resources, shorten the development period, and provide higher
quality modules.
Interoperability
Interoperability requirements focus on creating interfaces with other software
systems or with other equipment firmware. For example, the firmware of the
production machinery and testing equipment interfaces with the production control
software.
Software Reviews,

Software review is a systematic inspection of software by one or more individuals who work together to find and resolve errors and defects in the software during the early stages of the Software Development Life Cycle (SDLC). Software review is an essential part of the SDLC that helps software engineers validate the quality, functionality and other vital features and components of the software. It is a complete process that includes testing the software product and making sure that it meets the requirements stated by the client. Usually performed manually, software review is used to verify various documents like requirements, system designs, code, test plans and test cases.
Objectives of Software Review:

The objectives of software review are:

1. To improve the productivity of the development team.

2. To make the testing process time and cost effective.

3. To produce final software with fewer defects.

4. To eliminate inadequacies.

Software Reliability,

Software Reliability means operational reliability. It is described as the ability of a
system or component to perform its required functions under stated conditions for
a specified period of time.

Software reliability is also defined as the probability that a software system fulfills
its assigned task in a given environment for a predefined number of input cases,
assuming that the hardware and the input are free of error.
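
The probabilistic definition above lends itself to a simple estimate. The sketch
below is a minimal illustration, not a standard API: it computes reliability as the
fraction of predefined input cases that the system handles correctly, and the
function name and sample numbers are hypothetical.

    # Minimal sketch: reliability estimated as the fraction of
    # predefined input cases handled correctly. The function name
    # and the sample numbers are hypothetical.
    def estimate_reliability(results):
        # results: one boolean per executed input case
        if not results:
            raise ValueError("at least one input case is required")
        return sum(1 for ok in results if ok) / len(results)

    # 998 successful runs out of 1000 predefined input cases:
    runs = [True] * 998 + [False] * 2
    print(estimate_reliability(runs))  # prints 0.998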

Software Reliability is an essential facet of software quality, together with
functionality, usability, performance, serviceability, capability, installability,
maintainability, and documentation. Software Reliability is hard to achieve because
the complexity of software tends to be high. While any system with a high degree of
complexity, including software, will be hard to bring to a certain level of reliability,
system developers tend to push complexity into the software layer, with the speedy
growth of system size and the ease of doing so by upgrading the software.

For example, large next-generation aircraft will have over 1 million source lines of
software on-board; next-generation air traffic control systems will contain between
one and two million lines; the upcoming International Space Station will have over
two million lines on-board and over 10 million lines of ground support software;
several significant life-critical defense systems will have over 5 million source lines
of software. While the complexity of software is inversely associated with software
reliability, it is directly related to other vital factors in software quality, especially
functionality, capability, etc.

Software Quality Standards (ISO)

The ISO/IEC 9126 standard describes a software quality model which categorizes software
quality into six characteristics (factors) which are sub-divided into sub-characteristics (criteria).
The characteristics are manifested externally when the software is used as a consequence of
internal software attributes.

The internal software attributes are measured by means of internal metrics (e.g., monitoring of
software development before delivery). Examples of internal metrics are given in ISO 9126-3.
The quality characteristics are measured externally by means of external metrics (e.g.,
evaluation of software products to be delivered). Examples of external metrics are given in ISO
9126-2.

The focus here is on the assessment of the internal quality of a software product, as it can be
assessed from the source code. In the following, the general quality model as defined in
ISO 9126-1 is described with its properties and sub-properties. Details of the internal aspect of
this model are given as well.
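
To make the two-level structure of the model concrete, the sketch below records the
six ISO/IEC 9126-1 characteristics together with their commonly listed
sub-characteristics as a plain Python mapping. The dictionary layout and the loop
are illustrative choices, not part of the standard, and the sub-characteristic lists
are abbreviated.

    # The six ISO/IEC 9126-1 characteristics, each subdivided into
    # sub-characteristics (lists abbreviated; the data structure is
    # only an illustrative choice).
    ISO_9126 = {
        "functionality":   ["suitability", "accuracy", "interoperability", "security"],
        "reliability":     ["maturity", "fault tolerance", "recoverability"],
        "usability":       ["understandability", "learnability", "operability"],
        "efficiency":      ["time behaviour", "resource utilisation"],
        "maintainability": ["analysability", "changeability", "stability", "testability"],
        "portability":     ["adaptability", "installability", "co-existence", "replaceability"],
    }

    for characteristic, criteria in ISO_9126.items():
        print(characteristic, "->", ", ".join(criteria))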

ISO 9000 Model,

The International Organization for Standardization (ISO) is a worldwide
federation of national standards bodies. The ISO 9000 series is a set of
standards which serves as a basis for contracts between independent
parties. It specifies guidelines for the development of a quality system.
The quality system of an organization means the various activities related to its
products or services. The ISO standard addresses both aspects, i.e., the
operational and organizational aspects, which include responsibilities,
reporting, etc. An ISO 9000 standard contains a set of guidelines for the
production process without considering the product itself.

SEI CMM Model

CMM was developed by the Software Engineering Institute (SEI) at
Carnegie Mellon University in 1987.
• It is not a software process model. It is a framework that is used to
analyze the approach and techniques followed by any organization to
develop software products.
• It also provides guidelines to further enhance the maturity of the process
used to develop those software products.
• It is based on the feedback and development practices adopted by
the most successful organizations worldwide.
• This model describes a strategy for software process improvement that
should be followed by moving through 5 different levels.
• Each level of maturity shows a process capability level. All the levels
except level-1 are further described by Key Process Areas (KPAs); a brief
sketch of the levels appears after this list.
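
As a quick reference, the sketch below lists the five CMM maturity levels with one
or two representative KPAs each. The mapping is abbreviated and the data structure
is only an illustrative choice, not part of the CMM documents.

    # The five CMM maturity levels with a few representative Key
    # Process Areas each (abbreviated; level 1 has no KPAs).
    CMM_LEVELS = {
        1: ("Initial", []),
        2: ("Repeatable", ["Software Project Planning", "Software Configuration Management"]),
        3: ("Defined", ["Organization Process Definition", "Peer Reviews"]),
        4: ("Managed", ["Quantitative Process Management", "Software Quality Management"]),
        5: ("Optimizing", ["Defect Prevention", "Process Change Management"]),
    }

    for level, (name, kpas) in CMM_LEVELS.items():
        print(level, name, "-", ", ".join(kpas) or "no KPAs")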

Shortcomings of SEI/CMM:

• It encourages the achievement of a higher maturity level in some cases
by displacing the true mission, which is improving the process and
overall software quality.
• It only helps if it is put into place early in the software development
process.
• It has no formal theoretical basis; in fact, it is based on the experience
of very knowledgeable people.
• It does not have good empirical support, and the same empirical support
could also be constructed to support other models.
Software Maintenance: Need and Categories

Software Maintenance is the process of modifying a software product after
it has been delivered to the customer. The main purpose of software
maintenance is to modify and update software applications after delivery to
correct faults and to improve performance.
Need for Maintenance –

Software Maintenance must be performed in order to:

• Correct faults.
• Improve the design.
• Implement enhancements.
• Interface with other systems.
• Accommodate programs so that different hardware, software, system
features, and telecommunications facilities can be used.
• Migrate legacy software.
• Retire software.

Challenges in Software Maintenance:

The various challenges in software maintenance are given below:

• The typical lifetime of a software product is considered to be ten to
fifteen years. Since software maintenance is open ended and may
continue for decades, it can become very expensive.
• Older software, which was intended to work on slower machines with
less memory and storage capacity, cannot hold its own against newly
arriving, enhanced software running on modern hardware.
• Changes are frequently left undocumented, which may cause further
conflicts in the future.
• As technology advances, it becomes costly to maintain old software.
• Changes that are made can easily harm the original structure of the
software, making subsequent changes difficult.

Categories of Software Maintenance –

Maintenance can be divided into the following:

1. Corrective maintenance:
Corrective maintenance of a software product may be essential either to
rectify some bugs observed while the system is in use, or to enhance
the performance of the system.

2. Adaptive maintenance:
This includes modifications and updates when the customers need the
product to run on new platforms, on new operating systems, or when
they need the product to interface with new hardware and software.

3. Perfective maintenance:
A software product needs maintenance to support the new features that
the users want or to change different types of functionalities of the
system according to the customer demands.

4. Preventive maintenance:
This type of maintenance includes modifications and updates to
prevent future problems in the software. It aims to address problems
which are not significant at the moment but may cause serious issues in
the future.

Cost Factors

The cost of system maintenance represents a large proportion of the
budget of most organizations that use software systems. More than 65% of
software lifecycle cost is expended on maintenance activities.
The cost of software maintenance can be controlled by postponing
maintenance work, but this will cause the following intangible costs:
• Customer dissatisfaction when requests for repair or modification cannot
be addressed in a timely manner.
• Reduction in overall software quality as a result of changes that
introduce hidden errors in maintained software.

Software maintenance cost factors:

The key factors that distinguish development and maintenance and which
lead to higher maintenance cost are divided into two subcategories:
1. Non-Technical factors
2. Technical factors

Non-Technical factors:

The Non-Technical factors include:


1. Application Domain
2. Staff stability
3. Program lifetime
4. Dependence on External Environment
5. Hardware stability

Technical factors:

Technical factors include the following:

1. Module independence
2. Programming language
3. Programming style
4. Program validation and testing
5. Documentation
6. Configuration management techniques

Effort expended on maintenance may be divided into productive activities
(for example, analysis and evaluation, design modification, coding) and
wheel-spinning activities (for example, trying to understand what the existing
code does). The following expression, due to Belady and Lehman, provides a
model of maintenance effort:
M = P + K^(C - D)
where,

M: Total effort expended on maintenance.
P: Productive effort.
K: An empirical constant.
C: A measure of complexity that can be attributed to a lack of good design
and documentation.
D: A measure of the degree of familiarity with the software.
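
A small numerical sketch of this model may help. All parameter values below are
invented purely for illustration; the point is only that, with the exponential term,
the total effort M grows quickly as complexity C outstrips familiarity D.

    # Illustrative evaluation of M = P + K^(C - D).
    # All values of P, K, C and D are hypothetical.
    def maintenance_effort(p, k, c, d):
        return p + k ** (c - d)

    # Fixed productive effort and constant; a widening gap between
    # complexity and familiarity inflates total effort rapidly.
    for gap in (1, 2, 4, 8):
        print(gap, maintenance_effort(p=100, k=2.0, c=gap, d=0))
    # gap=1 -> 102.0, gap=2 -> 104.0, gap=4 -> 116.0, gap=8 -> 356.0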
Software Reengineering,

Software Re-engineering is a process of software development that is
done to improve the maintainability of a software system. Re-engineering is
the examination and alteration of a system to reconstitute it in a new form.
This process encompasses a combination of sub-processes such as reverse
engineering, forward engineering, restructuring, etc.

Objectives of Re-engineering:

• To describe a cost-effective option for system evolution.
• To describe the activities involved in the software maintenance process.
• To distinguish between software and data re-engineering and to explain
the problems of data re-engineering.
Steps involved in Re-engineering:

1. Inventory Analysis
2. Document Restructuring
3. Reverse Engineering
4. Code Restructuring
5. Data Restructuring
6. Forward Engineering

Reverse Engineering,

Software Reverse Engineering is a process of recovering the design,
requirement specifications and functions of a product from an analysis of its
code. It builds a program database and generates information from this.
The purpose of reverse engineering is to facilitate the maintenance work by
improving the understandability of a system and to produce the necessary
documents for a legacy system.
Reverse Engineering Goals:

• Cope with complexity.
• Recover lost information.
• Detect side effects.
• Synthesise higher abstraction.
• Facilitate reuse.

Steps of Software Reverse Engineering:

1. Collecting information:

This step focuses on collecting all possible information (e.g., source code,
design documents, etc.) about the software.

2. Examining the information:

The information collected in step 1 is studied so as to get familiar with
the system.

3. Extracting the structure:

This step is concerned with identifying the program structure in the form of a
structure chart, where each node corresponds to some routine (a minimal
sketch of this idea appears after this list).

4. Recording the functionality:

During this step, the processing details of each module of the structure
chart are recorded using a structured language, decision tables, etc.

5. Recording data flow:

From the information extracted in steps 3 and 4, a set of data flow
diagrams is derived to show the flow of data among the processes.

6. Recording control flow:

The high-level control structure of the software is recorded.

7. Reviewing the extracted design:

The extracted design document is reviewed several times to ensure
consistency and correctness. It also ensures that the design represents
the program.

8. Generating documentation:

Finally, in this step, the complete documentation, including the SRS, design
document, history, overview, etc., is recorded for future use.
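
As mentioned in step 3, structure extraction can be sketched in a few lines. The
fragment below uses Python's standard ast module to recover a crude call graph,
one node per routine, from source text. The sample program and all names in it
are invented for the example, and a real reverse engineering tool would of course
handle many more cases (methods, indirect calls, and so on).

    # Minimal sketch: derive a routine-level call graph (the basis
    # of a structure chart) from source code. The sample program
    # below is invented for illustration.
    import ast
    import textwrap

    def build_call_graph(source):
        tree = ast.parse(source)
        graph = {}
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                calls = set()
                for child in ast.walk(node):
                    if isinstance(child, ast.Call) and isinstance(child.func, ast.Name):
                        calls.add(child.func.id)
                graph[node.name] = calls
        return graph

    sample = textwrap.dedent("""
        def main():
            data = load()
            report(data)

        def load():
            return []

        def report(data):
            print(len(data))
    """)

    # Prints each routine with the routines (and built-ins) it calls.
    for routine, callees in sorted(build_call_graph(sample).items()):
        print(routine, "->", sorted(callees))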

Case Study

A case study in software engineering is an empirical enquiry that draws on multiple sources of
evidence to investigate one instance of a contemporary software engineering phenomenon within
its real-life context, especially when the boundary between the phenomenon and its context
cannot be clearly specified. Fenton and Pfleeger identify four factors that affect the choice
between experiment and case study: level of control, difficulty of control, level of
replication, and cost of replication. Easterbrook et al. call case studies one of the five “classes of
research methods”. Zelkowitz and Wallace propose a terminology that is somewhat different from
what is used in other fields, and categorize project monitoring, case study, and field study as
observational methods. An example is Alexander and Potter’s study of formal specifications and
rapid prototyping.

Data Collection: In order to study software engineering properly, the case study researcher must
consider a wide range of data collection types that either gather naturally produced data or
generate new data. It is important to decide carefully which data to collect and how to collect
them. It is advantageous to use several data sources, and several types of a single data source.
Five main types of data collection are reviewed here: interviews, focus groups, observation,
archival data and metrics.

DATA COLLECTION APPROACHES

Interviews: Data collection through interviews is one of the most frequently used and most
important sources of data in case studies in software engineering. Almost all case studies involve
some kind of interview, either for primary data collection or as validation of other kinds of data.
An interview can be carried out in many different ways, but essentially a researcher poses
questions which an interviewee attempts to answer. Interview questions may be open, that is,
allowing and inviting a broad range of answers and issues from the interviewee, or closed,
offering a limited set of alternative answers. Interviews may be divided into unstructured,
semi-structured, and fully structured.
