Software Engineering


UNIVERSITY OF MADRAS

B.Sc. DEGREE COURSE IN COMPUTER SCIENCE


SYLLABUS WITH EFFECT FROM 2020-2021

BCE-CSC14
CORE-XIV: SOFTWARE ENGINEERING
(Common paper to B.Sc.Software Applications-V Sem. & B.C.A.-V Sem.)

III YEAR / VI SEM


OBJECTIVES:
•To introduce the software development life cycles
•To introduce concepts related to structured and object-oriented analysis & design
•To provide an insight into UML and software testing techniques

OUTCOMES:
•The students should be able to specify software requirements and design the software using tools
•To write test cases using different testing techniques.

UNIT- I
Introduction – Evolution – Software Development projects – Emergence of Software Engineering.
Software Life cycle models – Waterfall model – Rapid Application Development – Agile Model – Spiral
Model

UNIT- II
Requirement Analysis and Specification – Gathering and Analysis – SRS – Formal System Specification

UNIT- III
Software Design – Overview – Characteristics – Cohesion & Coupling – Layered design – Approaches
Function Oriented Design – Structured Analysis – DFD – Structured Design – Detailed design

UNIT- IV
Object Modeling using UML – OO concepts – UML – Diagrams – Use case, Class, Interaction, Activity,
State Chart – Postscript

UNIT- V
Coding & Testing – coding – Review – Documentation – Testing – Black-box, White-box, Integration,
OO Testing, Smoke testing.

TEXT BOOK:
1. Rajib Mall, “Fundamentals of Software Engineering”, PHI 2018, 5th Edition.

REFERENCE BOOKS:
1. Roger S. Pressman, “Software Engineering - A Practitioner’s Approach”, McGraw Hill 2010, 7th
Edition.
2. Pankaj Jalote, “An Integrated Approach to Software Engineering”, Narosa Publishing House 2011,
3rd Edition.

WEB REFERENCES:
•NPTEL online course – Software Engineering - https://nptel.ac.in/courses/106105182/
SOFTWARE ENGINEERING
Unit-1:

Introduction to Software Engineering?

Software Engineering is an engineering branch concerned with the development of software products using well-defined scientific principles, techniques, and procedures. The result of software engineering is an effective and reliable software product.

Software Evolution?

Software evolution refers to the process of developing software initially and then updating it over time for various reasons, e.g., to add new features or to remove obsolete functionality. The evolution process includes the fundamental activities of change analysis, release planning, system implementation, and releasing the system to customers.
The necessity of software evolution:
Software evolution is necessary because of the following reasons:

a) Change in requirement with time:
With the passage of time, an organization's needs and way of working can change substantially, so in such a frequently changing environment the tools (software) it uses must also change in order to maximize performance.
b) Environment change:
As the working environment changes, the tools that enable us to work in that environment must change with it. The same happens in the software world: when the working environment changes, organizations need old software to be reworked with updated features and functionality so that it suits the new environment.
c) Errors and bugs:
As deployed software ages within an organization, its preciseness and impeccability decrease, and its ability to handle an increasingly complex workload continually degrades. In such cases it becomes necessary to avoid using obsolete and aged software; all such software needs to undergo the evolution process in order to become robust enough for the workload complexity of the current environment.
d) Security risks:
Using outdated software within an organization may put it at risk of various software-based cyberattacks and could illegally expose confidential data handled by that software. It therefore becomes necessary to prevent such security breaches through regular assessment of the security patches/modules used within the software. If the software is not robust enough to withstand current cyberattacks, it must be changed (updated).
e) For new functionality and features:
To improve performance, speed up data processing, and provide other functionality, an organization needs to evolve its software continuously throughout its life cycle so that stakeholders and clients of the product can work efficiently.
Software Development Project ?
A software development project is a complex undertaking by two or more
persons within the boundaries of time, budget, and staff resources that
produces new or enhanced computer code that adds significant business value
to a new or existing business process.
Types of Software Development:
1. Frontend Development:
Frontend developers work on the part of the product with which the user interacts. They are primarily concerned with the user interface (UI). For example, they might create the layout, visual aspects, and interactive elements of a website or app. However, their role isn't identical to that of a UI or user experience (UX) designer. They also fix bugs and make certain that the UI can run on different browsers.
2. Backend Development:
In contrast, a backend developer works with the part of the product users can’t
see — the back end. This professional builds the infrastructure that powers the
website, app, or program, focusing on functionality, integration of systems,
and core logic. They will also deal with the complex, underlying structure,
ensuring strong performance, scalability, and security.
3. Full-Stack Development:
A full-stack developer works on all aspects of the product, including both the
front and back ends. To be a successful full-stack developer, you must have
strong programming skills, as well as a variety of soft skills that all tech
professionals must have, such as problem-solving and critical thinking. At the
end of the day, you — and perhaps your team — are responsible for creating a
full, complete product.
4. Desktop Development:
Desktop developers exclusively create applications that run on a desktop operating system, such as Windows, macOS, or Linux. This is in contrast to developers who create applications that run on mobile, tablet, or other devices.
5. Web Development:
Web development is the process of building web applications. People use
these apps through an internet browser on a multitude of devices. This is
different from a mobile app, which runs on a phone or tablet and doesn’t
necessarily require an internet connection to run.
6. Database Development:
Not to be confused with a database administrator, who typically works with
daily database upkeep and troubleshooting and implements the system, a
database developer is responsible for building the database, modifying and
designing existing or new programs, and ensuring that they satisfy the
requirements of the users.
7. Mobile Development:
As is probably obvious from the name, a mobile developer builds applications
that run natively on mobile devices, including smartphones, tablets, and some
types of smartwatches. Usually, these professionals will specialize in either iOS
or Android development but not both.
8. Cloud Computing:
Cloud computing encompasses services, programs, and applications that run
over the cloud. That means they can be accessed remotely from practically any
location, provided the user has an internet connection and an appropriate
login. They offer plenty of advantages, including scalability.
Emergence of Software Engineering?
There are mainly Six Stages of the Emergence of Software Engineering:
1. Early Computer Programming:
Early commercial computers were very slow and elementary compared to today's standards. Even simple processing tasks took considerable computation time on those computers. No wonder programs at that time were very small in size and lacked sophistication. Those programs were usually written in assembly language.
2.High-level Language Programming:
Computers became faster with the introduction of semiconductor
technology in the early 1960s. Faster semiconductor transistors replaced the
prevalent vacuum tube-based circuits in a computer. With the availability of
more powerful computers, it became possible to solve larger and more
complex problems.
At this time, high-level languages such as FORTRAN, ALGOL, and COBOL were
introduced. This considerably reduced the effort required to develop software
and helped programmers to write larger programs.
3. Control Flow-based Design:
A program’s control flow structure indicates the sequence in which the
program’s instructions are executed. To help develop programs having good
control flow structures, the flowcharting technique was developed. Even
today, the flowcharting technique is being used to represent and design
algorithms.
4. Data Structure-oriented Design: Computers became even more powerful
with the advent of Integrated Circuits (ICs) in the early 1970s. These could now
be used to solve more complex problems.
For such problems, it is much more important to pay attention to the design of the important data structures of the program than to the design of its control structure. Design techniques based on this principle are called Data Structure-oriented Design.
5. Data Flow-oriented Design:
Software developers looked for more effective techniques for designing software, and Data Flow-oriented Design techniques were proposed. The functions (also called processes) and the data items exchanged between the different functions are represented in a diagram known as a Data Flow Diagram (DFD).
6. Object-oriented Design:
Object-oriented design is an intuitively appealing approach in which the natural objects (such as employees, etc.) relevant to a problem are first identified, and then the relationships among the objects, such as composition, reference, and inheritance, are determined. Each object essentially provides data hiding, also known as data abstraction.
Software Life cycle models ?
Waterfall model:
The Waterfall Model was the first Process Model to be introduced. It is also
referred to as a linear-sequential life cycle model.
Waterfall Model – Design
The Waterfall approach was the first SDLC model to be used widely in software engineering to ensure the success of a project. In the Waterfall approach, the whole process of software development is divided into separate phases. In this model, typically, the outcome of one phase acts as the input for the next phase sequentially.
Requirement Gathering and analysis: All possible requirements of the system
to be developed are captured in this phase and documented in a requirement
specification document.
System Design: The requirement specifications from first phase are studied in
this phase and the system design is prepared. This system design helps in
specifying hardware and system requirements and helps in defining the overall
system architecture.
Implementation – With inputs from the system design, the system is first
developed in small programs called units, which are integrated in the next
phase. Each unit is developed and tested for its functionality, which is referred
to as Unit Testing.
Integration and Testing – All the units developed in the implementation phase
are integrated into a system after testing of each unit. Post integration the
entire system is tested for any faults and failures.
Deployment of system – Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.
Maintenance – There are some issues which come up in the client
environment. To fix those issues, patches are released. Also to enhance the
product some better versions are released. Maintenance is done to deliver
these changes in the customer environment.
Waterfall Model – Advantages:
•Simple and easy to understand and use
•Clearly defined stages.
•Well understood milestones.
•Easy to arrange tasks.
•Process and results are well documented.
Waterfall Model – Disadvantages:
•No working software is produced until late in the life cycle.
•High amounts of risk and uncertainty.
•Not a good model for complex and object-oriented projects.
•Poor model for long and ongoing projects.
Rapid Application Development ?
The RAD (Rapid Application Development) model is based on prototyping
and iterative development with no specific planning involved. The process of
writing the software itself involves the planning required for developing the
product.
The various phases of RAD are as follows:

1. Business Modelling: The information flow among business functions is defined by answering questions like what data drives the business process, what data is generated, who generates it, where the information goes, who processes it, and so on.
2. Data Modelling: The data collected from business modeling is refined into a
set of data objects (entities) that are needed to support the business. The
attributes (character of each entity) are identified, and the relation between
these data objects (entities) is defined.
3. Process Modelling: The information object defined in the data modeling
phase are transformed to achieve the data flow necessary to implement a
business function. Processing descriptions are created for adding, modifying,
deleting, or retrieving a data object.
4. Application Generation: Automated tools are used to facilitate construction
of the software; even they use the 4th GL techniques.
5. Testing & Turnover: Many of the programming components have already been tested since RAD emphasizes reuse. This reduces the overall testing time. But the new parts must be tested, and all interfaces must be fully exercised.
Advantages of the RAD Model:
•The model is flexible to change.
•In this model, changes are adaptable.
•It reduces development time.
•It increases the reusability of features.
Disadvantages of the RAD Model:
•It requires highly skilled designers.
•For smaller projects, we cannot use the RAD model.
•Due to the high technical risk, it is not suitable for every project.
•It requires high user involvement.

Agile model?
The Agile model is a type of incremental model where software is developed in
a rapid incremental cycle.
Phases of the Agile Model:

1. Requirements gathering: In this phase, you must define the requirements. You should explain the business opportunities and plan the time and effort needed to build the project. Based on this information, you can evaluate technical and economic feasibility.
2. Design the requirements: When you have identified the project, work with stakeholders to define the requirements. You can use a user flow diagram or a high-level UML diagram to show how the new features will work and how they will apply to your existing system.
3. Construction/iteration: When the team defines the requirements, the work begins. Designers and developers start working on their project, which aims to deploy a working product. The product will undergo various stages of improvement, so initially it includes simple, minimal functionality.
4. Testing: In this phase, the Quality Assurance team examines the product's performance and looks for bugs.
5. Deployment: In this phase, the team issues the product for the user's work environment.
6. Feedback: After releasing the product, the last step is feedback. In this phase, the team receives feedback about the product and works through the feedback.
Advantages of the Agile Model:
•Little or no planning required.
•Easy to manage.
•Gives flexibility to developers.
•Resource requirements are minimum.
Disadvantages of the Agile Model:
•Not suitable for handling complex dependencies.
•More risk of sustainability, maintainability and extensibility.
•An overall plan, an agile leader and agile PM practice is a must without which
it will not work.
Spiral Model?
The spiral model is a systems development lifecycle (SDLC) method used for
risk management that combines the iterative development process model with
elements of the Waterfall model.
Each cycle in the spiral is divided into four parts:
Objective setting: Each cycle in the spiral starts with the identification of the purpose for that cycle, the various alternatives that are possible for achieving the targets, and the constraints that exist.
Risk assessment and reduction: The next phase in the cycle is to evaluate these various alternatives against the goals and constraints. The focus of evaluation in this stage is on the risks perceived for the project.

Development and validation: The next phase is to develop strategies that resolve uncertainties and risks. This process may include activities such as benchmarking, simulation, and prototyping.
Planning: Finally, the next step is planned. The project is reviewed, and a choice is made whether to continue with a further cycle of the spiral. If it is decided to continue, plans are drawn up for the next phase of the project.
Advantages:
•High amount of risk analysis
•Useful for large and mission-critical projects.
Disadvantages:
•Can be a costly model to use.
•Risk analysis requires highly specific expertise.
•Doesn’t work well for smaller projects.
UNIT-2:
Requirement Analysis and Specification?
Requirements Gathering: Requirements gathering is the process of understanding what you are trying to build and why you are building it.

Steps:
Step 1: Understand Pain Behind The Requirement.
Step 2: Eliminate Language Ambiguity.
Step 3: Identify Corner Cases.
Step 4: Write User Stories.
Step 5: Create a Definition Of “Done”
Requirements analysis: Requirements analysis, also called requirements
engineering, is the process of determining user expectations for a new or
modified product. These features, called requirements, must be quantifiable,
relevant and detailed. In software engineering, such requirements are often
called functional specifications.

The various steps of requirement analysis are described below:
•Draw the context diagram: The context diagram is a simple model that
defines the boundaries and interfaces of the proposed systems with the
external world. It identifies the entities outside the proposed system that
interact with the system.
•Development of a Prototype (optional): One effective way to find out what
the customer wants is to construct a prototype, something that looks and
preferably acts as part of the system they say they want.
We can use their feedback to modify the prototype continuously until the customer is satisfied. Hence, the prototype helps the client to visualize the proposed system and increases understanding of the requirements. When developers and users are not sure about some of the elements, a prototype may help both parties to take a final decision.
•Model the requirements: This process usually consists of various graphical
representations of the functions, data entities, external entities, and the
relationships between them. The graphical view may help to find incorrect,
inconsistent, missing, and superfluous requirements. Such models include the
Data Flow diagram, Entity-Relationship diagram, Data Dictionaries, State-
transition diagrams, etc.
•Finalise the requirements: After modeling the requirements, we will have a better understanding of the system behavior. The inconsistencies and ambiguities have been identified and corrected. The flow of data amongst various modules has been analyzed. Elicitation and analysis activities have provided better insight into the system. Now we finalize the analyzed requirements, and the next step is to document these requirements in a prescribed format.
Software Requirements Specification (SRS):
Requirement specification: A software requirements specification (SRS) is a document that describes what the software will do and how it will be expected to perform. It also describes the functionality the product needs in order to fulfill the needs of all stakeholders (business, users).
•Introduction:
•Purpose of this document: First, the main reason why this document is necessary and the purpose of the document are explained and described.
•Scope of this document: In this, the overall working and main objective of the document and the value it will provide to the customer are described and explained. It also includes a description of the development cost and time required.
•Overview: In this, a description of the product is given. It is simply a summary or overall review of the product.
•General description: In this, the general functions of the product, including the objectives of the user, user characteristics, features, benefits, and why it is important, are mentioned. It also describes the features of the user community.
Functional Requirements: In this, the possible outcomes of the software system, including the effects of the operation of the program, are fully explained. All functional requirements, which may include calculations, data processing, etc., are placed in ranked order.
Interface Requirements: In this, the software interfaces, which means how the software program communicates with other programs or users, whether in the form of a language, code, or messages, are fully described and explained. Examples can be shared memory, data streams, etc.
Performance Requirements: In this, how the software system performs the desired functions under specific conditions is explained. It also specifies the required time, required memory, maximum error rate, etc.
Design Constraints: In this, constraints, which simply means limitations or restrictions, are specified and explained for the design team. Examples may include use of a particular algorithm, hardware and software limitations, etc.
Non-Functional Attributes: In this, the non-functional attributes required by the software system for better performance are explained. Examples may include security, portability, reliability, reusability, application compatibility, data integrity, scalability, etc.
Preliminary Schedule and Budget: In this, the initial version of the project plan and budget are explained, including the overall time duration and overall cost required for development of the project.
Appendices: In this, additional information such as references from which information was gathered, definitions of specific terms, acronyms, abbreviations, etc., are given and explained.
Formal software specification: A formal software specification is a statement expressed in a language whose vocabulary, syntax, and semantics are formally defined. The need for a formal semantic definition means that a specification language cannot be based on natural language; it must be based on mathematics.
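As an illustration only (not a real formal notation such as Z or VDM, and the function name is an assumption for this example), the idea of a mathematically precise specification can be sketched as a pre-condition and post-condition, here checked in Python:

    # Illustrative sketch: pre/post conditions expressing a formal-style
    # specification of integer square root.
    def int_sqrt(n: int) -> int:
        # Pre-condition: n >= 0
        assert n >= 0, "pre-condition violated: n must be non-negative"
        r = 0
        while (r + 1) * (r + 1) <= n:
            r += 1
        # Post-condition: r*r <= n < (r+1)*(r+1)
        assert r * r <= n < (r + 1) * (r + 1), "post-condition violated"
        return r

    print(int_sqrt(17))   # prints 4

The pre- and post-conditions are the specification; any implementation satisfying them is acceptable, which is exactly what a formal specification aims to express unambiguously.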

UNIT -3:
Software design overview :Software design is the process of envisioning and
defining software solutions to one or more sets of problems. One of the main
components of software design is the software requirements analysis (SRA).
SRA is a part of the software development process that lists specifications used
in software engineering.
Characteristics of Software design?
The factors are divided into three categories:
•Operational
•Transitional
•Maintenance or Revision
Operational Characteristics:
The factors of this characteristic are related to the exterior quality of the
software.
Some of these factors are:-
Reliability – The software should not fail during execution and should not have
defects.
Correctness – The software should meet all the requirements of the customer.
Integrity – The software should not have any side effects.
Efficiency – The software should efficiently use the storage space and time.
Usability – The software should be user-friendly so that anyone can use it.
Security – The software should keep the data secure from any external threat.
Safety – The software made should not be hazardous or harmful to the
environment or life.
Transitional Characteristics:
The factors of this characteristic play a significant role when the software is
moved from one platform to another.
Some of these factors are:-
Interoperability – It is the ability of the software to exchange and use information with other software transparently.
Reusability – If on doing slight modifications to the code of the software, we
can use it for a different purpose, then it is reusable.
Portability – If the software can perform the same operations on different
environments and platforms, that shows its Portability.
Maintenance or Revision Characteristics
Maintenance characteristics deal with the interior role of the software and tell
us about the software’s capability to maintain itself in the ever-changing
environment.
Maintainability – The software should be easy to maintain by any user.
Flexibility – The software should be flexible to any changes made to it.
Extensibility – There should not be any problem with the software on
increasing the number of functions performed by it.
Testability – It should be easy to test the software.
Modularity – Software has high modularity if it can be divided into separate independent parts that can be modified and tested separately.
Scalability – The software should be easy to upgrade.
Cohesion & Coupling?
Cohesion: Cohesion refers to the degree to which the elements inside a module belong together. In one sense, it is a measure of the strength of the relationship between the methods and data of a class and some unifying purpose or concept served by that class.
Types of Cohesion:
Functional Cohesion: Every essential element for a single computation is
contained in the component. A functional cohesion performs the task and
functions. It is an ideal situation.
Sequential Cohesion: An element outputs some data that becomes the input for another element, i.e., data flows between the parts. It occurs naturally in functional programming languages.
Communicational Cohesion: Two elements operate on the same input data or
contribute towards the same output data. Example- update record in the
database and send it to the printer.
Procedural Cohesion: Elements of procedural cohesion ensure the order of
execution. Actions are still weakly connected and unlikely to be reusable. Ex-
calculate student GPA, print student record, calculate cumulative GPA, print
cumulative GPA.
Temporal Cohesion: The elements are related by the timing involved. In a module exhibiting temporal cohesion, all the tasks must be executed in the same time span. Such a module may, for example, contain the code for initializing all the parts of the system: lots of different activities occur, all at one time.
Logical Cohesion: The elements are logically related and not functionally. Ex- A
component reads inputs from tape, disk, and network. All the code for these
functions is in the same component. Operations are related, but the functions
are significantly different.
Coincidental Cohesion: The elements are not related(unrelated). The elements
have no conceptual relationship other than location in source code. It is
accidental and the worst form of cohesion. Ex- print next line and reverse the
characters of a string in a single component.
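A small illustrative sketch (function names invented for this example) contrasting the best and worst cases above: a functionally cohesive module does one computation, while a coincidentally cohesive module groups unrelated tasks only because they happen to sit together.

    # Functional cohesion (ideal): every statement serves one computation.
    def compute_gpa(grades, credits):
        total_points = sum(g * c for g, c in zip(grades, credits))
        return total_points / sum(credits)

    # Coincidental cohesion (worst): unrelated tasks grouped in one component,
    # e.g. "print next line" and "reverse a string" from the example above.
    def misc_utilities(line, text):
        print(line)           # prints the next line
        return text[::-1]     # ...and also reverses a string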
Coupling: Coupling is the degree of interdependence between software
modules; a measure of how closely connected two routines or modules are;
the strength of the relationships between modules.
Types of Coupling:
Data Coupling: If the dependency between the modules is based on the fact
that they communicate by passing only data, then the modules are said to be
data coupled. In data coupling, the components are independent of each other
and communicate through data. Module communications don’t contain tramp
data. Example-customer billing system.
Stamp Coupling: In stamp coupling, the complete data structure is passed from
one module to another module. Therefore, it involves tramp data. It may be
necessary due to efficiency factors- this choice was made by the insightful
designer, not a lazy programmer.
Control Coupling: If the modules communicate by passing control information,
then they are said to be control coupled. It can be bad if parameters indicate
completely different behavior and good if parameters allow factoring and
reuse of functionality. Example- sort function that takes comparison function
as an argument.
External Coupling: In external coupling, the modules depend on other
modules, external to the software being developed or to a particular type of
hardware. Ex- protocol, external file, device format, etc.
Common Coupling: The modules have shared data such as global data
structures. The changes in global data mean tracing back to all modules which
access that data to evaluate the effect of the change. So it has got
disadvantages like difficulty in reusing modules, reduced ability to control data
accesses, and reduced maintainability.
Content Coupling: In a content coupling, one module can modify the data of
another module, or control flow is passed from one module to the other
module. This is the worst form of coupling and should be avoided.
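The difference between the looser and tighter ends of this scale can be sketched in code (the function names here are hypothetical): data coupling passes only the data needed, while control coupling passes information that steers the other module's behaviour, such as the comparison function mentioned above.

    # Data coupling: the module receives only the data it needs.
    def compute_bill(units_consumed, rate_per_unit):
        return units_consumed * rate_per_unit

    # Control coupling: a comparison/key function passed in controls how the
    # other module behaves (the "sort function" example from the text).
    def sort_records(records, compare_key):
        return sorted(records, key=compare_key)

    bill = compute_bill(120, 6.5)
    ordered = sort_records([{"id": 3}, {"id": 1}], compare_key=lambda r: r["id"])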
Layered design?

Layered design: Layered architectures are said to be the most common and
widely used architectural framework in software development. It is also known
as an n-tier architecture and describes an architectural pattern composed of
several separate horizontal layers that function together as a single unit of
software.
Layered technology is divided into four parts:
•A quality focus: It defines the continuous process improvement principles of
software. It provides integrity that means providing security to the software so
that data can be accessed by only an authorized person, no outsider can access
the data. It also focuses on maintainability and usability.
•Process: It is the foundation or base layer of software engineering. It is key
that binds all the layers together which enables the development of software
before the deadline or on time. Process defines a framework that must be
established for the effective delivery of software engineering technology. The
software process covers all the activities, actions, and tasks required to be
carried out for software development.
•Method: During the process of software development the answers to all
“how-to-do” questions are given by method. It has the information of all the
tasks which includes communication, requirement analysis, design modeling,
program construction, testing, and support.
• Tools: Software engineering tools provide a self-operating system for
processes and methods. Tools are integrated which means information created
by one tool can be used by another.
Approaches – Function Oriented Design: Function Oriented Design is an approach to software design where the design is decomposed into a set of interacting units, each of which has a clearly defined function.
Structured Analysis: Structured analysis is a software engineering technique
that uses graphical diagrams to develop and portray system specifications that
are easily understood by users. These diagrams describe the steps that need to
occur and the data required to meet the design function of a particular
software.
Data Dictionary: The content that is not described in the DFD is described in
the data dictionary. It defines the data store and relevant meaning. A physical
data dictionary for data elements that flow between processes, between
entities, and between processes and entities may be included. This would also
include descriptions of data elements that flow external to the data stores.
State Transition Diagram: The state transition diagram is similar to the dynamic model. It specifies how much time a function will take to execute and the data access triggered by events. It also describes all of the states that an object can have, the events under which an object changes state, the conditions that must be fulfilled before the transition will occur, and the activities undertaken during the life of an object.
ER Diagram: The ER diagram specifies the relationships between data stores. It is basically used in database design and describes the relationships between different entities.
Data flow diagram (DFD):A data flow diagram (DFD) is a graphical or visual
representation using a standardized set of symbols and notations to describe a
business’s operations through data movement. They are often elements of a
formal methodology such as Structured Systems Analysis and Design Method
(SSADM).
Logical DFD: Logical data flow diagram mainly focuses on the system process.
It illustrates how data flows in the system. Logical DFD is used in various
organizations for the smooth running of system. Like in a Banking software
system, it is used to describe how data is moved from one entity to another.
Physical DFD: Physical data flow diagram shows how the data flow is actually
implemented in the system. Physical DFD is more specific and close to
implementation.
Components of a Data Flow Diagram:
Entities: Entities include the sources and destinations of the data. Entities are represented by rectangles with their corresponding names.
Process: The tasks performed on the data are known as processes. A process is represented by a circle; sometimes rounded-edge rectangles are also used to represent a process.
Data Storage: Data storage includes the database of the system. It is represented by a rectangle with both smaller sides missing, in other words by two parallel lines.
Data Flow: The movement of data in the system is known as data flow. It is represented with the help of an arrow. The tail of the arrow is the source and the head of the arrow is the destination.

Structured Design: Structured design is a conceptualization of the problem into several well-organized elements of the solution. It is basically concerned with the solution design. The benefit of structured design is that it gives a better understanding of how the problem is being solved.
Structure Chart: It is derived from the data flow diagram. A structure chart specifies how the DFD's processes are grouped into tasks and allocated to the CPU. The structure chart does not show the working and internal structure of the processes or modules and does not show the relationship between data or data flows. Similar to other SASD tools, it is time- and cost-independent, and there is no error-checking technique associated with this tool.
Pseudo Code: Pseudo code comes close to the actual implementation of the system. It is an informal way of writing a program that doesn't require any specific programming language or technology, as shown in the sketch below.
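For instance, the detailed design of a small "find the largest mark" module might first be written as informal pseudo code and only later translated into a programming language. The fragment below is an assumed illustration, with the pseudo code shown as comments above a possible Python translation.

    # Pseudo code (informal):
    #   set largest to the first mark
    #   for each remaining mark:
    #       if mark > largest then largest = mark
    #   report largest

    # One possible translation into a real language:
    def largest_mark(marks):
        largest = marks[0]
        for mark in marks[1:]:
            if mark > largest:
                largest = mark
        return largest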
Detailed Design: Detailed design deals with the implementation part of what is
seen as a system and its sub-systems in the previous two designs. It is more
detailed towards modules and their implementations. It defines logical
structure of each module and their interfaces to communicate with other
modules.
UNIT-4
Object Oriented concepts:
Objects: All entities involved in the solution design are known as objects. For
example, person, banks, company, and users are considered as objects. Every
entity has some attributes associated with it and has some methods to
perform on the attributes.
Classes: A class is a generalized description of an object. An object is an
instance of a class. A class defines all the attributes, which an object can have
and methods, which represents the functionality of the object.
Messages: Objects communicate by message passing. A message consists of the identity of the target object, the name of the requested operation, and any other information needed to perform the requested function. Messages are often implemented as procedure or function calls.
Abstraction : In object-oriented design, complexity is handled using
abstraction. Abstraction is the removal of the irrelevant and the amplification
of the essentials.
Encapsulation: Encapsulation is also called an information hiding concept. The
data and operations are linked to a single unit. Encapsulation not only bundles
essential information of an object together but also restricts access to the data
and methods from the outside world.
Inheritance: OOD allows similar classes to be stacked up in a hierarchical manner where the lower classes or sub-classes can import, implement, and re-use the allowed variables and functions from their immediate superclasses. This property of OOD is called inheritance. This makes it easier to define a specific class and to create generalized classes from specific ones.
Polymorphism: OOD languages provide a mechanism whereby methods that perform similar tasks but vary in their arguments can be assigned the same name. This is known as polymorphism, which allows a single interface to perform functions for objects of different types. Depending upon how the service is invoked, the respective portion of the code gets executed.
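The following minimal sketch (class names, attribute names, and values are invented purely for illustration) shows several of the concepts above together: classes and objects, encapsulation, message passing, inheritance, and polymorphism.

    class Account:                        # class: generalized description of an object
        def __init__(self, owner, balance):
            self.owner = owner
            self._balance = balance       # encapsulation: data kept inside the object

        def withdraw(self, amount):       # method that operates on the object's data
            if amount <= self._balance:
                self._balance -= amount
            return self._balance

    class SavingsAccount(Account):        # inheritance: reuses Account's members
        def withdraw(self, amount):       # polymorphism: same message, specialised behaviour
            fee = 5
            return super().withdraw(amount + fee)

    # Message passing: the same call behaves differently per object type.
    for acct in (Account("Asha", 100), SavingsAccount("Ravi", 100)):
        print(acct.withdraw(10))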
UML (Unified Modeling Language):
A UML diagram is a diagram based on the UML (Unified Modeling Language)
with the purpose of visually representing a system along with its main actors,
roles, actions, artifacts or classes, in order to better understand, alter,
maintain, or document information about the system.
Diagrams:
Use-case : Use-case diagrams describe the high-level functions and scope of a
system. These diagrams also identify the interactions between the system and
its actors. The use cases and actors in use-case diagrams describe what the
system does and how the actors use it, but not how the system operates
internally.
Draw a Use Case Diagram:
•Functionalities to be represented as use case
•Actors
•Relationships among the use cases and actors.
Use case diagrams are drawn to capture the functional requirements of a
system. After identifying the above items, we have to use the following
guidelines to draw an efficient use case diagram
•The name of a use case is very important. The name should be chosen in such
a way so that it can identify the functionalities performed.
•Give a suitable name for actors.
•Show relationships and dependencies clearly in the diagram.
•Do not try to include all types of relationships, as the main purpose of the
diagram is to identify the requirements.
•Use notes whenever required to clarify some important points.
Following is a sample use case diagram representing an order management system. If we look into the diagram, we will find three use cases (Order, SpecialOrder, and NormalOrder) and one actor, which is the customer.
The SpecialOrder and NormalOrder use cases are extended from the Order use case; hence, they have an extend relationship. Another important point is to identify the system boundary. The actor Customer lies outside the system as it is an external user of the system.

Use case diagrams can be used for:
•Requirement analysis and high-level design.
•Model the context of a system.
•Reverse engineering.
•Forward engineering.
Class Diagram: Class diagrams are the blueprints of your system or subsystem.
You can use class diagrams to model the objects that make up the system, to
display the relationships between the objects, and to describe what those
objects do and the services that they provide.
Draw a Class Diagram:
•The name of the class diagram should be meaningful to describe the aspect of
the system.
•Each element and their relationships should be identified in advance.
•Responsibility (attributes and methods) of each class should be clearly
identified
•For each class, minimum number of properties should be specified, as
unnecessary properties will make the diagram complicated.
•Use notes whenever required to describe some aspect of the diagram. At the
end of the drawing it should be understandable to the developer/coder.
•Finally, before making the final version, the diagram should be drawn on plain paper and reworked as many times as possible to make it correct.

The following diagram is an example of an Order System of an application. It describes a particular aspect of the entire application.
•First of all, Order and Customer are identified as the two elements of the system. They have a one-to-many relationship because a customer can have multiple orders.
•The Order class is an abstract class and it has two concrete classes (inheritance relationship), SpecialOrder and NormalOrder.
•The two inherited classes have all the properties of the Order class. In addition, they have additional functions like dispatch() and receive(), as sketched in the code below.
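A rough code sketch of the structure this class diagram describes (the order_id attribute and print statements are assumptions added for illustration; only the classes, relationships, and dispatch()/receive() operations come from the description above):

    from abc import ABC

    class Customer:
        def __init__(self, name):
            self.name = name
            self.orders = []              # one-to-many: a customer can have many orders

    class Order(ABC):                     # abstract class in the diagram
        def __init__(self, order_id):
            self.order_id = order_id      # assumed attribute, for illustration only

    class SpecialOrder(Order):            # inherits all Order properties
        def dispatch(self): print("dispatching special order")
        def receive(self): print("receiving special order")

    class NormalOrder(Order):             # inherits all Order properties
        def dispatch(self): print("dispatching normal order")
        def receive(self): print("receiving normal order")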
Class diagrams are used for :
•Describing the static view of the system.
•Showing the collaboration among the elements of the static view.
•Describing the functionalities performed by the system.
•Construction of software applications using object oriented languages.
Interaction diagram: Interaction diagrams are models that describe how a
group of objects collaborate in some behavior - typically a single use-case. The
diagrams show a number of example objects and the messages that are passed
between these objects within the use-case.
We have two types of interaction diagrams in UML. One is the sequence
diagram and the other is the collaboration diagram. The sequence diagram
captures the time sequence of the message flow from one object to another
and the collaboration diagram describes the organization of objects in a system
taking part in the message flow.
Following things are to be identified clearly before drawing the interaction
diagram
•Objects taking part in the interaction.
•Message flows among the objects.
•The sequence in which the messages are flowing.
•Object organization.
Interaction diagrams can be used
•To model the flow of control by time sequence.
•To model the flow of control by structural organizations.
•For forward engineering.
•For reverse engineering.
Activity Diagram: An activity diagram is a behavioral diagram, i.e., it depicts the behavior of a system. An activity diagram portrays the control flow from a start point to a finish point, showing the various decision paths that exist while the activity is being executed.
Before drawing an activity diagram, we should identify the following elements
•Activities
•Association
•Conditions
•Constraints
Once the above-mentioned parameters are identified, we need to make a
mental layout of the entire flow. This mental layout is then transformed into
an activity diagram.
Following is an example of an activity diagram for an order management system. In the diagram, four activities are identified which are associated with conditions. One important point that should be clearly understood is that an activity diagram cannot be matched exactly with the code. The activity diagram is made to understand the flow of activities and is mainly used by business users.
Following diagram is drawn with the four main activities
•Send order by the customer
•Receipt of the order
•Confirm the order
•Dispatch the order
Activity diagram can be used for
•Modeling work flow by using activities.
•Modeling business requirements.
•High level understanding of the system’s functionalities.
•Investigating business requirements at a later stage.
State Chart Diagram: A state diagram, also known as a state machine diagram
or statechart diagram, is an illustration of the states an object can attain as
well as the transitions between those states in the Unified Modeling Language
(UML).
Before drawing a Statechart diagram we should clarify the following points −
•Identify the important objects to be analyzed.
•Identify the states.
•Identify the events.
Following is an example of a Statechart diagram where the state of an Order object is analyzed.
The first state is an idle state from where the process starts. The next states are reached for events like send request, confirm request, and dispatch order. These events are responsible for the state changes of the Order object.
During the life cycle of an object (here order object) it goes through the
following states and there may be some abnormal exits. This abnormal exit
may occur due to some problem in the system. When the entire life cycle is
complete, it is considered as a complete transaction as shown in the following
figure. The initial and final state of an object is also shown in the following
figure.
The main usage can be described as –
•To model the object states of a system.
•To model the reactive system. Reactive system consists of reactive objects.
•To identify the events responsible for state changes.
•Forward and reverse engineering.
Postscript: PostScript is a programming language that describes the appearance of a printed page. It was developed by Adobe in 1985 and has become an industry standard for printing and imaging. The main purpose of PostScript was to provide a convenient language in which to describe images in a device-independent manner.
UNIT -5
Coding: Coding is the process of transforming the design of a system into a computer language format. This phase of software development is concerned with translating the design specification into source code.
Code review: Code reviews, also known as peer reviews, act as quality
assurance of the code base. Code reviews are methodical assessments of code
designed to identify bugs, increase code quality, and help developers learn the
source code.
Code Review is a systematic examination, which can find and remove the
vulnerabilities in the code such as memory leaks and buffer overflows.
•Technical reviews are well documented and use a well-defined defect
detection process that includes peers and technical experts.
•It is ideally led by a trained moderator, who is NOT the author.
•This kind of review is usually performed as a peer review without
management participation.
•Reviewers prepare for the review meeting and prepare a review report with a list of findings.
•Technical reviews may be quite informal or very formal and can have a
number of purposes but not limited to discussion, decision making, evaluation
of alternatives, finding defects and solving technical problems.
Documentation : In the software development process, software
documentation is the information that describes the product to the people
who develop, deploy and use it. It includes the technical manuals and online
material, such as online versions of manuals and help capabilities.
Types Of Software Documentation :
•Requirement Documentation: It is the description of how the software shall perform and which environment setup would be appropriate to get the best out of it. These documents are generated while the software is under development and are supplied to the tester groups too.
•Architectural Documentation: Architecture documentation is a special type
of documentation that concerns the design. It contains very little code and is
more focused on the components of the system, their roles, and working. It
also shows the data flow throughout the system.
•Technical Documentation: These contain the technical aspects of the
software like API, algorithms, etc. It is prepared mostly for software devs.
•End-user Documentation: As the name suggests these are made for the end
user. It contains support resources for the end user.
Testing: Software testing is the evaluation of the software against requirements gathered from users and system specifications. Testing is conducted at the phase level in the software development life cycle or at the module level in program code. Software testing comprises Validation and Verification.
Black-box Testing: Black box testing involves testing a system with no prior
knowledge of its internal workings. A tester provides an input, and observes
the output generated by the system under test.

Black box testing can be done in the following ways:
Syntax-Driven Testing – This type of testing is applied to systems that can be syntactically represented by some language, for example compilers, or languages that can be represented by a context-free grammar. In this, the test cases are generated so that each grammar rule is used at least once.
Equivalence partitioning – It is often seen that many types of inputs work
similarly so instead of giving all of them separately we can group them and test
only one input of each group. The idea is to partition the input domain of the
system into several equivalence classes such that each member of the class
works similarly, i.e., if a test case in one class results in some error, other
members of the class would also result in the same error.
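As a small hedged sketch: suppose a hypothetical function accepts marks from 0 to 100 and reports pass or fail. Equivalence partitioning lets us test one representative value per class instead of every possible input.

    def grade(mark):
        # Hypothetical function under test: valid marks are 0..100, pass mark 40.
        if not 0 <= mark <= 100:
            raise ValueError("invalid mark")
        return "pass" if mark >= 40 else "fail"

    # One representative input per equivalence class:
    assert grade(20) == "fail"     # class 1: valid mark below the pass mark
    assert grade(75) == "pass"     # class 2: valid mark at or above the pass mark
    for invalid in (-5, 120):      # classes 3 and 4: invalid low / invalid high
        try:
            grade(invalid)
            assert False, "expected ValueError"
        except ValueError:
            pass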
White-box Testing: White box testing is an approach that allows testers to
inspect and verify the inner workings of a software system—its code,
infrastructure, and integrations with external systems.
Working process of white box testing:
Input: Requirements, functional specifications, design documents, source code.
Processing: Performing risk analysis to guide the entire process; proper test planning, i.e., designing test cases so as to cover the entire code; executing and repeating until error-free software is reached. The results are also communicated.
Output: Preparing the final report of the entire testing process.
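A minimal sketch of "designing test cases so as to cover the entire code" (the function is invented for illustration): the two test cases below are chosen from the code's structure so that both branches are executed, which is the essence of the white-box approach.

    def absolute(x):
        if x < 0:        # branch 1
            return -x
        return x         # branch 2

    # White-box test cases derived from the code structure, not only the spec:
    assert absolute(-3) == 3   # exercises the x < 0 branch
    assert absolute(4) == 4    # exercises the x >= 0 branch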
Integration testing: Integration testing -- also known as integration and testing
(I&T) -- is a type of software testing in which the different units, modules or
components of a software application are tested as a combined entity.
However, these modules may be coded by different programmers.
Integration test approaches – There are four types of integration testing
approaches. Those approaches are the following:
•Big-Bang Integration Testing – It is the simplest integration testing approach,
where all the modules are combined and the functionality is verified after the
completion of individual module testing. In simple words, all the modules of
the system are simply put together and tested. This approach is practicable
only for very small systems. If an error is found during the integration testing, it
is very difficult to localize the error as the error may potentially belong to any
of the modules being integrated. So, debugging errors reported during big bang integration testing is very expensive.
•Bottom-Up Integration Testing – In bottom-up testing, each module at lower
levels is tested with higher modules until all modules are tested. The primary
purpose of this integration testing is that each subsystem tests the interfaces
among various modules making up the subsystem. This integration testing uses
test drivers to drive and pass appropriate data to the lower level modules.
•Top-Down Integration Testing – In top-down integration testing, testing takes place from top to bottom, and stubs are used to simulate the behaviour of the lower-level modules that are not yet integrated. First, high-level modules are tested, then low-level modules, and finally the low-level modules are integrated with the high-level ones to ensure the system is working as intended.
•Mixed Integration Testing – Mixed integration testing is also called sandwiched integration testing. It follows a combination of the top-down and bottom-up testing approaches. In the top-down approach, testing can start only after the top-level modules have been coded and unit tested. In the bottom-up approach, testing can start only after the bottom-level modules are ready. The sandwich or mixed approach overcomes these shortcomings of the top-down and bottom-up approaches. It is also called hybrid integration testing. Stubs and drivers are both used in mixed integration testing, as illustrated in the sketch below.
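A small illustrative sketch of a test driver and a stub (all names and values are invented for this example): the stub stands in for a lower-level module that is not yet integrated, while the driver exercises the module under test from above.

    # Stub: canned replacement for a lower-level module not yet integrated
    # (as used in top-down integration testing).
    def get_tax_rate_stub(state):
        return 0.10                      # fixed answer instead of real tax logic

    # Module under test, currently wired to the stub.
    def compute_total(amount, tax_lookup=get_tax_rate_stub):
        return amount * (1 + tax_lookup("TN"))

    # Driver: exercises the module from above (as used in bottom-up testing).
    def test_driver():
        assert abs(compute_total(100) - 110.0) < 1e-9
        print("integration check passed")

    test_driver()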
Object-Oriented Testing: Object-oriented testing is a software testing process that is conducted to test software built using object-oriented paradigms like encapsulation, inheritance, polymorphism, etc. The software typically undergoes many levels of testing, from unit testing to system or acceptance testing.
Testing Object-Oriented Systems: Testing is a continuous activity during
software development. In object-oriented systems, testing encompasses three
levels, namely, unit testing, subsystem testing, and system testing.
•Unit Testing: In unit testing, the individual classes are tested. It is seen
whether the class attributes are implemented as per design and whether the
methods and the interfaces are error-free. Unit testing is the responsibility of
the application engineer who implements the structure.
•Subsystem Testing: This involves testing a particular module or a subsystem
and is the responsibility of the subsystem lead. It involves testing the
associations within the subsystem as well as the interaction of the subsystem
with the outside. Subsystem tests can be used as regression tests for each
newly released version of the subsystem.
•System Testing: System testing involves testing the system as a whole and is
the responsibility of the quality-assurance team. The team often uses system
tests as regression tests when assembling new releases.
Smoke testing: Smoke testing, also called build verification testing or build
acceptance testing, is nonexhaustive software analysis that ascertains that the
most crucial functions of a program work but does not delve into finer details.
Smoke testing is the preliminary check of the software after a build and before
a release.

Types of Smoke Testing:
There are three types of smoke testing:
•Manual Testing: In this, the tester has to write, develop, modify or update
the test cases for each built product. Either the tester has to write test scripts
for existing features or new features.
•Automated Testing: In this, the tool will handle the testing process by itself
providing the relevant tests. It is very helpful when the project should be
completed in a limited time.
•Hybrid Testing: As the name implies, it is the combination of both manual and automated testing. Here, the tester writes some test cases himself and can also automate tests using a tool. It increases the performance of the testing as it combines both manual checking and tools. An automated smoke check is sketched below.
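For illustration, an automated smoke test is typically a short script that only confirms the most crucial functions of a build respond at all, leaving finer details to deeper test levels. The module and function names below are assumptions invented for this sketch, not part of any real tool.

    # Minimal automated smoke-test sketch for a hypothetical build.
    def save_record(record, store):      # stand-in for a crucial feature of the build
        store.append(record)
        return True

    def load_records(store):             # another crucial feature
        return list(store)

    def smoke_test():
        store = []
        assert save_record({"id": 1}, store)        # does the core write path work at all?
        assert load_records(store) == [{"id": 1}]   # does the core read path work at all?
        print("smoke test passed: build is stable enough for deeper testing")

    smoke_test()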
