SE Monograph CST-220


Chandigarh University

University Institute of Engineering


Computer Science Engineering
Bachelor of Engineering
BE CS201
Software Engineering
CST-220
Program Educational Objectives

PEO-1 Graduates will be able to serve professionally while engaging with a government firm, industry, corporate, academic or research organization, or by contributing as entrepreneurs.
PEO-2 Graduates will be able to work effectively in different fields with core expertise in analysis, design, networking, security, and development using advanced tools.
PEO-3 Graduates will be able to develop themselves professionally through lifelong learning, innovation and research while benefiting society.
PEO-4 Graduates will be able to show leadership across diverse cultures, nationalities and fields while working with interdisciplinary teams.
PEO-5 Graduates will be committed team members with competency and ethical values.
Program Outcome
PO 1 Engineering Knowledge: Apply the knowledge of mathematics, science,
engineering fundamentals and an engineering specialization to the solution of
complex engineering problems.

PO 2 Problem analysis: Identify, formulate, review research literature and analyze complex engineering problems, reaching substantiated conclusions using first principles of mathematics, natural sciences and engineering sciences.
PO 3 Design/development of solutions: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for public health and safety and for cultural, societal, and environmental considerations.
PO 4 Conduct investigations of complex problems: Use research based knowledge
and research methods including design of experiments, analysis and
interpretation of data and synthesis of the information to provide valid
conclusions.
PO 5 Modern tool usage: Create, select, and apply appropriate techniques,
resources and modern engineering and IT tools including prediction and modeling
to complex engineering activities with an understanding of the limitations.
Contents
S.No.  Item
1. Name of the program
2. Name and code of the subject
3. Outcomes of the subject
4. Syllabus
5. Important definitions
6. Important fundamentals/theorems
7. Important rules/laws
8. Formulae/formulations/recipes
9. Important statements
10. Important contents beyond syllabus
11. Any other important information
Syllabus

UNIT-1

Introduction: Definition of software and software engineering, Difference between program and product, Software development life cycle, Different life cycle models (Waterfall, Iterative Waterfall, Prototype, Evolutionary and Spiral models), Agile software development and its characteristics.

Software requirement: Requirement analysis, Analysis principles, Software prototyping, Specification, Data modeling, Functional modeling and information flow, Behavioral modeling, Mechanics of structural modeling, Data dictionary.

Function and Object-oriented design: Structured analysis, Data flow diagrams, Basic object orientation concepts, Unified Modeling Language, Use case model, Class diagrams, Interaction diagrams, Activity diagrams, State chart diagrams.

UNIT-2

Software design: Design process and concepts, Effective Modular design, the design model, Design
documentation, Approaches to Software design.

Software Project management: Software project planning, Project estimation techniques, COCOMO
Model, Project scheduling, Risk analysis and management, Software quality and management Staffing,
software configuration management.

User interface Design: Characteristics of good user interface design, Command language user interface, Menu-based and direct manipulation interfaces, Fundamentals of command-based user interface.
UNIT-3
Software Testing: Testing levels, Activities, Verification and Validation, Unit testing, System testing, Integration testing, Validation testing, Black box and white box testing.
Quality management: Software quality, Software reliability, Software reviews, Formal technical reviews, Statistical SQA, The ISO 9000 standards, SQA plan, SEI CMM.

Software Maintenance and Reuse: Definition, Types of maintenance, Software reverse engineering, Different maintenance models, Basic issues in any reuse program, Reuse approach.

1. Important Definitions

1.1. Software Engineering-


Software engineering (SE) is the application of engineering to the development of software
in a systematic method.

1.2. Software requirement specification (SRS)


Software requirements specification (SRS) is a document that captures a complete description of how the system is expected to perform.
1.3. Software development life cycle-
The systems development life cycle (SDLC), also referred to as the application
development life-cycle, is a term used in systems engineering, information systems and
software engineering to describe a process for planning, creating, testing, and deploying
an information system.
1.4. Requirements Analysis
It is the first stage in the systems engineering process and the software development process. Requirements analysis, in systems engineering and software engineering, encompasses those tasks that go into determining the needs or conditions to meet for a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users.
ISO 9000 is a set of international standards on quality management and quality assurance, developed to help companies effectively document the quality system elements to be implemented to maintain an efficient quality system. These standards are not specific to any one industry and can be applied to organizations of any size.
1.5. Risk Management
Risk is made up of two parts: the probability of something going wrong, and the negative
consequences if it does. Risk can be hard to spot, however, let alone prepare for and
manage. And, if you're hit by a consequence that you hadn't planned for, costs, time, and
reputations could be on the line.

This makes Risk Analysis an essential tool when your work involves risk. It can help you
identify and understand the risks that you could face in your role. In turn, this helps you
manage these risks, and minimize their impact on your plans.
1.6. Software Quality
In context of software engineering, software quality refers to two related but distinct
notions that exist wherever quality is defined in a business context:
Software functional quality reflects how well it complies with or conforms to a given
design, based on functional requirements or specifications. That attribute can also be
described as the fitness for purpose of a piece of software or how it compares to
competitors in the marketplace as a worthwhile product. It is the degree to which the
correct software was produced.
Software structural quality refers to how it meets non-functional requirements that
support the delivery of the functional requirements, such as robustness or maintainability.
It has a lot more to do with the degree to which the software works as needed.
1.7. Prototyping-
It is the process of building a model of a system. In terms of an information system, prototypes are employed to help system designers build an information system that is intuitive and easy to manipulate for end users. Prototyping is an iterative process that is part of the analysis phase of the systems development life cycle.
During the requirements determination portion of the systems analysis phase, system
analysts gather information about the organization's current procedures and business
processes related to the proposed information system. In addition, they study the current
information system, if there is one, and conduct user interviews and collect
documentation. This helps the analysts develop an initial set of system requirements.

1.8. Data Dictionary-


A data dictionary is a collection of descriptions of the data objects or items in a data
model for the benefit of programmers and others who need to refer to them. A first step in
analyzing a system of objects with which users interact is to identify each object and its
relationship to other objects. This process is called data modeling and results in a picture of object relationships. After each data object or item is given a descriptive name, its
relationship is described (or it becomes part of some structure that implicitly describes
relationship), the type of data (such as text or image or binary value) is described,
possible predefined values are listed, and a brief textual description is provided. This
collection can be organized for reference into a book called a data dictionary.
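As a minimal sketch, one such data-dictionary entry can be represented as a small class holding the descriptive name, data type, predefined values and textual description mentioned above. The field names and example values here are illustrative, not a standard format:

```java
import java.util.List;

// One data-dictionary entry: a named data item with its type,
// its predefined values (if any), and a brief description.
public class DictionaryEntry {
    final String name;          // descriptive name of the data item
    final String dataType;      // e.g. text, image, binary value
    final List<String> values;  // possible predefined values
    final String description;   // brief textual description

    DictionaryEntry(String name, String dataType,
                    List<String> values, String description) {
        this.name = name;
        this.dataType = dataType;
        this.values = values;
        this.description = description;
    }

    public static void main(String[] args) {
        // Hypothetical entry for a patient-appointment system.
        DictionaryEntry e = new DictionaryEntry(
            "appointment_status", "text",
            List.of("booked", "cancelled", "completed"),
            "Current state of a patient appointment");
        System.out.println(e.name + " (" + e.dataType + "): " + e.description);
    }
}
```

A collection of such entries, organized for reference, is exactly the "book" the definition describes.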

1.9. Data flow diagram-


Data flow diagrams (DFDs) reveal relationships among and between the various
components in a program or system. DFDs are an important technique for modeling a
system's high-level detail by showing how input data is transformed to output results
through a sequence of functional transformations. DFDs consist of four major
components: entities, processes, data stores, and data flows. The symbols used to depict
how these components interact in a system are simple and easy to understand; however,
there are several DFD models to work from, each having its own symbols. DFD syntax
does remain constant by using simple verb and noun constructs. Such a syntactical
relationship of DFDs makes them ideal for object-oriented analysis and parsing
functional specifications into precise DFDs for the systems analyst.

Levels of Data Flow Diagrams

[Figure: DFD symbols: a numbered process box, data store, external entity, and information flow arrow.]

Note that the letters along the arrows in the example diagrams represent the name of the
information being transferred in each direction – you should use a description of what the
information is, e.g. Patient name – these names should match across the different levels of your
DFD. Likewise, each process should be given a name, with the number only being used in the top half of the box. There is a sample DFD fragment below.

Context Diagram

The context diagram just shows the system in the context of the external entities, and
what information flows in and out of the system as a whole.

[Figure: Context diagram: Entity A and Entity B exchange information flows X, Y and Z with the information system, shown as process 0.]

The system is then subsequently broken down into its component parts – which are
themselves broken down, until each process represents a single step in your system.
Level 0 DFD

[Figure: Level 0 DFD: the system decomposed into numbered processes (e.g. Process 1) exchanging flows X, Y, V and Z with Entity A and Entity B.]
Level 1 DFD (for process 2)

[Figure: Level 1 DFD: process 2 decomposed into processes 2.1, 2.2 and 2.3, with data store N and flows V, G, J, H, Y, N and W.]

Level 2 DFD (for process 2.2)

[Figure: Level 2 DFD: process 2.2 decomposed into processes 2.2.1, 2.2.2 and 2.2.3, with data store N and flows H, H1, H2, J, J1, J2, K, L and N.]

Example DFD Fragment

[Figure: Example DFD fragment: process 2, "Maintain appointments", exchanges Patient name, Possible appointment and Appointment with the Patient entity, and uses the Patients and Appointments data stores (Patient information, Available appointment).]
1.10. Object Orientation Concepts-
An object is a real-world entity such as a pen, chair, or table. Object-Oriented Programming is a methodology or paradigm to design a program using classes and objects. It simplifies software development and maintenance by providing these concepts:
Object

Class

Inheritance

Polymorphism

Abstraction

Encapsulation

1.10.1. Object
Any entity that has state and behavior is known as an object. For example: a chair, pen, table, keyboard, or bike. An object can be physical or logical.
1.10.2. Class
A collection of similar objects is called a class. It is a logical entity.
1.10.3. Inheritance
When one object acquires all the properties and behaviors of a parent object, it is known as inheritance. It provides code reusability and is used to achieve runtime polymorphism.
1.10.4. Polymorphism
When one task can be performed in different ways, it is known as polymorphism. For example: convincing a customer in different ways, or drawing different shapes such as a rectangle or a circle. In Java, we use method overloading and method overriding to achieve polymorphism. Another example: a cat says meow while a dog barks woof.

1.10.5. Abstraction
Hiding internal details and showing only functionality is known as abstraction. For example, when making a phone call we do not know the internal processing. In Java, we use abstract classes and interfaces to achieve abstraction.
1.10.6. Encapsulation
Binding (or wrapping) code and data together into a single unit is known as encapsulation. For example, a capsule is wrapped around different medicines. A Java class is an example of encapsulation; a Java bean is a fully encapsulated class because all its data members are private.
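The concepts above can be tied together in a few lines of Java, using the cat/dog example from the text (the class and method names are illustrative):

```java
// Abstraction: Animal exposes what it does (speak) but hides how.
abstract class Animal {
    private final String name;            // encapsulation: state is private

    Animal(String name) { this.name = name; }
    String getName() { return name; }     // controlled access to the private field

    abstract String speak();              // each subclass supplies its own behavior
}

// Inheritance: Cat and Dog acquire Animal's fields and methods.
class Cat extends Animal {
    Cat(String name) { super(name); }
    @Override String speak() { return "meow"; }  // overriding enables runtime polymorphism
}

class Dog extends Animal {
    Dog(String name) { super(name); }
    @Override String speak() { return "woof"; }
}

public class OopDemo {
    public static void main(String[] args) {
        Animal[] pets = { new Cat("Tom"), new Dog("Rex") };
        for (Animal a : pets) {           // polymorphism: one call, different behavior
            System.out.println(a.getName() + " says " + a.speak());
        }
    }
}
```

The same `a.speak()` call resolves to a different method body depending on the runtime type, which is the runtime polymorphism the inheritance section refers to.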
1.11. Modularity-
In software engineering, modularity refers to the extent to which a software/Web application may be divided into smaller modules. Software modularity indicates how well the application's modules can serve a specified business domain. Modularity is successful because developers can reuse prewritten code, which saves resources. Overall, modularity provides greater software development manageability.
1.12. Software Design-
It may be seen as a software engineering artifact derived from the available specifications
and requirements of a software product, which needs to be developed. It transforms and
implements the requirements, specified in the software requirement specification (SRS)
in a viewable object or form so as to assist developers to carry out the development
process in a particular direction in order to build a desired product.
Software designing is one of the early phases of the Software Development Life Cycle
(SDLC), which provides the necessary outputs for the next phase, i.e. coding &
development.
Further, these designs may be structured using different strategies, and are available in
multiple variants so as to view the logical structure of a software product from multiple
perspectives. Let's go through each of them.

1.13. Design Concepts-


Design: The first step in the development phase for any engineered product. It serves as
the foundation for all software engineering and software maintenance steps that follow.
Abstraction: Each step in the software engineering process is a refinement in the level of
abstraction of the software solution. - Data abstractions: a named collection of data -
Procedural abstractions: A named sequence of instructions in a specific function - Control
abstractions: A program control mechanism without specifying internal details.
Refinement: is actually a process of elaboration. Stepwise refinement is a top-down design strategy proposed by Niklaus Wirth [WIR71]. The architecture of a program is
developed by successively refining levels of procedural detail. The process of program
refinement is analogous to the process of refinement and partitioning that is used during
requirements analysis. The major difference is in the level of implementation detail rather than the approach. Abstraction and refinement are complementary concepts. Abstraction enables a designer to specify procedure and data without details. Refinement helps the designer to reveal low-level details.
Software architecture: is the hierarchical structure of program components and their
interactions.

Reengineering: The examination and alteration of an existing subject system to
reconstitute it in a new form. This process encompasses a combination of sub-processes
such as reverse engineering, restructuring, re-documentation, forward engineering, and
retargeting.

2. Important Fundamentals/Theorems

Software Development Life Cycle (SDLC) is a process used by the software industry to
design, develop and test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates.
SDLC is the acronym of Software Development Life Cycle. It is also called as Software
Development Process. SDLC is a framework defining tasks performed at each step in the
software development process.
ISO/IEC 12207 is an international standard for software life-cycle processes. It aims to be
the standard that defines all the tasks required for developing and maintaining software.

2.1. Phase 1: Planning and Requirement Analysis
Requirement analysis is the most important and fundamental stage in SDLC. It is
performed by the senior members of the team with inputs from the customer, the sales
department, market surveys and domain experts in the industry. This information is then
used to plan the basic project approach and to conduct product feasibility study in the
economical, operational and technical areas. Planning for the quality assurance requirements and identification of the risks associated with the project is also done in the
planning stage. The outcome of the technical feasibility study is to define the various
technical approaches that can be followed to implement the project successfully with
minimum risks.

2.2. Phase 2: Defining Requirements
Once the requirement analysis is done the next step is to clearly define and document the
product requirements and get them approved from the customer or the market analysts.
This is done through an SRS (Software Requirement Specification) document which
consists of all the product requirements to be designed and developed during the project
life cycle.

2.3. Phase 3: Designing the Product Architecture
SRS is the reference for product architects to come out with the best architecture for the
product to be developed. Based on the requirements specified in SRS, usually more than
one design approach for the product architecture is proposed and documented in a DDS -
Design Document Specification. This DDS is reviewed by all the important stakeholders
and based on various parameters as risk assessment, product robustness, design
modularity, budget and time constraints, the best design approach is selected for the
product. A design approach clearly defines all the architectural modules of the product
along with its communication and data flow representation with the external and third
party modules (if any). The internal design of all the modules of the proposed
architecture should be clearly defined with the minutest of the details in DDS.

2.4. Phase 4: Building or Developing the Product
In this stage of SDLC the actual development starts and the product is built. The
programming code is generated as per DDS during this stage. If the design is performed
in a detailed and organized manner, code generation can be accomplished without much
hassle. Developers must follow the coding guidelines defined by their organization and
programming tools like compilers, interpreters, debuggers, etc. are used to generate the code. Different high-level programming languages such as C, C++, Pascal, Java and PHP
are used for coding. The programming language is chosen with respect to the type of
software being developed.

2.5. Phase 5: Testing the Product


This stage is usually a subset of all the stages as in the modern SDLC models, the testing
activities are mostly involved in all the stages of SDLC. However, this stage refers to the
testing only stage of the product where product defects are reported, tracked, fixed and
retested, until the product reaches the quality standards defined in the SRS.

2.6. Phase 6: Deployment in the Market and Maintenance
Once the product is tested and ready to be deployed it is released formally in the
appropriate market. Sometimes product deployment happens in stages as per the business
strategy of that organization. The product may first be released in a limited segment and
tested in the real business environment (UAT- User acceptance testing).

3. Important Rules
3.1. Classical Waterfall Model
The Waterfall Model was the first Process Model to be introduced. It is also referred to as
a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall
model, each phase must be completed before the next phase can begin and there is no
overlapping in the phases.
The Waterfall model is the earliest SDLC approach that was used for software
development.
The waterfall Model illustrates the software development process in a linear sequential
flow. This means that any phase in the development process begins only if the previous
phase is complete. In this waterfall model, the phases do not overlap.

3.1.1. Waterfall Model - Design
The Waterfall approach was the first SDLC model to be used widely in software engineering to ensure the success of a project. In "The Waterfall" approach, the whole process of software development is divided into separate phases. In this Waterfall model, typically, the outcome of one phase acts as the input for the next phase sequentially.
The following illustration is a representation of the different phases of the Waterfall
Model.

The sequential phases in Waterfall model are −


Requirement Gathering and analysis − All possible requirements of the system to be
developed are captured in this phase and documented in a requirement specification
document.

System Design − The requirement specifications from the first phase are studied in this phase and the system design is prepared. This system design helps in specifying hardware and system requirements and helps in defining the overall system architecture.

Implementation − With inputs from the system design, the system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality, which is referred to as Unit Testing.

Integration and Testing − All the units developed in the implementation phase are integrated into a system after testing of each unit. Post integration, the entire system is tested for any faults and failures.

Deployment of system − Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.

Maintenance − There are some issues which come up in the client environment. To fix those issues, patches are released. Also, to enhance the product, better versions are released. Maintenance is done to deliver these changes in the customer environment.

All these phases are cascaded to each other, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases. The next phase is started only after the defined set of goals is achieved for the previous phase and it is signed off, hence the name "Waterfall Model". In this model, phases do not overlap.

3.1.2. Waterfall Model - Application


Every software product developed is different and requires a suitable SDLC approach to be followed based on the internal and external factors. Some situations where the use of the Waterfall model is most appropriate are −
o Requirements are very well documented, clear and fixed.

o Product definition is stable.

o Technology is understood and is not dynamic.

o There are no ambiguous requirements.

o Ample resources with required expertise are available to support the product.

o The project is short.

3.1.3. Waterfall Model - Advantages
The advantages of waterfall development are that it allows for departmentalization and
control. A schedule can be set with deadlines for each stage of development and a
product can proceed through the development process model phases one by one.
Development moves from concept, through design, implementation, testing, installation,
troubleshooting, and ends up at operation and maintenance. Each phase of development
proceeds in strict order.
Some of the major advantages of the Waterfall Model are as follows −
o Simple and easy to understand and use


o Easy to manage due to the rigidity of the model. Each phase has specific
deliverables and a review process.

o Phases are processed and completed one at a time.

o Works well for smaller projects where requirements are very well understood.

o Clearly defined stages.

o Well understood milestones.

o Easy to arrange tasks.

o Process and results are well documented.

3.1.4. Waterfall Model - Disadvantages


The disadvantage of waterfall development is that it does not allow much reflection or revision. Once an application is in the testing stage, it is very difficult to go back and change something that was not well documented or thought through in the concept stage.
The major disadvantages of the Waterfall Model are as follows −
o No working software is produced until late during the life cycle.

o High amounts of risk and uncertainty.

o Not a good model for complex and object-oriented projects.

o Poor model for long and ongoing projects.

o Not suitable for projects where requirements are at a moderate to high risk of changing. So, risk and uncertainty are high with this process model.

o It is difficult to measure progress within stages.

o Cannot accommodate changing requirements.

o Adjusting scope during the life cycle can end a project.

o Integration is done as a "big bang" at the very end, which does not allow identifying technological or business bottlenecks or challenges early.

3.2. Prototype Model


The basic idea of the Prototype model is that instead of freezing the requirements before design or coding can proceed, a throwaway prototype is built to understand the requirements. This prototype is developed based on the currently known requirements. By using this prototype, the client can get an "actual feel" of the system, since the interactions with the prototype enable the client to better understand the requirements of the desired system. Prototyping is an attractive idea for complicated and large systems for which there is no manual process or existing system to help determine the requirements.
Prototypes are usually not complete systems, and many of the details are not built into the prototype. The goal is to provide a system with overall functionality.

3.2.1. Diagram of Prototype model:

[Figure: Prototype model diagram (not reproduced).]
3.2.2. Advantages of Prototype model:
o Users are actively involved in the development


o Since in this methodology a working model of the system is provided, the users
get a better understanding of the system being developed.

o Errors can be detected much earlier.

o Quicker user feedback is available leading to better solutions.

o Missing functionality can be identified easily

3.2.3. Disadvantages of Prototype model:

o Leads to an implement-and-then-repair way of building systems.

o Practically, this methodology may increase the complexity of the system, as the scope of the system may expand beyond original plans.

o An incomplete application may cause the application not to be used as the full system was designed.

o Incomplete or inadequate problem analysis.

3.2.4. When to use Prototype model:

The Prototype model should be used when the desired system needs to have a lot of interaction with the end users.

Typically, online systems and web interfaces, which have a very high amount of interaction with end users, are best suited for the Prototype model. It might take a while for a system to be built that allows ease of use and needs minimal training for the end user.

Prototyping ensures that end users constantly work with the system and provide feedback which is incorporated in the prototype, resulting in a usable system. Prototypes are excellent for designing good human-computer interface systems.

3.3. Spiral Model


The spiral model combines the idea of iterative development with the systematic,
controlled aspects of the waterfall model. This Spiral model is a combination of iterative
development process model and sequential linear development model i.e. the waterfall
model with a very high emphasis on risk analysis. It allows incremental releases of the
product or incremental refinement through each iteration around the spiral.

3.3.1. Spiral Model - Design


The spiral model has four phases. A software project repeatedly passes through these
phases in iterations called Spirals.

3.3.1.1. Identification
This phase starts with gathering the business requirements in the baseline spiral. In
the subsequent spirals as the product matures, identification of system requirements,
subsystem requirements and unit requirements are all done in this phase.
This phase also includes understanding the system requirements by continuous
communication between the customer and the system analyst. At the end of the spiral,
the product is deployed in the identified market.

3.3.1.2. Design
The Design phase starts with the conceptual design in the baseline spiral and involves
architectural design, logical design of modules, physical product design and the final
design in the subsequent spirals.

3.3.1.3. Construct or Build


The Construct phase refers to the production of the actual software product at every spiral. In the baseline spiral, when the product is just thought of and the design is being developed, a POC (Proof of Concept) is developed in this phase to get customer feedback.
Then in the subsequent spirals with higher clarity on requirements and design details
a working model of the software called build is produced with a version number.
These builds are sent to the customer for feedback.

3.3.1.4. Evaluation and Risk Analysis


Risk Analysis includes identifying, estimating and monitoring the technical feasibility
and management risks, such as schedule slippage and cost overrun. After testing the
build, at the end of first iteration, the customer evaluates the software and provides
feedback.
The following illustration is a representation of the Spiral Model, listing the activities
in each phase.

Based on the customer evaluation, the software development process enters the next
iteration and subsequently follows the linear approach to implement the feedback
suggested by the customer. The process of iterations along the spiral continues
throughout the life of the software.

3.3.2. Spiral Model Application


The Spiral Model is widely used in the software industry as it is in sync with the natural development process of any product, i.e. learning with maturity, which involves minimum risk for the customer as well as the development firms.
The following pointers explain the typical uses of a Spiral Model −
o When there is a budget constraint and risk evaluation is important.

o For medium to high-risk projects.


o Long-term project commitment because of potential changes to economic
priorities as the requirements change with time.

o The customer is not sure of their requirements, which is usually the case.

o Requirements are complex and need evaluation to get clarity.

o New product line which should be released in phases to get enough customer
feedback.

o Significant changes are expected in the product during the development cycle.

3.3.3. Spiral Model - Pros and Cons


The advantage of the spiral lifecycle model is that it allows elements of the product to be added when they become available or known. This assures that there is no conflict with previous requirements and design.
This method is consistent with approaches that have multiple software builds and releases
which allows making an orderly transition to a maintenance activity. Another positive
aspect of this method is that the spiral model forces an early user involvement in the
system development effort.
On the other side, it takes very strict management to complete such products, and there is a risk of running the spiral in an indefinite loop. So, the discipline of change and the extent to which change requests are taken is very important to develop and deploy the product successfully.
The advantages of the Spiral SDLC Model are as follows −
o Changing requirements can be accommodated.

o Allows extensive use of prototypes.

o Requirements can be captured more accurately.

o Users see the system early.


o Development can be divided into smaller parts and the risky parts can be
developed earlier which helps in better risk management.

The disadvantages of the Spiral SDLC Model are as follows −


o Management is more complex.

o End of the project may not be known early.

o Not suitable for small or low-risk projects, and can be expensive for small
projects.

o Process is complex.

o Spiral may go on indefinitely.

o Large number of intermediate stages requires excessive documentation.

Important statements
4.1. Software Evolution Laws
Lehman has given laws for software evolution. He divided software into three
different categories:
S-type (static-type) - This is software that works strictly according to defined
specifications and solutions. Both the solution and the method to achieve it are
immediately understood before coding. S-type software is the least subject to change
and hence the simplest of all. For example, a calculator program for mathematical
computation.
P-type (practical-type) - This is software with a collection of procedures, defined by
exactly what those procedures can do. In this software, the specifications can be
described, but the solution is not immediately obvious. For example, gaming software.
E-type (embedded-type) - This software works in close correspondence with the requirements
of the real-world environment. It has a high degree of evolution, as there are various changes
in laws, taxes, etc. in real-world situations. For example, online trading software.

E-Type software evolution


Lehman has given eight laws for E-Type software evolution -
Continuing change - An E-type software system must continue to adapt to the real world
changes, else it becomes progressively less useful.
Increasing complexity - As an E-type software system evolves, its complexity tends to
increase unless work is done to maintain or reduce it.

Conservation of familiarity - The familiarity with the software, i.e. the knowledge about
how it was developed, why it was developed in that particular manner, etc., must be
retained at any cost in order to implement changes in the system.
Continuing growth - The functional content of an E-type system must continually grow
over its lifetime to keep resolving the business problem, as the business and its
practices change.
Reducing quality - An E-type software system declines in quality unless rigorously
maintained and adapted to a changing operational environment.
Feedback systems- The E-type software systems constitute multi-loop, multi-level
feedback systems and must be treated as such to be successfully modified or improved.
Self-regulation - E-type system evolution processes are self-regulating with the
distribution of product and process measures close to normal.
Organizational stability - The average effective global activity rate in an evolving E-type
system is invariant over the lifetime of the product.

5. Formulae/formulations
5.1. Basic COCOMO:
The basic COCOMO equations take the form:

Effort Applied (E) = a_b (KLOC)^(b_b)  [man-months]
Development Time (D) = c_b (Effort Applied)^(d_b)  [months]
People Required (P) = Effort Applied / Development Time  [count]

where KLOC is the estimated number of delivered lines (expressed in thousands) of
code for the project. The coefficients a_b, b_b, c_b, and d_b are given in the following
table (note: the values listed below are from the original analysis, with a modern
reanalysis producing different values):

Software project    a_b    b_b    c_b    d_b
Organic             2.4    1.05   2.5    0.38
Semi-detached       3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32
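These equations can be computed directly. The following is a minimal sketch in Python; the helper name and the 32 KLOC example are illustrative, not from the source, while the coefficients are the original-analysis values from the table above:

```python
def basic_cocomo(kloc, mode="organic"):
    """Basic COCOMO estimates for a project of the given size in KLOC."""
    # (a_b, b_b, c_b, d_b) from the original-analysis coefficient table
    coeff = {
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }
    a_b, b_b, c_b, d_b = coeff[mode]
    effort = a_b * kloc ** b_b    # E = a_b (KLOC)^(b_b)      [man-months]
    time = c_b * effort ** d_b    # D = c_b (E)^(d_b)         [months]
    people = effort / time        # P = E / D                 [count]
    return effort, time, people

e, d, p = basic_cocomo(32, "organic")
print(f"Effort: {e:.1f} man-months, Time: {d:.1f} months, People: {p:.1f}")
```

For a 32 KLOC organic project this yields roughly 91 man-months over about 14 months, i.e. a team of six to seven people.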

Ratings: VL = Very Low, L = Low, N = Nominal, H = High, VH = Very High, XH = Extra High

Cost Drivers                                     VL     L      N      H      VH     XH

Product attributes
Required software reliability                    0.75   0.88   1.00   1.15   1.40
Size of application database                            0.94   1.00   1.08   1.16
Complexity of the product                        0.70   0.85   1.00   1.15   1.30   1.65

Hardware attributes
Run-time performance constraints                               1.00   1.11   1.30   1.66
Memory constraints                                             1.00   1.06   1.21   1.56
Volatility of the virtual machine environment           0.87   1.00   1.15   1.30
Required turnabout time                                 0.87   1.00   1.07   1.15

Personnel attributes
Analyst capability                               1.46   1.19   1.00   0.86   0.71
Applications experience                          1.29   1.13   1.00   0.91   0.82
Software engineer capability                     1.42   1.17   1.00   0.86   0.70
Virtual machine experience                       1.21   1.10   1.00   0.90
Programming language experience                  1.14   1.07   1.00   0.95

Project attributes
Application of software engineering methods      1.24   1.10   1.00   0.91   0.82
Use of software tools                            1.24   1.10   1.00   0.91   0.83
Required development schedule                    1.23   1.08   1.00   1.04   1.10

5.2. Intermediate COCOMO


Intermediate COCOMO computes software development effort as a function of program
size and a set of "cost drivers" that include subjective assessments of product, hardware,
personnel, and project attributes. This extension considers 15 cost drivers grouped into
four categories of attributes:
Product attributes

o Required software reliability extent
o Size of application database
o Complexity of the product

Hardware attributes

o Run-time performance constraints
o Memory constraints
o Volatility of the virtual machine environment
o Required turnabout time

Personnel attributes

o Analyst capability
o Software engineering capability
o Applications experience
o Virtual machine experience
o Programming language experience

Project attributes

o Use of software tools
o Application of software engineering methods
o Required development schedule

Each of the 15 attributes receives a rating on a six-point scale that ranges from
"very low" to "extra high" (in importance or value). An effort multiplier from the
cost driver table above applies to each rating. The product of all effort multipliers
is the effort adjustment factor (EAF). Typical values for EAF range from 0.9 to 1.4.

The Intermediate COCOMO formula now takes the form:

E = a_i (KLoC)^(b_i) (EAF)

where E is the effort applied in person-months, KLoC is the estimated number of
thousands of delivered lines of code for the project, and EAF is the factor
calculated above. The coefficient a_i and the exponent b_i are given in the next table.

Software project    a_i    b_i
Organic             3.2    1.05
Semi-detached       3.0    1.12
Embedded            2.8    1.20

The Development Time (D) calculation uses E in the same way as in Basic COCOMO.
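The EAF adjustment can be sketched as follows. The coefficients are those of the table above; the particular cost-driver multipliers chosen in the example (high reliability, high analyst capability) are illustrative assumptions, not values prescribed by the source:

```python
def intermediate_cocomo(kloc, mode="organic", multipliers=None):
    """Intermediate COCOMO: E = a_i * KLoC^b_i * EAF, where EAF is the
    product of the selected cost-driver effort multipliers."""
    coeff = {
        "organic":       (3.2, 1.05),
        "semi-detached": (3.0, 1.12),
        "embedded":      (2.8, 1.20),
    }
    a_i, b_i = coeff[mode]
    eaf = 1.0
    for m in (multipliers or []):   # drivers rated nominal contribute 1.00
        eaf *= m
    return a_i * kloc ** b_i * eaf

# Hypothetical 20 KLoC organic project: high required reliability (1.15),
# high analyst capability (0.86), all other drivers nominal.
effort = intermediate_cocomo(20, "organic", [1.15, 0.86])
print(f"Estimated effort: {effort:.1f} person-months")
```

Note that the two multipliers nearly cancel (1.15 x 0.86 = 0.989), so the estimate is close to the unadjusted figure of about 74 person-months.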

5.3. Detailed COCOMO


Detailed COCOMO incorporates all characteristics of the intermediate version, with an
assessment of the cost drivers' impact on each step (analysis, design, etc.) of the software
engineering process.
The detailed model uses different effort multipliers for each cost driver attribute. These
phase-sensitive effort multipliers are used to determine the amount of effort required to
complete each phase. In Detailed COCOMO, the whole software is divided into modules,
COCOMO is applied to each module to estimate its effort, and the module efforts are then
summed.
The effort is calculated as a function of program size, and a set of cost drivers is given
for each phase of the software life cycle. A detailed project schedule is never static.
The Six phases of detailed COCOMO are:-

o planning and requirements



o system design

o detailed design

o module code and test

o integration and test

o Cost Constructive model

5.4. Function Point

A Function Point (FP) is a unit of measurement expressing the amount of business
functionality an information system (as a product) provides to a user. FPs measure
software size and are widely accepted as an industry standard for functional sizing.
For sizing software based on FP, several recognized standards and/or public
specifications have come into existence. As of 2013, these are −

5.5. ISO Standards


COSMIC − ISO/IEC 19761:2011 Software engineering. A functional size measurement
method.

FiSMA − ISO/IEC 29881:2008 Information technology - Software and systems
engineering - FiSMA 1.1 functional size measurement method.

IFPUG − ISO/IEC 20926:2009 Software and systems engineering - Software
measurement - IFPUG functional size measurement method.

Mark-II − ISO/IEC 20968:2002 Software engineering - Mk II Function Point Analysis -
Counting Practices Manual.

NESMA − ISO/IEC 24570:2005 Software engineering - NESMA function size
measurement method version 2.1 - Definitions and counting guidelines for the
application of Function Point Analysis.

5.5.1. Object Management Group Specification for Automated Function Points

Object Management Group (OMG), an open membership and not-for-profit computer
industry standards consortium, has adopted the Automated Function Point (AFP)
specification, led by the Consortium for IT Software Quality. It provides a standard for
automating FP counting according to the guidelines of the International Function Point
Users Group (IFPUG).

The Function Point Analysis (FPA) technique quantifies the functions contained within
software in terms that are meaningful to the software's users. FPs consider the number of
functions being developed based on the requirements specification.
Function Point (FP) counting is governed by a standard set of rules, processes, and
guidelines defined by the International Function Point Users Group (IFPUG). These
are published in the Counting Practices Manual (CPM).

5.6. History of Function Point Analysis


The concept of Function Points was introduced by Alan Albrecht of IBM in 1979.
Albrecht refined the method in 1984, and the first Function Point Guidelines were
published in the same year. The International Function Point Users Group (IFPUG) is a
US-based worldwide organization of Function Point Analysis metric software users,
founded in 1986 as a non-profit, member-governed organization. IFPUG owns Function
Point Analysis (FPA) as defined in ISO standard 20926:2009, which specifies the
definitions, rules, and steps for applying IFPUG's functional size measurement (FSM)
method. IFPUG maintains the Function Point Counting Practices Manual (CPM). CPM 2.0
was released in 1987, and since then there have been several iterations. CPM Release 4.3
was published in 2010.
CPM Release 4.3.1, incorporating ISO editorial revisions, was also published in 2010. The
ISO standard (IFPUG FSM) for Functional Size Measurement that is part of CPM 4.3.1 is a
technique for measuring software in terms of the functionality it delivers. The CPM is an
internationally approved standard under ISO/IEC 14143-1 Information Technology –
Software Measurement.

5.7. Elementary Process (EP)


Elementary Process is the smallest unit of functional user requirement that −

o Is meaningful to the user.

o Constitutes a complete transaction.

o Is self-contained and leaves the business of the application being
counted in a consistent state.

5.8. Functions
There are two types of functions −

o Data Functions

o Transaction Functions

5.8.1. Data Functions


There are two types of data functions −

o Internal Logical Files

o External Interface Files

Data Functions are made up of internal and external resources that affect the system.

Internal Logical Files

Internal Logical File (ILF) is a user identifiable group of logically related data or control
information that resides entirely within the application boundary. The primary intent of
an ILF is to hold data maintained through one or more elementary processes of the
application being counted. An ILF has the inherent meaning that it is internally
maintained, it has some logical structure and it is stored in a file. (Refer Figure 1)
External Interface Files

External Interface File (EIF) is a user identifiable group of logically related data or
control information that is used by the application for reference purposes only. The data
resides entirely outside the application boundary and is maintained in an ILF by another
application. An EIF has the inherent meaning that it is externally maintained, an interface
has to be developed to get the data from the file. (Refer Figure 1)

5.8.2. Transaction Functions
There are three types of transaction functions −

o External Inputs

o External Outputs

o External Inquiries

Transaction functions are made up of the processes that are exchanged between the user,
the external applications, and the application being measured.

External Inputs

External Input (EI) is a transaction function in which data goes "into" the application
from outside the boundary to inside. This data comes from outside the application.

o Data may come from a data input screen or another application.

o An EI is how an application gets information.

o Data can be either control information or business information.

o Data may be used to maintain one or more Internal Logical Files.

o If the data is control information, it does not have to update an Internal Logical
File. (Refer Figure 1)

External Outputs

External Output (EO) is a transaction function in which data comes "out" of the system.
Additionally, an EO may update an ILF. The data creates reports or output files that are
sent to other applications. (Refer Figure 1)

External Inquiries

External Inquiry (EQ) is a transaction function with both input and output components
that result in data retrieval. (Refer Figure 1)

Definition of RETs, DETs, FTRs

5.8.3. Record Element Type


A Record Element Type (RET) is the largest user identifiable subgroup of elements
within an ILF or an EIF. It is best to look at logical groupings of data to help identify
them.

5.8.4. Data Element Type


Data Element Type (DET) is the data subgroup within an FTR. They are unique and user
identifiable.

5.8.5. File Type Referenced


File Type Referenced (FTR) is the largest user identifiable subgroup within the EI, EO,
or EQ that is referenced.

The transaction functions EI, EO, and EQ are measured by counting the FTRs and DETs
that they contain, according to counting rules. Likewise, the data functions ILF and EIF
are measured by counting the DETs and RETs that they contain, according to counting
rules. The measures of transaction functions and data functions are used in FP counting,
which results in the functional size, or function points.
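The weighted counting can be sketched numerically. The complexity weights below are the commonly published IFPUG average-complexity values and are an assumption here, not given in this text; real counting assigns low, average, or high complexity to each function from its DET/RET/FTR counts per the CPM rules:

```python
# Assumed average-complexity weights from the published IFPUG tables.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts):
    """Sum weighted function counts into an unadjusted function point total."""
    return sum(WEIGHTS[ftype] * n for ftype, n in counts.items())

# Hypothetical system: 5 inputs, 3 outputs, 2 inquiries, 4 ILFs, 1 EIF
print(unadjusted_fp({"EI": 5, "EO": 3, "EQ": 2, "ILF": 4, "EIF": 1}))  # → 90
```

The result is the unadjusted function point count; the full IFPUG method further adjusts it by a value adjustment factor derived from general system characteristics.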

5.9. COUPLING
An indication of the strength of interconnections between program units.

Highly coupled systems have program units that are strongly dependent on each other.
Loosely coupled systems are made up of units that are independent or almost independent.

Modules are independent if they can function completely without the presence of the
other. Obviously, modules cannot be completely independent of each other; they must
interact to produce the desired outputs. The more connections between modules, the
more dependent they are, in the sense that more information about one module is
required to understand the other module.

Three factors determine coupling: the number of interfaces, the complexity of the
interfaces, and the type of information flow along the interfaces.

We want to minimize the number of interfaces between modules, minimize the complexity of
each interface, and control the type of information flow. An interface of a module is used
to pass information to and from other modules.

In general, modules are tightly coupled if they use shared variables or if they exchange
control information.

Coupling is loose if information is held within a unit and passed to other units via
parameter lists; it is tight if modules share global data.

If only one field of a record is needed, don't pass the entire record. Keep each interface
as simple and small as possible.

There are two types of information flow: data and control.

Passing or receiving control information means that the action of the module
depends on this control information, which makes the module more difficult to understand.

Interfaces with only data communication result in the lowest degree of coupling,
followed by interfaces that transfer only control data. Coupling is highest when the
information passed is hybrid (mixed data and control).

Ranked highest to lowest:

5.9.1. Content coupling:


Content coupling occurs when one module directly references the contents of another:

o When one module modifies local data values or instructions in another module (this can
happen in assembly language).

o When one module refers to local data in another module.

o When one module branches into a local label of another.

5.9.2. Common coupling:


Modules are bound together by access to the same global data structures.

5.9.3. Control coupling:


Passing control flags (as parameters or global) so that one module controls the sequence
of processing steps in another module.

5.9.4. Stamp coupling:


Similar to common coupling, except that global variables are shared selectively among
the routines that require the data (e.g., packages in Ada). More desirable than common
coupling because fewer modules will have to be modified if a shared data structure
is modified. The entire data structure is passed even though only parts of it are needed.

5.9.5. Data coupling:


Use of parameter lists to pass data items between routines. Data coupling occurs when
modules share data through, for example, parameters. Each datum is an elementary piece,
and these are the only data shared (e.g., passing an integer to a function that computes a
square root).
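The contrast between control coupling and data coupling can be sketched as follows; the function names and the flag parameter are illustrative assumptions, not from the source:

```python
# Control coupling: a flag parameter lets the caller dictate the callee's
# internal sequence of processing steps.
def process(record, mode):
    if mode == "validate":
        return bool(record)
    elif mode == "format":
        return str(record).upper()

# Data coupling: each routine receives only the elementary data it needs,
# via its parameter list -- the loosest and most desirable form of coupling.
def validate(record):
    return bool(record)

def fmt(record):
    return str(record).upper()

print(fmt("hello"))  # the caller no longer steers fmt's internal logic
```

Splitting `process` into `validate` and `fmt` removes the control flag entirely, so each caller passes only data and each routine can be understood and tested in isolation.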

5.10. COHESION
Cohesion is a measure of how well the parts of a module fit together. A component should
implement a single logical function or a single logical entity, and all of its parts should
contribute to that implementation.

Many levels of cohesion:

i) Coincidental cohesion: the parts of a component are not related but are simply bundled
into a single component. Such components are harder to understand and are not reusable.
ii) Logical association: similar functions, such as input or error handling, are put
together because they fall in the same logical class. A flag may be passed to determine
which ones are executed. The interface is difficult to understand, and code for more than
one function may be intertwined, leading to severe maintenance problems. Difficult to reuse.
iii) Temporal cohesion: all statements activated at a single time, such as start-up or
shut-down, are brought together (e.g., initialization, clean-up). The functions are weakly
related to one another but more strongly related to functions in other modules, so many
modules may need to change during maintenance.
iv) Procedural cohesion: elements are grouped by a single control sequence, e.g., a loop
or a sequence of decision statements. This often cuts across functional lines; the module
may contain only part of a complete function or parts of several functions. The functions
are still weakly connected, and again unlikely to be reusable in another product.

v) Communicational cohesion: elements operate on the same input data or produce the same
output data. The module may be performing more than one function. Generally acceptable if
alternative structures with higher cohesion cannot easily be identified, but there are
still problems with reusability.
vi) Sequential cohesion: the output from one part serves as input for another part. The
module may contain several functions or parts of different functions.
vii) Informational cohesion: the module performs a number of functions, each with its own
entry point and independent code, all performed on the same data structure. This differs
from logical cohesion because the functions are not intertwined.
viii) Functional cohesion: each part is necessary for the execution of a single function,
e.g., compute a square root or sort an array. Such modules are usually reusable in other
contexts, and maintenance is easier.
ix) Type cohesion: modules that support a data abstraction.

This is not strictly a linear scale: functional cohesion is much stronger than the rest,
while the first two levels are much weaker than the others. Often several levels may be
applicable when considering two elements of a module. The cohesion of a module is taken
as the highest level of cohesion that is applicable to all elements in the module.
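Functional cohesion can be illustrated with the square-root example mentioned above. This is an assumed sketch using Newton's method; every statement contributes to the one task, which is what makes the module easy to understand, test, and reuse:

```python
def square_root(x, eps=1e-10):
    """Compute the non-negative square root of x by Newton's method."""
    if x < 0:
        raise ValueError("x must be non-negative")
    guess = x or 1.0                       # any positive starting guess works
    while abs(guess * guess - x) > eps:    # iterate until guess^2 is close to x
        guess = (guess + x / guess) / 2    # Newton update for f(g) = g^2 - x
    return guess

print(round(square_root(2.0), 6))  # → 1.414214
```

Contrast this with a coincidentally cohesive module that, say, bundles square roots, date parsing, and logging: no single purpose, so no single reuse context.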

5.11. System testing


System testing is the testing of the behavior of the system or software according
to the software requirement specification. It is a series of various tests whose
primary motive is to fully exercise the computer-based system, allowing us to test,
verify, and validate the business requirements and the application architecture.
Following are the system tests for software-based systems:

5.11.1. Recovery Testing

Recovery testing is a type of non-functional testing performed to determine how quickly
the system can recover after it has gone through a system crash or hardware failure.
Recovery testing is the forced failure of the software to verify that recovery is
performed successfully.

To check the recovery of the software, the software is forced to fail in various ways.
Re-initialization, checkpointing mechanisms, data recovery, and restart are each
evaluated for correctness.

Recovery Plan - Steps:

o Determining the feasibility of the recovery process.

o Verifying the backup facilities.

o Ensuring that proper steps are documented to verify the compatibility of backup facilities.

o Providing training within the team.

o Demonstrating the ability of the organization to recover from all critical failures.

o Maintaining and updating the recovery plan at regular intervals.

5.11.2. Security testing
It checks the system's protection mechanisms and verifies that they secure the system against improper penetration.

5.11.3. Stress testing


The system is executed in a way that demands resources in abnormal quantity or
frequency.
A variation of stress testing is known as sensitivity testing.

5.11.4. Performance testing


Performance testing is designed to test run-time performance of the system in the context of
an integrated system.

It is often combined with stress testing and usually requires both hardware and software
instrumentation.

5.11.5. Deployment testing


It is also known as configuration testing.

It verifies that the software works in each environment in which it is to be operated.

5.12. Debugging process


The debugging process is not a testing process, but it occurs as a consequence of testing.

This process starts with the test cases.

The debugging process gives one of two results: either the cause is found and corrected,
or the cause is not found.

5.12.1. Debugging Strategies


Debugging identifies the actual cause of an error. Following are the debugging strategies:

5.12.1.1. Brute force


Brute force is the most commonly used and least efficient method for isolating the
cause of a software error.
This method is applied when all else fails.

5.12.1.2. Backtracking
Backtracking is used successfully in small programs.
The source code is traced backward manually until the cause is found.

5.13. Cause elimination


Cause elimination is based on the concept of binary partitioning and uses induction or
deduction.

The data related to the error occurrence is organized to isolate potential causes.
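Binary partitioning can be sketched as follows; this is an assumed illustration (the function name, the negative-value "bug," and the sample data are hypothetical), showing how repeatedly halving the suspect input isolates a small failing piece:

```python
def isolate_failure(items, fails):
    """Narrow `items` down to a small sublist for which fails() is still True."""
    while len(items) > 1:
        mid = len(items) // 2
        left, right = items[:mid], items[mid:]
        if fails(left):          # failure reproduces in the first half
            items = left
        elif fails(right):       # failure reproduces in the second half
            items = right
        else:
            break                # failure needs elements from both halves; stop
    return items

# Hypothetical bug triggered by a negative value somewhere in the input
records = [3, 7, -2, 9, 4, 1]
print(isolate_failure(records, lambda xs: any(x < 0 for x in xs)))  # → [-2]
```

Each iteration halves the search space, so the suspect region shrinks logarithmically instead of being inspected element by element, which is the whole appeal of cause elimination over brute force.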

5.14. Characteristics of testability


Following are the characteristics of testability:
Operability - If a better-quality system is designed and implemented, it is easier to test.
Observability - The ability to see what is being tested. With good observability,
incorrect output is easily identified, and internal errors are automatically caught and
reported.
Controllability - The better the software can be controlled, the more the testing can be
automated and optimized.
Decomposability - If the software system is constructed from independent modules, they
can be tested independently.
Simplicity - Programs should exhibit functional, structural, and code simplicity so that
they are easier to test.
Stability - Changes are rare during testing and do not invalidate existing tests.
Understandability - The architectural design is well understood, and the technical
documentation is quickly accessible, well organized, and accurate.

5.15. Attributes of 'good' test


o The probability of finding an error is high in a good test.

o Testing time and resources are limited, so there is no point in conducting a test that
has the same purpose as another test.

o A test should have the highest probability of uncovering an entire class of errors.

o A test should be executed separately, and it should be neither too simple nor too
complex.

Difference between white and black box testing

White-Box Testing Black-box Testing

White-box testing is also known as glass-box testing. Black-box testing is also called behavioral testing.

It starts early in the testing process. It is applied in the final stages of testing.

In this testing knowledge of implementation is needed. In this testing knowledge of implementation is not needed.

White box testing is mainly done by the developer. This testing is done by the testers.

In this testing, the tester must be technically sound. In black box testing, testers may or may not be technically sound.
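The difference can be sketched on a single function; this is an assumed example (the `classify` function and its thresholds are hypothetical), contrasting specification-driven checks with structure-driven checks:

```python
def classify(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    return "minor" if age < 18 else "adult"

# Black-box view: only the specified input/output behavior is checked,
# with no knowledge of the code inside.
assert classify(5) == "minor"
assert classify(30) == "adult"

# White-box view: tests are derived from the code's structure, so every
# branch (including the error path and the boundary) is exercised.
assert classify(17) == "minor"   # just below the branch condition
assert classify(18) == "adult"   # exactly at the branch condition
try:
    classify(-1)
except ValueError:
    print("error branch covered")
```

Knowing the implementation is what lets the white-box tester pick 17, 18, and -1 deliberately; a black-box tester would have to derive the same boundaries from the specification alone.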

5.16. Introduction to Testing


Testing is a set of activities that are decided in advance, i.e. before the start
of development, and organized systematically.

The software engineering literature defines various testing strategies for implementing
testing. All of these strategies provide a testing template with the following
characteristics:

o The developer should conduct effective technical reviews so that testing can be
successful.

o Testing starts at the component level and works outward toward the integration of
the whole computer-based system.

o Different testing techniques are suitable at different points in time.

o Testing is organized by the developer of the software and by an independent test group.

o Although debugging and testing are different activities, debugging must be
accommodated in any testing strategy.

5.16.1. Difference between Verification and Validation
o Verification is the process of determining whether the software meets the specified
requirements for a particular phase; validation checks whether the software meets the
requirements and expectations of the customer.

o Verification evaluates intermediate products; validation evaluates the final product.

o The objective of verification is to check whether the software is constructed according
to the requirement and design specifications; the objective of validation is to check
whether the specifications are correct and satisfy the business need.

o Verification checks whether the outputs are consistent with the inputs; validation
checks whether the product is accepted by the user.

o Verification is done before validation; validation is done after verification.

o Plans, requirements, specifications, and code are evaluated during verification; the
actual product or software is tested during validation.

o Verification is largely a manual check of files and documents; validation is
execution-based checking of the developed program.

5.16.2. Strategy of testing


A strategy for software testing may be viewed in the context of a spiral. The following
figure shows the testing strategy:

Unit Testing

Unit testing begins at the center of the spiral and focuses on each unit as implemented
in source code.
Integration testing
Integration testing focuses on the construction and design of the software.
Validation testing
Validation testing checks that all requirements (functional, behavioral, and performance)
are validated against the constructed software.
System testing
System testing confirms that all system elements and performance are tested entirely.
Software project management focuses on four P's. They are as follows:
People

It deals with motivated, highly skilled people.

It consists of the stakeholders, the team leaders, and the software team.

Product

The product objectives and scope should be established before project planning.

Process

The process provides a framework for creating the software development plan.

The umbrella activities, such as software quality assurance, software configuration
management, and measurement, cover the process model.

Project

Planned and controlled software projects are managed for one reason: it is the only
known way of managing complexity.

To avoid project failure, the developer should heed a set of common warning signs and
develop a common-sense approach for planning, monitoring, and controlling the project.

5.17. Problem Decomposition
o Problem decomposition is also known as partitioning or problem elaboration.

o It is an activity performed during software requirement analysis.

o The problem is not completely decomposed during software scoping.

o This activity is applied in two important areas: first, the information and
functionality that should be delivered; second, the process used to deliver it.

o A software team must have a significant level of flexibility in choosing the software
process model that is best for the project, and in populating the model once it is chosen.

The following work tasks are needed for the communication activity in simple and small
projects:

o Establish a list of clarification issues.

o Meet the stakeholders to address these issues.

o Jointly develop a statement of scope. Review it with all concerned and modify it
as needed.

5.18. Concept of Project Management


The following work tasks are required for the communication activity in complex projects:

o Review the user or customer request.

o Schedule and plan a formal, facilitated meeting with all customers.

o Conduct research to specify the proposed solution and existing approaches.

o Prepare a schedule and working document for the formal meetings.

o Describe the software from the user's point of view using use cases.

o Review the use cases for consistency, correctness, and lack of ambiguity.

o Gather the use cases into a scoping document and review it with all concerned.

o Modify the use cases as needed.

5.18.1. Process and Project Metrics


5.18.1.1. Process Metrics
Process metrics are collected across all projects and over long periods of time.
They allow a project manager to:

o Assess the status of an ongoing project.

o Track potential risks.

o Uncover problem areas before they become critical.

o Adjust tasks.

o Evaluate the project team's ability to control the quality of software work products.

5.18.1.2. Project Metrics


o On most software projects, the first application of project metrics occurs during
estimation.

o Metrics collected from previous projects act as a baseline from which effort and time
estimates are made for the current software work.

o As the project proceeds, the actual time and effort are compared with the original
estimates.

o If quality is improved, defects are minimized, and as the defect count goes down, the
amount of rework needed during the project is also reduced.

6. Important contents beyond syllabus
6.1. Advance Agile Project Management
6.1.1. Introduction
Agile Project Management is one of the revolutionary methods introduced to the
practice of project management. It is one of the latest project management
strategies and is mainly applied to project management in software
development. Therefore, it is best understood in relation to the software
development process.
Since the inception of software development as a business, a number of processes have
been followed, such as the waterfall model. With the advancement of software
development technologies and business requirements, the traditional models are no
longer robust enough to cater to the demands.
Therefore, more flexible software development models were required to address
the agility of the requirements. As a result, the information technology community
developed agile software development models.
'Agile' is an umbrella term for the various models used for agile
development, such as Scrum. Since the agile development model is different from
conventional models, agile project management is a specialized area of project
management.

6.1.2. The Agile Process


A good understanding of the agile development process is required in order to
understand agile project management.

There are many differences in the agile development model when compared to
traditional models:
The agile model emphasizes that the entire team should be a tightly
integrated unit. This includes the developers, quality assurance, project management,
and the customer.
Frequent communication is one of the key factors that makes this integration possible.
Therefore, daily meetings are held to determine the day's work and
dependencies.
Deliveries are short-term; a delivery cycle usually ranges from one to four weeks.
These cycles are commonly known as sprints.
Agile project teams use open communication techniques and tools that enable the
team members (including the customer) to express their views and feedback openly
and quickly. These comments are then taken into consideration when shaping the
requirements and implementation of the software.

6.1.3. Scope of Agile Project
Management
In an agile project, the entire team is responsible for managing itself; it is not
just the project manager's responsibility. When it comes to processes and procedures,
common sense is used over written policies.
This makes sure that there is no delay in management decision-making, and therefore
things can progress faster.
In addition to being a manager, the agile project management function should also
demonstrate leadership and skill in motivating others. This helps retain the
spirit among the team members and gets the team to follow discipline.
The agile project manager is not the 'boss' of the software development team. Rather, this
function facilitates and coordinates the activities and resources required for quality
and speedy software development.

7. Any other important information


Q.1 Define Software Engineering.
Ans. Software Engineering is defined as the application of a systematic, disciplined, quantifiable
approach to the development, operation, and maintenance of software.

Q.2 List out the elements in Computer-Based System?

Ans. Elements in Computer-Based System are:

Software

Hardware

People

Database

Documentation

Procedures.

Q.3 What are the factors to be considered in the System Model Construction?

Ans. Factors to be considered in the System Model Construction are:

Assumption

Simplification

Limitation

Constraints

Preferences

Q.4 What does a System Engineering Model accomplish?

Ans. System Engineering Model accomplishes the following:

 Define processes that serve the needs of the view.

 Represent the behavior of the processes and the assumptions on which the behavior is based.

 Explicitly define exogenous and endogenous inputs.

 Represent all linkages that enable the engineer to better understand the view.

Q.5 Define Framework.

Ans. A framework is a code skeleton that can be fleshed out with specific classes or functionality,
designed to address the specific problem at hand.
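The idea of a framework as a code skeleton can be sketched in Python. This is a minimal, hypothetical example (the `ReportFramework` and `CsvReport` names are invented for illustration): the framework fixes the overall workflow, and user-supplied classes flesh it out with specific functionality.

```python
# Minimal sketch of a framework: the skeleton defines the flow, and users
# "flesh it out" by subclassing with specific logic (inversion of control).
from abc import ABC, abstractmethod

class ReportFramework(ABC):
    """Code skeleton: fixed workflow, pluggable steps."""

    def run(self):
        # The framework, not the user, controls the order of the steps.
        data = self.load()
        return self.render(data)

    @abstractmethod
    def load(self):
        ...

    @abstractmethod
    def render(self, data):
        ...

class CsvReport(ReportFramework):
    # User-supplied class that fleshes out the skeleton for one problem.
    def load(self):
        return ["alpha", "beta"]

    def render(self, data):
        return ",".join(data)

print(CsvReport().run())  # alpha,beta
```

The design choice to note is that control flows from the framework into user code, which is what distinguishes a framework from an ordinary library.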

Q.6 What are the important roles of Conventional Component within the Software
Architecture?

Ans. The important roles of Conventional component within the Software Architecture are:

 Control Component: Coordinates the invocation of all other problem domain components.



 Problem Domain Component: Implements a complete or partial function
required by the customer.

 Infrastructure Component: Responsible for functions that support the
processing required in the problem domain.

Q.7 Differentiate Software Engineering methods, tools and procedures.

Ans. Methods: Broad array of tasks like project planning, cost estimation, etc.

Tools: Automated or semi-automated support for methods.

Procedures: Holds the methods and tools together. It enables the timely development of
computer software.

Q.8 Who is called as the Stakeholder?

Ans. Stakeholder is anyone in the organization who has a direct business interest in the system
or product to be built.

Q.9 Write about Real Time Systems.

Ans. It provides a specified amount of computation within fixed time intervals. RTS sense and
control external devices, respond to external events, and share processing time between
tasks.

Q.10 Define Distributed system.

Ans. It consists of a collection of nearly autonomous processors that communicate to achieve a


coherent computing system.

Q.11 What are the characteristics of the software?

Ans. Characteristics of the software are:

Software is engineered, not manufactured.



Software does not wear out.

Most software is custom built rather than being assembled from components.

Q.12 What are the various categories of software?

Ans. The various categories of software are:

System software.

Application software.

Engineering / Scientific software.

Embedded software.

Web Applications.

Artificial Intelligence software.

Q.13 What are the challenges in software?

Ans. The challenges in software are:

Coping with legacy systems.



Heterogeneity challenge.

Delivery times challenge.

Q.14 Define Software process.

Ans. Software process is defined as the structured set of activities that are required to develop
the software system.

Q.15 What are the fundamental activities of a software process?

Ans. The fundamental activities of a software process are:

Specification

Design and Implementation

Validation

Evolution

Q.16 What are the umbrella activities of a software process?

Ans. The umbrella activities of a software process are:

Software project tracking and control.



Risk Management.

Software Quality Assurance.

Formal Technical Reviews.

Software Configuration Management.

Work product preparation and production.

Reusability management, Measurement.

Q.17 List the activities during project Initiation.

Ans. Important activities during project initiation phase:

o Management team building.



o Enables the team members to understand one another.

o Minimize the impact of cultural and language barriers.

o Scope and high level work division agreements.

o Management reporting and escalating procedures.

o Involvement of infrastructure / support groups.

o Team formation.

o Project kick-off meeting, formally attended by all concerned, so that everyone
has a common understanding of what is expected.

Q.18 What is work breakdown structure?

Ans. Work breakdown structure is the decomposition of the project into smaller and more
manageable parts, with each part satisfying the following criteria:

Each WBS unit has a clear outcome.



The outcome has a direct relationship to achieve the overall project goal.

Each unit has a single point of accountability.
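The WBS criteria above can be illustrated with a small sketch. This is a hypothetical example (the project, owner, and outcome values are invented): the WBS is a nested structure whose smallest units each carry a clear outcome and a single accountable owner.

```python
# Hypothetical sketch: a work breakdown structure as a nested dictionary.
wbs = {
    "name": "Build website", "owner": "PM",
    "children": [
        {"name": "Design UI", "owner": "Alice", "outcome": "approved mockups"},
        {"name": "Implement backend", "owner": "Bob", "outcome": "API passing tests"},
        {"name": "Deploy", "owner": "Carol", "outcome": "site live"},
    ],
}

def leaf_units(node):
    """Collect the smallest manageable parts of the project."""
    children = node.get("children", [])
    if not children:
        return [node]
    units = []
    for child in children:
        units.extend(leaf_units(child))
    return units

# Each leaf unit must satisfy the WBS criteria: a clear outcome and a
# single point of accountability.
for unit in leaf_units(wbs):
    assert "outcome" in unit and "owner" in unit
print(len(leaf_units(wbs)))  # 3
```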

Q.19 What are the issues that get discussed during project closure?

Ans. The issues that get discussed during project closure are:

What were the goals that we set out to achieve?



How effective were the in-process metrics?

What were the root causes for under-achievement or over achievement?

Was our estimation effort correct?

What were the factors in the environment that we would like to change?

What did we gain from the system or environment?

Was our estimation of the hardware correct?

Q.20 Give any two activities of project initiation.

Ans. Management team building and Team formation.

Q.21. What are the external dependencies in project planning?

Ans. Staffing, Training, Acquisition and Commissioning of new hardware, Availability of


modules, Travel.

Q.22. What are internal milestones?

Ans. They are the measurable and quantifiable attributes of progress. They are the
intermediate points in the project which ensure that we are on the right track. They are
under the control of the project manager.

Q.23. What is the role of the project board?

Ans. The overall responsibility for ensuring satisfactory progress on a project is the role of
the project board.

Q.24. What is the role of project manager?

Ans. The project manager is responsible for day to day administration of the project.

Q.25. What is closed system?

Ans. Closed systems are those that do not interact with the environment.

Q.26. What is embedded system?

Ans. A system that is part of a larger system whose primary purpose is non-computational.

Q.27. What is a Process Framework?

Ans. Process Framework establishes foundation for a complete software process by


identifying a small number of framework activities that are applicable for all software
projects regardless of their size and complexity.

Q.28 What are the Generic Framework Activities?

Ans. Generic Framework Activities are:

Communication.

Planning.

Modeling.

Construction.

Deployment.

Q.29. Define Stakeholder.

Ans. A stakeholder is anyone who has a stake in the successful outcome of the project, such as:

Business Managers,

End-users,

Software Engineer,

Support People

Q.30. How the Process Model differ from one another?

Ans. Process Model differ from one another due to the following reasons:

Based on flow of Activities.



Interdependencies between Activities.

Manner of Quality Assurance.

Manner of Project Tracking.

Team Organization and Roles.

Manner in which work products are identified and required.

Q.31 Write out the reasons for the Failure of Water Fall Model?

Ans. Reasons for the Failure of Water Fall Model are :

Real projects rarely follow sequential flow; iterations are made in an indirect manner.

Difficult for the customer to state all requirements explicitly.

The customer needs patience, as a working product is available only at the deployment
phase.

Q.32 What are the Drawbacks of RAD Model ?

Ans. Drawbacks of RAD Model are :

Requires a sufficient number of human resources to create enough teams.

If developers and customers are not committed, the system results in failure.

If the system cannot be properly modularized, building components may be problematic.

Not applicable when there is a high possibility of technical risk.

Q.33 Define the term Scripts.

Ans. Scripts define specific process activities and other detailed work functions that are part of
the team process.

Q.34. Write the disadvantages of classic life cycle model.

Ans. Disadvantages of classic life cycle model are :

Real projects rarely follow sequential flow. Iteration always occurs and creates
problems.

Difficult for the customer to state all requirements.

Working version of the program is not available. So the customer must have
patience.

Q.35. What do you mean by task set in spiral Model?

Ans. Each of the regions in the spiral model is populated by a set of work tasks called a task
set that are adapted to the characteristics of the project to be undertaken.

Q.36 What is the main objective of Win-Win Spiral Model ?

Ans. The customer and the developer enter into a process of negotiation where the
customer may be asked to balance functionality, performance and other product attributes
against cost and time to market.

Q.37. Which of the software engineering paradigms would be most effective? Why?

Ans. Incremental / Spiral model will be most effective.

Reasons :

It combines linear sequential model with iterative nature of prototyping.



Focuses on delivery of product at each increment.

Can be planned to manage technical risks.

Q.38. What are the merits of incremental model ?

Ans. The merits of incremental model are :

The incremental model can be adopted when there are less number of people
involved in the project.

Technical risks can be managed with each increment.

For a very small time span, at least core product can be delivered to the customer.

Q.39. List the task regions in the Spiral model.

Ans. Task regions in the Spiral model are:

Customer Communication: In this region it is suggested to establish customer


communication.

Planning: All planning activities are carried out in order to define resources,
timeline and other project-related activities.

Risk Analysis: The tasks required to calculate technical and management risks.

Engineering: In this task region, tasks required to build one or more
representations of the application are carried out.

Construct and Release: All the necessary tasks required to construct, test, and
install the applications are conducted.

Customer Evaluation: The customer's feedback is obtained, and based on the customer
evaluation, the required tasks are performed and implemented at the installation stage.

Q.40. What are the drawbacks of spiral model?

Ans. The drawbacks of spiral model are:

It is based on customer communication. If the communication is not proper, then
the software product that gets developed will not be up to the mark.

It demands considerable risk assessment. Only if the risk assessment is done properly
can a successful product be obtained.

Q.41 Name the Evolutionary process Models.

Ans. Evolutionary process models are:

Incremental model

Spiral model

WIN-WIN spiral model

Concurrent Development

Q.42 Define Software Prototyping.

Ans. Software prototyping is defined as a rapid software development for validating the
requirements.

Q.43 What are the benefits of prototyping ?

Ans. The benefits of prototyping are :

A prototype serves as a basis for deriving the system specification.



Design quality can be improved.

System can be maintained easily.

Development efforts may get reduced.

System usability can be improved.

Q.44. What are the prototyping approaches in software process?

Ans. The prototyping approaches in software process are :

Evolutionary prototyping: In this approach of system development, the initial
prototype is prepared and it is then refined through a number of stages to the
final stage.

Throw-away prototyping: Using this approach, a rough practical implementation
of the system is produced. The requirement problems can be identified from this
implementation. It is then discarded. The system is then developed using some
different engineering paradigm.

Q.45. What are the advantages of evolutionary prototyping ?

Ans. The advantages of evolutionary prototyping are :

Fast delivery of the working system.



User is involved while developing the system.

More useful system can be delivered.

Specification, design and implementation work in coordinated manner.

Q.46. What are the various Rapid prototyping techniques ?

Ans. The various rapid prototyping techniques are :

Dynamic high level language development.



Database programming.

Component and application assembly.

Q.47. What is the use of User Interface prototyping ?

Ans. This prototyping is used to pre-specify the look and feel of the user interface in an
effective way.

Q.48. Give the phases of product development life cycle.

Ans. The phases of the product development life cycle are:

Idea generation: Ideas come from various sources like customers, suppliers,
employees, market place demands.

Prototype development phase: This entails building a simplistic model of the final
product.

Beta phase: This phase irons out the kinks in the product and adds the necessary
supporting infrastructure to roll out the product.

Production phase: In this phase product is ready for prime time.

Maintenance and obsolescence phase: In this critical bugs are fixed after which
the product goes into obsolescence.

Q.49. Explain water fall model in detail.

Ans. The project is divided into a sequence of well-defined phases. One phase is completed
before the next starts. There is a feedback loop between adjacent phases. What the actual
phases are depends on the project.

Advantages :

Simplicity

Lining up resources with appropriate skills is easy

Disadvantages :

Highly impractical for most projects



Phases are tightly coupled.

Q.50 Explain RAD model in detail

Ans. The customer and developer agree on breaking the product into small units.
Development is carried out using modeling tools and CASE tools. The customer is kept in
touch so that changes are reflected in time. Quality assurance is imposed.

Advantages:

Responsiveness to change

Ability to capture user requirements effectively.

Application turnaround time is shorter.

Disadvantages:

Need for modeling tools which adds expense.

Places restriction on type and structure.

Q.51. What is the principle of prototype model?

Ans. A prototype is built to quickly demonstrate to the customer what the product would
look like. Only minimal functionality of the actual product is provided during the
prototyping phase.

Q.52. What is the advantage of Spiral model?

Ans. The main advantage of the spiral model is that it is realistic and typifies most software
development products/projects. It combines the best features of most of the earlier
models. It strikes a good balance: a mechanism for early problem identification and
correction, while not missing out on proactive problem prevention.

Q.53 What is lifecycle model?

Ans. In a life cycle model, different teams have specialization and responsibility in different life cycle phases.

Q.54. Why Formal Methods are not widely used?

Ans. Formal Methods are not widely used due to the following reasons:

It is Quite Time Consuming and Expensive.



Extensive expertise is needed for developers to apply formal methods.

Difficult to use, as they are technically sophisticated and their maintenance may become a risk.

Q.55. What are the Objectives of Requirement Analysis ?

Ans. Objectives of Requirement Analysis are :

It describes what customer requires.

It establishes a basis for creation of software design.

It defines a set of requirements that can be validated once the software design is
built.

Q.56. Define System Context Diagram (SCD)?

Ans. System Context Diagram (SCD):

Establishes the information boundary between the system being implemented and the
environment in which the system operates.

Defines all external producers, external consumers, and entities that communicate
through the user interface.

Q.57. Define System Flow Diagram (SFD)?

Ans. System Flow Diagram (SFD):

Indicates information flow across the System Context Diagram regions.

Used to guide the system engineer in developing the system.

Q.58. What are the Requirements Engineering Process Functions?

Ans. Requirements Engineering Process Functions are:

Inception

Elaboration

Specification

Management

Elicitation

Negotiation

Validation

Q.59. What are the Difficulties in Elicitations?

Ans. Difficulties in Elicitation are:

Problem of Scope

Problem of Volatility

Problem of Understanding

Q.60. Define Quality Function Deployment (QFD)?

Ans. Quality Function Deployment (QFD) is a technique that translates the needs of the customer
into technical requirements. It concentrates on maximizing customer satisfaction throughout
the software engineering process.

Q.61. Write a short note on structure charts.

Ans. These are used in architectural design to document the hierarchical structure, parameters and
interconnections in a system. There is no decision box. The chart can be augmented with a
module-by-module specification of input/output and processing attributes.
Q.62. What are the contents of HIPO diagrams?

Ans. The contents of HIPO diagrams are:

Visual table of contents,



Set overview diagrams,

Set of details diagrams.

Q.63 Explain software Requirement Specification.

Ans. Software Requirement Specification (SRS) is a document that describes the functional requirements, non-functional requirements, and constraints of the software to be developed.

Q.64 What is Requirement Engineering ?

Ans. Requirement engineering is the process of establishing the services that the customer
requires from the system and the constraints under which it operates and is developed.

Q.65 What are the characteristics of SRS?

Ans. The characteristics of SRS are as follows:

Correct: The SRS should be kept up to date as appropriate requirements are identified.

Unambiguous: Only when the requirements are correctly understood is it possible to
write unambiguous software.

Complete: To make the SRS complete, it should specify everything a software designer
needs to create the software.

Consistent: It should be consistent with reference to the functionalities identified.

Specific: The requirements should be mentioned specifically.

Traceable: The need for each mentioned requirement should be correctly identified.
Q.66 What are the objectives of Analysis modeling?

Ans. The objectives of analysis modeling are:

To describe what the customer requires.

To establish a basis for the creation of the software design.

To devise a set of valid requirements after which the software can be built.

Q.67 What is ERD?

Ans. Entity Relationship Diagram is the graphical representation of the object relationship
pair. It is mainly used in database application.

Q.68 What is DFD?

Ans. Data Flow Diagram depicts the information flow and the transforms that are applied on
the data as it moves from input to output.

Q.69 What does Level 0 DFD represent?

Ans. Level-0 DFD is called the fundamental system model or context model. In the context
model, the entire software system is represented by a single bubble with input and
output indicated by incoming and outgoing arrows.

Q.70 What is a state transition diagram?

Ans. State transition diagram is basically a collection of states and events. The events cause
the system to change its state. It also represents what actions are to be taken on the
occurrence of particular events.
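A state transition diagram can be approximated in code as a transition table. The sketch below is hypothetical (the door states, events, and actions are invented for illustration): each (state, event) pair maps to the next state and the action taken on that event.

```python
# Hypothetical sketch: a state transition table for a simple door, mapping
# (current state, event) -> (next state, action to take).
transitions = {
    ("closed", "open_cmd"):  ("open",   "unlock and swing open"),
    ("open",   "close_cmd"): ("closed", "swing shut and lock"),
}

def step(state, event):
    # Look up the transition; the event causes the system to change state.
    next_state, action = transitions[(state, event)]
    print(f"{state} --{event}--> {next_state}: {action}")
    return next_state

state = "closed"
state = step(state, "open_cmd")   # moves to "open"
state = step(state, "close_cmd")  # moves back to "closed"
```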

Q.71 Define Data Dictionary.

Ans. The data dictionary can be defined as an organized collection of all the data elements of
the system with precise and rigorous definitions so that user and system analyst will
have a common understanding of inputs, outputs, components of stores and
intermediate calculations.
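A data dictionary entry can be sketched as a small structure. This is a hypothetical illustration (the element names, types, and process names are invented): every data element carries a precise definition so that users and analysts share a common understanding.

```python
# Hypothetical sketch: a data dictionary as an organized collection of the
# system's data elements with precise, rigorous definitions.
data_dictionary = {
    "customer_id": {
        "type": "integer",
        "definition": "Unique identifier assigned to each customer",
        "where_used": ["place_order", "customer_store"],
    },
    "order_total": {
        "type": "decimal(10,2)",
        "definition": "Sum of line-item amounts for one order",
        "where_used": ["billing", "order_store"],
    },
}

# The common understanding: every element must have a definition and a type.
assert all("definition" in e and "type" in e for e in data_dictionary.values())
print(sorted(data_dictionary))  # ['customer_id', 'order_total']
```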

Q.72 What are the elements of analysis model?

Ans. The elements of analysis model are:

Data Dictionary
Entity Relationship Diagram
Data flow Diagram
State Transition Diagram
Control Specification
Process Specification

Q.73 What are the elements of design model?

Ans. The elements of design model are;

Data Design
Architectural design
Interface design
Component-level design.

Q.74 What are the dimensions of requirements gathering?

Ans. The dimensions of requirements gathering are:

Responsibilities: Commitments on either side. Requirements form the basis for further
success in a project.

Current system requirements

Functionality requirements
Performance requirements
Availability needs
Security
Environmental definitions

Targets

Acceptance criteria

Ongoing needs: Documentation

Training
Ongoing support

Q.75 List the skill sets required during the requirements phase.

Ans. The skill sets required during the requirements phase are:

 Ability to look at the requirements from the customer's point of view

 domain expertise

 Strong interpersonal skills

 Ability to tolerate ambiguity

 Technology awareness

 Strong negotiation skills

 Strong communication skills

Q.76 What is the primary objective of project closure ?

Ans. Evaluating the effectiveness of the original project goals and providing recommendations
to improve the system.

Q.77 What are the dimensions of requirements gathering?

Ans. The dimensions of requirements gathering are:

Responsibilities

Targets

Current system needs

Ongoing needs

Q.78 Give the classifications of system requirements.

Ans. The classification of system requirements are:

Functionality Requirements

Availability needs

Performance requirements

Security

Q.79 List some of the skills essential for requirements gathering phase.

Ans. The skills essential for requirements gathering phase are:

Ability to see from the customer's point of view



Technology awareness

Strong interpersonal skills

Domain expertise

Strong communication skills

Q.80 What does P-CMM model stand for?

Ans. P-CMM stands for People CMM (People Capability Maturity Model).

Q.81 What are the components of the Cost of Quality?

Ans. Components of the Cost of Quality are:

Prevention Costs.

Appraisal Costs.

Failure Costs.

Q.82 What is Software Quality Control?

Ans. Software Quality Control involves a series of inspections, reviews and tests which are
used throughout the software process to ensure that each work product meets the requirements
placed upon it.

Q.83 What is Software Quality Assurance?

Ans. Software Quality Assurance is a set of auditing and reporting functions that assess the
effectiveness and completeness of quality control activities.

Q.84 What steps are required to perform Statistical SQA?

Ans. Following steps are required to perform Statistical SQA:

 Information about software defects is collected and categorized.

 An attempt is made to trace each defect to its underlying cause.

 Using the Pareto principle, isolate the 20% of causes that account for most of the defects (the vital few).

 Once the vital causes are identified, the problems can be corrected.
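The Pareto step of statistical SQA can be sketched in Python. This is a hypothetical illustration (the defect categories and counts are invented): defects are collected and categorized, then causes are ranked by frequency and kept until they cover roughly 80% of all defects, isolating the "vital few".

```python
# Hypothetical sketch of statistical SQA: categorize defects, then apply
# the Pareto principle to isolate the vital few causes.
from collections import Counter

defects = ["spec", "spec", "logic", "spec", "interface", "logic",
           "spec", "spec", "data", "spec", "logic", "spec"]

counts = Counter(defects)          # step 1: collect and categorize
total = sum(counts.values())

# Step 2: rank causes by frequency and keep those covering ~80% of defects.
vital_few, covered = [], 0
for cause, n in counts.most_common():
    vital_few.append(cause)
    covered += n
    if covered / total >= 0.8:
        break

print(vital_few)  # ['spec', 'logic']
```

Correcting just these vital-few causes is then expected to eliminate most of the rework.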

Q.85 Define SQA Plan

Ans. The SQA Plan provides a road map for instituting SQA and serves as a template for the SQA
activities that are instituted for each software project.
Question 86. Explain the SE paradigms. Explain each model with a diagram and compare the
models.
Answer: The software engineering paradigm, which is also referred to as a software process
model or Software Development Life Cycle (SDLC) model, is the development strategy that
encompasses the process, methods and tools. In simple words, a software engineering paradigm
is the Software Development Life Cycle itself. The objectives of using software engineering
paradigms include the following points:

The Software Development Process becomes a Structured process.



It determines the order of stages involved in software development and evolution,
and establishes the transition criteria for the next stage.

It also provides the guidance to the software engineer.

There are 4 basic Software Development Models. They are as follows:

1. Waterfall Model:

The Waterfall Model is the basis of all models of software development. It is also known as the
Linear Sequential Model, as every stage is executed sequentially. The waterfall model is
considered the classic life cycle model, as it is the basic and the first model to be proposed
and accepted worldwide.
Below is the image of the different stages of the Waterfall Model.

We can see that every stage is executed only after the previous stage has been completed. No
backward movement is possible in this model; therefore an error introduced in an earlier stage
cannot be corrected until the whole software has been built.
Advantages of Waterfall Model:
 The model has well-defined phases with well-defined inputs.

 The model recognizes the sequences of software engineering activities that result in a
software product.

Disadvantages of Waterfall Model:


 The model has no mechanism to handle changes to the requirements that are identified
during the design phase.

 For large projects, the users have to wait a long time for the delivery of the system.

 The model assumes that requirements are clearly specified at the beginning of the project.

 The model reduces customer involvement between the design and testing phases of the
project.
As the disadvantages of the Waterfall Model outweighed its advantages and were
significant, there was a need to update the waterfall model and bring in a new model
to overcome its disadvantages.
2. Spiral Model:

The Spiral Model involves the same steps as the waterfall model, but it addresses the
disadvantages of the waterfall model. The spiral model is motivated by the idea that the
requirements given at the beginning are incomplete, so the requirement analysis phase and
design phase need to evolve periodically as the project progresses. Below is the image of
the structure of the Spiral Model:
From the above image we can see that the stages repeat periodically as needed.
The spiral model focuses on the constant re-evaluation of the design and the risks involved.
It can be described as being iterative and sequential: every stage is executed
sequentially and is repeated after a period of time to re-evaluate some points or
requirements.

Advantages of Spiral Model:

It is flexible and can be tailored for a variety of situations, such as reuse, component-based
development and prototyping.
It can be adjusted to be used as any other model.

It takes into consideration the change of the requirements during the development process.

Disadvantages of Spiral Model:


High Cost

Very complex therefore not suitable for small projects.


3. Incremental Process Model:

The incremental process model uses the same phases as the waterfall process model. The
difference between the incremental process model and the waterfall model is that in this model
the phases are much smaller than the waterfall phases. This model is similar to the spiral model
in that the requirements are not complete at project start, but rather than going through a
cyclical cycle like the spiral model, the incremental model contains something similar to
multiple waterfalls: the phases are executed sequentially, and after the last phase is completed,
if any updation is required, the process moves back to the first phase or to the phase where the
updation is required.
Advantages of Incremental Process Model:

 It is easier to test and debug during a smaller iteration.

 It is easier to manage risk in incremental process model because risky pieces are
identified and handled during iteration.

 It lowers the initial delivery cost.

 This model is more flexible, less costly to change scope and requirements.

Disadvantages of Incremental Process Model:
 Incremental process model needs a clear and complete definition of the whole system
before it can be broken down and built incrementally.

 The cost needed for this model is higher than for the Waterfall model.

 To build the Incremental Process Model we need good planning and design.

Question 87. Draw a Level 0 or Level 1 DFD for the Railway Reservation System.
Answer:
Level 0 DFD of Railway Reservation System: (diagram)

Question 88. Draw the E-R Diagram of a banking system.
Answer: The ER diagram for the bank system is: (diagram)

Question 89. Outline the major goals and key challenges faced by software engineering.
Answer: The major goals of Software Engineering are:
Readability

Reusability

Correctness

Reliability

Flexibility

Efficiency

The Key Challenges faced by Software Engineering are:


 The methods used to develop small or medium-scale projects are not suitable when it
comes to the development of large-scale or complex systems.

 In today's world, changes occur rapidly and accommodating these changes to develop
complete software is one of the major challenges faced by the software engineers.

 The user generally has only a vague idea about the scope and requirements of the
software system. This usually results in the development of software, which does not
meet the user's requirements.

 Changes are usually incorporated in documents without following any standard
procedure. Thus, verification of all such changes often becomes difficult.

 Informal communications take up a considerable portion of the time spent on software
projects. Such wastage of time delays the completion of projects in the specified time.

 The development of high-quality and reliable software requires the software to
be thoroughly tested.

Question 90. Distinguish between generic and customized software production.
Answer:

Generic Software Production:

It is done for a general-purpose audience.

It is tougher in terms of design and structure, as the whole design and structure have
to be built.

It is done for a project whose resulting product is owned by the company itself.

The end result may or may not satisfy the majority, since it targets a general purpose.

Customized Software Production:

It is done to satisfy a particular need of a particular client.

It is easier, as the design and structure are already prepared and only some
modifications have to be made.

It is done as a hire-in project for another company.

The end result is what the client wants, so it will always satisfy the client's needs.

Question 91. Describe the following:


(1). Product
(2). Process
(3). Milestones

Answer: The Software Engineering Institute defines a software product line as "a set of software-
intensive systems that share a common, managed set of features satisfying the specific needs of a
particular market segment or mission and that are developed from a common set of core assets in
a prescribed way".

In software engineering, a software development methodology (also known as a system
development methodology, software development life cycle, software development process, or
software process) is a splitting of software development work into distinct phases (or stages)
containing activities, with the intent of better planning and management.
A milestone is a significant event in the course of a project that is used to give visibility of
progress in terms of achievement of predefined milestone goals. Failure to meet a milestone
indicates that a project is not proceeding to plan and usually triggers corrective action by
management.
Question 92. What is the difference between a logical and a physical DFD?
Answer: Data flow diagrams (DFDs) are categorized as either logical or physical. A logical
DFD captures the data flows that are necessary for a system to operate: it describes the
processes that are undertaken, the data required and produced by each process, and the stores
needed to hold the data. A physical DFD, in contrast, shows how the system is actually
implemented, including the specific people, software, hardware, and files involved.
Question 93. What do you mean by the Black Box view?
Answer: The Black Box view is the view in which we are concerned only with the external
behaviour of the system, not with its interior; what happens inside the project is not our
concern. The black box view basically deals with the inputs and outputs of the software we are
developing. Black box testing is derived from this view: we supply an input and check the
output we get for that particular input, checking the robustness and boundary values of the
software. The Black Box view is like looking at an opaque body: we can see only the exterior
of that body, not the interior. Likewise, in the black box view only the outside things, the
inputs and outputs, are visible.
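As an illustration of this view, here is a minimal black-box test sketch in Python. The function `grade()` and its pass mark of 40 are invented for the example; the tests touch only its inputs and outputs, including the boundary values.

```python
def grade(score):
    """Hypothetical unit under test: map a 0-100 mark to pass/fail."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

# Black-box tests: only inputs and outputs are examined,
# including the boundary values 0, 39, 40 and 100.
assert grade(0) == "fail"
assert grade(39) == "fail"
assert grade(40) == "pass"
assert grade(100) == "pass"

# Robustness check: an out-of-range input must be rejected.
try:
    grade(101)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for out-of-range input")
```

These tests would remain valid even if `grade()` were reimplemented internally, which is exactly the point of the black-box view.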

Question 94: What is phase containment of errors?


Answer: Phase Containment of Errors:
Phase Containment in a nutshell is finding and removing defects/errors early in the process of
Software Development Life Cycle. It is the act of containing faults in one phase of software
development before they escape and are found in subsequent phases. An error is a fault that is
introduced in the current phase of software development. A defect is a fault that was
introduced in prior phases of software development and discovered in subsequent phases.

Relating all this to phase containment: some errors escape a phase and are found only in later
phases, which affects the cost and productivity of the software, since the cost of removing
such an error increases the budget of the project. Detecting and removing faults within the
phase in which they were introduced, before they can escape to later phases, is what phase
containment of errors means. There are many techniques, such as reviews and inspections, for
containing these errors.

Question 95: What is a structured design? Draw structure chart, HIPO diagram.
Answer:Structured analysis and design technique (SADT) is a systems engineering and
software engineering methodology for describing systems as a hierarchy of functions. SADT is
a structured analysis modeling language, which uses two types of diagrams: activity models
and data models.
A Structure Chart (SC), in software engineering and organizational theory, is a chart which
shows the breakdown of a system to its lowest manageable levels. Structure charts are used in
structured programming to arrange program modules into a tree; each module is represented by a
box which contains the module's name. A HIPO (Hierarchy plus Input-Process-Output) diagram
complements this: the hierarchy chart shows the top-down breakdown of the system into
functions, while the associated IPO charts describe the inputs, processing steps, and outputs
of each function.
Ques 96. Software doesn't wear out. Elaborate.
Ans: Software doesn't "wear out":

This figure 01 is often called the "bathtub curve". It indicates that, at the beginning of the life of
hardware, it shows a high failure rate as it contains many defects. Over time, the manufacturers or
designers repair these defects, and it reaches an idealized steady state that continues for a period.
But after that, as time passes, the failure rate rises again; this may be caused by excessive
temperature, dust, vibration, improper use and so on, until at some point the hardware becomes
totally unusable. This is the "wear out" state. Software, on the other hand, does not wear out.
Like hardware, software also shows a high failure rate in its infant state. Then it receives
modifications, the defects are corrected, and it reaches the idealized state. But even a software
product with no known defects may need modification, because the users' demands on the software
may change. When that happens, the unfulfilled demands are treated as defects and the failure
rate increases. After one modification, another may become necessary. In that way the minimum
failure-rate level slowly rises, and the software deteriorates due to change, but it does not
"wear out".
Ques 97. Differentiate between the Iterative Enhancement Model and the Evolutionary
Development Model.
Ans: Iterative Enhancement Model: This model has the same phases as the waterfall model, but
with fewer restrictions. Generally the phases occur in the same order as in the waterfall model,
but they may be conducted in several cycles. A usable product is released at the end of each
cycle, with each release providing additional functionality. Evolutionary Development Model:
The evolutionary development model resembles the iterative enhancement model. The same phases
as defined for the waterfall model occur here in a cyclical fashion. This model differs from
the iterative enhancement model in that it does not require a usable product at the end of each
cycle. In evolutionary development, requirements are implemented by category rather than by
priority.
Ques 98. How does the risk factor affect the spiral model of software development?
Ans: Risk Analysis phase is the most important part of "Spiral Model". In this phase all possible
(and available) alternatives, which can help in developing a cost effective project are analyzed
and strategies are decided to use them. This phase has been added specially in order to identify
and resolve all the possible risks in the project development. If risks indicate any kind of
uncertainty in requirements, prototyping may be used to proceed with the available data and
find out possible solution in order to deal with the potential changes in the requirements.

Ques 99. Why is SRS also known as the black box specification of system?
Ans: The SRS document is a contract between the development team and the customer. Once the SRS
document is accepted by the customer, any subsequent controversies are settled by referring to
the SRS document. The SRS document is called a black-box specification because the system is
considered a black box whose internal details are not known; only its externally visible (i.e.,
input/output) behavior is specified.
Ques 100. Consider a program which registers students for different programs. The
students fill a form and submit it. This is sent to the departments for confirmation. Once it
is confirmed, the form and the fees are sent to the account section. Draw a data flow
diagram using SRD technique.
Ans:

Ques 101. What is modularity? List the important properties of a modular system.
ANS) Modularity is the degree to which a system's components may be separated and
recombined. The meaning of the word, however, can vary somewhat by context: In
biology, modularity is the concept that organisms or metabolic pathways are composed
of modules.
Self-Contained: "Agile & Autonomous"

A module is a self-contained component of a larger software system. This does not mean that it
is an atomic component; in fact, a module consists of several smaller pieces which collectively
contribute to the functionality and performance of the module.

We cannot remove or modify any of these smaller (compared to the larger software system)
components; if we do, the module will cease to provide its expected functionality. A module can
be installed, uninstalled or moved as a whole (a single unit) without affecting the
functionality of the other modules.

Highly Cohesive: "To do one thing and do it well"

Cohesion is strongly related to 'responsibility' in real life. A responsibility is a kind of
action that a given entity is expected to carry out.

'High cohesiveness' means that a component (module) is strongly focused on carrying out a
specific task and does not perform any unrelated tasks. Therefore, cohesive modules are
fine-grained, robust, reusable and lower in complexity.

In our compartment example, each compartment (module) contains a predefined set of
sub-components, and they are responsible for carrying out a well-defined task with absolute
efficiency.

Loose Coupling: "Hassle-free interaction"

A given module's internal implementation does not depend on the other modules it interacts
with. Modules interact through a well-defined, clean interface, and any module can change its
internal implementation without affecting the other modules.

It is vital to define the interfaces between modules with extreme care. Ideally, an interface
should be defined based on what a given module offers to other modules and what it requires
from other modules.

Back in our real-world compartment example, we can clearly see that the interfaces are well
defined, and any internal modification inside a compartment would not affect the other modules.

Ques 102. Define cohesion & coupling. Explain the various types of cohesion & coupling. What
are the effects of module cohesion & coupling?
ANS) Coupling: Two modules are considered independent if one can function completely without
the presence of the other. Obviously, if two modules are independent, they are solvable and
modifiable separately. However, not all the modules in a system can be independent of each
other, as they must interact so that together they produce the desired external behavior of the
system.
Cohesion: Cohesion is the concept that tries to capture the intra-module relationship. With
cohesion we are interested in determining how closely the elements of a module are related to
each other.

Cohesion of a module represents how tightly bound the internal elements of the module are to
one another. Cohesion of a module gives the designer an idea about whether the different
elements of a module belong together in the same module. Cohesion and coupling are clearly
related. Usually the greater the cohesion of each module in the system, the lower the coupling
between modules is. There are several levels of Cohesion:
Coincidental
Logical
Temporal
Procedural
Communicational
Sequential
Functional
Coincidental is the lowest level, and functional is the highest. Coincidental cohesion occurs
when there is no meaningful relationship among the elements of a module. Coincidental cohesion
can occur if an existing program is modularized by chopping it into pieces and making the
different pieces modules.
A module has logical cohesion if there is some logical relationship between the elements of the
module and the elements perform functions that fall in the same logical class. A typical
example of this kind of cohesion is a module that performs all the inputs or all the outputs.
Temporal cohesion is the same as logical cohesion, except that the elements are also related in
time and are executed together. Modules that perform activities like "initialization",
"clean-up" and "termination" are usually temporally bound.
A procedurally cohesive module contains elements that belong to a common procedural unit. For
example, a loop or a sequence of decision statements in a module may be combined to form a
separate module. A module with communicational cohesion has elements that are related by a
reference to the same input or output data; that is, in a communicationally bound module, the
elements are together because they operate on the same input or output data.

When the elements are together in a module because the output of one forms the input to
another, we get sequential cohesion. Functional cohesion is the strongest cohesion: in a
functionally bound module, all the elements of the module are related to performing a single
function. By function, we do not mean simply mathematical functions; modules accomplishing a
single goal are also included.

Ques 103. It is possible to estimate software size before coding. Justify your answer.

ANS) A Function Point (FP) is a unit of measurement that expresses the amount of business
functionality an information system (as a product) provides to a user. FPs measure software
size and are widely accepted as an industry standard for functional sizing.

For sizing software based on FP, several recognized standards and/or public specifications have
come into existence. As of 2013, these are:

ISO Standards

COSMIC - ISO/IEC 19761:2011 Software engineering. A functional size measurement method.

FiSMA - ISO/IEC 29881:2008 Information technology - Software and systems engineering -
FiSMA 1.1 functional size measurement method.

IFPUG - ISO/IEC 20926:2009 Software and systems engineering - Software measurement - IFPUG
functional size measurement method.

Mark-II - ISO/IEC 20968:2002 Software engineering - Mk II Function Point Analysis - Counting
Practices Manual.

NESMA - ISO/IEC 24570:2005 Software engineering - NESMA function size measurement method
version 2.1 - Definitions and counting guidelines for the application of Function Point
Analysis.

Object Management Group Specification for Automated Function Point

Object Management Group (OMG), an open membership and not-for-profit computer industry
standards consortium, has adopted the Automated Function Point (AFP) specification led by the
Consortium for IT Software Quality. It provides a standard for automating FP counting
according to the guidelines of the International Function Point User Group (IFPUG).

The Function Point Analysis (FPA) technique quantifies the functions contained within software
in terms that are meaningful to the software users. FPs count the number of functions being
developed based on the requirements specification, which is why software size can be estimated
before any code is written.
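As a sketch of how FPs allow sizing from requirements alone, the following computes an unadjusted function-point count using the standard IFPUG average-complexity weights (EI=4, EO=5, EQ=4, ILF=10, EIF=7); the component counts themselves are made-up example values.

```python
# IFPUG average-complexity weights for the five function types:
# external inputs, external outputs, external inquiries,
# internal logical files, external interface files.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts):
    """counts: mapping of function type -> number of instances found
    in the requirements specification."""
    return sum(WEIGHTS[t] * n for t, n in counts.items())

# Hypothetical small system: 6 inputs, 4 outputs, 3 inquiries,
# 2 internal files, 1 external interface file.
ufp = unadjusted_fp({"EI": 6, "EO": 4, "EQ": 3, "ILF": 2, "EIF": 1})
# ufp == 83 unadjusted function points
```

In full FPA this count would then be scaled by a value adjustment factor derived from general system characteristics.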

Ques 104. Discuss the various types of COCOMO model. Explain the phase-wise distribution of
effort.

ANS) Any cost estimation model can be viewed as a function that outputs the cost estimate. The
basic idea of having a model or procedure for cost estimation is that it reduces the problem of
estimation to determining the values of the "key parameters" that characterize the project,
based on which the cost can be estimated.

The primary factor that controls the cost is the size of the project. That is, the larger the project,
the greater the cost & resource requirement. Other factors that affect the cost include
programmer ability, experience of developers, complexity of the project, & reliability
requirements.

The goal of a cost model is to determine which of these many parameters have significant effect
on cost & then to discover the relationships between the cost. The most common approach for
estimating effort is to make a function of a single variable. Often this variable is the project size,
the equation of efforts is:

EFFORT = a × (size)^b

Where a & b are constants.

If the size estimate is in KDLOC, the total effort, E, in person-months can be given by the
equation

E = 5.2 (KDLOC)^0.91

The Constructive cost model (COCOMO) was developed by Boehm. This model also estimates
the total effort in terms of person-months of the technical project staff. The effort estimate
includes development, management, and support tasks but does not include the cost of the

secretarial and other staff that might be needed in an organization. The basic steps in this model
are: -

Obtain an initial estimate of the development effort from the estimate of thousands of
delivered lines of source code (KDLOC).

Determine a set of 15 multiplying factors from different attributes of the project.

Adjust the effort estimate by multiplying the initial estimate with all the multiplying factors.

The initial estimate is determined by an equation of the form used in the static
single-variable models, using KDLOC as the measure of size. The values of the constants a and b
depend on the project type. In COCOMO, projects are categorized into three types: organic,
semidetached, and embedded.

Organic projects are in an area in which the organization has considerable experience and
requirements are less stringent. A small team usually develops such systems. Examples of this
type of project are simple business systems, simple inventory management systems, and data
processing systems.
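The initial (basic) estimate can be computed directly from Boehm's published basic-COCOMO coefficients for the three project types. This is a sketch of the model only; a real estimate would then apply the multiplying factors described above.

```python
# Boehm's basic COCOMO coefficients (a, b) per project type,
# for E = a * (KDLOC ** b) in person-months.
COEFF = {
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

def basic_effort(kdloc, mode):
    """Initial effort estimate in person-months."""
    a, b = COEFF[mode]
    return a * kdloc ** b

# A 32 KDLOC organic project comes out at roughly 91 person-months.
effort = basic_effort(32, "organic")
```

Note how the exponent b grows with project rigidity: the same 32 KDLOC costs noticeably more effort as an embedded project than as an organic one.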

Ques 105. What do you mean by software quality? Explain its attributes.


ANS) Quality can be defined in different manners, and the definition may differ from person to
person, but finally there should be some standard. Quality can be defined as:
Degree of excellence
Fitness for purpose
Best for the customer's use and selling price
The totality of characteristics of an entity that bear on its ability to satisfy stated or
implied needs - ISO
How a product developer defines quality: the product meets the customer requirements.
How a customer defines quality: the required functionality is provided in a user-friendly
manner.
These are some quality definitions from different perspectives. Now let us see how one can
measure some quality attributes of a product or application.
The following factors are used to measure software development quality. Each attribute can be
used to measure the product's performance, and these attributes serve quality assurance as well
as quality control: quality assurance activities are oriented towards preventing the
introduction of defects, while quality control activities are aimed at detecting defects in
products and services.
Reliability
Measures whether the product is reliable enough to sustain any condition and gives consistently
correct results. Product reliability is measured in terms of the working of the project under
different working environments and different conditions.
Maintainability
Different versions of the product should be easy to maintain. For development, it should be
easy to add code to the existing system and easy to upgrade for new features and new
technologies from time to time. Maintenance should be cost-effective and easy: the system
should be easy to maintain while correcting defects or making changes in the software.
Usability
This can be measured in terms of ease of use. Application should be user friendly. Should
be easy to learn. Navigation should be simple.

Ques 106. Write down the characteristics of a good user interface design.


ANS) Characteristics of a Great User Interface Design
1. Clarity
Clarity is of prime importance in user interface design. It helps prevent user errors, makes
important information clear and gives a perfect user experience. Clarity means the
information content is conveyed accurately.

2. Conciseness
Clarity in user interface design does not mean that you should overflow your design with
information. It is not difficult to add definitions and explanations, but an overflow of
content will surely bug users, as it asks them to spend too much time reading. It is highly
advisable to keep things clear and concise: if something can be explained without sparing extra
words, do that! The idea is to save the valuable time of the users by keeping things as concise
as possible.
3. Consistency
Consistency is yet another characteristic of a good user interface design. It enables users to
develop usage patterns: they learn what the different buttons, tabs, icons and other interface
elements look like and thereby easily recognize them. A unique design that behaves consistently
for the user speaks for a good user interface design.
4. Legibility
While designing a user interface, keep legibility in mind: do not use complicated words which
might be difficult to read and understand; instead, use easy language and make sure your design
presents information that is easy to read.
5. Responsiveness
By a responsive user interface we mean that there should be no time lag in loading; it should
be quite fast! Good loading speed is sure to enhance the user experience. Besides providing
informative content about the task in hand, the interface should give some form of feedback to
the user and let them know what is happening. It is a wise idea to show a spinning wheel or a
progress bar to indicate the current status.
6. Efficiency
An efficient user interface figures out what exactly the user is trying to achieve and then
lets them do exactly that, without any fuss. Prepare an interface that enables people to easily
accomplish what they want, instead of fussy listings which can be annoying and mar the overall
experience.
7. Attractiveness
Last but not least, your user interface design should focus on user experience, which, besides
user-friendly features, includes visual appeal. If visual appeal is missing from your user
interface design, the overall effort goes to waste. So, while preparing the user interface
design, do not underestimate the value of visual appeal.

Ques 107. Explain staffing.

ANS) Staffing is the process of hiring, positioning and overseeing employees in an
organisation.
Nature of the Staffing Function
Staffing is an important managerial function: it is among the most important managerial acts,
along with planning, organizing, directing and controlling. The operation of these four
functions depends upon the manpower made available through the staffing function.
Staffing is a pervasive activity, as the staffing function is carried out by all managers and
in all types of concerns where business activities are carried out.

Staffing is a continuous activity, because the staffing function continues throughout the life
of an organization due to the transfers and promotions that take place.

The basis of the staffing function is efficient management of personnel: human resources can be
efficiently managed by a system or proper procedure, that is, recruitment, selection,
placement, training and development, providing remuneration, etc.
Staffing helps in placing the right men at the right jobs. It can be done effectively through
proper recruitment procedures and by finally selecting the most suitable candidate as per the
job requirements.
Staffing is performed by all managers, depending upon the nature of the business, the size of
the company, the qualifications and skills of the managers, etc. In small companies, the top
management generally performs this function. In medium and large scale enterprises, it is
performed especially by the personnel department of that concern.

Ques 108. What are the fundamentals of a command-based user interface?


ANS) The user interface is the front-end application view with which the user interacts in
order to use the software. The user can manipulate and control the software as well as the
hardware by means of the user interface. Today, user interfaces are found at almost every place
where digital technology exists, from computers and mobile phones to cars, music players,
airplanes, ships, etc.
The user interface is the part of the software designed in such a way that it is expected to
give the user insight into the software. The UI provides the fundamental platform for
human-computer interaction. A UI can be graphical, text-based, or audio-video based, depending
upon the underlying hardware and software combination, and can itself be hardware, software or
a combination of both.

Command Line Interface (CLI)
The CLI was the principal tool for interacting with computers until video display monitors came
into existence, and it remains the first choice of many technical users and programmers. The
CLI is the minimum interface a piece of software can provide to its users.
A CLI provides a command prompt, the place where the user types a command and feeds it to the
system. The user needs to remember the syntax of each command and its use; early CLIs were not
programmed to handle user errors effectively.
A command is a text-based reference to a set of instructions which are expected to be executed
by the system. Mechanisms such as macros and scripts make it easier for the user to operate.

Ques 109. Is it economical to do risk management? Explain risk management activities.


ANS) Risks can range from over-reliance on a single customer to the merger of two competing
companies in a business. You can safeguard your business and increase its success rate by
having an effective risk management policy in place: by identifying risks before they occur,
you gain the time and space to prepare and to put solutions in place if needed. Since
preventing a risk is far cheaper than recovering from one that has materialized, risk
management is economical.
A Risk Management Process Involves
Methodical identification of the risks surrounding the activities of your business.
Reviewing the probability of the occurrence of events.
Identifying the events before they create problems and dealing with them accordingly.
Understanding the events and ways to respond.
Systematizing the tools required to tackle the penalty.
Supervising the risk management approach, effectiveness and control.
Risk Management Process Results In
Improving your decision-making, planning and prioritizing skills.
Well-organized allocation of resources and capital.
Anticipating problems, minimizing the amount of firefighting and preventing disasters which
could lead to a severe financial crunch.
Risk management significantly improves the probability that the business plan is delivered
within your time frame and budget.
Risk Management Helps In:

Risk Identification- Risk management outlines various categories of risks faced by new business
including operational, financial, strategic, compliance related and environmental, political, safety
and health risks.
Risk Management- Clarifies the importance of, and the events for, tackling the risks that your
new business establishment may face. This includes information about the evaluation of various
risks and four options for managing each risk. It also helps in outlining some preventive ideas
to decrease the likelihood of risks immobilizing your business.
Business recovery planning- Outlines disaster planning and also minimizes the impact of the
disaster on your business and this includes aspects such as data security, employees, insurance
policies and equipment.
Prevention of crime- This outlines crimes affecting small businesses and derives some simple
steps to tackle them.
Scams- Risk management discusses scams and how they could hamper your business. It also lists
methods that could help avoid scams, such as investigating the source of the scam, keeping and
maintaining records, and filtering out the scam.
Shop Theft- Risk management discusses theft problems in a business and ways to protect against
them, such as adopting simple safety measures and keeping track of staff and inventory.
Data Security- This offers a variety of information on protecting the business and securing its
data, including disaster recovery, risk assessment, backups and policies regarding data
security.

Ques 110. What is the need for software configuration management?


ANS) SCM defines a mechanism to deal with different technical difficulties of a project plan. In
a software organization, effective implementation of software configuration management can
improve productivity by increased coordination among the programmers in a team. SCM helps to
eliminate the confusion often caused by miscommunication among team members. The SCM
system controls the basic components such as software objects, program code, test data, test
output, design documents, and user manuals.

The SCM system has the following advantages:

Reduced redundant work.
Effective management of simultaneous updates.
Avoids configuration-related problems.
Facilitates team coordination.
Helps in building management; managing tools used in builds.

Ques 111. How is software design different from coding?


Ans: Design: Design is the most crucial and time-consuming activity. The success of the system
depends on correct design specifications, which is a key activity of the process. Software
design is based on the findings collected in the initial investigation phase. Design includes
the following: user interface design, process design, and database design. Designs are
transformed into actual code or programs during the implementation phase. It is more feasible
to rectify the design at this stage, as different users may have conflicting requirements, and
only the final, valid design goes to the next phase.
Coding: Involves conversion of the detailed design specification laid out by the designers into
actual code, files or databases. It is less time-consuming than the design phase and is
performed by programmers or coders. It is more concerned with the technical aspect of the
software than its functional aspect. Different software such as programming languages,
front-end tools, database management systems, utilities, etc. are used to facilitate the coding
process.
Ques 112: What problems arise if two modules have high coupling?
Ans: Coupling refers to the interconnection of different modules with each other; it tells us
about the interrelationship of the different modules of a system. A system with high coupling
has strong interconnections among its modules. If two modules are involved in high coupling,
their interdependence will be very high: any change applied to one module will affect the
functionality of the other, and the greater the degree of change, the greater its effect on the
other. Because the dependence is high, such changes affect the modules in a negative manner,
and in turn the maintainability of the project is decreased. This further decreases the
reusability of the individual modules and leads to unsophisticated software. So, it is always
desirable to have low interconnection and interdependence among modules.
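The problem can be sketched in code: in the tightly coupled version the report module reads the database module's internal data structure directly, so any change to that structure breaks it; in the loosely coupled version it depends only on a narrow, published interface. All class names here are invented for the example.

```python
# High coupling: TightReport depends on Database's internal list layout.
class Database:
    def __init__(self):
        self._rows = []          # internal representation

class TightReport:
    def count(self, db):
        return len(db._rows)     # breaks if _rows is renamed or restructured

# Low coupling: LooseReport uses only the published interface.
class SafeDatabase:
    def __init__(self):
        self._rows = []

    def row_count(self):         # stable, well-defined interface
        return len(self._rows)

class LooseReport:
    def count(self, db):
        return db.row_count()    # unaffected by internal changes
```

If `SafeDatabase` later stored its rows in a file or a dictionary, only `row_count()` would change; `LooseReport` and every other client would keep working.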
Ques 113: What is a modular system? List the important properties of a modular system.
Ans: A modular system is one that is divided into separately named, self-contained modules with
well-defined interfaces. Its desirable properties are:
Every module is a well-defined subsystem useful to others.
Every module has a well-defined single purpose.
Modules can be separately compiled and kept in a library.
Modules can use other modules.
Modules should be simpler to use than to build.
Modules should have an easy interface.
Ques 114: What are the objectives of software design? How do we transform an informal design
into a detailed design?
Ans: Objectives of software design:
The purpose of the design phase is to plan a solution of the problem specified by the
requirements document. This phase is the first step in moving from the problem domain to the
solution domain. In other words, starting with what is needed; design takes us toward how to
satisfy the needs, so the basic objectives are:
Identify different types of software, based on the usage.
Show differences between design and coding.
Define concepts of structured programming.
Illustrate some basic design concepts.
See how to design for testability and maintainability.
Non-formal methods of specification can lead to problems during coding, particularly if the
coder is a different person from the designer, which is often the case. Software designers do
not arrive at a finished design document immediately but develop the design iteratively through
a number of different phases. The design process involves adding detail as the design is
developed, with constant backtracking to correct earlier, less formal designs. The
transformation is done as per the following diagram.
Ques 115: Explain the cost drivers and EAF of the intermediate COCOMO model.
Ans: There are 15 different attributes, called cost driver attributes, that determine the
multiplying factors. These factors depend on product, computer, personnel, and technology
attributes, also known as project attributes. Among the attributes are required software
reliability (RELY), product complexity (CPLX), analyst capability (ACAP), application
experience (AEXP), use of modern tools (TOOL), and required development schedule (SCED). Each
cost driver has a rating scale, and for each rating a multiplying factor is provided. For
example, for the product attribute RELY, the rating scale is very low, low, nominal, high, and
very high, with multiplying factors of 0.75, 0.88, 1.00, 1.15, and 1.40, respectively. So, if
the reliability requirement for the project is judged to be low, the multiplying factor is
0.88, while if it is judged to be very high the factor is 1.40. The product of all the selected
multiplying factors is the effort adjustment factor (EAF), by which the initial estimate is
multiplied. The attributes and their multiplying factors for different ratings are shown in the
table below.

Cost Drivers                                    Very Low   Low   Nominal   High   Very High   Extra High
Product attributes
  Required software reliability                   0.75    0.88    1.00    1.15      1.40
  Size of application database                            0.94    1.00    1.08      1.16
  Complexity of the product                       0.70    0.85    1.00    1.15      1.30        1.65
Hardware attributes
  Run-time performance constraints                                1.00    1.11      1.30        1.66
  Memory constraints                                              1.00    1.06      1.21        1.56
  Volatility of the virtual machine environment           0.87    1.00    1.15      1.30
  Required turnabout time                                 0.87    1.00    1.07      1.15
Personnel attributes
  Analyst capability                              1.46    1.19    1.00    0.86      0.71
  Applications experience                         1.29    1.13    1.00    0.91      0.82
  Software engineer capability                    1.42    1.17    1.00    0.86      0.70
  Virtual machine experience                      1.21    1.10    1.00    0.90
  Programming language experience                 1.14    1.07    1.00    0.95
Project attributes
  Use of software tools                           1.24    1.10    1.00    0.91      0.82
  Application of software engineering methods     1.24    1.10    1.00    0.91      0.83
  Required development schedule                   1.23    1.08    1.00    1.04      1.10
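As a quick numeric sketch of how the EAF is obtained (the ratings below are invented for illustration, not taken from a real project):

```python
# Illustrative sketch: the EAF is the product of the multiplying factors
# selected from the cost-driver table for the project's ratings.
multipliers = {
    "RELY": 0.88,   # required reliability rated "low" (assumed rating)
    "CPLX": 1.15,   # product complexity rated "high" (assumed rating)
    "ACAP": 0.86,   # analyst capability rated "high" (assumed rating)
}

eaf = 1.0
for factor in multipliers.values():
    eaf *= factor

# The nominal effort estimate is then scaled by this factor:
# adjusted_effort = nominal_effort * eaf
print(round(eaf, 3))  # 0.87
```

Drivers left at their nominal rating contribute a factor of 1.00 and can simply be omitted from the product.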
Ques116: Compare the following: (i) Productivity and difficulty (ii) Manpower and development time (iii) Static single variable model and static multivariable model (iv) Intermediate and Detailed COCOMO model
Ans: (i) Productivity and difficulty

Productivity refers to metrics and measures of output from production processes, per unit of input. Productivity P may be conceived of as a metric of the technical or engineering efficiency of production. In software project planning, productivity is defined as the number of lines of code developed per person-month.
Difficulty: The ratio K/td², where K is the software development cost (effort) and td is the peak development time, is called difficulty and is denoted by D, measured in person/year:
D = K/td²
The relationship shows that a project is more difficult to develop when the manpower demand is high or when the time schedule is short. Putnam has observed that productivity is proportional to the difficulty:
P ∝ D^β
The average productivity may be defined as
P = lines of code produced / cumulative manpower used to produce the code = S/E
where S is the lines of code produced and E is the cumulative manpower used from t = 0 to td (inception of the project to the delivery time).
(ii) Time and cost

In software projects, time cannot be freely exchanged against cost; such a trade-off is limited by the nature of software development. For a given organization developing software of size S, the quantity obtained is constant. We know
K^(1/3) · td^(4/3) = S/C
Raising both sides to the third power, K · td⁴ = (S/C)³, which is constant for software of constant size. A compression of the development time td will therefore produce an increase in manpower cost K. If the compression is excessive, not only would the software cost much more, but the development would become so difficult that it would increase the risk of becoming unmanageable.
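The two relations above can be checked numerically; the values below are assumed purely for illustration, not taken from a real project:

```python
# Illustrative check of the relations above (all values assumed).
K, td = 20.0, 2.0          # effort (person-years) and peak development time (years)
D = K / td**2              # difficulty: high manpower or a short schedule -> harder
print(D)                   # 5.0

# Time-cost trade-off: for constant size, K * td**4 stays constant,
# so a 20% schedule compression sharply inflates the required effort.
const = K * td**4
td_short = 0.8 * td
K_short = const / td_short**4
print(round(K_short / K, 2))   # 2.44
```

A 20% cut in the schedule demands roughly 2.4 times the effort, which is why excessive compression makes projects unmanageable.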
Ques117: Discuss the various strategies of design. Which design strategy is most popular
and practical?

Ans: Software design is the process of conceptualizing the software requirements into a software implementation. Software design takes the user requirements as challenges and tries to find an optimum solution. While the software is being conceptualized, a plan is chalked out to find the best possible design for implementing the intended solution.
There are multiple variants of software design. Let us study them briefly:
Structured Design
Structured design is a conceptualization of the problem into several well-organized elements of solution. It is basically concerned with the solution design. The benefit of structured design is that it gives a better understanding of how the problem is being solved. Structured design also makes it simpler for the designer to concentrate on the problem more accurately.
Structured design is mostly based on the 'divide and conquer' strategy, where a problem is broken into several small problems and each small problem is individually solved until the whole problem is solved.
The small pieces of the problem are solved by means of solution modules. Structured design emphasizes that these modules be well organized in order to achieve a precise solution.
These modules are arranged in a hierarchy and communicate with each other. A good structured design always follows some rules for communication among multiple modules, namely: cohesion - grouping of all functionally related elements; coupling - communication between different modules.
A good structured design has high cohesion and low coupling.
Function Oriented Design
In function-oriented design, the system comprises many smaller sub-systems known as functions. These functions are capable of performing significant tasks in the system. The system is considered as the top view of all functions.
Function-oriented design inherits some properties of structured design, where the divide and conquer methodology is used.
This design mechanism divides the whole system into smaller functions, which provides means of abstraction by concealing the information and its operation. These functional modules can share information among themselves by means of information passing and by using information available globally.

Another characteristic of functions is that when a program calls a function, the function changes the state of the program, which sometimes is not acceptable to other modules. Function-oriented design works well where the system state does not matter and programs/functions work on input rather than on a state.
Design Process
The whole system is seen as how data flows in the system, by means of the data flow diagram (DFD). The DFD depicts how functions change the data and the state of the entire system.
The entire system is logically broken down into smaller units known as functions on the basis of their operation in the system.
Each function is then described at large.
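The ideas above can be sketched in a few lines; the payroll functions below are an invented example, not part of any particular system:

```python
# A tiny function-oriented sketch: the system is broken into functions
# that communicate by passing data rather than by sharing state.
def read_hours(raw: str) -> float:
    return float(raw)                       # parse the input data

def compute_pay(hours: float, rate: float) -> float:
    return hours * rate                     # pure transformation of input data

def format_payslip(pay: float) -> str:
    return f"Pay: {pay:.2f}"                # format the output data

# Data flows through the chain of functions, mirroring a simple DFD.
print(format_payslip(compute_pay(read_hours("38"), 12.5)))  # Pay: 475.00
```

Each function works only on its inputs, so the system state does not matter, which is exactly the situation where function-oriented design fits well.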
Object Oriented Design
Object-oriented design works around the entities and their characteristics instead of the functions involved in the software system. This design strategy focuses on entities and their characteristics. The whole concept of the software solution revolves around the engaged entities.
Let us see the important concepts of Object Oriented Design:
Objects - All entities involved in the solution design are known as objects. For example, persons, banks, companies, and customers are treated as objects. Every entity has some attributes associated with it and has some methods to perform on the attributes.
Classes - A class is a generalized description of an object. An object is an instance of a class. A class defines all the attributes which an object can have, and the methods which define the functionality of the object.
In the solution design, attributes are stored as variables and functionalities are defined by means of methods or procedures.
Encapsulation - In OOD, the bundling together of the attributes (data variables) and methods (operations on the data) is called encapsulation. Encapsulation not only bundles important information of an object together, but also restricts access to the data and methods from the outside world. This is called information hiding.
Inheritance - OOD allows similar classes to stack up in a hierarchical manner, where the lower or sub-classes can import, implement, and re-use allowed variables and methods from their immediate super-classes. This property of OOD is known as inheritance. This makes it easier to define specific classes and to create generalized classes from specific ones.

Polymorphism - OOD languages provide a mechanism where methods performing similar tasks but varying in arguments can be assigned the same name. This is called polymorphism, which allows a single interface to perform tasks for different types. Depending upon how the function is invoked, the respective portion of the code gets executed.
Of these strategies, object-oriented design is today the most popular and practical, because its emphasis on encapsulation, inheritance, and reuse maps naturally onto large, evolving systems.
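The OOD concepts above can be illustrated in one short sketch; Person and Customer are invented example classes, not drawn from any particular system:

```python
# Minimal illustration of objects, encapsulation, inheritance and polymorphism.
class Person:
    def __init__(self, name):
        self._name = name              # encapsulated attribute (leading underscore)

    def describe(self):                # method operating on the attributes
        return f"Person: {self._name}"

class Customer(Person):                # inheritance: Customer reuses Person
    def __init__(self, name, account):
        super().__init__(name)
        self._account = account

    def describe(self):                # polymorphism: same interface, new behaviour
        return f"Customer: {self._name} (account {self._account})"

# The same call dispatches to each object's own class.
for obj in [Person("Asha"), Customer("Ravi", 42)]:
    print(obj.describe())
```

The loop at the end shows the polymorphic dispatch: one interface, `describe()`, but the portion of code executed depends on the object's class.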
Ques118: Why does the software design improve when we use object-oriented concepts?
Ans: Object-oriented design transforms the analysis model created using object-oriented analysis into a design model that serves as a blueprint for software construction.
It is a design strategy based on information hiding. Object oriented design is concerned with
developing an object-oriented model of a software system to implement the identified
requirements. Object oriented design establishes a design blueprint that enables a software
engineer to define object oriented architecture in a manner that maximizes reuse, thereby
improving development speed and end product quality.
Ques119: Define coupling. Discuss various types of coupling.
Ans: Coupling is the measure of the degree of interdependence between modules.
Types of coupling: The different types of coupling are content, common, external, control, stamp, and data. The strength of coupling, from lowest (best) to highest (worst), is listed below.
Data coupling Best
Stamp coupling
Control coupling
External coupling
Common coupling
Content coupling (Worst)
Given two procedures A and B, we can identify a number of ways in which they can be coupled.
Data coupling
The dependency between modules A and B is said to be data coupling if they communicate only by passing data; other than communicating through data, the two modules are independent. A good strategy is to ensure that no module communication contains "tramp data": only the necessary data should be passed. A student's name, address, and course passed between modules that do not need them are examples of tramp data. By ensuring that modules communicate only necessary data, module dependency is minimized.

Stamp coupling: Stamp coupling occurs between modules A and B when a complete data structure is passed from one module to another. Since not all of the data making up the structure is usually necessary for communication between the modules, stamp coupling typically involves tramp data. If one procedure only needs part of a data structure, the calling module should pass just that part, not the complete structure.
Control coupling: Modules A and B are control coupled if they communicate by passing control information. This is usually accomplished by means of flags that are set by one module and reacted upon by the dependent module.
External coupling: A form of coupling in which a module has a dependency on another module external to the software being developed, or on a particular type of hardware. This is basically related to communication with external tools and devices.
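The contrast between data coupling and stamp coupling can be sketched as follows; the Student record is an invented example:

```python
# Illustrative contrast between stamp coupling and data coupling.
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    address: str
    course: str

def print_label_stamp(student: Student) -> str:
    # Stamp coupling: the whole structure is passed, though only name is used,
    # so address and course travel along as tramp data.
    return f"Label: {student.name}"

def print_label_data(name: str) -> str:
    # Data coupling: only the necessary elementary data item is passed.
    return f"Label: {name}"

s = Student("Asha", "12 Main St", "CST-220")
print(print_label_stamp(s))        # Label: Asha
print(print_label_data(s.name))    # Label: Asha
```

Both calls produce the same output, but the data-coupled version depends only on a single string, so changes to the Student structure cannot break it.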
Ques120. Explain the concept of bottom-up, top-down and hybrid design.
Ans: Top-down and bottom-up are both strategies of information processing and knowledge
ordering, used in a variety of fields including software, humanistic and scientific theories
(see systemics), and management and organization. In practice, they can be seen as a style of
thinking, teaching, or leadership.
A top-down approach (also known as stepwise design and in some cases used as a synonym of
decomposition) is essentially the breaking down of a system to gain insight into its compositional
sub-systems in a reverse engineering fashion. In a top-down approach an overview of the system is
formulated, specifying but not detailing any first-level subsystems. Each subsystem is then refined in
yet greater detail, sometimes in many additional subsystem levels, until the entire specification is
reduced to base elements. A top-down model is often specified with the assistance of "black boxes",
which make it easier to manipulate. However, black boxes may fail to elucidate elementary
mechanisms or be detailed enough to realistically validate the model. A top-down approach starts with the big picture and breaks down from there into smaller segments.
A bottom-up approach is the piecing together of systems to give rise to more complex systems, thus making the original systems sub-systems of the emergent system. Bottom-up processing is a type of information processing based on
incoming data from the environment to form a perception. From a Cognitive Psychology perspective,
information enters the eyes in one direction (sensory input, or the "bottom"), and is then turned into
an image by the brain that can be interpreted and recognized as a perception (output that is "built up"
from processing to final cognition). In a bottom-up approach the individual base elements of the
system are first specified in great detail. These elements are then linked together to form larger
subsystems, which then in

turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small but eventually grow in complexity and completeness. However, such "organic" strategies may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization, as opposed to meeting a global purpose.
A hybrid design combines the two approaches: the overall architecture is decomposed top-down, while well-understood or reusable components are built and integrated bottom-up.
Ques121: Explain the following software metrics: (i) Lines of Code (ii) Function Count (iii)
Token Count (iv) Equivalent size measure
Ans: Lines of code (LOC) is a software metric used to measure the size of a software program by counting the number of lines in the text of the program's source code. LOC is typically used to predict the amount of effort that will be required to develop a program, as well as to estimate programming productivity once the software is produced. Advantages:
Scope for automation of counting: Since a line of code is a physical entity, manual counting effort can be easily eliminated by automating the counting process. Small utilities may be developed for counting the LOC in a program. However, a code-counting utility developed for a specific language cannot be used for other languages, due to the syntactical and structural differences among languages.
An intuitive metric: Lines of code serve as an intuitive metric for measuring the size of software, because code can be seen and its effect can be visualized. Function point is more of an objective metric which cannot be imagined as a physical entity; it exists only in the logical space. This way, LOC comes in handy to express the size of software among programmers with low levels of experience.
Function count: measures size as the number of function points delivered, computed from the counts of inputs, outputs, inquiries, files, and interfaces, weighted by complexity.
Token count: based on Halstead's software science, a program is viewed as a sequence of tokens (operators and operands); program length N = N1 + N2, where N1 is the total number of operator occurrences and N2 the total number of operand occurrences.
Equivalent size measure: when a project reuses or modifies existing code, the adapted code is converted into an equivalent number of lines of new code, so that effort models can be applied uniformly.
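A counting utility of the kind mentioned above can be very small; the sketch below counts non-blank, non-comment lines of Python source, and is deliberately language-specific, since comment conventions differ between languages:

```python
# Small LOC-counting utility: counts non-blank, non-comment lines
# of Python source text.
def count_loc(source: str) -> int:
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        # Skip blank lines and full-line comments.
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

sample = """# demo program
x = 1

y = x + 1  # inline comments still count as code
print(y)
"""
print(count_loc(sample))  # 3
```

Porting this counter to C or Java would require handling `//` and `/* ... */` comments instead, which is exactly the language dependence the text points out.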
Ques122: What activities does software project planning entail? What difficulties are faced in measuring software costs?
Ans: Software project planning entails the following activities:
• Estimation: effort, cost, resource, and project duration
• Project scheduling
• Staff organization: staffing plans
• Risk handling: identification, analysis, and abatement procedures
• Miscellaneous plans: quality assurance plan, configuration management plan, etc.
Software costs are due to the requirements for software, hardware, and human resources. One can perform cost estimation at any point in the software life cycle. As the cost of software depends on the nature and characteristics of the project, the accuracy of the estimate will depend on the amount of reliable information we have about the final product. When the product is delivered, the costs can be determined exactly, as everything spent is known by then. However, when the software is being initiated, or during the feasibility study, we have only some idea about the functionality of the software. There is very high uncertainty about the actual specifications of the system, hence cost estimates based on such uncertain information cannot be accurate.
Ques123: Compute the function point value for a project with the following domain characteristics: No. of inputs = 30, No. of outputs = 62, No. of user inquiries = 24, No. of files = 8, No. of external interfaces = 2. Assume that all the complexity adjustment values are average and that all 14 adjustment factors have been counted.
Ans: We know
UFP = Σ Wij Zij, where j = 2 because all weighting factors are average:
UFP = 30*4 + 62*5 + 24*4 + 8*10 + 2*7
    = 120 + 310 + 96 + 80 + 14
    = 620
CAF = (0.65 + 0.01 Σ Fi)
    = 0.65 + 0.01(14*3)
    = 0.65 + 0.42
    = 1.07
FP = UFP * CAF
   = 620 * 1.07
   = 663.4 ≈ 663
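The computation above can be checked with a short script; the weights are the standard average values from the function-point table, and all 14 general system characteristics are rated 3 (average):

```python
# Function-point computation for the given domain characteristics,
# using the standard "average" weights for each parameter.
counts  = {"inputs": 30, "outputs": 62, "inquiries": 24, "files": 8, "interfaces": 2}
weights = {"inputs": 4,  "outputs": 5,  "inquiries": 4,  "files": 10, "interfaces": 7}

ufp = sum(counts[k] * weights[k] for k in counts)   # unadjusted function points
caf = 0.65 + 0.01 * (14 * 3)                        # all 14 factors rated average (3)
fp = ufp * caf

print(ufp, round(fp))  # 620 663
```

Changing a complexity rating simply swaps in a different weight, so the same script handles the low/high variants of this exercise.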
Ques124: What is the difference between "known risks" and "knowable risks"?
Ans: Known risk - This is actually the easiest risk to cope with for most people, for one reason: it is controllable. You can do qualitative and quantitative risk analysis with known risks. You can make millions doing this, even if you get it really wrong - just talk to Moody's about the risk.

Knowable risk - These may be the toughest. You don't know these risks right now, but they are knowable. One mitigates them through education: reading voraciously and staying up to date on recent research and trends. As data is gathered, more risks become known, so they can be moved to the known bucket and managed accordingly. A recent example of this is Apple stock (AAPL).
Unknowable risk - This is by definition unknowable and will always be present in some form. It is a sunk cost associated with participating in life. Small bits of the unknowable may eventually become knowable.
Ques125: Explain the types of COCOMO models and give the phase-wise distribution of effort.
Ans: COCOMO stands for Constructive Cost Model. It is an empirical model based on project experience. It is a well-documented, independent model that is not tied to a specific software vendor. It has a long history, from the initial version published in 1981 (COCOMO-81) through various instantiations to COCOMO 2, which takes into account different approaches to software development, reuse, etc. This model gives three levels of estimation, namely basic, intermediate, and detailed.
1) Basic COCOMO model: It gives an order-of-magnitude estimate of cost. This model uses the estimated size of the software project and the type of software being developed. The estimation varies for various types of projects, which are:
• Organic-mode project: These projects involve relatively small teams working in a familiar environment, developing a well-understood application. The features of such a project are: 1) the communication overheads are low; 2) the team members know what they can achieve; 3) this type of project is the most common in nature.
• Semi-detached mode project: The project team consists of a mix of experienced and fresh engineers. The team has limited experience of related system development, and some members are unfamiliar with the output and some aspects of the system being developed.
• Embedded-mode project: There are very strongly coupled hardware, software, regulations, and operational procedures. Validation costs are very high, e.g. system programs and the development of OCR for English.
2) Intermediate COCOMO model: The intermediate COCOMO model estimates the software development effort by using 15 cost-driver variables besides the size variable used in basic COCOMO.
3) Detailed COCOMO model: The detailed COCOMO model can estimate the staffing, cost, and duration of each development phase, subsystem, and module. Effort is distributed phase-wise across requirements and product design, detailed design, coding and unit test, and integration and test, with separate effort-multiplier values defined for each phase. It allows you to experiment with different development strategies, to find the plan that best suits your needs and resources.
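The basic model's equations, E = a(KLOC)^b person-months and D = c(E)^d months, can be sketched with the standard COCOMO-81 mode coefficients; the 32 KLOC project below is an invented example:

```python
# Hedged sketch of basic COCOMO-81: effort E = a * (KLOC)**b person-months,
# duration D = c * (E)**d months, with the standard mode coefficients.
COEFFS = {
    #  mode:          (a,    b,    c,    d)
    "organic":        (2.4, 1.05, 2.5, 0.38),
    "semi-detached":  (3.0, 1.12, 2.5, 0.35),
    "embedded":       (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b          # person-months
    duration = c * effort ** d      # months
    return effort, duration

effort, duration = basic_cocomo(32, "organic")  # assumed 32 KLOC organic project
print(round(effort, 1), round(duration, 1))     # roughly 91 PM over about 14 months
```

The intermediate model would multiply the computed effort by the EAF (the product of the 15 cost-driver factors) before deriving the duration.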

Ques126: Illustrate risk-management activity and explain various software risks.
Ans: Software development is an activity that uses a variety of technological advancements and requires high levels of knowledge. Because of these and other factors, every software development project contains elements of uncertainty. This is known as project risk. The success of a software development project depends quite heavily on the amount of risk that corresponds to each project activity. As a project manager, it is not enough to merely be aware of the risks. To achieve a successful outcome, project leadership must identify, assess, prioritize, and manage all of the major risks.

The goal of most software development and software engineering projects is to be distinctive, often through new features, more efficiency, or exploiting advancements in software engineering. Any software project executive will agree that the pursuit of such opportunities cannot move forward without risk.

Risk management includes the following tasks:
• Identify risks and their triggers
• Classify and prioritize all risks
• Craft a plan that links each risk to a mitigation
• Monitor for risk triggers during the project
• Implement the mitigating action if any risk materializes
• Communicate risk status throughout the project
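A minimal risk register supporting these tasks might look like the sketch below; the fields and example risks are illustrative, not a prescribed format:

```python
# Minimal risk-register sketch: identify, prioritize, and link each risk
# to a mitigation, then monitor in priority order.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    probability: float   # estimated likelihood, 0..1
    impact: int          # severity, 1 (low) .. 5 (high)
    mitigation: str

    @property
    def exposure(self) -> float:
        # A common prioritization score: probability x impact.
        return self.probability * self.impact

register = [
    Risk("Key developer leaves", 0.3, 5, "Cross-train team members"),
    Risk("Requirements change late", 0.6, 4, "Incremental delivery, change control"),
]

# Review highest-exposure risks first.
for r in sorted(register, key=lambda r: r.exposure, reverse=True):
    print(f"{r.exposure:.1f}  {r.description} -> {r.mitigation}")
```

Re-scoring the register as triggers fire (and moving newly discovered risks into it) implements the monitor-and-communicate steps of the list above.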

Ques127 Explain COCOMO model with its relevant equations. Explain various
attributes of cost drivers used in COCOMO model.
Ans: COCOMO stands for Constructive Cost Model. It is an empirical model based on project experience. It is a well-documented, independent model that is not tied to a specific software vendor. It has a long history, from the initial version published in 1981 (COCOMO-81) through various instantiations to COCOMO 2, which takes into account different approaches to software development, reuse, etc.

This model gives three levels of estimation, namely basic, intermediate, and detailed.
1) Basic COCOMO model: It gives an order-of-magnitude estimate of cost, using equations of the form E = a(KLOC)^b person-months and D = c(E)^d months, where the coefficients a, b, c, d depend on the project mode. This model uses the estimated size of the software project and the type of software being developed. The estimation varies for various types of projects, which are:
Organic-mode project: These projects involve relatively small teams working in a familiar environment, developing a well-understood application. The features of such a project are:
1) The communication overheads are low.
2) The team members know what they can achieve.
3) This type of project is the most common in nature.
Semi-detached mode project: The project team consists of a mix of experienced and fresh engineers. The team has limited experience of related system development, and some members are unfamiliar with the output and some aspects of the system being developed.
Embedded-mode project: There are very strongly coupled hardware, software, regulations, and operational procedures. Validation costs are very high, e.g. system programs and the development of OCR for English.
2) Intermediate COCOMO model:
The intermediate COCOMO model estimates the software development effort by using 15 cost-driver variables besides the size variable used in basic COCOMO.
3) Detailed COCOMO model:
The detailed COCOMO model can estimate the staffing, cost, and duration of each development phase, subsystem, and module. It allows you to experiment with different development strategies, to find the plan that best suits your needs and resources.

Ques128: What is a function point? Explain its importance. What are function-oriented metrics?
Ans: Function points: A function point measures the functionality from the user's point of view, that is, on the basis of what the user requests and receives in return. Therefore, it deals with the functionality being delivered, and not with the lines of code, source modules, files, etc.
Measuring size in this way has the advantage that the size measure is independent of the technology used to deliver the function.
Importance of function points:
They are independent of the language, tools, or methodology used for implementation.
They can be estimated from the requirement specification or the design specification.
They are directly linked to the statement of request.
Ques129 Explain the following with the help of an example (i) Common coupling (ii)
Communicational cohesion (iii) Class diagram (iv) Structure chart.
Ans: With common coupling, module A and module B have shared data. Global data areas are
commonly found in programming languages. Making a change to the common data means
tracing back to all the modules which access that data to evaluate the effect of change. With
common coupling, it can be difficult to determine which module is responsible for having set a
variable to a particular value.
Communicational cohesion: Communicational cohesion is when parts of a module are grouped because they operate on the same data (e.g. a module which operates on the same record of information). Here all of the elements of a component operate on the same input data or produce the same output data. So we can say a module is communicationally cohesive if it performs a series of actions related by the sequence of steps to be followed by the product, and all of the actions are performed on the same data.
(iii) Class diagram: A class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, and the relationships between the classes. The UML specifies two types of scope for members: instance and classifier. In the case of instance members, the scope is a specific instance. For attributes, it means that the value can vary between instances. For methods, it means that invocation affects the instance state, in other words, affects the instance attributes. In the case of classifier members, the scope is the class. For attributes, it means that the value is equal for all instances. For methods, it means that invocation does not affect the instance state. Classifier members are commonly recognized as "static" in many programming languages.
(iv) Structure chart: A structure chart is a hierarchical diagram used in structured design that shows the breakdown of a system into its lowest manageable modules, with arrows indicating module calls and the data and control couples passed between them.
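The instance vs classifier distinction described above maps directly onto instance and class-level attributes; Counter is an invented example class:

```python
# Illustrating instance vs classifier (static) member scope.
class Counter:
    created = 0                 # classifier member: one value shared by all instances

    def __init__(self, label):
        self.label = label      # instance member: varies per instance
        Counter.created += 1    # mutating classifier state is visible to every instance

a = Counter("first")
b = Counter("second")
print(a.label, b.label, Counter.created)  # first second 2
```

`label` differs between `a` and `b` (instance scope), while `created` is the same value however it is reached (classifier scope), exactly as the UML defines the two member scopes.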
Ques130: What are the steps of a software project? Why is it important to assign different roles to team members?
Ans: A software project is the complete procedure of software development, from requirement gathering to testing and maintenance, carried out according to the execution methodologies in a specified period of time to achieve the intended software product.
Need for software project management:
Software is said to be an intangible product. Software development is a relatively new stream in world business, and there is very little experience in building software products. Most software products are tailor-made to fit the client's requirements. Most importantly, the underlying technology changes and advances so frequently and rapidly that the experience of one product may not apply to the next. All such business and environmental constraints bring risk to software development, hence it is essential to manage software projects efficiently.
The triple constraints for software projects are time, cost, and quality. It is an essential part of a software organization to deliver a quality product, keeping the cost within the client's budget constraint, and to deliver the project as per schedule. There are several factors, both internal and external, which may impact this triple-constraint triangle; any one of the three factors can severely impact the other two. Therefore, software project management is essential to incorporate user requirements along with budget and time constraints.
A typical project proceeds through requirement gathering, estimation and planning, scheduling, design, implementation, testing, delivery, and maintenance. Assigning different roles (project manager, analyst, designer, developer, tester) to team members is important because it divides responsibility according to skill, creates clear accountability for each activity, and allows work to proceed in parallel.
