
Software Engineering Notes

Chapter 4 – Software Modelling & Design

The Management Spectrum:

Effective software project management focuses on the four P’s: people, product,
process, and project. The order is not arbitrary. The manager who forgets that
software engineering work is an intensely human endeavor will never have
success in project management. A manager who fails to encourage
comprehensive customer communication early in the evolution of a project risks
building an elegant solution for the wrong problem. The manager who pays
little attention to the process runs the risk of inserting competent technical
methods and tools into a vacuum. The manager who embarks without a solid
project plan jeopardizes the success of the product.

The People

The cultivation of motivated, highly skilled software people has been discussed
since the 1960s. In fact, the “people factor” is so important that the Software
Engineering Institute has developed a people management capability maturity
model (PM-CMM), “to enhance the readiness of software organizations
to undertake increasingly complex applications by helping to attract, grow,
motivate, deploy, and retain the talent needed to improve their software
development capability”.

The people management maturity model defines the following key practice
areas for software people: recruiting, selection, performance management,
training, compensation, career development, organization and work design, and
team/culture development. Organizations that achieve high levels of maturity in
the people management area have a higher likelihood of implementing effective
software engineering practices.

The PM-CMM is a companion to the software capability maturity model that guides organizations in the creation of a mature software process.

The Product

Before a project can be planned, product objectives and scope should be established, alternative solutions should be considered, and technical and
management constraints should be identified. Without this information, it is
impossible to define reasonable (and accurate) estimates of the cost, an effective
assessment of risk, a realistic breakdown of project tasks, or a manageable
project schedule that provides a meaningful indication of progress.

The software developer and customer must meet to define product objectives
and scope. In many cases, this activity begins as part of the system engineering
or business process engineering and continues as the first step in software
requirements analysis. Objectives identify the overall goals for the product (from the customer’s point of view) without considering how these goals will be achieved. Scope identifies the primary data, functions and behaviors that characterize the product and, more importantly, attempts to bound these characteristics in a quantitative manner.

Once the product objectives and scope are understood, alternative solutions are
considered. Although very little detail is discussed, the alternatives enable
managers and practitioners to select a "best" approach, given the constraints
imposed by delivery deadlines, budgetary restrictions, personnel availability,
technical interfaces, and myriad other factors.

The Process

A software process provides the framework from which a comprehensive plan for software development can be established. A small number of framework
activities are applicable to all software projects, regardless of their size or
complexity. A number of different task sets—tasks, milestones, work products,
and quality assurance points—enable the framework activities to be adapted to
the characteristics of the software project and the requirements of the project
team. Finally, umbrella activities—such as software quality assurance, software
configuration management, and measurement—overlay the process model.
Umbrella activities are independent of any one framework activity and occur
throughout the process.

The Project

We conduct planned and controlled software projects for one primary reason—
it is the only known way to manage complexity. And yet, we still struggle. In
1998, industry data indicated that 26 percent of software projects failed outright
and 46 percent experienced cost and schedule overruns. Although the success
rate for software projects has improved somewhat, our project failure rate
remains higher than it should be.

In order to avoid project failure, a software project manager and the software
engineers who build the product must avoid a set of common warning signs,
understand the critical success factors that lead to good project management,
and develop a commonsense approach for planning, monitoring and controlling
the project.

Project Size Estimation Techniques

Estimation of the size of software is an essential part of Software Project Management. It helps the project manager to further predict the effort and time
which will be needed to build the project. Various measures are used in project
size estimation. Some of these are:

 Lines of Code
 Number of entities in ER diagram
 Total number of processes in detailed data flow diagram
 Function points

1. Lines of Code (LOC): As the name suggests, LOC counts the total number of lines of source code in a project. The units of LOC are:

 KLOC- Thousand lines of code
 NLOC- Non-comment lines of code
 KDSI- Thousands of delivered source instructions

The size is estimated by comparing the project with existing systems of the same kind. Experts use this comparison to predict the required size of the various components of the software and then add them to get the total size, as in the sketch below.
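
A minimal sketch of this compare-and-sum approach in Python. The component names and sizes are invented for illustration, not taken from the text.

# Hypothetical LOC estimation by analogy: each component gets a size
# judged from similar components in past systems, and the component
# estimates are summed to give the total size.
component_kloc = {
    "user_interface": 4.0,   # a similar UI in a past project was ~4 KLOC
    "business_logic": 7.5,
    "persistence":    3.0,
}

total_kloc = sum(component_kloc.values())
print(f"Estimated size: {total_kloc:.1f} KLOC")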

Advantages:

 Universally accepted and used in many models like COCOMO.
 Estimation is closer to the developer’s perspective.
 Simple to use.

Disadvantages:

 Different programming languages require a different number of lines for the same functionality.
 No proper industry standard exists for this technique.
 It is difficult to estimate the size using this technique in the early stages of a project.

2. Number of entities in ER diagram: The ER model provides a static view of the project. It describes the entities and their relationships. The number of entities in the ER model can be used to estimate the size of the project, because more entities need more classes/structures and thus lead to more coding.

Advantages:

 Size estimation can be done during the initial stages of planning.
 The number of entities is independent of the programming technologies used.

Disadvantages:

 No fixed standards exist. Some entities contribute more project size than
others.
 Just like FPA, it is less used in cost estimation model. Hence, it must be
converted to LOC.

3. Total number of processes in detailed data flow diagram: A Data Flow Diagram (DFD) represents the functional view of a software system. The model depicts the main processes/functions involved in the software and the flow of data between them. The number of processes in the DFD can be used to predict software size: already existing processes of a similar type are studied and used to estimate the size of each process, and the sum of the estimated sizes of the individual processes gives the final estimated size.

Advantages:

 It is independent of the programming language.
 Each major process can be decomposed into smaller processes, which increases the accuracy of the estimation.

Disadvantages:

 Studying similar kinds of processes to estimate size takes additional time and effort.
 Not all software projects require the construction of a DFD.

4. Function Point Analysis: In this method, the number and type of functions supported by the software are used to find the FPC (function point count). The steps in function point analysis are:

 Count the number of functions of each proposed type.
 Compute the Unadjusted Function Points (UFP).
 Find the Total Degree of Influence (TDI).
 Compute the Value Adjustment Factor (VAF).
 Find the Function Point Count (FPC).

These steps are explained below:

 Count the number of functions of each proposed type: Find the number of functions belonging to the following types:
o External Inputs: Functions related to data entering the system.
o External Outputs: Functions related to data exiting the system.
o External Inquiries: These lead to data retrieval from the system but do not change the system.
o Internal Files: Logical files maintained within the system. Log files are not included here.
o External Interface Files: Logical files of other applications which are used by our system.
 Compute the Unadjusted Function Points (UFP): Categorise each of the five function types as simple, average or complex based on its complexity. Multiply the count of each function type by its weighting factor and find the weighted sum. The weighting factors for each type, based on complexity, are as follows:

Function type              Simple   Average   Complex
External Inputs              3         4         6
External Outputs             4         5         7
External Inquiries           3         4         6
Internal Logical Files       7        10        15
External Interface Files     5         7        10

 Find the Total Degree of Influence (TDI): Use the ’14 general characteristics’ of a system to find the degree of influence of each of them; the sum of all 14 degrees of influence gives the TDI. The range of the TDI is 0 to 70. The 14 general characteristics are: Data Communications, Distributed Data Processing, Performance, Heavily Used Configuration, Transaction Rate, On-Line Data Entry, End-User Efficiency, On-Line Update, Complex Processing, Reusability, Installation Ease, Operational Ease, Multiple Sites and Facilitate Change. Each of the above characteristics is evaluated on a scale of 0-5.
 Compute the Value Adjustment Factor (VAF): Use the following formula to calculate the VAF:
VAF = (TDI * 0.01) + 0.65
 Find the Function Point Count (FPC): Use the following formula to calculate the FPC:
FPC = UFP * VAF
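
To make the calculation concrete, here is a small worked sketch in Python. All function counts and characteristic ratings are invented for illustration; the weights used are the "average" column of the table above.

# Hypothetical FPA worked example (illustrative counts only).
# Each entry maps a function type to (count, average-complexity weight).
function_counts = {
    "External Inputs":          (10, 4),
    "External Outputs":         (7, 5),
    "External Inquiries":       (5, 4),
    "Internal Logical Files":   (4, 10),
    "External Interface Files": (2, 7),
}

# Unadjusted Function Points: weighted sum over all function types
ufp = sum(count * weight for count, weight in function_counts.values())

# Assume each of the 14 general characteristics is rated 3 (scale 0-5)
tdi = 14 * 3                  # Total Degree of Influence (range 0..70)
vaf = (tdi * 0.01) + 0.65     # Value Adjustment Factor
fpc = ufp * vaf               # Function Point Count

print(f"UFP = {ufp}, TDI = {tdi}, VAF = {vaf:.2f}, FPC = {fpc:.1f}")

With these illustrative numbers the sketch yields UFP = 149, VAF = 1.07 and FPC ≈ 159.4.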

Advantages:

 It can be easily used in the early stages of project planning.
 It is independent of the programming language.
 It can be used to compare different projects even if they use different technologies (database, language, etc.).

Disadvantages:

 It is not well suited for real-time systems and embedded systems.
 Many cost estimation models like COCOMO use LOC, and hence the FPC must be converted to LOC.

Cost Estimation Models in Software Engineering

Cost estimation is a technique used to find the cost estimate: the financial outlay for the effort to develop and test software in Software Engineering. Cost estimation models are mathematical algorithms or parametric equations that are used to estimate the cost of a product or a project. Various techniques or models are available for cost estimation, also known as Cost Estimation Models, as shown below:

1. Empirical Estimation Technique –

Empirical estimation is a technique or model in which empirically derived formulas are used to predict data that are a required and essential part of the software project planning step. These techniques are usually based on data collected previously from projects, as well as on guesses, prior experience with the development of similar types of projects, and assumptions. It uses the size of the software to estimate the effort.

In this technique, an educated guess of the project parameters is made, so these models are based on common sense. However, as there are many structured activities involved in empirical estimation, the technique is formalized. Examples are the Delphi technique and the Expert Judgement technique.

2. Heuristic Technique –

The word “heuristic” is derived from a Greek word that means “to discover”. The heuristic technique is a technique or model used for solving problems, learning, or discovery by practical methods aimed at achieving immediate goals. These techniques are flexible and simple, allowing quick decisions through shortcuts and good-enough calculations, particularly when working with complex data. However, the decisions made using this technique are not necessarily optimal.

In this technique, the relationships among different project parameters are expressed using mathematical equations. The most popular heuristic technique is COCOMO. This technique is also used to speed up analysis and investment decisions.

3. Analytical Estimation Technique –

Analytical estimation is a type of technique that is used to measure work. In this technique, the task is first divided or broken down into its basic component operations or elements for analysis. Second, if standard times are available from some other source, they are applied to each element or component of the work.

Third, if no such standard time is available, the work is estimated based on experience of similar work. In this technique, results are derived by making certain basic assumptions about the project; hence, the analytical estimation technique has some scientific basis.

COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e., the number of Lines of Code. It is a procedural cost estimation model for software projects, often used to reliably predict the various parameters associated with a project, such as size, effort, cost, time and quality. It was proposed by Barry Boehm in 1981 and is based on a study of 63 projects, which makes it one of the best-documented models.

The key parameters that define the quality of any software product, which are also an outcome of COCOMO, are primarily Effort and Schedule:

 Effort: The amount of labor that will be required to complete a task. It is measured in person-month units.
 Schedule: The amount of time required for the completion of the job, which is, of course, proportional to the effort put in. It is measured in units of time such as weeks or months.

Different models of COCOMO have been proposed to predict the cost estimate at different levels, based on the amount of accuracy and correctness required. All of these models can be applied to a variety of projects, whose characteristics determine the values of the constants used in the subsequent calculations. These characteristics for the different system types are described below.

Boehm’s definition of organic, semidetached, and embedded systems:

1. Organic – A software project is said to be of the organic type if the required team size is adequately small, the problem is well understood and has been solved in the past, and the team members have nominal experience with the problem.
2. Semi-detached – A software project is said to be of the Semi-detached type if the vital characteristics such as team size, experience, and knowledge of the various programming environments lie in between those of Organic and Embedded. Projects classified as Semi-detached are comparatively less familiar and more difficult to develop than organic ones, and require more experience, better guidance and creativity. E.g., compilers or different embedded systems can be considered of the Semi-detached type.
3. Embedded – A software project requiring the highest level of complexity, creativity, and experience falls under this category. Such software requires a larger team size than the other two models, and the developers need to be sufficiently experienced and creative to develop such complex models.

All the above system types utilize different values of the constants used in
Effort Calculations.

Types of Models: COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. Any of the three forms can be adopted according to our requirements. The types of COCOMO model are:

1. Basic COCOMO Model
2. Intermediate COCOMO Model
3. Detailed COCOMO Model

The first level, Basic COCOMO, can be used for quick and slightly rough calculations of software costs. Its accuracy is somewhat restricted due to the absence of sufficient factor considerations.

Intermediate COCOMO takes these cost drivers into account, and Detailed COCOMO additionally accounts for the influence of individual project phases; i.e., it considers both the cost drivers and phase-wise calculations, thereby producing a more accurate result. These two models are discussed further below.

Estimation of Effort: Calculations –

1. Basic Model –

Effort = a * (KLOC)^b person-months
Development Time = c * (Effort)^d months

These formulas are used for the cost estimation of the Basic COCOMO model, and are also used in the subsequent models. The constant values a, b, c and d of the Basic Model for the different categories of system are:

System type      a      b      c      d
Organic          2.4    1.05   2.5    0.38
Semi-detached    3.0    1.12   2.5    0.35
Embedded         3.6    1.20   2.5    0.32

The effort is measured in Person-Months and, as evident from the formula, depends on the Kilo-Lines of Code (KLOC). The development time is measured in months.

These formulas are used as such in the Basic Model calculations; since little consideration is given to factors such as reliability and expertise, the estimate is rough.
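
As a quick illustration, here is a minimal Python sketch of the Basic Model calculation using the constants tabulated above. The 32 KLOC organic project is an assumed example.

# Basic COCOMO: Effort = a * (KLOC)^b, Time = c * (Effort)^d
BASIC_CONSTANTS = {
    #                a     b     c     d
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    """Return (effort in person-months, development time in months)."""
    a, b, c, d = BASIC_CONSTANTS[mode]
    effort = a * kloc ** b        # person-months
    time = c * effort ** d        # months
    return effort, time

effort, time = basic_cocomo(32, "organic")   # assumed 32 KLOC organic project
print(f"Effort = {effort:.1f} PM, Time = {time:.1f} months")

For this assumed project the formulas give roughly 91 person-months of effort and a schedule of about 14 months.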

Intermediate Model –

The Basic COCOMO model assumes that effort is only a function of the number of lines of code and some constants chosen according to the software system type. In reality, however, no system’s effort and schedule can be computed solely on the basis of Lines of Code; various other factors such as reliability, experience and capability must also be considered. These factors are known as Cost Drivers, and the Intermediate Model utilizes 15 such drivers for cost estimation (a short sketch follows the driver list below).

Classification of Cost Drivers and their attributes:

(i) Product attributes –

 Required software reliability extent
 Size of the application database
 The complexity of the product

(ii) Hardware attributes –

 Run-time performance constraints
 Memory constraints
 The volatility of the virtual machine environment
 Required turnaround time

(iii) Personnel attributes –

 Analyst capability
 Software engineering capability
 Applications experience
 Virtual machine experience
 Programming language experience

(iv) Project attributes –

 Use of software tools
 Application of software engineering methods
 Required development schedule
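
As a rough sketch of how the Intermediate Model combines these drivers, the snippet below multiplies an assumed handful of effort multipliers into an Effort Adjustment Factor (EAF) and applies it to the size-based estimate, i.e. Effort = a * (KLOC)^b * EAF. The driver names and multiplier values here are illustrative assumptions, not Boehm's published tables.

from math import prod

# Intermediate-model constants (a, b) for an organic project
a, b = 3.2, 1.05

# Hypothetical ratings: each rated cost driver contributes a multiplier;
# the remaining drivers are left at the nominal value 1.00.
cost_driver_multipliers = {
    "required_reliability": 1.15,   # rated higher than nominal
    "product_complexity":   1.08,
    "analyst_capability":   0.86,   # strong analysts reduce effort
}

eaf = prod(cost_driver_multipliers.values())   # Effort Adjustment Factor
effort = a * 32 ** b * eaf                     # assumed 32 KLOC project
print(f"EAF = {eaf:.3f}, Effort = {effort:.1f} person-months")

With these assumed ratings the adjusted estimate comes to roughly 130 person-months.
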
Detailed Model –

Detailed COCOMO incorporates all characteristics of the intermediate version, together with an assessment of the cost drivers’ impact on each step of the software engineering process. The detailed model uses different effort multipliers for each cost driver attribute. In detailed COCOMO, the whole software is divided into modules; COCOMO is then applied to each module to estimate its effort, and the module efforts are summed, as in the sketch below.
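
A minimal sketch of this module-wise summation, assuming illustrative module sizes and modes and, for simplicity, the Basic-model effort formula per module (full detailed COCOMO would also apply phase-sensitive effort multipliers):

# Split the software into modules, estimate each separately, sum the efforts.
CONSTANTS = {"organic": (2.4, 1.05), "semidetached": (3.0, 1.12), "embedded": (3.6, 1.20)}

modules = [
    ("user_interface", 8,  "organic"),       # (name, KLOC, assumed mode)
    ("database_layer", 12, "semidetached"),
    ("control_logic",  6,  "embedded"),
]

total = 0.0
for name, kloc, mode in modules:
    a, b = CONSTANTS[mode]
    effort = a * kloc ** b                   # per-module effort (person-months)
    print(f"{name}: {effort:.1f} PM")
    total += effort

print(f"Total estimated effort = {total:.1f} person-months")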

The six phases of detailed COCOMO are:

1. Planning and requirements
2. System design
3. Detailed design
4. Module code and test
5. Integration and test
6. Cost Constructive model

The effort is calculated as a function of program size, and a set of cost drivers is given according to each phase of the software lifecycle.
