
Unit 4: Software Project Planning


Structure
4.1 Introduction
4.2 Size Estimation
4.3 Decomposition Techniques
4.4 Cost Estimation
4.5 Cost Estimation Models
4.6 Putnam Resource Allocation Model
4.7 Software Risk Management
4.8 Summary
4.9 Check Your Progress
4.10 Questions and Exercises
4.11 Key Terms
4.12 Further Readings

Objectives
After studying this unit, you should be able to:
z Understand effort estimates
z Discuss the concept of cost estimates
z Understand LOC
z Examine function points and extended function points
z Describe COCOMO-81
z Explain COCOMO-II
z Identify the purpose of risk management

4.1 Introduction
Risk in itself is not bad; risk is essential to progress, and failure is often a key part of
learning. But we must learn to balance the possible negative consequences of risk
against the potential benefits of its associated opportunity. – Van Scoy, 1992.
This lesson is divided into two parts: in the first part we will learn about COCOMO-81 and
COCOMO-II, and in the second part we will learn about risk management.

4.2 Size Estimation


Initial size estimates are typically based on the known system requirements. You must
hunt for every known detail of the proposed system, and use these details to develop
and validate the software size estimates.

Estimating Software Size


An accurate estimate of software size is an essential element in the calculation of
estimated project costs and schedules. The fact that these estimates are required very
early on in the project (often while a contract bid is being prepared) makes size
estimation a formidable task.


In general, you present size estimates as lines of code (KSLOC or SLOC) or as
function points. There are constants that you can apply to convert function points to
lines of code for specific languages, but not vice versa. If possible, choose and adhere
to one unit of measurement, since conversion simply introduces a new margin of error
into the final estimate.
Regardless of the unit chosen, you should store the estimates in the metrics
database. You will use these estimates to determine progress and to estimate future
projects. As the project progresses, revise them so that cost and schedule estimates
remain accurate.
The following section describes techniques for estimating software size.

Developer Opinion
Developer opinion is otherwise known as guessing. If you are an experienced
developer, you can likely make good estimates due to familiarity with the type of
software being developed.

Previous Project Experience


Looking at previous project experience serves as a more educated guess. By using
the data stored in the metrics database for similar projects, you can more accurately
predict the size of the new project.
If possible, the system is broken into components, and each component is estimated independently.

Estimating Software Cost


The cost of medium and large software projects is determined by the cost of developing
the software, plus the cost of equipment and supplies. The latter is generally a constant
for most projects.
The cost of developing the software is simply the estimated effort, multiplied by
presumably fixed labour costs. For this reason, we will concentrate on estimating the
development effort, and leave the task of converting the effort to dollars to each
company.

Estimating Effort
There are two basic models for estimating software development effort (or cost): holistic
and activity-based. The single biggest cost driver in either model is the estimated
project size.
Holistic models are useful for organizations that are new to software development,
or that do not have baseline data available from previous projects to determine labour
rates for the various development activities.
Estimates produced with activity-based models are more likely to be accurate, as
they are based on the software development rates common to each organization.
Unfortunately, you require related data from previous projects to apply these
techniques.

Holistic Models for Cost Estimating


Holistic models relate size, effort, and (sometimes) schedule by applying equations to
determine the overall cost, and then applying a percent of the overall cost to each
development activity. They do not consider the actual labour rates and costs of each
activity.
Popular holistic models include the following:
z SDM (Software Development Model - Putnam - 1978)
z SLIM (Software Lifecycle Management - Putnam - 1979)

z COCOMO (Constructive Cost Model - Boehm - 1981)
z COPMO (Cooperative Programming Model - Conte, Dunsmuir, Shen - 1986)
Of these models, COCOMO is most widely used, and will suffice if there is
insufficient data to carry out activity-based cost estimation.

Expert Judgment Estimating


Expert Judgment estimating is easy to do - provided you have an expert on the project.
This technique looks to the expert to create an estimate based upon their understanding
of the project requirements. Many, if not most, project estimates are created in this
fashion.
The advantage of this is that it is quick and if the expert is knowledgeable, it is often
the most accurate estimate for uncertain activities. The disadvantages are that you may
not have an expert available and even if you do, the expert often can provide no solid
rationale for their estimate beyond, "That's what I think it will take to do this."
Advantages: Relatively cheap estimation method. Can be accurate if experts have
direct experience of similar systems
Disadvantages: Very inaccurate if there are no experts.

4.3 Decomposition Techniques


Size-oriented software metrics are derived by normalizing quality and/or productivity
measures by considering the size of the software that has been produced. If a software
organization maintains simple records, a table of size-oriented measures, such as the
one shown in Figure 4.1, can be created. The table lists each software development
project that has been completed over the past few years and corresponding measures
for that project. Referring to the table entry (Figure 4.1) for project alpha: 12,100 lines of
code were developed with 24 person-months of effort at a cost of $168,000. It should be
noted that the effort and cost recorded in the table represent all software engineering
activities (analysis, design, code, and test), not just coding.
Further information for project alpha indicates that 365 pages of documentation
were developed, 134 errors were recorded before the software was released, and 29
defects were encountered after release to the customer within the first year of
operation.

Figure 4.1: Size-oriented Metrics

Three people worked on the development of software for project alpha.



In order to develop metrics that can be assimilated with similar metrics from other
projects, we choose lines of code as our normalization value. From the rudimentary
data contained in the table, a set of simple size-oriented metrics can be developed for
each project:
z Errors per KLOC (thousand lines of code).
z Defects per KLOC.
z $ per LOC.
z Pages of documentation per KLOC.
In addition, other interesting metrics can be computed:
z Errors per person-month.
z LOC per person-month.
z $ per page of documentation.
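As a minimal illustration of how these ratios fall out of the raw data, the sketch below recomputes the metrics listed above for project alpha using the figures quoted in the text (12,100 LOC, 24 person-months, $168,000, 365 pages of documentation, 134 errors, 29 defects).

```python
# Minimal sketch: size-oriented metrics for project "alpha",
# using the figures quoted in the text above.
loc = 12_100          # lines of code
effort_pm = 24        # person-months
cost = 168_000        # dollars
pages = 365           # pages of documentation
errors = 134          # errors found before release
defects = 29          # defects found in first year after release

kloc = loc / 1000

metrics = {
    "errors per KLOC": errors / kloc,                 # ~11.1
    "defects per KLOC": defects / kloc,               # ~2.4
    "$ per LOC": cost / loc,                          # ~13.9
    "pages of documentation per KLOC": pages / kloc,  # ~30.2
    "errors per person-month": errors / effort_pm,    # ~5.6
    "LOC per person-month": loc / effort_pm,          # ~504
    "$ per page of documentation": cost / pages,      # ~460
}

for name, value in metrics.items():
    print(f"{name}: {value:.1f}")
```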
Size-oriented metrics are not universally accepted as the best way to measure the
process of software development. Most of the controversy swirls around the use of lines
of code as a key measure.
Proponents of the LOC measure claim that LOC is an "artifact" of all software
development projects that can be easily counted, that many existing software estimation
models use LOC or KLOC as a key input, and that a large body of literature and data
predicated on LOC already exists.
Opponents argue that LOC measures are programming language dependent, that
they penalize well-designed but shorter programs, that they cannot easily accommodate
nonprocedural languages, and that their use in estimation requires a level of detail that
may be difficult to achieve (i.e., the planner must estimate the LOC to be produced long
before analysis and design have been completed).

Function Point
Function-oriented software metrics use a measure of the functionality delivered by the
application as a normalization value. Since ‘functionality’ cannot be measured directly, it
must be derived indirectly using other direct measures. Function-oriented metrics were
first proposed by Albrecht, who suggested a measure called the function point. Function
points are derived using an empirical relationship based on countable (direct) measures
of software's information domain and assessments of software complexity.

Figure 4.2: Computing Function Point

Function points are computed by completing the table shown in Figure 4.2. Five
information domain characteristics are determined and counts are provided in the
appropriate table location. Information domain values are defined in the following
manner:
z Number of user inputs: Each user input that provides distinct application-oriented
data to the software is counted. Inputs should be distinguished from inquiries, which
are counted separately.
z Number of user outputs: Each user output that provides application-oriented
information to the user is counted. In this context output refers to reports, screens,
error messages, etc. Individual data items within a report are not counted
separately.
z Number of user inquiries: An inquiry is defined as an on-line input that results in
the generation of some immediate software response in the form of an on-line
output. Each distinct inquiry is counted.
z Number of files: Each logical master file (i.e., a logical grouping of data that may
be one part of a large database or a separate file) is counted.
z Number of external interfaces: All machine readable interfaces (e.g., data files on
storage media) that are used to transmit information to another system are counted.
Once these data have been collected, a complexity value is associated with each
count. Organizations that use function point methods develop criteria for determining
whether a particular entry is simple, average, or complex. Nonetheless, the
determination of complexity is somewhat subjective.
To compute function points (FP), the following relationship is used:
FP = count total × [0.65 + 0.01 × Σ(Fi)]
where count total is the sum of all FP entries obtained from Figure 4.2.
The Fi (i = 1 to 14) are "complexity adjustment values" based on responses to the
following questions:
1. Does the system require reliable backup and recovery?
2. Are data communications required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line data entry?
7. Does the on-line data entry require the input transaction to be built over multiple
screens or operations?
8. Are the master files updated on-line?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use by the user?
Each of these questions is answered using a scale that ranges from 0 (not
important or applicable) to 5 (absolutely essential). The constant values in Equation and
the weighting factors that are applied to information domain counts are determined
empirically.
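A minimal sketch of this computation is shown below. The simple/average/complex weights are the commonly published values, and the information-domain counts in the example call are hypothetical (Figure 4.2 itself is not reproduced here); with these illustrative counts the unadjusted total is 318 and, using the Fi ratings from Table 4.2 later in this section, FP works out to about 372.

```python
# Minimal sketch of the function point computation described above.
# The simple/average/complex weights are the commonly published values
# (an assumption here, since Figure 4.2 is not reproduced).
WEIGHTS = {
    "user inputs":         (3, 4, 6),
    "user outputs":        (4, 5, 7),
    "user inquiries":      (3, 4, 6),
    "files":               (7, 10, 15),
    "external interfaces": (5, 7, 10),
}

def count_total(counts, complexity="average"):
    """Weighted sum of the five information-domain counts."""
    idx = {"simple": 0, "average": 1, "complex": 2}[complexity]
    return sum(WEIGHTS[name][idx] * n for name, n in counts.items())

def function_points(counts, fi_values, complexity="average"):
    """FP = count total x [0.65 + 0.01 x sum(Fi)], with 14 Fi values in 0..5."""
    assert len(fi_values) == 14
    return count_total(counts, complexity) * (0.65 + 0.01 * sum(fi_values))

# Hypothetical example: all counts rated "average" (unadjusted total = 318).
counts = {"user inputs": 24, "user outputs": 16, "user inquiries": 22,
          "files": 4, "external interfaces": 2}
fi = [4, 2, 0, 4, 3, 4, 5, 3, 5, 5, 4, 3, 5, 5]   # ratings from Table 4.2
print(function_points(counts, fi))                 # -> about 372
```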
Once function points have been calculated, they are used in a manner analogous to
LOC as a way to normalize measures for software productivity, quality, and other
attributes:
1. Errors per FP.
2. Defects per FP.


3. $ per FP.
4. Pages of documentation per FP.
5. FP per person-month.

Decomposition Techniques for Estimation


The first step in estimation is to predict the size of the project. Typically, this will be
done using either LOC (the direct approach) or FP (the indirect approach). Then we use
historical data (on similar types of projects) about the relationship between LOC or FP
and time or effort to predict the estimate of time or effort for this project.
If we choose to use the LOC approach, then we will have to decompose the project
quite considerably into as many components as possible and estimate the LOC for each
component.
The size s is then the sum of the LOC of each component.
If we choose to use the FP approach, we don’t have to decompose quite so much.
In both cases, we make three estimates of size:
s_opt — an optimistic estimate
s_m — the most likely estimate
s_pess — a pessimistic estimate
and combine them to get a three-point or expected value EV:
EV = (s_opt + 4 × s_m + s_pess) / 6
EV is the value that is used in the final estimate of effort or time.
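A minimal sketch of this three-point combination, applied per component and summed, is given below; the component names echo the CAD example that follows, but the optimistic/most-likely/pessimistic LOC triples are purely hypothetical.

```python
# Minimal sketch: three-point (expected value) size estimate per component.
# The LOC triples below are hypothetical, purely for illustration.
def expected_value(s_opt, s_m, s_pess):
    """EV = (s_opt + 4*s_m + s_pess) / 6"""
    return (s_opt + 4 * s_m + s_pess) / 6

components = {
    "UICF": (1800, 2400, 2650),
    "2DGA": (4100, 5300, 7400),
    "DBM":  (2900, 3400, 3600),
}

total_loc = sum(expected_value(*triple) for triple in components.values())
print(round(total_loc))   # estimated project size in LOC
```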
Example: LOC
Consider the following project to develop a Computer Aided Design (CAD) application
for mechanical components.
z The software is to run on an engineering workstation
z It must interface with computer graphics peripherals (mouse, digitizer, colour
display, laser printer)
z It is to accept 2-D and 3-D geometric data from an engineer
z The engineer will interact with and control the CAD system through a graphic user
interface
z All geometric data and other supporting information will be maintained in a CAD
database
z Design analysis modules will be developed to produce the required output, which will
be displayed on a number of graphics devices.
After some further requirements analysis and specification, the following major
software functions are identified:
z User interface and control facilities (UICF)
z 2D geometric analysis (2DGA)
z 3D geometric analysis (3DGA)
z Database management (DBM)
z Computer graphics display functions (CGDF)
z Peripheral control (PC)
z Design analysis modules (DAM)

Table 4.1: Example of LOC


Historical data indicates that the organizational average productivity for systems of
this type is 620 LOC per person-month and the cost per LOC is $13.
Thus, the LOC estimate for this project (33,983 LOC in total) is:
z 33,983 / 620 ≈ 55 person-months
z 33,983 × $13 ≈ $441,800
Example: FP
Table 4.2: Example of FP

Factor Value
Backup and recovery 4
Data communications 2
Distributed processing 0
Performance critical 4
Existing operating environment 3
On-line data entry 4
Input transactions over multiple screens 5
Master file updated on-line 3
Information domain values complex 5
Internal processing complex 5
Code designed for reuse 4
Conversion/installation in design 3
Multiple installations 5
Application designed for change 5
Estimated number of FP is count total × (0.65 + 0.01 × ΣFi) = 372
Historical data indicates that the organizational average productivity for systems of
this type is 6.5 FP per person-month and the cost per FP is $1,230.
Thus, the FP estimate for this project is:
z 372 / 6.5 ≈ 57 person-months
z 372 × $1,230 ≈ $457,560

4.4 Cost Estimation


Cost estimation is the process of approximating the costs involved in the software
project. Cost estimation should be done before software development is initiated since it
helps the project manager to know about resources required and the feasibility of the
project.
Accurate software cost estimation is important for the successful completion of a
software project. However, the need and importance of software cost estimation is
underestimated due to the reasons listed below:
z Analysis of the software development process is not considered while estimating
cost.
z It is difficult to estimate software cost accurately, as software is intangible and
intractable.
There are many parameters (also called factors), such as complexity, time
availability, and reliability, which are considered during cost estimation process.
However, software size is considered as one of the most important parameters for cost
estimation.
Cost estimation can be performed during any phase of software development. The
accuracy of cost estimation depends on the availability of software information
(requirements, design, and source code). It is easier to estimate the cost in the later
stages, as more information is available during these stages as compared to the
information available in the initial stages of software development.

Software Cost Estimation Process


To lower the cost of conducting business, identify and monitor cost and schedule risk
factors, and to increase the skills of key staff members, software cost estimation
process is followed. This process is responsible for tracking and refining cost estimate
throughout the project life cycle. This process also helps in developing a clear
understanding of the factors which influence software development costs.
The estimated cost of software varies according to the nature and type of the product to
be developed. For example, the estimated cost of an operating system will be higher
than that of an application program. Thus, in the software cost
estimation process, it is important to define and understand the software, which is to be
estimated.
In order to develop a software project successfully, cost estimation should be well
planned, review should be done at regular intervals, and process should be continually
improved and updated. The basic steps required to estimate cost are
(a) Project Objectives and Requirements: In this phase, the objectives and
requirements for the project are identified, which is necessary to estimate cost
accurately and accomplish user requirements. The project objective defines the end
product, intermediate steps involved in delivering the end product, end date of the
project, and individuals involved in the project.
This phase also defines the constraints/limitations that affect the project in meeting
its objectives. Constraints may arise due to the factors listed below:
™ Start date and completion date of the project.
™ Availability and use of appropriate resources.
™ Policies and procedures that require explanations regarding their
implementation.
Project cost can be accurately estimated once all the requirements are known.
However, if all requirements are not known, then the cost estimate is based only on
the known requirements. For example, if software is developed according to the
incremental development model, then the cost estimation is based on the
requirements that have been defined for that increment.
(b) Plan Activities: Software development project involves different set of activities,
which helps in developing software according to the user requirements. These
activities are performed in fields of software maintenance, software project
management, software quality assurance, and software configuration management.
These activities are arranged in the work breakdown structure according to their
importance.
Work breakdown structure (WBS) is the process of dividing the project into tasks
and ordering them according to the specified sequence. WBS specifies only the
tasks that are performed and not the process by which these tasks are to be
completed. This is because WBS is based on requirements and not the manner in
which these tasks are carried out.
(c) Estimating Size: Once the WBS is established, product size is calculated by
estimating the size of its components. Estimating product size is an important step
in cost estimation as most of the cost estimation models usually consider size as
the major input factor. Also, project managers consider product size as a major
technical performance indicator or productivity indicator, which allows them to track
a project during software development.
(d) Estimating Cost and Effort: Once the size of the project is known, cost is
calculated by estimating effort, which is expressed in terms of person-month (PM).
Various models (like COCOMO, COCOMO II, expert judgement, top-down, bottom-
up, estimation by analogy, Parkinson’s principle, and price to win) are used to
estimate effort. Note that for cost estimation, more than one model is used, so that
cost estimated by one model can be verified by another model.
(e) Estimating Schedule: Schedule determines the start date and end date of the
project. Schedule estimate is developed either manually or with the help of
automated tools. To develop a schedule estimate manually, a number of steps are
followed, which are listed below:
™ The work breakdown structure is expanded, so that the order in which
functional elements are developed can be determined. This order helps in
defining the functions, which can be developed simultaneously.
™ A schedule for development is derived for each set of functions that can be
developed independently.
™ The schedule for each set of independent functions is derived as the average of
the estimated time required for each phase of software development.
™ The total project schedule estimate is the average of the product development,
which includes documentation and various reviews.
Manual methods are based on past experience of software engineers. One or more
software engineers, who are experts in developing application, develop an estimate
for schedule.
However, automated tools (like COSTAR, COOLSOFT) allow the user to customise
schedule in order to observe the impact on cost.
(f) Risk Assessment: Risks are involved in every phase of software development
therefore, risks involved in a software project should be defined and analysed, and
the impact of risks on the project costs should also be determined. Ignoring risks
can lead to adverse effects, such as increased costs in the later stages of software
development.
(g) Inspect and Approve: The objective of this phase is to inspect and approve
estimates in order to improve the quality of an estimate and get an approval from
top-level management.
The other objectives of this step are listed below:
™ Confirm the software architecture and functional WBS.
™ Verify the methods used for deriving the size, schedule, and cost estimates.


™ Ensure that the assumptions and input data used to develop the estimates are
correct.
™ Ensure that the estimate is reasonable and accurate for the given input data.
™ Confirm and record the official estimates for the project.
Once the inspection is complete and all defects have been removed, project
manager, quality assurance group, and top-level management sign the estimate.
Inspection and approval activities can be formal or informal as required but should
be reviewed independently by the people involved in cost estimation.
(h) Track Estimates: Tracking estimate over a period of time is essential, as it helps in
comparing the current estimate to previous estimates, resolving any discrepancies
with previous estimates, comparing planned cost estimates and actual estimates.
This helps in keeping track of the changes in a software project over a period of
time. Tracking also allows the development of a historical database of estimates,
which can be used to adjust various cost models or to compare past estimates to
future estimates.
(i) Process Measurement and Improvement: Metrics should be collected (in each
step) to improve the cost estimation process. For this, two types of process metrics
are used namely, process effective metrics and process cost metrics. The benefit of
collecting these metrics is to specify a reciprocal relation that exists between the
accuracy of the estimates and the cost of developing the estimates.
™ Process effective metrics: Keeps track of the effects of cost estimating process.
The objective is to identify elements of the estimation process, which enhance
the estimation process. These metrics also identify those elements which are of
little or no use to the planning and tracking processes of a project. The
elements that do not enhance the accuracy of estimates should be isolated and
eliminated.
™ Process cost metrics: Provides information about implementation and
performance cost incurred in the estimation process. The objective is to quantify
and identify different ways to increase the cost effectiveness of the process. In
these metrics, activities that cost-effectively enhance the project planning and
tracking process remain intact, while activities that have negligible effect on the
project are eliminated.

4.5 Cost Estimation Models


Estimation models use derived formulas to predict effort as a function of LOC or FP.
Various estimation models are used to estimate cost of a software project. In these
models, cost of software project is expressed in terms of effort required to develop the
software successfully.
These cost estimation models are broadly classified into two categories, which are
listed below:
z Algorithmic models: Estimation in these models is performed with the help of
mathematical equations, which are based on historical data or theory. In order to
estimate cost accurately, various inputs are provided to these algorithmic models.
These inputs include software size and other parameters. To provide accurate cost
estimation, most of the algorithmic cost estimation models are calibrated to the
specific software environment. The various algorithmic models used are COCOMO,
COCOMO II, and software equation.
z Non-algorithmic models: Estimation in these models depends on the prior
experience and domain knowledge of project managers. Note that these models do
not use mathematical equations to estimate cost of software project. The various
non-algorithmic cost estimation models are expert judgement, estimation by
analogy, and price to win.

COCOMO
COCOMO'81 is derived from the analysis of 63 software projects in 1981. Boehm
proposed three levels of the model: basic, intermediate, and detailed.
z The basic COCOMO'81 model is a single-valued, static model that computes
software development effort (and cost) as a function of program size expressed in
estimated lines of code (LOC).
z The intermediate COCOMO'81 model computes software development effort as a
function of program size and a set of "cost drivers" that include subjective
assessments of product, hardware, personnel, and project attributes.
z The detailed COCOMO'81 model incorporates all characteristics of the intermediate
version with an assessment of the cost driver’s impact on each step (analysis,
design, etc.) of the software engineering process.
COCOMO'81 models depend on two main equations.
The first gives the development effort for the basic model (effort is measured in MM,
the man-month / person-month / staff-month: one month of effort by one person; in
COCOMO'81 there are 152 hours per person-month, although this value may differ from
the standard by 10% to 20% from organization to organization):
MM = a × (KDSI)^b
The second gives the development time (TDEV) from the effort:
TDEV = c × (MM)^d
KDSI is the number of thousands of delivered source instructions and is a measure of
size. The coefficients a, b, c and d depend on the mode of development. There are
three modes of development:
Development Mode   Size     Innovation   Deadline/Constraints   Dev. Environment
Organic            Small    Little       Not tight              Stable
Semi-detached      Medium   Medium       Medium                 Medium
Embedded           Large    Greater      Tight                  Complex hardware/customer interfaces

Here are the coefficients related to development modes for intermediate model:
Development Mode a b c d
Organic 3.2 1.05 2.5 0.38
Semi-detached 3.0 1.12 2.5 0.35
Embedded 2.8 1.20 2.5 0.32

The basic mode uses only size in estimation. The intermediate mode uses the 15 cost
drivers as well as size.
In the intermediate mode, the development effort equation becomes:
MM = a × (KDSI)^b × C
C (the effort adjustment factor) is calculated simply by multiplying the values of the cost
drivers, so the intermediate model is more accurate than the basic model.
The steps in producing an estimate using the intermediate model COCOMO'81 are:
1. Identify the mode (organic, semi-detached, embedded) of development for the new
product.


2. Estimate the size of the project in KDSI to derive a nominal effort prediction.
3. Adjust 15 cost drivers to reflect your project.
4. Calculate the predicted project effort using first equation and the effort adjustment
factor (C)
5. Calculate the project duration using second equation.
Example: Estimate using the intermediate COCOMO'81
Mode is organic
Size = 200KDSI
Cost drivers:
™ Low reliability = .88
™ High product complexity = 1.15
™ Low application experience = 1.13
™ High programming language experience = .95
™ Other cost drivers assumed to be nominal = 1.00
C = .88 * 1.15 * 1.13 * .95 = 1.086
Effort = 3.2 × (200^1.05) × 1.086 ≈ 906 MM
Development time = 2.5 × (906^0.38) ≈ 33 months
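A minimal sketch of these five steps, reproducing the worked example above (the coefficients and cost-driver values are taken from the tables given in this section):

```python
# Minimal sketch of intermediate COCOMO'81, reproducing the example above.
COEFFICIENTS = {              # mode: (a, b, c, d), from the table above
    "organic":       (3.2, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (2.8, 1.20, 2.5, 0.32),
}

def intermediate_cocomo81(kdsi, mode, cost_drivers):
    a, b, c, d = COEFFICIENTS[mode]
    eaf = 1.0
    for value in cost_drivers:        # effort adjustment factor C
        eaf *= value
    effort = a * (kdsi ** b) * eaf    # MM = a * KDSI^b * C
    tdev = c * (effort ** d)          # TDEV = c * MM^d
    return effort, tdev

drivers = [0.88, 1.15, 1.13, 0.95]    # reliability, complexity, experience, language
effort, tdev = intermediate_cocomo81(200, "organic", drivers)
print(f"Effort ~ {effort:.0f} person-months, TDEV ~ {tdev:.1f} months")
# -> roughly 906 person-months and about 33 months
```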

Advantages of COCOMO'81
z COCOMO is transparent; you can see how it works, unlike other models such as
SLIM.
z Cost drivers are particularly helpful to the estimator in understanding the impact of the
different factors that affect project costs.

Drawbacks of COCOMO'81
z It is hard to accurately estimate KDSI early on in the project, when most effort
estimates are required.
z KDSI is actually not a size measure; it is a length measure.
z Extremely vulnerable to mis-classification of the development mode
z Success depends largely on tuning the model to the needs of the organization,
using historical data which is not always available.

COCOMO-II
Research on COCOMO II started in the 1990s because COCOMO'81 could no longer be
applied to newer software development practices. You can find the latest information
about COCOMO II on the COCOMO II home page.
These changes in development practice began to make applying the original COCOMO
model problematic. The solution to the problem was to reinvent the model for the 1990s.
After several years and the combined efforts of USC-CSSE, ISR at UC Irvine, and the
COCOMO II Project Affiliate Organizations, the result is COCOMO II, a revised cost
estimation model reflecting the changes in professional software development practice
that have come about since the 1970s. This new, improved COCOMO is now ready to
assist professional software cost estimators for many years to come.
COCOMO II differs from COCOMO'81 in the following ways:
z COCOMO'81 requires software size in KSLOC as an input, but COCOMO'II
provides different effort estimating models based on the stage of development of
the project.

z COCOMO'81 provides point estimates of effort and schedule, but COCOMO'II
provides likely ranges of estimates that represent one standard deviation around
the most likely estimate.
z COCOMO'II adjusts for software reuse and reengineering where automated tools
are used for translation of existing software, but COCOMO'81 made little
accommodation for these factors
z COCOMO'II accounts for requirements volatility in its estimates.
z The exponent on size in the effort equations in COCOMO'81 varies with the
development mode. COCOMO'II uses five scale factors to generalize and replace
the effects of the development mode.
COCOMO II has three different models:

The Application Composition Model


Suitable for projects built with modern GUI-builder tools. Based on Object Points.

The Early Design Model


This model is used to make rough estimates of a project's cost and duration before its
entire architecture has been determined. It uses a small set of new cost drivers and new
estimating equations, and is based on Unadjusted Function Points or KSLOC.
z For the Early Design and Post Architecture Model:
Effort = (ASLOC × (AT / 100)) / ATPROD + a × [Size × (1 + BRAK / 100)]^b × Π EMi
b = 1.01 + (1 / 100) × Σ SFj
Size = KSLOC + KASLOC × ((100 − AT) / 100) × AAM

Where:
a = 2.5, SFj = scale factor,
EMi = effort multiplier,
BRAK = percentage of code discarded due to requirements volatility,
ASLOC = size of adapted components,
AT = percentage of components that are automatically translated,
ATPROD = automatic translation productivity,
AAM = adaptation adjustment multiplier.
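The sketch below implements the effort equation as reconstructed above; the default ATPROD and the parameter values in the example call are hypothetical, chosen only to show the mechanics.

```python
# Minimal sketch of the COCOMO II Early Design / Post-Architecture effort
# equation as reconstructed above. Example values are hypothetical, and the
# ATPROD default is only an assumed placeholder.
def cocomo2_effort(ksloc, kasloc=0.0, at=0.0, asloc=0.0, atprod=2400.0,
                   brak=0.0, aam=1.0, scale_factors=(), effort_multipliers=(),
                   a=2.5):
    b = 1.01 + sum(scale_factors) / 100.0
    size = ksloc + kasloc * ((100.0 - at) / 100.0) * aam
    em_product = 1.0
    for em in effort_multipliers:
        em_product *= em
    translated = (asloc * (at / 100.0)) / atprod if atprod else 0.0
    effort = translated + a * (size * (1 + brak / 100.0)) ** b * em_product
    return effort, b

effort, b = cocomo2_effort(ksloc=40, brak=10,
                           scale_factors=[3.0, 2.5, 4.0, 2.0, 3.5],
                           effort_multipliers=[1.1, 0.9, 1.0])
print(f"b = {b:.3f}, effort ~ {effort:.0f} person-months")
# -> roughly 200 person-months with these illustrative inputs
```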
COCOMO'II adjusts for the effects of reengineering in its effort estimate.
When a project includes automatic translation, the following values must be estimated:
z Automatic translation productivity (ATPROD), estimated from previous development
efforts
z The size, in thousands of Source Lines of Code, of untranslated code (KSLOC) and
of code to be translated (KASLOC) under this project.
z The percentage of components being developed from reengineered software
(ADAPT)
z The percentage of components that are being automatically translated (AT).
The effort equation is adjusted by 15 cost driver attributes in COCOMO'81, but
COCOMO'II defines seven cost drivers (EM) for the Early Design estimate:
z Personnel capability


z Product reliability and complexity


z Required reuse
z Platform difficulty
z Personnel experience
z Facilities
z Schedule constraints.
Some of these effort multipliers are disaggregated into several multipliers in the
Post-Architecture COCOMO'II model.
COCOMO'II models software projects as exhibiting decreasing returns to scale.
Decreasing returns are reflected in the effort equation by an exponent for SLOC greater
than unity. This exponent varies among the three COCOMO'81 development modes
(organic, semidetached, and embedded). COCOMO'II does not explicitly partition
projects by development modes.
Instead the power to which the size estimate is raised is determined by five scale
factors:
z Precedentedness (how novel the project is for the organization)
z Development flexibility
z Architecture/risk resolution
z Team cohesion
z Organization process maturity.

The Post-Architecture Model


This is the most detailed COCOMO II model. It is used after the project's overall
architecture has been developed. It has new cost drivers, new line-counting rules, and
new equations.
Use of reengineered and automatically translated software is accounted for as in
the Early Design equation (ASLOC, AT, ATPROD, and AAM). Breakage (BRAK), the
percentage of code thrown away due to requirements change, is accounted for in the
size estimate. Reused software (RUF) is accounted for in the effort equation by adjusting
the size by the adaptation adjustment multiplier (AAM). This multiplier is calculated from
estimates of the percentage of the design modified (DM), the percentage of the code
modified (CM), the integration effort modification (IM), software understanding (SU), and
assessment and assimilation (AA). Seventeen effort multipliers are defined for the
Post-Architecture model, grouped into four categories:
z Product factors
z Platform factors
z Personnel factors
z Project factors
These four categories parallel the four categories of COCOMO'81 - product
attributes, computer attributes, personnel attributes and project attributes, respectively.
Many of the seventeen factors of COCOMO'II are similar to the fifteen factors of
COCOMO'81. The new factors introduced in COCOMO'II include required reusability,
platform experience, language and tool experience, personnel continuity and turnover,
and a factor for multi-site development. Computer turnaround time, the use of modern
programming practices, virtual machine experience, and programming language
experience, which were effort multipliers in COCOMO'81, are removed in COCOMO'II.

A single development schedule estimate is defined for all three COCOMO II models:
Schedule = c × [Effort^(0.33 + 0.2 × (b − 1.01))] × (SCED% / 100)
b = 1.01 + (1 / 100) × Σ SFj
where c = 3, SFj = scale factor, and
SCED% = schedule compression/expansion parameter.
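A minimal sketch of this schedule equation, continuing the hypothetical figures from the effort sketch above:

```python
# Minimal sketch of the COCOMO II schedule equation reconstructed above.
def cocomo2_schedule(effort, b, sced_percent=100.0, c=3.0):
    exponent = 0.33 + 0.2 * (b - 1.01)
    return c * (effort ** exponent) * (sced_percent / 100.0)

# Continuing the hypothetical figures from the effort sketch (b = 1.16):
print(f"{cocomo2_schedule(200, 1.16):.1f} months")   # -> about 20 months
```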

Differences between COCOMO I and COCOMO II


The major differences between COCOMO I AND COCOMO II are:
z COCOMO'81 requires software size in KDSI as an input, but COCOMO II is based
on KSLOC (logical code). The major difference between DSI and SLOC is that a
single Source Line of Code may be several physical lines.
Example: An "if-then-else" statement would be counted as one SLOC, but might be
counted as several DSI.
z COCOMO II addresses the following three phases of the spiral life cycle:
applications development, early design and post architecture
z COCOMO'81 provides point estimates of effort and schedule, but COCOMO II
provides likely ranges of estimates that represent one standard deviation around
the most likely estimate.
z The estimation equation exponent is determined by five scale factors (instead of the
three development modes)
z Changes in cost drivers are:
™ Added cost drivers (7): DOCU, RUSE, PVOL, PLEX, LTEX, PCON, SITE
™ Deleted cost drivers (5): VIRT, TURN, VEXP, LEXP, MODP
™ Alter the retained ratings to reflect more up-to-date software practices
™ Data points in COCOMO I: 63 and COCOMO II: 161
z COCOMO II adjusts for software reuse and reengineering where automated tools
are used for translation of existing software, but COCOMO'81 made little
accommodation for these factors
z COCOMO II accounts for requirements volatility in its estimates

4.6 Putnam Resource Allocation Model


Putnam studied the problem of staffing of software projects and found that the software
development has characteristics very similar to other R & D projects studied by Norden
and that the Rayleigh-Norden curve can be used to relate the number of delivered lines
of code to the effort and the time required to develop the project. While managing R&D
projects for the Army and later at GE, Putnam noticed that software staffing profiles
followed the well-known Rayleigh distribution. By analyzing a large number of Army
projects and using these observations about productivity levels, Putnam derived the
software equation:

B^(1/3) × Size / Productivity = Effort^(1/3) × Time^(4/3)


where,
z Size is the product size (whatever size estimate is used by your organization is
appropriate). Putnam uses ESLOC (Effective Source Lines of Code) throughout his
books.
z B is a scaling factor and is a function of the project size.
z Productivity is the Process Productivity, the ability of a particular software
organization to produce software of a given size at a particular defect rate.
z Effort is the total effort applied to the project in person-years.
z Time is the total schedule of the project in years.
In practical use, when making an estimate for a software task the software equation
is solved for effort:
Effort = [Size / (Productivity × Time^(4/3))]^3 × B
An estimated software size at project completion and organizational process
productivity is used. Plotting effort as a function of time yields the Time-Effort Curve.
The points along the curve represent the estimated total effort to complete the project at
some time. One of the distinguishing features of the Putnam model is that total effort
decreases as the time to complete the project is extended. This is normally represented
in other parametric models with a schedule relaxation parameter.

Figure 4.3: Putnam Resource allocation graph

This estimating method is fairly sensitive to uncertainty in both size and process
productivity. Putnam advocates obtaining process productivity by calibration:
Process Productivity = Size / [(Effort / B)^(1/3) × Time^(4/3)]

Putnam makes a sharp distinction between 'conventional productivity' (size/effort)
and process productivity.
One of the key advantages to this model is the simplicity with which it is calibrated.
Most software organizations, regardless of maturity level, can easily collect size, effort
and duration (time) for past projects. Process productivity, being exponential in nature,
is typically converted to a linear productivity index that an organization can use to track
its own changes in productivity and apply in future effort estimates.
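The sketch below puts the effort form and the calibration form of the software equation together; the value of B and the project figures are hypothetical (B in fact varies with project size, as noted above).

```python
# Minimal sketch of Putnam's software equation as written above, plus the
# calibration of process productivity from a completed project.
# B = 0.39 is assumed here as a typical value for larger systems (B actually
# varies with size); all project figures are hypothetical.
def putnam_effort(size, productivity, time_years, b=0.39):
    """Effort = [Size / (Productivity * Time^(4/3))]^3 * B  (person-years)"""
    return (size / (productivity * time_years ** (4.0 / 3.0))) ** 3 * b

def calibrate_productivity(size, effort_py, time_years, b=0.39):
    """Process Productivity = Size / [(Effort/B)^(1/3) * Time^(4/3)]"""
    return size / ((effort_py / b) ** (1.0 / 3.0) * time_years ** (4.0 / 3.0))

# Calibrate from a (hypothetical) completed project ...
p = calibrate_productivity(size=75_000, effort_py=32, time_years=1.8)

# ... then estimate a new 100,000-line project at a two-year schedule.
print(f"process productivity ~ {p:.0f}")
print(f"estimated effort ~ {putnam_effort(100_000, p, 2.0):.1f} person-years")
```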

4.7 Software Risk Management


Software risk is a future, uncertain event with a probability of occurrence and a potential
for loss. Risk identification and management are the main concerns in every software
project. Effective analysis of software risks helps in effective planning and assignment
of work.

Categories of Risks
1. Schedule Risk
2. Operational risk
3. Technical risk
Risk management is the identification, assessment, and prioritization of risks
followed by coordinated and economical application of resources to minimize, monitor,
and control the probability and/or impact of unfortunate events.
The Risk Management Process describes the steps you need to take to identify,
monitor and control risk. Within the risk process, a risk is defined as any future event
that may prevent you from meeting your team goals. A risk process allows you to identify
each risk, quantify its impact and take action now to prevent it from occurring or to
reduce its impact should it eventuate.
This Risk Process helps you:
z Identify critical and non-critical risks
z Document each risk in depth by completing Risk Forms
z Log all risks and notify management of their severity
z Take action to reduce the likelihood of risks occurring
z Reduce the impact on your business, should risk eventuate
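As an illustration of logging risks and reporting their severity, a minimal sketch of a risk register is given below; the 1-5 scoring scheme and the example risks are hypothetical, not part of any prescribed risk process.

```python
# Illustrative sketch of a simple risk log. The 1-5 scoring scheme and the
# example risks are hypothetical.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (catastrophic)
    mitigation: str

    @property
    def severity(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Key developer leaves mid-project", 2, 4, "Cross-train team members"),
    Risk("Size estimate off by more than 30%", 3, 3, "Re-estimate at each milestone"),
    Risk("Delayed delivery of test hardware", 4, 2, "Arrange a simulator as backup"),
]

# Report risks to management in order of severity (most critical first).
for r in sorted(register, key=lambda r: r.severity, reverse=True):
    print(f"severity {r.severity:2d}: {r.description} -> {r.mitigation}")
```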
Risk Planning
Risk planning is developing and documenting organized, comprehensive, and
interactive strategies and methods for identifying risks. It is also used for performing
risk assessments to establish risk-handling priorities, developing risk-handling plans,
monitoring the status of risk-handling actions, and determining and obtaining the
resources needed to implement the risk management strategies. Risk planning is also
used in the development and implementation of required training and in communicating
risk information up and down the project stakeholder organization.
Risk monitoring and control is the process of identifying and analyzing new risks,
keeping track of these risks and forming contingency plans in case they arise. It
ensures that the resources the company puts aside for a project are being used
properly. Risk monitoring and control is important to a project because it helps ensure
that the project stays on track.

4.8 Summary
An accurate estimate of software size is an essential element in the calculation of
estimated project costs and schedules. The fact that these estimates are required very
early on in the project (often while a contract bid is being prepared) makes size
estimation a formidable task. In general, you present size estimates as lines of code
(KSLOC or SLOC) or as function points. There are constants that you can apply to

convert function points to lines of code for specific languages, but not vice versa. If
possible, choose and adhere to one unit of measurement, since conversion simply
introduces a new margin of error into the final estimate. Developer opinion is otherwise
known as guessing. If you are an experienced developer, you can likely make good
estimates due to familiarity with the type of software being developed. The cost of
medium and large software projects is determined by the cost of developing the
software, plus the cost of equipment and supplies. The latter is generally a constant for
most projects. There are two basic models for estimating software development effort
(or cost): holistic and activity-based. The single biggest cost driver in either model is the
estimated project size.
Size-oriented software metrics are derived by normalizing quality and/or
productivity measures by considering the size of the software that has been produced.
Function-oriented software metrics use a measure of the functionality delivered by the
application as a normalization value. Since ‘functionality’ cannot be measured directly, it
must be derived indirectly using other direct measures. The first step in estimation is to
predict the size of the project. Typically, this will be done using either LOC (the direct
approach) or FP (the indirect approach). Then we use historical data (on similar types of
projects) about the relationship between LOC or FP and time or effort to predict the
estimate of time or effort for this project. The function point measure was originally
designed to be applied to business information systems applications.
To accommodate these applications, the data dimension (the information domain
values discussed previously) was emphasized to the exclusion of the functional and
behavioral (control) dimensions. The feature point measure accommodates applications
in which algorithmic complexity is high. Real-time, process control and embedded
software applications tend to have high algorithmic complexity and are therefore
amenable to the feature point measure. The object point measure sizes software from a
different dimension: it is based on the number and complexity of the following objects:
screens, reports and 3GL components. The basic COCOMO'81 model is a single-valued, static
model that computes software development effort as a function of program size
expressed in estimated lines of code (LOC). Intermediate COCOMO'81 model
computes software development effort as a function of program size and a set of "cost
drivers" that include subjective assessments of product, hardware, personnel, and
project attributes. Detailed COCOMO'81 model incorporates all characteristics of the
intermediate version with an assessment of the cost driver’s impact on each step of the
software engineering process. COCOMO'81 provides point estimates of effort and
schedule, but COCOMO'II provides likely ranges of estimates that represent one
standard deviation around the most likely estimate. COCOMO'II adjusts for software
reuse and reengineering where automated tools are used for translation of existing
software, but COCOMO'81 made little accommodation for these factors. COCOMO II
accounts for requirements volatility in its estimates.

4.9 Check Your Progress


Multiple Choice Questions
1. Risk management is particularly important for software projects because of
_________.
(a) inherent uncertainties that most projects face.
(b) many risks are universal.
(c) replacement may not be experienced.
(d) project depend on organizational environment.
2. The risk that is derived from the software or hardware that is used to develop a
system is called ____________.
(a) people risks.
(b) technology risks.

(c) tool risks.
(d) organizational risk.
3. The likelihood and consequences of the risk that are assessed is called as
_______.
(a) risk identification.
(b) risk analysis.
(c) risk planning.
(d) risk management.
4. What is risk monitoring?
(a) Risk Management process in risk management plan.
(b) Identification of possible of project, product and business risks.
(c) Risks are constantly assessed and plans for risk mitigation are revised as new
information becomes available.
(d) Planning to address the risk either by avoiding it or minimizing it.
5. Which of the following are parameters involved in computing the total cost of
software development project?
(a) Hardware and software costs
(b) Effort costs
(c) Travel and training costs
(d) All of the mentioned
6. Which of the following costs is not part of the total effort cost?
(a) Costs of networking and communications
(b) Costs of providing heating and lighting office space
(c) Costs of lunch time food
(d) Costs of support staff
7. What is related to the overall functionality of the delivered software?
(a) Function-related metrics
(b) Product-related metrics
(c) Size-related metrics
(d) None of the mentioned
8. Which of the following states that work expands to fill the time available?
(a) CASE tools
(b) Pricing to win
(c) Parkinson’s Law
(d) Expert judgement
9. Which model is used during early stages of the system design after the
requirements have been established?
(a) An application-composition model
(b) A post-architecture model
(c) A reuse model
(d) An early design model


10. Which model is used to compute the effort required to integrate reusable
components or program code that is automatically generated by design or program
translation tools?
(a) An application-composition model
(b) A post-architecture model
(c) A reuse model
(d) An early design model

4.10 Questions and Exercises


1. Compute the nominal effort and development time for an organic type software
product with an estimated size of 50,00 lines of code.
2. A software product for business applications costs Rs. 1, 0,00 to buy and its size is
60 KLOC. Assuming that in-house developers cost Rs. 10,00 per month, determine
whether the product should be bought or developed in-house.
3. How Expert Judgment can help in correct estimations of current project?
4. Assume that the size of an organic type software product has been estimated to be
32,000 lines of source code. Assume that the average salary of software engineers
be Rs. 15,000/- per month. Determine the effort required to develop the software
product and the nominal development time.
5. A college student often has several assignments due each week. What are some of
the factors a college student should think about in doing risk management for his or
her Risk Management assignments?
6. Calculate COCOMO effort, TDEV, average staffing, and productivity for an organic
project that is estimated to be 39,800 lines of code.
7. Estimate the cost parameters from the given set of data:
Project Size (KLOC) Cost (programmer-months)
a 30 95
b 5 80
c 20 65
d 50 155
e 100 305
f 10 35

8. Estimate the cost parameters from the given set of data:


Project Size (KLOC) Cost (programmer-months)
a 30 84
b 5 14
c 20 56
d 50 140
e 100 280
f 10 28

4.11 Key Terms


z SDM: It is Software Development Model developed by Putnam in 1978.
z SLIM: It is Software Lifecycle Management developed by Putnam in 1979
z COCOMO: It is Constructive Cost Model developed by Boehm in 1981
z COPMO: It is a Cooperative Programming Model developed by Conte, Dunsmuir,
Shen in 1986

z Function points: They are derived using an empirical relationship based on
countable (direct) measures of software's information domain and assessments of
software complexity.
z Basic COCOMO'81 model: Basic COCOMO'81 model is a single-valued, static
model that computes software development effort (and cost) as a function of
program size expressed in estimated lines of code (LOC).
z Intermediate COCOMO'81 model: Intermediate COCOMO'81 model computes
software development effort as a function of program size and a set of "cost drivers"
that include subjective assessments of product, hardware, personnel, and project
attributes.
z Detailed COCOMO'81 model: Detailed COCOMO'81 model incorporates all
characteristics of the intermediate version with an assessment of the cost driver’s
impact on each step of the software engineering process.

Check Your Progress: Answers


1. (a) inherent uncertainties that most projects face
2. (b) technology risks.
3. (b) risk analysis.
4. (c) Risks are constantly assessed and plans for risk mitigation are revised as new
information becomes available
5. (d) All of the mentioned
6. (c) Costs of lunch time food
7. (a) Function-related metrics
8. (c) Parkinson’s Law
9. (d) An early design model
10. (c) A reuse model

4.12 Further Readings


z Pankaj Jalote, A Concise Introduction to Software Engineering, Springer Science &
Business Media, 2008
z Vasudeva Varma, Varma Vasudeva, Software Architecture: A Case Based
Approach, Pearson Education India, 2009
z K. K. Aggarwal, Yogesh Singh, Software Engineering, New Age International, 2006
z Ajeet Pandey, Neeraj Kumar Goyal, Early Software Reliability Prediction: A Fuzzy
Logic Approach, Springer Science & Business Media, 2013


Unit 5: Software Design


Structure
5.1 Introduction
5.2 Design Objectives
5.3 Design Process
5.4 Software Design Principles
5.5 Design Concepts
5.6 Design Strategy
5.6.1 Function Oriented Design
5.6.2 Design Notations
5.6.3 Pseudocode
5.7 Object Oriented Design
5.8 User Interface Design
5.8.1 User Interface Design Process
5.9 Module level Concepts
5.9.1 Functional Independence
5.9.2 Module Coupling
5.10 Structured Design Methodologies
5.10.1 Data Flow Diagrams
5.10.2 Data Dictionary
5.10.3 DFD
5.10.4 Structured Design
5.11 Summary
5.12 Check Your Progress
5.13 Questions and Exercises
5.14 Key Terms
5.15 Further Readings

Objectives
After studying this unit, you should be able to:
z Understand the features and objectives of a software design
z Describe design principles
z Explain design concepts: abstraction, refinement, modularity, control hierarchy,
information hiding, etc.
z Discuss various design strategies
z Describe function oriented design
z Describe object oriented design
z Explain user interface design.

5.1 Introduction
Design is a representation of something that has to be built. Design focuses on four
major areas: data, architecture, interfaces and components. At the data and
architecture level, design deals with the patterns that are needed to be developed. At
the interface level, human ergonomics dictate the design approach. At the component
level, a programming approach leads to effective data and procedural designs.
Design begins with the requirements model and transforms this requirement into
detailed design: the data structure, system architecture, interface representation and
component-level design.

5.2 Design Objectives


Design fills the gap between the specification and coding as it is not possible to directly
begin coding from the specifications. Design answers the question of how the software
will be built, thus bridging the gap by clearly mentioning the organization of the
program, the methods to be used, the interfaces to be developed, etc. Thus, a design
needs to be:
z Correct and complete
z Understandable
z At the right level
z Maintainable and to facilitate maintenance of the produced code.
Software design evolves in stages but not in one go. Each new phase adds more
details to the previous phase design with constant backtracking to correct the earlier
less formal designs. Figure 5.1 explains the transformation of an informal design into a
detailed design.

Informal design outline → Informal design → More formal design → Finished design

Figure 5.1: Transformation of an Informal Design to a Detailed Design

5.3 Design Process


Design transforms the information model created during analysis stage into data
structures in order to implement the software. E-R diagrams and the data dictionaries
are used as the basis for the design activities.
The architectural design defines the relationship between structural elements of the
software, the design patterns and the constraints. The interface design model describes
how the software communicates with itself, with the system and with the humans who
use it. Thus an interface depicts a flow of information and behavior. The component-
level design converts the structural elements of the software architecture into a
procedural description of software components. This information can be obtained in the
form of PSPEC, CSPEC and STD.
Thus, a design affects the effectiveness of the software that is being built. It fosters
the quality of the software engineering.


Features of a Good Design


1. It must implement all the explicit requirements contained in the analysis stage and
all the implicit requirements mentioned by the customer.
2. It must be readable and understandable for those who code, test and support.
3. It must address the data, functional and behavioral aspects for the implementation
of the software.
4. It must exhibit an architectural structure.
5. It must be modular i.e. the software must be partitioned into separate logical units to
perform specific functions.
6. It must contain distinct representation of data, architecture, interface and modules.

5.4 Software Design Principles


Design is both a process and a model. Good design is a result of creative skills, past
experience and a commitment to quality. Basic design principles allow a software
engineer to navigate the design process. The principles listed below are suggested
by Davis:
1. The design process must not suffer from “tunnel” vision. It must consider alternative
approaches based on alternative requirements, resources available and the design
concepts.
2. The design should be traceable to the analysis model. Because the design realizes
a large number of requirements, the design document must make it possible to trace
each of these requirements back to the analysis model.
3. The design must not reinvent the wheel. Because of time and resource constraints,
the design should implement new ideas by reusing already existing design
patterns wherever possible.
4. The design should minimize the distance between the software and the problem.
i.e. Design must mimic the structure of the problem domain.
5. The design must exhibit uniformity and integration. While creating the design
standard styles and formats should be followed by the design team. The design
components should be carefully integrated and the interfaces should be clearly
defined.
6. The design should be structured to accommodate change.
7. The design should be structured to degrade gently in the event of abnormal
data, events and circumstances.
8. Design must be kept separate from coding: the level of data abstraction and data
hiding is higher in design than in coding, where every minute detail must be
taken care of.
9. The design must be assessed for quality during creation.
10. The design must be reviewed to minimize semantic errors.

5.5 Design Concepts


A number of design concepts exist which provide the software designer with a foundation
from which sophisticated design methods can be applied. These concepts answer questions
such as the following:
1. Which criteria should be used to decompose the software into multiple
components?
2. How is the data or functional detail isolated from the conceptual representation of
the software?
3. What uniform criteria must be applied to define the design quality?

Abstraction
Abstraction is the elimination of the irrelevant and amplification of the essentials. A
number of levels of abstraction exist in a modular design. At the highest level, the
solution is mentioned at a very broad level in terms of the problem. At the lower levels,
a procedure oriented action is taken wherein problem oriented design is coupled with
implementation level design. At the lowest level, the solution is mentioned in the form
that can be directly used for coding.
The various levels of abstraction are as follows:
1. Procedural Abstraction: It is a named sequence of instructions that has a specific and
limited function, e.g. the word “prepare” for tea. The single word implies a long
sequence of actions: going to the kitchen, boiling water in the kettle, adding tea leaves, sugar
and milk, removing the kettle from the gas stove and finally putting the gas off.
2. Data Abstraction: It is a named collection of data that describes a data object,
e.g. the data abstraction for prepare tea would use kettle as a data object, which in turn
would contain a number of attributes like brand, weight, color, etc.
3. Control Abstraction: It implies a program control mechanism without specifying its
internal details. E.g. synchronization semaphore in operating system which is used
for coordination amongst various activities.
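To make these levels concrete, the sketch below illustrates procedural and data abstraction in C for the prepare-tea example discussed above. The structure, attribute and function names are only illustrative; they are not taken from any particular system.

#include <stdio.h>

/* Data abstraction: the kettle is described only by the attributes the
   problem needs; the names chosen here are purely illustrative. */
struct Kettle {
    char  brand[20];
    float weight_kg;
    char  color[12];
};

/* Procedural abstraction: "prepare" names a whole sequence of lower-level
   actions; a caller need not know the individual steps. */
void prepare_tea(const struct Kettle *k)
{
    printf("Boiling water in the %s kettle\n", k->brand);
    printf("Adding tea leaves, sugar and milk\n");
    printf("Removing the kettle and putting the gas off\n");
}

int main(void)
{
    struct Kettle kettle = {"SteelCo", 1.2f, "silver"};
    prepare_tea(&kettle);   /* one abstract instruction hides many concrete ones */
    return 0;
}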

Refinement
Stepwise refinement is a top-down strategy wherein a program is developed by
continually refining the procedural details at each level. This hierarchy is obtained by
decomposing a statement of function step-by-step until the programming language
statements are obtained.
At each step one or more instructions are decomposed into more detailed
instructions until all the instructions have been expressed in terms of programming
language. Parallel to the instructions, data is also refined along with program and
specifications. Each refinement involves design decisions.
Refinement is nothing but elaboration. A statement of function is defined at a high
level of abstraction which mentions the functions conceptually without specifying the
internal working or structure. With continuous refinement designer provides more detail
at each step continuously.
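As an illustration of stepwise refinement, the sketch below (in C, with purely illustrative names) starts from a single abstract statement and refines it until only programming language statements remain.

#include <stdio.h>

/* Step 1 (abstract)   : "determine the largest mark in the table"              */
/* Step 2 (refined)    : "examine each entry; remember the largest seen so far" */
/* Step 3 (code below) : the same function expressed entirely in C statements   */
int largest(const int *table, int n)
{
    int max = table[0];                /* largest value seen so far */
    for (int i = 1; i < n; i++)
        if (table[i] > max)
            max = table[i];
    return max;
}

int main(void)
{
    int marks[] = {67, 89, 45, 91, 72};
    printf("largest = %d\n", largest(marks, 5));
    return 0;
}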

Modularity
Modularity refers to the division of software into separate modules which are differently
named and addressed and are integrated later on in order to obtain the completely
functional software. It is the only property that allows a program to be intellectually
manageable. Single large programs are difficult to understand and read due to large
number of reference variables, control paths, global variables, etc. The desirable
properties of a modular system are:
1. Each module is a well defined system that can be used with other applications.
2. Each module has a single specific purpose.
3. Modules can be separately compiled and stored in a library.
4. Modules can use other modules.
5. Modules should be easier to use than to build.
6. Modules are simpler from outside than inside.
Modularity thus enhances the clarity of design which in turn eases coding, testing,
debugging, documenting and maintenance of the software product.
It might seem to you that on dividing a problem into sub problems indefinitely, the
effort required to develop it becomes negligibly small. However, the fact is that on
dividing the program into numerous small modules, the effort associated with the

integration of these modules grows. Thus, there is a number N of modules that results in
the minimum development cost. However, there is no defined way to predict the value
of this N.
In order to define a proper size of the modules, we need to define an effective
design method to develop a modular system. Following are some of the criteria defined
by Meyer for the same:
1. Modular decomposability: The overall complexity of the program will get reduced
if the design provides a systematic mechanism to decompose the problem into sub-
problems and will also lead to an efficient modular design.
2. Modular composability: If a design method involves using the existing design
components while creating a new system it will lead to a solution that does not re-
invent the wheel.
3. Modular understandability: If a module can be understood as a separate
standalone unit without referring to other modules it will be easier to build and edit.
4. Modular continuity: If a change made in one module does not require changing all
the modules involved in the system, the impact of change-induced side effects gets
minimized.
5. Modular protection: If an unusual event occurs affecting a module and it does not
affect other modules, the impact of error-induced side effects will be minimized.

Software Architecture
Software architecture refers to the overall structure of the software and the way in which
this structure provides conceptual integrity for a system. Architecture is the hierarchical
structure of the program components, the way these components interact and the
structure of data that is being used in these components.
Software architecture design must possess the following properties:
1. Structural properties: This design specifies the components of a system and the
manner in which these components are included and interact in the system.
2. Extra-functional properties: The design must include the requirements for
performance, capacity, security, reliability, adaptability, etc.
3. Families of related systems: The design must include those components that can
be re-used while designing similar kinds of systems.
An architectural design can be represented in one of the following ways:
1. Structural model: It represents the architecture as a collection of components.
2. Framework model: It increases the design abstraction level by identifying the
similar kinds of design frameworks contained in similar applications.
3. Dynamic model: It answers the behavioral aspect of the architecture, specifying
the structural or system configuration changes as a result of changes in the external
environment.
4. Process model: It focuses on the designing of business or technical processes that
must be included in the system.
5. Functional model: It represents the functional hierarchy of the system.

Control Hierarchy
Control hierarchy, or program structure, represents the program components’
organization and the hierarchy of control. Different notations are used to represent the
control hierarchy, the most common of these being a tree-like diagram. Depth of a tree
describes the number of levels of control and width refers to the span of control.
Fan-out is the measure of the number of modules that are directly controlled by a
module. Fan-in indicates the number of modules that directly control a particular
module. A module being controlled is called a subordinate and the controlling module is
called a superordinate.
The control hierarchy defines two other characteristics of the software architecture.
Visibility indicates the set of program components that may be invoked or used as data
by a given component, even if only indirectly. Connectivity indicates the set of components
that are directly invoked or used as data by a given component.

Structural Partitioning
The hierarchical style of the system allows partitioning of the program structure, both
vertically and horizontally. Horizontal partitioning defines a separate branch for each
major program function. Control modules are used to coordinate between the modules
for communication and execution. Horizontal partitioning produces three things as a
result: input, data transformation and output. Partitioning has the following advantages:
1. Software becomes easier to test.
2. Software is easier to maintain.
3. Low side-effects propagation.
4. Software becomes easy to extend.
A disadvantage of horizontal partitioning is that it causes a lot of data to be
transferred among the various modules, which further complicates the control flow.
Vertical partitioning, also called factoring, separates the control and the flow in a
top-down manner. Top level modules perform only control functions while the
low-level modules perform the actual processing, input and output tasks. Thus, a
change made in a control module has a higher chance of being propagated to the
subordinate modules. However, the changes in the lower level modules are not likely to
be propagated to the higher level modules. Thus, the vertically partitioned programs are
easily maintainable.

Figure 5.2(a): Horizontal Partitioning

Figure 5.2(b): Vertical Partitioning

Information Hiding

In order to design the best set of modules out of a single software solution, the concept of
information hiding is useful. The principle of information hiding suggests that each module
should be characterized by a design decision that it hides from all other modules. In other
words, modules should be designed in such a way that the information contained in one module is
not accessible to other modules that have no need for it.
Hiding ensures that effective modularity is achieved by defining a set of
independent modules that communicate with one another only the information that is
needed for proper software functioning. Abstraction defines the procedural entities for
the software while hiding defines the access constraints to procedural details and local
data present in the modules.
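The following sketch shows information hiding in C. The stack module (here simulated with file-scope static data; in a real project it would live in its own source file behind a header) publishes only the push and pop operations, while its internal representation remains inaccessible to other modules. All names are illustrative.

#include <stdio.h>

/* Hidden representation of the stack "module": other modules cannot see or
   depend on these details, so they can be changed without affecting callers. */
static int items[100];
static int top = 0;

/* Published operations: the only way other modules may use the stack. */
void push(int value) { items[top++] = value; }
int  pop(void)       { return items[--top]; }

int main(void)
{
    push(10);
    push(20);
    printf("%d\n", pop());   /* prints 20; main never touches items[] or top */
    return 0;
}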

5.6 Design Strategy


In order to reduce the time required to come up with the code from the design, a
number of designing techniques have been developed. These work at a relatively
higher level and permit multiple levels of design. Some of the design strategies are:
z Bottom-up design: This approach works by first identifying the various modules
that are required by many programs. These modules are grouped together in the
form of a library. Thus we need to decide about the approach to combine these
modules to produce larger and even larger modules till the time the entire program
is obtained. Since the design progresses from the bottom layers toward the top, it is
called the bottom-up design. The chances of re-coding a module are higher in this
technique of design if we start the coding soon after the design. But the coded
module is tested and design can be verified comparatively sooner than the modules
which have not yet been designed. This method, however, involves a lot of intuition
work to be able to think about the total functionality of the module at an early stage.
In case of a mistake at a higher level the rework needs to be done right from the
lower level. However, this technique is more suitable for the system that is being
developed from some existing modules.
z Top-down design: The main idea behind this technique is that the specification is
viewed as a black box and the designer needs to decide how the inner
functioning of this black box is realized in terms of smaller black boxes. This process is repeated
till the time the black boxes can be directly coded. The approach begins by
identifying the major modules, decomposing them into smaller modules and
iterating till the desired level is achieved. This methodology is suitable if all the
specifications are clear in the beginning and the development is from scratch.
z Hybrid design: Pure top-down and bottom-up approaches are not practical as both
the approaches have their own shortcomings as discussed above. This has led to
the evolution of a hybrid approach of design based upon the reusability of the
modules.

5.6.1 Function Oriented Design


Function oriented design is an approach of software design where the design is divided
into a number of interacting units where each unit has a clearly defined purpose. Thus,
a system is designed from functional viewpoint.
This approach was advocated by Niklaus Wirth, who called it stepwise refinement.
It is a top-down approach. We begin with a high level design of what the program does
and then later on, this is refined at each stage, giving greater detail of each of these
parts.
It works fine for smaller programs, but it is not as useful for large programs because it is
not easy to state at the outset everything that a large program does.
The refinement of modules continues till the point they can be easily coded in a
programming language. Because the program is being developed in a top-down

manner, the modules are highly specialized. Each module is used by at most one
other module, i.e. its parent. For a module to be reusable it must conform to the design
reusable structure given in Figure 5.3.

Figure 5.3: Design Reusability Structure

Although the program is function-oriented, it should not necessarily be created in a
top-down manner. If we wish to delay the decision of what the program is supposed to
do, a better choice is to structure the program around the data rather than the actions
taken by the program.

5.6.2 Design Notations


Design notations are meant to be used during the design phase to represent design and
design decisions. A function-oriented design can be represented graphically or
mathematically using the following:
z Data flow diagrams
z Data dictionaries
z Structure charts
z Pseudocode
The first two techniques have been discussed in detail in lesson 4 and the other
two are discussed below:

Structure Charts
A structure chart is a diagram consisting of rectangular boxes representing modules
and connecting arrows. Structure charts encourage top-down structured design and
modularity. Top-down structured design deals with the size and complexity of an
application by breaking it up into a hierarchy of modules that result in an application that
is easier to implement and maintain.
Top-down design allows the systems analyst to judge the overall organizational
objectives and how they can be met in an overall system. Then, the analyst moves to
dividing that system into subsystems and their requirements. The modular programming
concept is useful for the top-down approach: once the top-down approach is taken, the
whole system is broken into logical, manageable-sized modules, or subprograms.
Modular design is the decomposition of a program or application into modules. A
module is a group of executable instructions (code) with a single point of entry and a
single point of exit. A module could be a subroutine, subprogram, or main program. It
also could be a smaller unit of measure such as a paragraph in a COBOL program.
Data passed between structure chart modules has either Data Coupling where only
the data required by the module is passed or Stamp Coupling where more data than
necessary is passed between modules.

The modules in a structure chart fall into three categories:


z Control modules, determining the overall program logic
z Transformational modules, changing input into output
z Specialized modules, performing detailed, functional work
A lower level module should not be required to perform any function of the calling,
higher level module. This would be “improper subordination.”
Modules are represented by rectangles or boxes that include the name of the
module. The highest level module is called the system, root, supervisor, or executive
module. It calls the modules directly beneath it which in turn call the modules beneath
them.
A connection is represented by an arrow and denotes the calling of one module by
another. The arrow points from the calling (higher) module to the called (subordinate)
module.
Note: The structure charts drawn in the Kendall and Kendall text book do not
include the arrowhead on the connections between modules. Kendall and Kendall draw
plain lines between module boxes.
A data couple indicates that a data field is passed from one module to another for
operation and is depicted by an arrow with an open circle at the end.
A flag, or control couple, is a data field (message, control parameter) that is passed
from one module to another and tested to determine some condition. Control flags are
depicted by an arrow with a darkened circle at the end. Sometimes a distinction is made
between a control switch (which may have two values, e.g., yes-no, on-off, one-zero,
etc.) and a control flag (which may have more than two values).
Modules that perform input are referred to as afferent while those that produce
output are called efferent.
A structure chart is a graphic tool that shows the hierarchy of program modules and
interfaces between them. Structure charts include annotations for data flowing between
modules. An arrow from a module P to module Q represents that module P invokes
module Q. Q is called the subordinate of P and P is called the superordinate of Q. The
arrow is labeled with the parameters received as input by Q and the parameters
returned as output by Q, with the appropriate direction of flow. E.g.
#include <stdio.h>

float calculate_average(float *x, int size)
{
    float sum = 0.0;               /* running total of the array elements */
    int i;
    for (i = 0; i < size; i++)
        sum = sum + x[i];
    return (sum / size);           /* mean value returned to the caller */
}

int calculate_product(int a, int b)
{
    return (a * b);                /* product returned to the caller */
}

int main(void)
{
    float a[] = {1.0, 2.0, 3.0};   /* data couple passed to calculate_average */
    int   n = 3;
    int   x = 4, y = 5;            /* data couple passed to calculate_product */

    float avg     = calculate_average(a, n);
    int   product = calculate_product(x, y);

    printf("avg = %f, product = %d\n", avg, product);
    return 0;
}


The structure chart for this program is shown below: main invokes calculate_average (passing a and n and receiving avg) and calculate_product (passing x and y and receiving product).
Figure 5.4: Structure Chart of the Program Above

5.6.3 Pseudocode
Pseudocode notation can be used both at the preliminary stage as well as the
advanced stage. It can be used at any abstraction level. In a Pseudocode, a designer
describes system characteristics using short English language phrases with the help of
keywords like while, if..then..else, End, etc. Keywords and indentation describe the flow
of control while the English phrases describe the processing action that is being taken.
Using the top-down strategy, each English phrase is expanded into a more detailed
Pseudocode till the point it reaches the level of implementation.
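As a small illustration, the fragment below sketches Pseudocode for computing the average of a set of marks (a made-up task chosen only to show the notation); the keywords carry the control flow while the English phrases describe the processing.

Initialize total and count to zero
while more marks remain to be read
    read the next mark
    add the mark to total
    increment count
End
if count is greater than zero then
    set average to total divided by count
else
    report that no marks were entered
End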

Functional Procedural Layers


z Functions are built in layers. Layers are used to depict additional information.
z Level 0:
Function or procedure name
Relationship with other systems
Description of the function purpose
Author, date
z Level 1:
Functional parameters
Global variables
Routines called by functions
Side effects
Input/output assertions
z Level 2:
Local data structures, variables, etc
Timing constraints
Exception handling
Any other limitations
z Level 3:
Body including Pseudocode, structure chart, decision tables, etc.
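A hedged sketch of how these layers might appear as the documented header of a C function is given below; calculate_average is reused from the structure chart example, and the layer contents shown are illustrative rather than prescriptive.

/* Level 0: calculate_average – returns the mean of an array of marks.
 *          Called by the reporting module; author and date recorded here.
 * Level 1: Parameters: x (array of marks), size (number of entries).
 *          Globals used: none.  Routines called: none.  Side effects: none.
 *          Input assertion: size > 0.  Output assertion: result equals sum/size.
 * Level 2: Local data: sum (float), i (int).  Timing: proportional to size.
 *          Exception handling: behaviour is undefined when size is 0.
 * Level 3: Body given below in C (it could equally be Pseudocode or a
 *          structure chart).
 */
float calculate_average(float *x, int size)
{
    float sum = 0.0f;
    for (int i = 0; i < size; i++)
        sum = sum + x[i];
    return sum / size;
}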

5.7 Object Oriented Design


Object-oriented design focuses not on the functions performed by the program but on
the data that is to be manipulated by the program. Thus, it is orthogonal to function-
oriented design. This design concept does not depend upon any specific programming
language. Rather, problems are modeled using objects. Objects have behavior and
state. The various terms related to object-oriented design are: Objects, Classes,
Abstraction, Inheritance and Polymorphism.


z Objects: An object is an entity which has a state and a defined set of operations
which operate on that state. The state is represented as a collection of attributes.
The operations on these objects are performed when requested from some other
object. Objects communicate by passing messages to each other which initiate
operations on objects.
z Messages: Objects communicate through messages. A message comprises the
identity of the target object, the name of the operation and any parameters needed to
perform this operation. In a distributed system, messages take the form of text
which is exchanged among the objects. Messages are implemented as procedure
or function calls.
z Abstraction: Abstraction is the elimination of the irrelevant information and the
amplification of the essentials. It is discussed in detail in Section 5.5 above.
z Class: It is a set of objects that share a common structure and behavior. They act
as blueprint for objects. E.g. a circle class has attributes: center and radius. It has
two operations: area and circumference.
z Inheritance: The property of extending the features of one class to another class is
called inheritance. The low level classes are called subclasses or derived classes
and the high level class is called the super or base class. The derived class inherits
state and behavior from the super class. E.g. A shape class is a super class for two
derived classes: circle and triangle.
z Polymorphism: When we abstract only the interface of an operation and leave the
implementation to subclasses, it is called polymorphic operation and the process is
called polymorphism.
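These ideas are independent of any particular language. Purely for illustration, the C sketch below emulates the shape/circle example: the Shape structure plays the role of the super class and exposes a polymorphic area operation through a function pointer, which the Circle "subclass" overrides. The names and the use of C are assumptions made only for this example.

#include <stdio.h>

/* "Super class" Shape: only the interface of the polymorphic area operation. */
struct Shape {
    double (*area)(const struct Shape *self);   /* overridden by each subclass */
};

/* "Subclass" Circle: inherits the Shape interface and adds its own state. */
struct Circle {
    struct Shape base;      /* the Shape part must come first                 */
    double cx, cy, radius;  /* state: centre and radius                       */
};

static double circle_area(const struct Shape *self)
{
    const struct Circle *c = (const struct Circle *)self;  /* safe: base is first */
    return 3.14159265 * c->radius * c->radius;
}

int main(void)
{
    struct Circle c = { { circle_area }, 0.0, 0.0, 2.0 };
    const struct Shape *s = &c.base;      /* the object viewed through its super class */
    printf("area = %f\n", s->area(s));    /* polymorphic call: resolved at run time    */
    return 0;
}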

Why is Object Orientation better?


Object oriented technology produces better solutions than its structured counterpart. This
is because object oriented systems are easier to adapt to changing requirements, easier to
maintain, and promote design and code re-use to a larger extent. The reasons behind this
are:
z Object orientation works at a higher level of abstraction.
z The object oriented software life cycle does not require vaulting between phases.
z The data is more stable than the functionality it supports.
z It encourages good programming techniques.
z It promotes code and design re-use.

5.8 User Interface Design


Four different models come into the picture when a user-interface is designed. The software
engineer will create a design model; a human engineer creates a user model; the end
user develops a mental image called user’s model or the system perception; and the
implementers of the system create a system image. Each of these models differs
significantly and to arrive at a consistent representation of the interface is the role of the
interface designer.
The user model begins with the creation of a user profile. Users can be classified
as:
z Novices: No syntactic knowledge of the system and little semantic knowledge of the application.
z Knowledgeable, intermittent users: Reasonable semantic knowledge of the application
but low recall of the syntactic information needed to use the interface.
z Knowledgeable, frequent users: Good semantic and syntactic knowledge to be
able to build shortcuts and abbreviated modes of interaction.
The system perception is the image of the system as viewed by the end-users. The
accuracy of the system description will depend upon the user profile and his familiarity
with the software in the application domain. The system image combines the outward

manifestation of the application along with all supporting documentation which
describes the system syntax and semantics. When the system image and the system
perception are similar, the users are able to understand the system better. To
accomplish this, the design must cater to both the system perception and the system
image.

5.8.1 User Interface Design Process


The design process for user interfaces is iterative and can be represented using a
spiral model.
The spiral model as shown in Figure 5.5 comprises of four different sections:
z User, task and environment analysis and modeling
z Interface design
z Interface construction
z Interface validation

Figure 5.5: Spiral Model Indicating the User Interface Design Process

The spiral indicates that each of these tasks will occur more than once with each
pass around the spiral representing additional elaboration of requirements and the
resulting design. In most cases, the implementation involves prototyping.
The initial analysis focuses on the user profiles which will interact with the system.
Different user categories are defined based on their skills, understanding, etc. The
requirements are elicited as per these categories i.e. the software engineer tries to
interpret the system perception from each of the class’ view.
Once general requirements have been designed, a more detailed task analysis is
carried out. The analysis of the user environment focuses on the physical work
environment. This information gathered during analysis phase is used to create an
analysis model for the interface. Using this model as the basis the design
activity begins.
The goal of interface design is to define a set of interface objects and actions that
allow a user to perform all defined tasks in a manner that meets all the usability goals.
The implementation activity begins with the creation of a prototype that allows user
scenarios to be evaluated.
Validation focuses on (i) the ability of the interface to implement every user task
correctly, to accommodate all task variations and to achieve all general user
requirements; (ii) the degree of ease of using and learning the interface and (iii) the
acceptance of the interface as a useful tool in the user’s work.

Because the activities occur iteratively it is not necessary to mention all the details
in the first pass. Subsequent passes elaborate the task detail, design information and
the operations of the interface.

5.9 Module level Concepts


A modular design reduces the design complexity and results in easier and faster
implementation by allowing parallel development of different parts of a system. We
discuss the various concepts of modular design in detail in this section.

5.9.1 Functional Independence


This concept evolves directly from the concepts of abstraction, information hiding and
modularity. Functional independence is achieved by developing functions that perform
only one kind of task and do not excessively interact with other modules.
Independence is important because it makes implementation easier and faster. The
independent modules are easier to maintain and test, reduce error propagation and
can be reused in other programs as well. Thus, functional independence is a good
design feature which ensures software quality. It is measured using two criteria:
coupling and cohesion.

5.9.2 Module Coupling


Coupling is the measure of degree of interdependence amongst modules. Two modules
that are tightly coupled are strongly dependent on each other. However, two modules
that are loosely coupled have only a weak dependence on each other. Uncoupled modules have no
interdependence at all between them. The various types of coupling are
depicted in Figure 5.6.

Uncoupled: no dependence; loosely coupled: some dependence; highly coupled: large dependence
Figure 5.6: Module Coupling

A good design is the one that has low coupling. Coupling is measured by the
number of interconnections between the modules. I.e. the coupling increases as the
number of calls between modules increases or as the amount of shared data grows. Thus,
it can be said that a design with high coupling will have more errors. Different types of
coupling are content, common, external, control, stamp and data. The level of coupling
in each of these types is given in Figure 5.7.

Data coupling
Stamp coupling
Control coupling
External coupling
Common coupling
Content coupling

Figure 5.7: Types of Modules Coupling

The direction of the arrow in Figure 5.7 points from the lowest coupling to highest
coupling. The strength of coupling is influenced by the complexity of the modules, the
type of connection and the type of communication. Modifications done in the data of one
block may require changes in another block of a different module which is coupled to the
former module. However, if the communication takes place in the form of parameters,
then the internal details of the modules are not required to be modified while making
changes in the related module.
Given two procedures X and Y, the type of coupling between them can be identified as follows:
1. Data Coupling: X and Y communicate by passing parameters to one another, and only
the data that is actually needed. Thus, if a procedure needs a part of a data structure, it
should be passed just that part and not the complete structure.
2. Stamp Coupling: X and Y make use of the same composite data type but perform
different operations on it.
3. Control Coupling (activating): X transfers control to Y through procedure calls.
4. Common Coupling: Both X and Y use some shared data, e.g. global variables. This is
highly undesirable, because if we wish to change the shared data, all the
procedures accessing this shared data will need to be examined and possibly modified.
5. Content Coupling: X modifies Y either by branching into the middle of Y or by
changing the local data values or instructions of Y. This is the worst form of coupling.
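A small C sketch of the difference between data coupling and common coupling is given below; the pay-calculation names are invented purely for illustration.

#include <stdio.h>

/* Data coupling: the function receives exactly the value it needs as a parameter. */
static float compute_tax(float gross_pay)
{
    return gross_pay * 0.1f;
}

/* Common coupling: the function depends on a shared global variable, so every
   routine that touches g_gross_pay must be examined whenever it changes. */
static float g_gross_pay;

static float compute_tax_common(void)
{
    return g_gross_pay * 0.1f;
}

int main(void)
{
    printf("%.2f\n", compute_tax(1000.0f));   /* loosely coupled call             */

    g_gross_pay = 1000.0f;                    /* hidden dependency on shared data */
    printf("%.2f\n", compute_tax_common());
    return 0;
}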
Cohesion
Cohesion is the measure of the degree of functional relatedness of the elements within a
module. A strongly cohesive module implements closely related functionality and interacts
little with other modules. Thus, in order to reduce the interaction amongst modules, a higher
cohesion within each module is desired. Different types of cohesion are listed in Figure 5.8.

Functional cohesion
Sequential cohesion
Communication cohesion
Procedural cohesion
Temporal cohesion
Logical cohesion
Coincidental cohesion

Figure 5.8: Types of Module Cohesion

The direction of the arrow in Figure 5.8 indicates the worst degree of cohesion to
the best degree of cohesion. The worst degree of cohesion, coincidental, exists when the
parts of a module are not related to each other in any way; unrelated functions, processes
and data simply exist in the same module.
Logical is the next higher level where several logically related functions or data are
placed in the same module.
At times a module initializes a system or a set of variables. Such a module performs
several functions in sequence, but these functions are related only by the timing
involved. Such cohesion is said to be temporal.
When the functions are grouped in a module to ensure that they are performed in a
particular order, the module is said to be procedurally cohesive. Alternatively, some
functions that use the same data set and produce the same output are combined in one
module. This is known as communicational cohesion.
If the output of one part of a module acts as an input to the next part, the module is
said to have sequential cohesion. Because such a module is built around a chain of transformations,
it may not contain all the processing elements of a complete function.
module cohesion techniques is functional cohesion. Here, every processing element is
important to the function and all the important functions are contained in the same


module. A functionally cohesive function performs only one kind of function and only
that kind of function.
Given a procedure carrying out operations A and B, we can identify various forms of
cohesion between A and B:
1. Functional Cohesion: A and B are part of one function and hence, are contained
in the same procedure.
2. Sequential Cohesion: A produces an output that is used as input to B. Thus they
can be a part of the same procedure.
3. Communicational Cohesion: A and B take the same input or generate the same
output. They can be parts of different procedures.
4. Procedural Cohesion: A and B are structured in a similar manner. Thus, it is not
advisable to put them both in the same procedure.
5. Temporal Cohesion: Both A and B are performed at more or less the same time.
Thus, they cannot necessarily be put in the same procedure because they may or
may not be performed at once.
6. Logical Cohesion: A and B perform logically similar operations.
7. Coincidental Cohesion: A and B are not conceptually related but share the same
code.
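For illustration, the C sketch below contrasts a functionally cohesive module with a coincidentally cohesive one; the payroll flavour of the example is assumed, not taken from the text.

#include <stdio.h>

/* Functional cohesion: every statement contributes to a single task --
   computing the gross pay. */
static float gross_pay(float regular_pay, float overtime_pay)
{
    return regular_pay + overtime_pay;
}

/* Coincidental cohesion: unrelated actions grouped together only because they
   happened to be written at the same time; the module is hard to name or reuse. */
static void misc_tasks(void)
{
    printf("Printing the report header\n");
    printf("Resetting the printer\n");
    printf("Validating an employee code\n");
}

int main(void)
{
    printf("gross pay = %.2f\n", gross_pay(900.0f, 150.0f));
    misc_tasks();
    return 0;
}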

5.10 Structured Design Methodologies


Structured analysis is used to carry out the top-down decomposition of a set of high-
level functions depicted in the problem description and to represent them graphically.
During structured analysis, functional decomposition of the system is achieved. That is,
each function that the system performs is analysed and hierarchically decomposed into
more detailed functions. Structured analysis technique is based on the following
essential underlying principles:
z Top-down decomposition approach.
z Divide and conquer principle. Each function is decomposed independently.
z Graphical representation of the analysis results using Data Flow Diagrams (DFDs).

5.10.1. Data Flow Diagrams


The DFD (also known as a bubble chart) is a simple graphical formalism that can be
used to represent a system in terms of the input data to the system, various processing
carried out on these data, and the output data generated by the system. A DFD model
uses a very limited number of primitive symbols (as shown in fig. 5.9) to represent the
functions performed by a system and the data flow among these functions.

Figure 5.9: Symbols used for designing DFDs

The main reason why the DFD technique is so popular is probably because of the
fact that DFD is a very simple formalism – it is simple to understand and use. Starting
with a set of high-level functions that a system performs, a DFD model hierarchically
represents various subfunctions. In fact, any hierarchical model is simple to understand.
The human mind is such that it can easily understand any hierarchical model of a
system – because in a hierarchical model, starting with a very simple and abstract
model of a system, different details of the system are slowly introduced through different
hierarchies. The data flow diagramming technique also follows a very simple set of
intuitive concepts and rules. DFD is an elegant modeling technique that turns out to be
useful not only to represent the results of structured analysis of a software problem but
also for several other applications such as showing the flow of documents or items in an
organization.
5.10.2 Data Dictionary
A data dictionary lists all data items appearing in the DFD model of a system. The data
items listed include all data flows and the contents of all data stores appearing on the
DFDs in the DFD model of a system. A data dictionary lists the purpose of all data items
and the definition of all composite data items in terms of their component data items.
For example, a data dictionary entry may represent that the data grossPay consists of
the components regularPay and overtimePay.
grossPay = regularPay + overtimePay
For the smallest units of data items, the data dictionary lists their name and their
type. A data dictionary plays a very important role in any software development process
because of the following reasons:
z A data dictionary provides a standard terminology for all relevant data for use by
engineers working in a project. A consistent vocabulary for data items is very
important, since in large projects different engineers of the project have a tendency
to use different terms to refer to the same data, which unnecessarily causes
confusion.
z The data dictionary provides the analyst with a means to determine the definition of
different data structures in terms of their component elements.
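Continuing the grossPay example, a few hypothetical data dictionary entries might look as follows (“=” reads as “is composed of” and “+” as “and”); the payRecord item and the field names are invented purely for illustration.

payRecord   = employeeId + employeeName + grossPay
grossPay    = regularPay + overtimePay
regularPay  : numeric        /* smallest unit: only its name and type are listed */
overtimePay : numeric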

5.10.3 DFD Levels and Model
The DFD model of a system typically consists of several DFDs, viz.,
level 0 DFD, level 1 DFD, level 2 DFDs, etc. A single data dictionary should capture all
the data appearing in all the DFDs constituting the DFD model of a system.
Balancing DFDs
The data that flow into or out of a bubble must match the data flow at the next level of
DFD. This is known as balancing a DFD. The concept of balancing a DFD has been
illustrated in fig. 5.10. In the level 1 DFD, data items d1 and d3 flow out of the
bubble 0.1 and the data item d2 flows into the bubble 0.1. In the next level, bubble 0.1 is
decomposed. The decomposition is balanced, as d1 and d3 flow out of the level 2
diagram and d2 flows in.

Figure 5.10: An example showing balanced decomposition


Context Diagram

The context diagram is the most abstract data flow representation of a system. It
represents the entire system as a single bubble. This bubble is labeled according to the
main function of the system. The various external entities with which the system
interacts and the data flow occurring between the system and the external entities are
also represented. The data input to the system and the data output from the system are
represented as incoming and outgoing arrows. These data flow arrows should be
annotated with the corresponding data names.

5.10.4 Structured Design


The aim of structured design is to transform the results of the structured analysis (i.e. a
DFD representation) into a structure chart. A structure chart represents the software
architecture, i.e. the various modules making up the system, the module dependency,
and the parameters that are passed among the different modules. Since the main focus
in a structure chart representation is on the module structure of software and the
interaction between the different modules, the procedural aspects are not represented.

Flow Chart Vs. Structure Chart


We are all familiar with the flow chart representation of a program. Flow chart is a
convenient technique to represent the flow of control in a program. A structure chart
differs from a flow chart in three principal ways:
z It is usually difficult to identify the different modules of the software from its flow
chart representation.
z Data interchange among different modules is not represented in a flow chart.
z Sequential ordering of tasks inherent in a flow chart is suppressed in a structure
chart.

Transformation of a DFD into a Structure Chart


Systematic techniques are available to transform the DFD representation of a problem
into a module structure represented by a structure chart. Structured design provides two
strategies:
z Transform Analysis
z Transaction Analysis

Transform Analysis
Transform analysis identifies the primary functional components (modules) and the high
level inputs and outputs for these components. The first step in transform analysis is to
divide the DFD into 3 types of parts:
z Input
z Logical processing
z Output
The input portion of the DFD includes processes that transform input data from
physical (e.g. character from terminal) to logical forms (e.g. internal tables, lists, etc.).
Each input portion is called an afferent branch.
The output portion of a DFD transforms output data from logical to physical form.
Each output portion is called efferent branch. The remaining portion of a DFD is called
central transform.

5.11 Summary
Design is the technical kernel of software engineering. Design produces a representation
of software that can be assessed for quality. During the design process, the data structures,
architecture, interfaces and components of the software are continuously refined,
documented and reviewed.
A large number of design principles and concepts have been developed over the last few
decades. Modularity and abstraction allow the designers to simplify the software and
make it reusable. Information hiding and functional independence provide heuristics for
achieving efficient modularity.
The design process comprises the activities that reduce the level of abstraction.
Structured programming allows the designer to define algorithms which are less
complex, easier to read, test and maintain. The user interface is the most important
element of a computer-based system because if the interface is not designed properly
the user may not be able to effectively tap the power of the application. The user
interface design starts with the identification of user, task and environment
requirements. It is followed by task analysis to define user tasks and actions. After
identifying the tasks, user scenarios are created and analyzed to define a set of
interface objects and actions. The user interface is a window into the software and
moulds the user’s perception of the system.

5.12 Check Your Progress


Multiple Choice Questions
1. A software project classifies system entities, their activities and relationships. The
classification and abstraction of system entities is important. Which modeling
methodology most clearly shows the classification and abstraction of entities in the
system?
(a) Data flow model.
(b) Event driven model.
(c) Object oriented model.
(d) Entity-relationship model.
2. The model which has been widely used in database design is __________.
(a) entity-relationship model.
(b) data flow model.
(c) behavioral model.
(d) context model.
3. A step-by-step set of instructions used to solve a problem is known as
(a) Sequential structure
(b) A List
(c) A plan
(d) An Algorithm
4. In the Analysis phase, the development of the ____________ occurs, which is a
clear statement of the goals and objectives of the project.
(a) documentation
(b) flowchart
(c) program specification
(d) design
5. Who designs and implements database structures?
(a) Programmers
(b) Project managers
(c) Technical writers
(d) Database administrators


6. ____________ is the process of translating a task into a series of commands that a
computer will use to perform that task.
(a) Project design
(b) Installation
(c) Systems analysis
(d) Programming
7. In Design phase, which is the primary area of concern?
(a) Architecture
(b) Data
(c) Interface
(d) All of the mentioned
8. The importance of software design can be summarized in a single word which is:
(a) Efficiency
(b) Accuracy
(c) Quality
(d) Complexity
9. Cohesion is a qualitative indication of the degree to which a module
(a) can be written more compactly.
(b) focuses on just one thing.
(c) is able to complete its function in a timely manner.
(d) is connected to other modules and the outside world.
10. Coupling is a qualitative indication of the degree to which a module
(a) can be written more compactly.
(b) focuses on just one thing.
(c) is able to complete its function in a timely manner.
(d) is connected to other modules and the outside world.

5.13 Questions and Exercises


1. What is software design?
2. What are the phases of the design process?
3. What do you understand by design concepts?
4. What is the relationship between abstract data types and classes?
5. Define objects, messages, abstraction, class, inheritance and polymorphism.
6. Discuss the differences between function oriented designs and object oriented
design.
7. What is modularity? List the important properties of a modular system.
8. Discuss the relation between information hiding and module independence.

5.14 Key Terms


z Abstraction: Abstraction is the elimination of the irrelevant and amplification of the
essentials
z Refinement: Stepwise refinement is a top-down strategy wherein a program is
developed by continually refining the procedural details at each level.

z Modularity: Modularity refers to the division of software into separate modules
which are differently named and addressed and are integrated later on in order to
obtain the completely functional software.
z Control Hierarchy: Control hierarchy, or program structure, represents the
program components’ organization and the hierarchy of control.
z Function oriented design: It is an approach of software design where the design
is divided into a number of interacting units where each unit has a clearly defined
purpose.
z Structure chart: It is a diagram consisting of rectangular boxes representing
modules and connecting arrows.
z Objects: An object is an entity which has a state and a defined set of operations
which operate on that state.

Check Your Progress: Answers


1. (d) Entity-relationship model.
2. (a) entity-relationship model
3. (d) An Algorithm
4. (c) program specification
5. (d) Database administrators
6. (d) Programming
7. (d) All of the mentioned
8. (c) Quality
9. (b) focuses on just one thing.
10. (d) is connected to other modules and the outside world.

5.15 Further Readings


z Ajeet Pandey, Neeraj Kumar Goyal, Early Software Reliability Prediction: A Fuzzy
Logic Approach, Springer Science & Business Media, 2013
z Vasudeva Varma, Varma Vasudeva, Software Architecture: A Case Based
Approach, Pearson Education India, 2009
z Pankaj Jalote, A Concise Introduction to Software Engineering, Springer Science &
Business Media, 2008
z K. K. Aggarwal, Yogesh Singh, Software Engineering, New Age International, 2006
