
Chapter 2

Unit Structure

2.0 Objectives

2.1 Introduction

2.2 Reducing Software Product Size


2.2.1 Languages

2.2.2 Object-Oriented Methods and Visual Modeling

2.2.3 Reuse

2.2.4 Commercial Components

2.3 Improving Software Processes

2.4 Improving Team Effectiveness

2.5 Improving Automation through Software Environments

2.6 Achieving Required Quality

2.7 Peer Inspections

2.8 Summary

2.9 Review Questions

2.10 Bibliography, References and Further Reading


2.0 Objectives
By the end of this chapter, you will be able to identify the basic parameters of the cost
estimation equation and explain how these factors are essential to improving the software
economics of a project.

2.1 Introduction
Modern software technology is enabling systems to be built with fewer human-generated source lines.
Improvements in the economics of software development have been not only difficult to achieve but
also difficult to measure and substantiate. In software textbooks, trade journals, and product literature,
the treatment of this topic is plagued by inconsistent jargon, inconsistent units of measure,
disagreement among experts, and unending hyperbole. Examining any one aspect of improving software
economics leads to fairly narrow conclusions and observations of limited value. If an
organization focuses too much on improving only one aspect of its software development process, it will
not realize any significant economic improvement even though it improves this one aspect
spectacularly. The key to substantial improvement is a balanced attack across several interrelated
dimensions. The important dimensions of software economics revolve around the five basic parameters of
the software cost model:
1. Reducing the size or complexity of what needs to be developed
2. Improving the development process
3. Using more-skilled personnel and better teams (not necessarily the same thing)
4. Using better environments (tools to automate the process)
5. Trading off or backing off on quality thresholds

The five basic parameters of the cost estimation equation are not mutually exclusive, nor are they
independent of one another. They are interrelated. An important factor that has influenced software
technology improvements across the board is the ever-increasing advances in hardware performance.
The availability of more cycles, more memory, and more bandwidth has eliminated many sources of
software implementation complexity. Simpler, brute-force solutions are now possible, and hardware
improvements are probably the enabling advance behind most software technology improvements of
substance.

2.2 Reducing Software Product Size


The most significant way to improve affordability and return on investment (ROI) is usually to produce a
product that achieves the design goals with the minimum amount of human-generated source material.
Component-based development is introduced here as the general term for reducing the “source”
language size necessary to achieve a software solution. Reuse, object-oriented technology, automatic
code production, and higher order programming languages are all focused on achieving a given system
with fewer lines of human-specified source directives (statements). This size reduction is the primary
motivation behind improvements in higher order languages (such as C++, Ada 95, Java, Visual Basic, and
fourth-generation languages), automatic code generators (CASE tools, visual modeling tools, GUI
builders), reuse of commercial components (operating systems, windowing environments, database
management systems, middleware, networks), and object-oriented technologies (Unified Modeling
Language, visual modeling tools, architecture frameworks). The mature and reliable size reduction
technologies are extremely powerful at producing economic benefits. Immature size reduction
technologies may reduce the development size but require so much more investment in achieving the
necessary levels of quality and performance that they have a negative impact on overall project
performance.

2.2.1 Languages
Universal function points (UFPs) are useful estimators for language-independent, early life-cycle
estimates. The basic units of function points are external user inputs, external outputs, internal logical
data groups, external data interfaces, and external inquiries. SLOC metrics are useful estimators for
software after a candidate solution is formulated and an implementation language is known. Substantial
data have been documented relating SLOC to function points [Jones, 1995]. Some of these results are
shown in the table below.

The data in the table illustrate why people are interested in modern languages such as C++, Ada 95,
Java, and Visual Basic. The level of expressibility is very attractive. However, care must be taken in
applying these data because of numerous possible misuses. Each language has a domain of usage. Visual
Basic is very expressive and powerful in building simple interactive applications, but it would not be a
wise choice for a real-time, embedded, avionics program. Similarly, Ada 95 might be the best language
for a catastrophic cost-of-failure system that controls a nuclear power plant, but it would not be the
best choice for a highly parallel, scientific, number-crunching program running on a supercomputer.
Software industry data such as these, which span application domains, corporations, and technology
generations, must be interpreted and used with great care.
Because the difference between large and small projects has a greater than linear impact on the life-
cycle cost, the use of the highest level language and appropriate commercial components has a large
potential impact on cost. Furthermore, simpler is generally better: Reducing size usually increases
understandability, changeability, and reliability. One typical negative side effect is that the higher level
abstraction technologies tend to degrade performance, increasing consumption of resources such as
processor cycles, memory, and communications bandwidth. Most of these drawbacks have been
overcome by hardware performance improvements and optimizations. These improvements are far less
effective in embedded platforms.
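The expansion from function points to SLOC described above can be sketched as a simple estimator. The function-point weights and the SLOC-per-function-point ratios below are illustrative assumptions in the spirit of the [Jones, 1995] data, not the published figures:

```python
# Sketch: language-independent size estimate from unadjusted function
# points (UFPs), then conversion to SLOC once a language is chosen.
# All weights and ratios are assumed values for illustration.

# Average weights for the five basic function-point units
# (the simple/average/complex distinction is omitted for brevity).
FP_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "internal_logical_files": 10,
    "external_interface_files": 7,
    "external_inquiries": 4,
}

# Assumed expansion ratios: SLOC per function point.
SLOC_PER_FP = {"assembly": 320, "c": 128, "java": 53, "visual_basic": 32}

def function_points(counts):
    """Unadjusted function points from counts of the five basic units."""
    return sum(FP_WEIGHTS[unit] * n for unit, n in counts.items())

def estimated_sloc(counts, language):
    """Convert a function-point estimate to SLOC for a chosen language."""
    return function_points(counts) * SLOC_PER_FP[language]

counts = {
    "external_inputs": 20,
    "external_outputs": 15,
    "internal_logical_files": 8,
    "external_interface_files": 4,
    "external_inquiries": 10,
}
print(function_points(counts))         # 303 UFPs, language-independent
print(estimated_sloc(counts, "c"))     # 38784
print(estimated_sloc(counts, "java"))  # 16059
```

Under these assumed ratios, the same 303-function-point system requires roughly 2.4 times as many human-generated lines in C as in Java, which is the economic argument for higher order languages.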

2.2.2 Object-Oriented Methods and Visual Modeling


The fundamental impact of object-oriented technology is in reducing the overall size of what needs to
be developed, by providing more-formalized notations for capturing and visualizing software
abstractions. Booch has described three other reasons that certain object-oriented projects succeed
[Booch, 1996]. These are interesting examples of the interrelationships among the dimensions of
improving software economics.
1. An object-oriented model of the problem and its solution encourages a common vocabulary
between the end users of a system and its developers, thus creating a shared understanding of
the problem being solved.
2. The use of continuous integration creates opportunities to recognize risk early and make
incremental corrections without destabilizing the entire development effort.
3. An object-oriented architecture provides a clear separation of concerns among disparate
elements of a system, creating firewalls that prevent a change in one part of the system from
rending the fabric of the entire architecture.

2.2.3 Reuse
Reusing existing components and building reusable components have been natural software
engineering activities since the earliest improvements in programming languages. Software design
methods have always dealt implicitly with reuse in order to minimize development costs while achieving
all the other required attributes of performance, feature set, and quality. One of the biggest obstacles to
reuse has been fragmentation of languages, operating systems, notations, machine architectures, tools,
and even “standards”. Most truly reusable components of value are transitioned to commercial
products supported by organizations with the following characteristics:
• They have an economic motivation for continued support.
• They take ownership of improving product quality, adding new features, and transitioning to
new technologies.
• They have a sufficiently broad customer base to be profitable.
The cost of developing a reusable component is not trivial. To succeed in the marketplace for
commercial components, an organization needs three enduring elements: a development group, a
support infrastructure, and a product-oriented sales and marketing infrastructure. Reuse is an important
discipline that has an impact on the efficiency of all workflows and the quality of most artifacts.
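The economics behind these observations can be sketched with a simple break-even calculation. The cost figures below are normalized, assumed values for illustration, not measured data:

```python
# Sketch: break-even analysis for building a reusable component.
# A reusable component is assumed to cost roughly twice a one-off
# solution (generalization, documentation, support infrastructure),
# plus a small per-project integration cost. All figures are assumed.

ONE_OFF_COST = 1.0         # normalized cost of a single-project solution
REUSABLE_MULTIPLIER = 2.0  # assumed extra cost of making it reusable
INTEGRATION_COST = 0.2     # assumed per-project cost of adopting it

def total_cost_custom(n_projects):
    """Every project builds its own custom component."""
    return n_projects * ONE_OFF_COST

def total_cost_reused(n_projects):
    """One reusable component, integrated by each project."""
    return ONE_OFF_COST * REUSABLE_MULTIPLIER + n_projects * INTEGRATION_COST

# Find the smallest number of projects at which reuse pays off.
n = 1
while total_cost_reused(n) >= total_cost_custom(n):
    n += 1
print(n)  # 3 -- under these assumptions, reuse wins from the third use on
```

The multipliers are hypothetical, but the shape of the result is the point: the up-front premium is only recovered across a sufficiently broad customer base, which is why most reusable components of value end up as supported commercial products.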

2.2.4 Commercial Components


A common approach being pursued today in many domains is to maximize integration of commercial
components and off-the-shelf products. While the use of commercial components is certainly desirable
as a means of reducing custom development, it has not proven to be straightforward in practice. Table
given below identifies some of the advantages and disadvantages of using commercial components.
These trade-offs are particularly acute in mission-critical domains. Because the trade-offs frequently
have global effects on quality, cost, and supportability, the selection of commercial components over
development of custom components has significant impact on a project’s overall architecture. The
paramount message here is that these decisions must be made early in the life cycle as part of the
architectural design.
2.3 Improving Software Processes
For software-oriented organizations, there are many processes and sub-processes. We use three distinct
process perspectives.
• Metaprocess: an organization's policies, procedures, and practices for pursuing a software-
intensive line of business. The focus of this process is on organizational economics, long-term
strategies, and software ROI.
• Macroprocess: a project's policies, procedures, and practices for producing a complete software
product within certain cost, schedule, and quality constraints. The focus of the macroprocess is
on creating an adequate instance of the metaprocess for a specific set of constraints.
• Microprocess: a project team's policies, procedures, and practices for achieving an artifact of the
software process. The focus of the microprocess is on achieving an intermediate product
baseline with adequate quality and adequate functionality as economically and rapidly as
practical.
Although these three levels of process overlap somewhat, they have different objectives, audiences,
metrics, concerns, and time scales, as shown in the above table. The macroprocess is the project-level
process that affects the cost estimation model as discussed further.
To achieve success, most software projects require an incredibly complex web of sequential and parallel
steps. As the scale of a project increases, more overhead steps must be included just to manage the
complexity of this web. All project processes consist of productive activities and overhead activities.
Productive activities result in tangible progress toward the end product. For software efforts, these
activities include prototyping, modeling, coding, debugging, and user documentation. Overhead
activities that have an intangible impact on the end product are required in plan preparation,
documentation, progress monitoring, risk assessment, financial assessment, configuration control,
quality assessment, integration, testing, late scrap and rework, management, personnel training,
business administration, and other tasks. Overhead activities include many value-added efforts, but, in
general, the less effort devoted to these activities, the more effort that can be expended in productive
activities. The objective of process improvement is to maximize the allocation of resources to productive
activities and minimize the impact of overhead activities on resources such as personnel, computers,
and schedule.
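The allocation of effort between productive and overhead activities can be illustrated with a small sketch; the staff-month figures below are assumed, hypothetical values:

```python
# Sketch: effort allocation across productive and overhead activities.
# The staff-month figures are assumed values for illustration.

effort = {
    # productive activities
    "prototyping": 6, "modeling": 4, "coding": 12, "debugging": 8,
    "user_documentation": 3,
    # overhead activities
    "planning": 2, "progress_monitoring": 2, "configuration_control": 2,
    "quality_assessment": 3, "late_scrap_and_rework": 9, "training": 2,
}
PRODUCTIVE = {"prototyping", "modeling", "coding", "debugging",
              "user_documentation"}

def productive_fraction(effort):
    """Share of total effort spent on tangible progress toward the product."""
    total = sum(effort.values())
    productive = sum(v for k, v in effort.items() if k in PRODUCTIVE)
    return productive / total

print(round(productive_fraction(effort), 2))  # 0.62

# Process improvement targets the overhead: halving late scrap and
# rework raises the share of effort available for productive work.
effort["late_scrap_and_rework"] /= 2
print(round(productive_fraction(effort), 2))  # 0.68
```

The numbers are invented, but the mechanism is the one the text describes: reducing an overhead activity such as late scrap and rework directly increases the fraction of resources available for productive activities.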
In a perfect software engineering world with an immaculate problem description, an obvious solution
space, a development team of experienced geniuses, adequate resources, and stakeholders with
common goals, we could execute a software development process in one iteration with almost no scrap
and rework. Because we work in an imperfect world, however, we need to manage engineering
activities so that scrap and rework profiles do not have an impact on the win conditions of any
stakeholder. This should be the underlying premise for most process improvements.

2.4 Improving Team Effectiveness


It has long been understood that differences in personnel account for the greatest swings in
productivity. The original COCOMO model, for example, suggests that the combined effects of personnel
skill and experience can have an impact on productivity of as much as a factor of four. This is the
difference between an unskilled team of amateurs and a veteran team of experts. In practice, it is risky
to assess a given team as being off-scale in either direction. Balance and coverage are two of the most
important aspects of excellent teams. Whenever a team is out of balance, it is vulnerable.
Teamwork is much more important than the sum of the individuals. With software teams, a project
manager needs to configure a balance of solid talent with highly skilled people in the leverage positions.
Some maxims of team management include the following:
• A well-managed project can succeed with a nominal engineering team.
• A mismanaged project will almost never succeed, even with an expert team of engineers.
• A well-architected system can be built by a nominal team of software builders.
• A poorly architected system will flounder even with an expert team of builders.

In examining how to staff a software project, Boehm offered the following five staffing principles
[Boehm, 1981].
1) The principle of top talent: Use better and fewer people.
• This tenet is fundamental, but it can be applied only so far. There is a “natural” team size for
most jobs, and being grossly over or under this size is bad for team dynamics because it results
in too little or too much pressure on individuals to perform.
2) The principle of job matching: Fit the tasks to the skills and motivation of the people available.
• This principle seems obvious. With software engineers, however, it is more difficult to
discriminate the mostly intangible personnel skills and optimal task allocations. Personal
agendas also complicate assignments. On software teams it is common for talented
programmers to seek promotion to architect and manager positions. There are countless cases of great
software engineers being promoted prematurely into positions for which they were unskilled
and unqualified.
3) The principle of career progression: An organization does best in the long run by helping its people
to self-actualize.
• Good performers usually self-actualize in any environment. Organizations can help or hinder
employee self-actualization. Organizational training programs are typically strategic
undertakings with educational value. Project training programs are purely tactical, intended to
be useful and applied the day after training ends.
4) The principle of team balance: Select people who will complement and harmonize with one another.
• Although this principle sounds a little drippy, its spirit is the paramount factor in good
teamwork. Software team balance has many dimensions, and when a team is unbalanced in
any one of them, a project becomes seriously at risk. These dimensions include:
i) Raw skills: intelligence, objectivity, creativity, organization, analytical thinking
ii) Psychological makeup: leaders and followers, risk takers and conservatives, visionaries and
nitpickers, cynics and optimists
iii) Objectives: financial, feature set, quality, timeliness
5) The principle of phaseout: Keeping a misfit on the team doesn't benefit anyone.
• This is really a subprinciple of the other four. A misfit gives you a reason to find a better person
or to live with fewer people. A misfit demotivates other team members, will not self-actualize,
and disrupts the team balance in some dimension. Misfits are obvious, and it is almost never
right to procrastinate weeding them out.

Software project managers need many leadership qualities in order to enhance team effectiveness. The
following are some crucial attributes of successful software project managers that deserve much more
attention:
1) Hiring skills
• Few decisions are as important as hiring decisions. Placing the right person in the right job
seems obvious but is surprisingly hard to achieve.
2) Customer-interface skill
• Avoiding adversarial relationships among stakeholders is a prerequisite for success.
3) Decision-making skill
• The zillion books written about management have failed to provide a clear definition of this
attribute. Decision-making skill seems obvious despite its intangible definition.
4) Team-building skill
• Teamwork requires that a manager establish trust, motivate progress, exploit eccentric prima
donnas, transition average people into top performers, eliminate misfits, and consolidate
diverse opinions into a team direction.
5) Selling skill
• Successful project managers must sell all stakeholders (including themselves) on decisions and
priorities, sell candidates on job positions, sell changes to the status quo in the face of
resistance, and sell achievements against objectives. In practice, selling requires continuous
negotiation, compromise, and empathy.

2.5 Improving Automation through Software Environments


The tools and environment used in the software process generally have a linear effect on the
productivity of the process. Planning tools, requirements management tools, visual modeling tools,
compilers, editors, debuggers, quality assurance analysis tools, test tools, and user interfaces provide
crucial automation support for evolving the software engineering artifacts. Above all, configuration
management environments provide the foundation for executing and instrumenting the process. Tools
and environments must be viewed as the primary delivery vehicle for process automation and
improvement, so their impact can be much higher.
A form of process improvement is to reduce scrap and rework, thereby eliminating steps and minimizing
the number of iterations in the process. The other form of process improvement is to increase the
efficiency of certain steps. This is one of the primary contributions of the environment: to automate
manual tasks that are inefficient or error-prone. An environment that provides semantic integration (in
which the environment understands the detailed meaning of the development artifacts) and process
automation can improve productivity, improve software quality, and accelerate the adoption of modern
techniques. An environment that supports incremental compilation, automated system builds, and
integrated regression testing can provide rapid turnaround for iterative development and allow
development teams to iterate more freely.
Automation of the design process provides payback in quality, the ability to estimate costs and
schedules, and overall productivity using a smaller team. Integrated toolsets play an increasingly
important role in incremental/iterative development by allowing the designers to traverse quickly
among development artifacts and keep them up-to-date.
Round-trip engineering is a term used to describe the key capability of environments that support
iterative development. As we have moved into maintaining different information repositories for the
engineering artifacts, we need automation support to ensure efficient and error-free transition of data
from one artifact to another.
Forward engineering is the automation of one engineering artifact from another, more abstract
representation. For example, compilers and linkers have provided automated transition of source code
into executable code.
Reverse engineering is the generation or modification of a more abstract representation from an existing
artifact (for example, creating a visual design model from a source code representation).
Round-trip engineering describes the environment support needed to change an artifact freely and have
other artifacts automatically changed so that consistency is maintained among the entire set of
requirements, design, implementation, and deployment artifacts.
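These three capabilities can be sketched at toy scale. The design-model format and the generated stubs below are hypothetical, invented purely for illustration:

```python
# Sketch: forward, reverse, and round-trip engineering on a toy scale.
# A "design model" is a dict mapping class names to attribute lists;
# forward engineering generates source stubs from it, reverse
# engineering recovers the model from source, and a round trip checks
# that the two artifacts remain consistent.
import re

def forward(model):
    """Forward engineering: generate source stubs from the abstract model."""
    lines = []
    for cls, attrs in model.items():
        lines.append(f"class {cls}:")
        lines.append("    def __init__(self):")
        for attr in attrs:
            lines.append(f"        self.{attr} = None")
    return "\n".join(lines)

def reverse(source):
    """Reverse engineering: recover the abstract model from source."""
    model, current = {}, None
    for line in source.splitlines():
        if m := re.match(r"class (\w+):", line):
            current = m.group(1)
            model[current] = []
        elif m := re.match(r"\s+self\.(\w+) = ", line):
            model[current].append(m.group(1))
    return model

model = {"Account": ["owner", "balance"], "Ledger": ["entries"]}
code = forward(model)
assert reverse(code) == model  # round trip: the artifacts stay consistent
print("consistent")
```

Real environments do this for far richer artifacts (visual models, source code, deployment descriptors), but the consistency check at the end is exactly the round-trip guarantee the text describes.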

2.6 Achieving Required Quality


The table given below summarizes some dimensions of quality improvement.

Key practices that improve overall software quality include the following:
• Focusing on driving requirements and critical use cases early in the life cycle, focusing on
requirements completeness and traceability late in the life cycle, and focusing throughout the
life cycle on a balance between requirements evolution, design evolution, and plan evolution
• Using metrics and indicators to measure the progress and quality of the architecture as it evolves
from a high-level prototype into a fully compliant product
• Providing integrated life-cycle environments that support early and continuous configuration
control, change management, rigorous design methods, document automation, and regression
test automation
• Using visual modeling and higher level languages that support architectural control, abstraction,
reliable programming, reuse, and self-documentation
• Early and continuous insight into performance issues through demonstration-based evaluations

2.7 Peer Inspections


Peer inspections are frequently overhyped as the key aspect of a quality system. Peer reviews are
valuable as secondary mechanisms, but they are rarely significant contributors to quality compared with
the following primary quality mechanisms and indicators, which should be emphasized in the
management process:
• Transitioning engineering information from one artifact set to another, thereby assessing the
consistency, feasibility, understandability, and technology constraints inherent in the
engineering artifacts
• Major milestone demonstrations that force the artifacts to be assessed against tangible criteria
in the context of relevant use cases
• Environment tools (compilers, debuggers, analyzers, automated test suites) that ensure
representation rigor, consistency, completeness, and change control
• Life-cycle testing for detailed insight into critical trade-offs, acceptance criteria, and
requirements compliance
• Change management metrics for objective insight into multiple-perspective change trends and
convergence or divergence from quality and progress goals
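The last of these, change management metrics, can be sketched as a simple convergence indicator; the weekly counts below are assumed sample data:

```python
# Sketch: a change-management convergence indicator. The weekly counts
# of open change requests are assumed sample data for illustration.

open_change_requests = [40, 44, 38, 33, 27, 22, 16]  # one count per week

def trend(series):
    """Least-squares slope; a negative slope suggests convergence."""
    n = len(series)
    mean_x = (n - 1) / 2
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

print(trend(open_change_requests) < 0)  # True: backlog is trending down
```

A rising slope late in the life cycle would signal divergence from quality and progress goals, which is exactly the kind of objective insight a superficial review meeting rarely provides.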

One value of inspections is in the professional development of a team. It is generally useful to have the
products of junior team members reviewed by senior mentors. Putting the products of amateurs into
the hands of experts and vice versa is a good mechanism for accelerating the acquisition of knowledge
and skill in new personnel.
Inspections are also a good vehicle for holding authors accountable for quality products. All authors of
software and documentation should have their products scrutinized as a natural by-product of the
process. Therefore, the coverage of inspections should be across all authors rather than across all
components. Junior authors need to have a random component inspected periodically, and they can
learn by inspecting the products of senior authors. Varying levels of informal inspection are performed
continuously when developers are reading or integrating software with another author’s software, and
during testing by independent test teams. However, this “inspection” is much more tangibly focused on
integrated and executable aspects of the overall system.
Finally, a critical component deserves to be inspected by several people, preferably those who have a
stake in its quality, performance, or feature set. An inspection focused on resolving an existing issue can
be an effective way to determine cause or arrive at a resolution once the cause is understood.
Notwithstanding these benefits of inspections, many organizations overemphasize meetings and formal
inspections, and require coverage across all engineering products. This approach can be extremely
counterproductive. Significant or substantial design errors or architecture issues are rarely obvious from
a superficial review unless the inspection is narrowly focused on a particular issue. And most inspections
are superficial. Consequently, random human inspections tend to degenerate into comments on style
and first-order semantic issues. They rarely result in the discovery of real performance bottlenecks,
serious control issues (such as deadlocks, races, or resource contention), or architectural weaknesses
(such as flaws in scalability, reliability, or interoperability).
Quality assurance is everyone's responsibility and should be integral to almost all process activities
instead of a separate discipline performed by quality assurance specialists. Evaluating and assessing the
quality of the evolving engineering baselines should be the job of an engineering team that is
independent of the architecture and development team. Their life-cycle assessment of the evolving
artifacts would typically include change management, trend analysis, and testing, as well as inspection.

2.8 Summary
• The important dimensions of software economics revolve around the five basic parameters of the
software cost model:
- Reducing the size or complexity of what needs to be developed
- Improving the development process
- Using more-skilled personnel and better teams (not necessarily the same thing)
- Using better environments (tools to automate the process)
- Trading off or backing off on quality thresholds
• The most significant way to improve affordability and return on investment (ROI) is usually to
produce a product that achieves the design goals with the minimum amount of human-generated
source material.
• To reduce the software product size, the following components are considered:
- Languages
- Object-Oriented Methods and Visual Modeling
- Reuse
- Commercial Components
• All project processes consist of productive activities and overhead activities.
• Productive activities result in tangible progress toward the end product. For software efforts,
these activities include prototyping, modeling, coding, debugging, and user documentation.
• Overhead activities that have an intangible impact on the end product are required in plan
preparation, documentation, progress monitoring, risk assessment, financial assessment,
configuration control, quality assessment, integration, testing, late scrap and rework,
management, personnel training, business administration, and other tasks. Overhead activities
include many value-added efforts, but, in general, the less effort devoted to these activities, the
more effort that can be expended in productive activities.
• The objective of process improvement is to maximize the allocation of resources to productive
activities and minimize the impact of overhead activities on resources such as personnel, computers,
and schedule.
• It has long been understood that differences in personnel account for the greatest swings in
productivity.
• Teamwork is much more important than the sum of the individuals.
• In examining how to staff a software project, Boehm offered the following five staffing principles:
- The principle of top talent: Use better and fewer people.
- The principle of job matching: Fit the tasks to the skills and motivation of the people available.
- The principle of career progression: An organization does best in the long run by helping its
people to self-actualize.
- The principle of team balance: Select people who will complement and harmonize with one
another.
- The principle of phaseout: Keeping a misfit on the team doesn't benefit anyone.
• One of the primary contributions of the environment is to automate manual tasks that are inefficient
or error-prone.
• Round-trip engineering is a term used to describe the key capability of environments that support
iterative development.
• Forward engineering is the automation of one engineering artifact from another, more abstract
representation.
• Reverse engineering is the generation or modification of a more abstract representation from an
existing artifact.
• Key practices that improve overall software quality include the following:
- Focusing on driving requirements and critical use cases
- Using metrics and indicators to measure the progress and quality of the architecture
- Providing integrated life-cycle environments
- Using visual modeling and higher level languages
- Early and continuous insight into performance issues through demonstration-based evaluations
• Peer reviews are valuable as secondary mechanisms, but they are rarely significant contributors to
quality compared with primary quality mechanisms and indicators.

2.9 Review Questions


1. State the five approaches for improving the software cost model.

2. Explain the three different process perspectives for improving software processes.

3. State Boehm’s staffing principles.

4. Write a brief note on the event sequences necessary for performance assessment.

5. Write a short note on peer inspections.

2.10 Bibliography, References and Further Reading

Software Project Management: A Unified Framework, Walker Royce, The Addison-Wesley Object
Technology Series

Information Technology Project Management (4th Edition) – Kathy Schwalbe (Cengage Learning –
Indian Edition)

Project Management Core Textbook – Mantel Jr., Meredith, Shafer, Sutton with Gopalan (Wiley India
Edition)

Information Technology Project Management: a concise study, (3rd ed.) by S A Kelkar (PHI)

[Jones, 1995] Jones, Capers, “Table of Programming Languages and Levels, Version 8”, Software
Productivity Research white paper (Burlington, Massachusetts), June 1995. Copyright © 1995 Capers
Jones.

[Booch, 1996] Booch, Grady, Object Solutions: Managing the Object-Oriented Project (Addison-Wesley
Publishing Company, Menlo Park, California, 1996).

[Boehm, 1981] Boehm, Barry W., Software Engineering Economics (Prentice-Hall, Englewood Cliffs, New
Jersey, 1981).
