
UNIT III

Software Testing

Strategic approach to Software testing, test strategies for conventional software, test strategies for
webapps, Validation testing, System testing, White box testing, Control Structure testing, Black-box
testing, Model-based testing.

Software evolution

Evolution processes, Program evolution dynamics, Software maintenance, Legacy system


management

Project Planning

Software pricing, Plan-driven development, Project scheduling, Estimation techniques

Quality management

Software quality, Software standards, Reviews and inspections, Software measurement and metrics

5.1 A STRATEGIC APPROACH TO SOFTWARE TESTING:

Software testing is the process used to identify the correctness, completeness, and quality of developed computer software. Testing is a set of activities that can be planned in advance and conducted systematically. A number of software testing strategies have been proposed. All provide you with a template for testing, and all have the following generic characteristics:

• To perform effective testing, you should conduct effective technical reviews. By doing this, many
errors will be eliminated before testing commences.

• Testing begins at the component level and works “outward” toward the integration of the entire
computer-based system.

• Different testing techniques are appropriate for different software engineering approaches and at
different points in time.

• Testing is conducted by the developer of the software and an independent test group.

Unit III 1 Prepared By Dr. Pooja S.B


• Testing and debugging are different activities, but debugging must be accommodated in any testing
strategy. A strategy for software testing must accommodate low-level tests that are necessary to verify
that a small source code segment has been correctly implemented as well as high-level tests that
validate major system functions against customer requirements.

5.1.1 Verification and Validation (V&V):

Verification refers to the set of tasks that ensure that software correctly implements a specific
function.

Validation refers to a different set of tasks that ensure that the software that has been built is traceable
to customer requirements.

Boehm states this another way:

Verification: “Are we building the product right?”
Validation: “Are we building the right product?”

V&V encompasses a wide array of SQA (Software Quality Assurance) activities: technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility studies, documentation review, database review, algorithm analysis, development testing, usability testing, qualification testing, acceptance testing, and installation testing. Although testing plays an extremely important role in V&V, many other activities are also necessary. Testing provides the last bastion from which quality can be assessed and, more pragmatically, errors can be uncovered.

5.1.2 Organizing for Software Testing: The people who have built the software are now asked to
test the software. This seems harmless in itself; after all, who knows the program better than its
developers? From the point of view of the builder, testing can be considered to be (psychologically)
destructive. So the builder treads lightly, designing and executing tests that will demonstrate that the
program works, rather than uncovering errors. Unfortunately, errors will be present. And, if the
software engineer doesn’t find them, the customer will.

There are often a number of misconceptions that you might infer erroneously from the preceding
discussion:

(1) that the developer of software should do no testing at all,

(2) that the software should be “tossed over the wall” to strangers who will test it mercilessly,

(3) that testers get involved with the project only when the testing steps are about to begin. Each of
these statements is incorrect.
The software developer is always responsible for testing the individual units (components) of the
program, ensuring that each performs the function or exhibits the behavior for which it was designed.
In many cases, the developer also conducts integration testing—a testing step that leads to the
construction (and test) of the complete software architecture. Only after the software architecture is
complete does an independent test group become involved. The role of an independent test group
(ITG) is to remove the inherent problems associated with letting the builder test the thing that has
been built. Independent testing removes the conflict of interest that may otherwise be present. After
all, ITG personnel are paid to find errors. However, you don’t turn the program over to ITG and walk
away. The developer and the ITG work closely throughout a software project to ensure that thorough
tests will be conducted. While testing is conducted, the developer must be available to correct errors
that are uncovered. The ITG is part of the software development project team in the sense that it
becomes involved during analysis and design and stays involved (planning and specifying test
procedures) throughout a large project. However, in many cases the ITG reports to the software
quality assurance organization, thereby achieving a degree of independence that might not be possible
if it were a part of the software engineering organization.

5.1.3 Software Testing Strategy

The software process may be viewed as the spiral illustrated in Figure 17.1. Initially, system
engineering defines the role of software and leads to software requirements analysis, where the
information domain, function, behavior, performance, constraints, and validation criteria for software
are established. Moving inward along the spiral, you come to design and finally to coding. To develop
computer software, you spiral inward along streamlines that decrease the level of abstraction on each
turn. A strategy for software testing may also be viewed in the context of the spiral (Figure 17.1).
Unit testing begins at the vortex of the spiral and concentrates on each unit (e.g., component, class,
or WebApp content object) of the software as implemented in source code. Testing progresses by
moving outward along the spiral to integration testing, where the focus is on design and the
construction of the software architecture.



Taking another turn outward on the spiral, you encounter validation testing, where requirements
established as part of requirements modeling are validated against the software that has been
constructed. Finally, you arrive at system testing, where the software and other system elements are
tested as a whole. To test computer software, you spiral out in a clockwise direction along streamlines
that broaden the scope of testing with each turn. Considering the process from a procedural point of
view, testing within the context of software engineering is actually a series of four steps that are
implemented sequentially. The steps are shown in Figure 17.2. Initially, tests focus on each
component individually, ensuring that it functions properly as a unit. Hence, the name unit testing.

Unit testing makes heavy use of testing techniques that exercise specific paths in a component’s
control structure to ensure complete coverage and maximum error detection. Next, components must
be assembled or integrated to form the complete software package.



Integration testing addresses the issues associated with the dual problems of verification and
program construction. Test case design techniques that focus on inputs and outputs are more prevalent
during integration, although techniques that exercise specific program paths may be used to ensure
coverage of major control paths. After the software has been integrated (constructed), a set of high-
order tests is conducted.

Validation criteria (established during requirements analysis) must be evaluated. Validation testing provides final assurance that software meets all informational, functional, behavioral, and performance requirements.

System testing: The last high-order testing step falls outside the boundary of software engineering
and into the broader context of computer system engineering. Software, once validated, must be
combined with other system elements (e.g., hardware, people, databases). System testing verifies that
all elements mesh properly and that overall system function/performance is achieved.

5.5 TEST STRATEGIES FOR WEBAPPS

The strategy for WebApp testing adopts the basic principles for all software testing and applies a
strategy and tactics that are used for object-oriented systems.

The following steps summarize the approach:

1. The content model for the WebApp is reviewed to uncover errors.

2. The interface model is reviewed to ensure that all use cases can be accommodated.

3. The design model for the WebApp is reviewed to uncover navigation errors.

4. The user interface is tested to uncover errors in presentation and/or navigation mechanics.

5. Each functional component is unit tested.

6. Navigation throughout the architecture is tested.

7. The WebApp is implemented in a variety of different environmental configurations and is tested for compatibility with each configuration.

8. Security tests are conducted in an attempt to exploit vulnerabilities in the WebApp or within its
environment.

9. Performance tests are conducted.



10. The WebApp is tested by a controlled and monitored population of end users. The results of
their interaction with the system are evaluated for content and navigation errors, usability concerns,
compatibility concerns, and WebApp reliability and performance. Because many WebApps evolve
continuously, the testing process is an ongoing activity, conducted by support staff who use
regression tests derived from the tests developed when the WebApp was first engineered.
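The ongoing regression testing described above can be sketched as a recorded suite of input/expected-output pairs that is re-run after every change. This is a minimal illustration only; the routes, status codes, and `handle` function are hypothetical stand-ins, not part of any real web framework.

```python
# Hypothetical regression suite: (path, expected_status) pairs recorded when
# the WebApp was first engineered, re-run after every change.
ROUTES = {"/": 200, "/about": 200, "/admin": 403}

def handle(path):
    """Stand-in for the WebApp: returns an HTTP-style status code."""
    return ROUTES.get(path, 404)

REGRESSION_SUITE = [("/", 200), ("/about", 200), ("/admin", 403), ("/missing", 404)]

# Any mismatch between actual and recorded behavior is a regression.
failures = [(path, handle(path), want)
            for path, want in REGRESSION_SUITE
            if handle(path) != want]
```

An empty `failures` list means the change under test has not broken previously verified behavior.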

5.6 VALIDATION TESTING

Validation testing follows integration testing: individual components have been exercised, the software is completely assembled as a package, and interfacing errors have been uncovered and corrected.

At the validation or system level, the distinction between conventional software, object-oriented
software, and WebApps disappears.

Testing focuses on user-visible actions and user-recognizable output from the system.

Validation can be defined in many ways, but a simple (albeit harsh) definition is that validation
succeeds when software functions in a manner that can be reasonably expected by the customer. At
this point a battle-hardened software developer might protest: “Who or what is the arbiter of
reasonable expectations?” If a Software Requirements Specification has been developed, it describes
all user-visible attributes of the software and contains a Validation Criteria section that forms the
basis for a validation-testing approach.

5.6.1 Validation-Test Criteria:

Software validation is achieved through a series of tests that demonstrate conformity with
requirements. A test plan outlines the classes of tests to be conducted, and a test procedure defines
specific test cases that are designed to ensure that all functional requirements are satisfied, all
behavioral characteristics are achieved, all content is accurate and properly presented, all
performance requirements are attained, documentation is correct, and usability and other
requirements are met (e.g., transportability, compatibility, error recovery, maintainability).

After each validation test case has been conducted, one of two possible conditions exists:

(1) The function or performance characteristic conforms to specification and is accepted or



(2) A deviation from specification is uncovered and a deficiency list is created. Deviations or errors
discovered at this stage in a project can rarely be corrected prior to scheduled delivery. It is often
necessary to negotiate with the customer to establish a method for resolving deficiencies.

5.6.2 Configuration Review:

• An important element of the validation process is a configuration review.


• The objective of the review is to ensure that all elements of the software configuration have
been properly developed, are cataloged, and have the necessary detail to bolster the support
activities.
• The configuration review is sometimes called an audit.

5.7 SYSTEM TESTING

System testing is actually a series of different tests whose primary purpose is to fully exercise the
computer-based system. Although each test has a different purpose, all work to verify that system
elements have been properly integrated and perform allocated functions.

5.7.1 Recovery Testing:

Many computer-based systems must recover from faults and resume processing with little or no
downtime. In some cases, a system must be fault tolerant; that is, processing faults must not cause
overall system function to cease. In other cases, a system failure must be corrected within a specified
period of time or severe economic damage will occur. Recovery testing is a system test that forces
the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery
is automatic (performed by the system itself), reinitialization, check pointing mechanisms, data
recovery, and restart are evaluated for correctness. If recovery requires human intervention, the mean-
time-to-repair (MTTR) is evaluated to determine whether it is within acceptable limits.

5.7.2 Security Testing:

Any computer-based system that manages sensitive information or causes actions that can
improperly harm (or benefit) individuals is a target for improper or illegal penetration. Penetration
spans a broad range of activities: hackers who attempt to penetrate systems for sport, disgruntled
employees who attempt to penetrate for revenge, dishonest individuals who attempt to penetrate for
illicit personal gain. Security testing attempts to verify that protection mechanisms built into a system
will, in fact, protect it from improper penetration. During security testing, the tester plays the role(s)
of the individual who desires to penetrate the system. Anything goes! The tester may attempt to



acquire passwords through external clerical means; may attack the system with custom software
designed to break down any defenses that have been constructed; may overwhelm the system, thereby
denying service to others; may purposely cause system errors, hoping to penetrate during recovery;
may browse through insecure data, hoping to find the key to system entry. Given enough time and
resources, good security testing will ultimately penetrate a system. The role of the system designer is
to make penetration cost more than the value of the information that will be obtained.

5.7.3 Stress Testing:

Stress testing executes a system in a manner that demands resources in abnormal quantity,
frequency, or volume.

For example,
(1) Special tests may be designed that generate ten interrupts per second, when one or two is the
average rate.
(2) Input data rates may be increased by an order of magnitude to determine how input functions
will respond.
(3) Test cases that require maximum memory or other resources are executed.
(4) Test cases that may cause thrashing in a virtual operating system are designed.
(5) Test cases that may cause excessive hunting for disk-resident data are created.
Essentially, the tester attempts to break the program. A variation of stress testing is a technique
called sensitivity testing. In some situations (the most common occur in mathematical algorithms), a
very small range of data contained within the bounds of valid data for a program may cause extreme
and even erroneous processing or profound performance degradation. Sensitivity testing attempts to
uncover data combinations within valid input classes that may cause instability or improper
processing.
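The idea of demanding resources at an abnormal rate can be sketched in code. Everything here is an illustrative assumption: `process_batch` is a made-up unit under test, and the nominal rate and 10× factor stand in for whatever is normal for the real system.

```python
import time

def process_batch(items):
    """Hypothetical unit under test: sums a batch of readings."""
    return sum(items)

def stress_test(process, nominal_rate=100, factor=10, duration_s=0.2):
    """Drive the function at `factor` times the nominal input rate and
    record whether every batch is processed without error."""
    batches = int(nominal_rate * factor * duration_s)
    failures = 0
    start = time.perf_counter()
    for i in range(batches):
        try:
            process(list(range(i % 50)))  # varying batch sizes
        except Exception:
            failures += 1
    elapsed = time.perf_counter() - start
    return {"batches": batches, "failures": failures, "elapsed_s": elapsed}

result = stress_test(process_batch)
```

A real stress test would push rates until the system degrades, then examine whether the degradation is graceful rather than catastrophic.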

5.7.4 Performance Testing:

Performance testing is designed to test the run-time performance of software within the context of
an integrated system. Performance testing occurs throughout all steps in the testing process. Even at
the unit level, the performance of an individual module may be assessed as tests are conducted.
However, it is not until all system elements are fully integrated that the true performance of a system
can be ascertained. Performance tests are often coupled with stress testing and usually require both
hardware and software instrumentation. That is, it is often necessary to measure resource utilization
(e.g., processor cycles) in an exacting fashion.
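Software instrumentation of the kind mentioned above can be approximated with a timing wrapper. The `instrument` decorator and the `lookup` function are invented for illustration; real performance testing would typically use a profiler or dedicated tooling.

```python
import time
from functools import wraps

def instrument(fn):
    """Record the wall-clock time of each call to fn — a simple software
    analog of the instrumentation the text mentions."""
    timings = []

    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        timings.append(time.perf_counter() - start)
        return result

    wrapper.timings = timings  # expose collected samples to the tester
    return wrapper

@instrument
def lookup(table, key):
    """Hypothetical operation whose run-time performance we want to assess."""
    return table.get(key)

table = {i: i * i for i in range(1000)}
for k in range(100):
    lookup(table, k)
# lookup.timings now holds one elapsed-time sample per call
```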

5.7.5 Deployment Testing:


In many cases, software must execute on a variety of platforms and under more than one operating
system environment. Deployment testing, sometimes called configuration testing, exercises the
software in each environment in which it is to operate. In addition, deployment testing examines all
installation procedures and specialized installation software (e.g., “installers”) that will be used by
customers, and all documentation that will be used to introduce the software to end users.

5.9.5 CONTROL STRUCTURE TESTING


The basis path testing technique is one of a number of techniques for control structure testing. Although basis path testing is simple and highly effective, it is not sufficient in itself. Other variations on control structure testing, discussed below, broaden testing coverage and improve the quality of white-box testing.
5.9.5.1 Condition Testing:
• Condition testing is a test-case design method that exercises the logical conditions contained
in a program module.
• A simple condition is a Boolean variable or a relational expression, possibly preceded with
one NOT (¬) operator.
• A relational expression takes the form E1 <relational operator> E2
where E1 and E2 are arithmetic expressions
and <relational operator> is one of the following: <, <=, >, >=, ==, !=
• A compound condition is composed of two or more simple conditions, Boolean operators,
and parentheses.
• We assume that Boolean operators allowed in a compound condition include OR (|), AND
(&), and NOT (¬).
• A condition without relational expressions is referred to as a Boolean expression. If a
condition is incorrect, then at least one component of the condition is incorrect.
Therefore, types of errors in a condition include
• Boolean operator errors (incorrect/missing/extra Boolean operators),
• Boolean variable errors,
• Boolean parenthesis errors,
• Relational operator errors, and
• Arithmetic expression errors.



The condition testing method focuses on testing each condition in the program to ensure that it does
not contain errors.
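As an illustration, condition testing a compound condition means exercising each simple condition in both of its outcomes, not only the overall result. The `accept` function below is a made-up example, not taken from the text.

```python
def accept(a, b):
    """Hypothetical module logic guarded by a compound condition."""
    return a > 0 and b <= 10

# Condition testing: toggle each simple condition independently, so that a
# wrong operator (e.g. `or` instead of `and`, `<` instead of `<=`) is exposed.
cases = [
    (1, 5, True),    # a > 0 true,  b <= 10 true
    (0, 5, False),   # a > 0 false, b <= 10 true
    (1, 11, False),  # a > 0 true,  b <= 10 false
    (0, 11, False),  # both simple conditions false
]
results = [accept(a, b) == expected for a, b, expected in cases]
```

If `and` were mistyped as `or`, the second and third cases would fail, which a test of the overall outcome alone might miss.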
5.9.5.2 Data Flow Testing:
• Data flow testing is a form of structural testing and a white-box testing technique.
• The data flow testing method selects test paths of a program according to the locations of definitions and uses of variables in the program:
❖ From the point where a variable v is defined or assigned a value
❖ To the point where that variable v is used.
• A program unit accepts inputs, performs computations, assigns new values to variables, and returns results; one can visualize a “flow” of data values from one statement to another.
• A data value produced in one statement is expected to be used later.
• Anomaly: An anomaly is a deviant or abnormal way of doing something.
• Data flow anomalies: Represent patterns of data usage that may lead to incorrect execution of the code.

Examples of data flow anomalies:

• It is abnormal to successively assign two values to a variable without using the first value.
• It is abnormal to use the value of a variable before assigning a value to the variable.
• Another abnormal situation is to generate a data value and never use it.
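The three anomalies above can be shown as small, deliberately anomalous (but runnable) Python functions. The function names are invented purely for illustration.

```python
def double_define(x):
    y = x + 1      # define y ...
    y = x + 2      # ... then redefine y without ever using the first value
    return y

def use_before_define():
    try:
        return total + 1   # use of `total` before any definition
    except NameError:
        return None        # at run time this anomaly surfaces as a NameError

def define_never_used(x):
    unused = x * 2  # value defined but never used afterwards
    return x
```

Data flow analysis tools flag exactly these define/use patterns even when, as in `double_define`, the code still runs without error.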

5.9.5.3 Loop Testing:

❖ Loops are the cornerstone for the vast majority of all algorithms implemented in software.
And yet, we often pay them little heed while conducting software tests.
❖ Loop testing is a white-box testing technique that focuses exclusively on the validity of loop
constructs.
Four different classes of loops can be defined:
❖ Simple loops,
❖ Concatenated loops,
❖ Nested loops, and
❖ Unstructured loops



Simple loops.
The following set of tests can be applied to simple loops, where n is the maximum number of
allowable passes through the loop.
❖ Skip the loop entirely.
❖ Only one pass through the loop.
❖ Two passes through the loop.
❖ m passes through the loop, where m < n.
❖ n − 1, n, and n + 1 passes through the loop.
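The pass counts above can be generated mechanically and applied to a loop under test. `sum_first` is a hypothetical unit under test; the choice of m = n/2 as the "typical" value is an assumption.

```python
def simple_loop_test_counts(n):
    """Pass counts to try for a simple loop with at most n allowable passes:
    0 (skip), 1, 2, a typical m < n, and the boundary values n-1, n, n+1."""
    m = n // 2
    return [0, 1, 2, m, n - 1, n, n + 1]

def sum_first(values, max_passes):
    """Hypothetical unit under test: sums at most `max_passes` values."""
    total = 0
    for i, v in enumerate(values):
        if i >= max_passes:
            break
        total += v
    return total

n = 10
values = list(range(100))
observed = [sum_first(values, count) for count in simple_loop_test_counts(n)]
```

The boundary counts n − 1, n, and n + 1 are the ones most likely to expose off-by-one errors in the loop's exit condition.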

Nested loops.
If we were to extend the test approach for simple loops to nested loops, the number of possible tests
would grow geometrically as the level of nesting increases. This would result in an impractical
number of tests. Beizer suggests an approach that will help to reduce the number of tests:
❖ Start at the innermost loop. Set all other loops to minimum values.
❖ Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter (e.g., loop counter) values. Add other tests for out-of-range or
excluded values.
❖ Work outward, conducting tests for the next loop, but keeping all other outer loops at
minimum values and other nested loops to “typical” values.
❖ Continue until all loops have been tested.
Concatenated loops.



❖ Concatenated loops can be tested using the approach defined for simple loops, if each of the
loops is independent of the other.
❖ However, if two loops are concatenated and the loop counter for loop 1 is used as the initial
value for loop 2, then the loops are not independent.
❖ When the loops are not independent, the approach applied to nested loops is recommended.
Unstructured loops.
❖ Whenever possible, this class of loops should be redesigned to reflect the use of the structured
programming constructs.

5.9.7 MODEL-BASED TESTING


Model-based testing (MBT) is a black-box testing technique that uses information contained in the
requirements model as the basis for the generation of test cases. The MBT technique requires five
steps:
1. Analyse an existing behavioural model for the software or create one: Recall that a behavioral
model indicates how software will respond to external events or stimuli.
To create the model, you should perform the following steps:
❖ Evaluate all use cases to fully understand the sequence of interaction within the system,
❖ Identify events that drive the interaction sequence and understand how these events relate to
specific objects.
❖ Create a sequence for each use case.
❖ Build a UML state diagram for the system and
❖ Review the behavioural model to verify accuracy and consistency.
2. Traverse the behavioural model and specify the inputs that will force the software to make the
transition from state to state. The inputs will trigger events that will cause the transition to occur.
3. Review the behavioural model and note the expected outputs as the software makes the transition
from state to state. Recall that each state transition is triggered by an event and that as a consequence
of the transition, some function is invoked and outputs are created. For each set of inputs (test cases)
you specified in step 2, specify the expected outputs as they are characterized in the behavioural
model. “A fundamental assumption of this testing is that there is some mechanism, a test oracle, that
will determine whether or not the results of a test execution are correct”. In essence, a test oracle
establishes the basis for any determination of the correctness of the output. In most cases, the oracle
is the requirements model, but it could also be another document or application, data recorded
elsewhere, or even a human expert.



4. Execute the test cases. Tests can be executed manually or a test script can be created and executed
using a testing tool.
5. Compare actual and expected results and take corrective action as required. MBT helps to
uncover errors in software behavior, and as a consequence, it is extremely useful when testing event-
driven applications.
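The five MBT steps can be condensed into a small sketch. The two-state "door" model, its events, and its outputs are entirely hypothetical; the point is that the behavioral model serves both to derive input sequences (step 2) and as the test oracle for expected outputs (step 3).

```python
# Step 1: a behavioral model as a transition table:
#   (state, event) -> (next_state, output)
MODEL = {
    ("closed", "open"):  ("open", "opening"),
    ("open", "close"):   ("closed", "closing"),
    ("open", "open"):    ("open", "ignored"),
    ("closed", "close"): ("closed", "ignored"),
}

def run(events, state="closed"):
    """Implementation under test — here it simply follows the model."""
    outputs = []
    for e in events:
        state, out = MODEL[(state, e)]
        outputs.append(out)
    return state, outputs

# Step 2: traverse the model to derive an input sequence covering every transition.
test_inputs = ["open", "open", "close", "close"]
# Step 3: the model itself is the oracle that supplies expected outputs.
expected = ["opening", "ignored", "closing", "ignored"]
# Steps 4-5: execute the tests and compare actual against expected results.
final_state, actual = run(test_inputs)
```

In practice the implementation under test would be separate from the model, and any divergence between `actual` and `expected` would indicate a behavioral error.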

Software Evolution

• Software development does not stop when a system is delivered but continues throughout the
lifetime of the system.
• After a system has been deployed, it inevitably has to change if it is to remain useful.
• Business changes and changes to user expectations generate new requirements for the existing
software.
• Parts of the software may have to be modified to correct errors that are found in operation, to
adapt it for changes to its hardware and software platform, and to improve its performance or
other non-functional characteristics.
• Software evolution is important because organizations have invested large amounts of money
in their software and are now completely dependent on these systems.
• Software evolution may be triggered by changing business requirements, by reports of software defects, or by changes to other systems in the software's environment.
• Therefore, think software engineering as a spiral process with requirements, design,
implementation, and testing going on throughout the lifetime of the system, it is shown in
below Figure.



In this case, there are likely to be discontinuities in the spiral process. Requirements and design
documents may not be passed from one company to another. Companies may merge or reorganize
and inherit software from other companies, and then find that this has to be changed. When the
transition from development to evolution is not seamless, the process of changing the software after delivery is often called software maintenance.

Rajlich and Bennett (2000) proposed an alternative view of the software evolution life cycle, as shown in the figure below. In this model, they distinguish between evolution and servicing. Evolution is the phase in which significant changes to the software architecture and functionality may be made. During servicing, the only changes that are made are relatively small, essential changes.

During evolution, the software is used successfully and there is a constant stream of proposed
requirements changes. However, as the software is modified, its structure tends to degrade and
changes become more and more expensive. This often happens after a few years of use when other
environmental changes, such as hardware and operating systems, are also often required. At some
stage in the life cycle, the software reaches a transition point where significant changes, implementing
new requirements, become less and less cost effective.

At that stage, the software moves from evolution to servicing. During the servicing phase, the
software is still useful and used but only small tactical changes are made to it. During this stage, the
company is usually considering how the software can be replaced. In the final stage, phase-out, the



software may still be used but no further changes are being implemented. Users have to work around
any problems that they discover.

3.8 Evolution processes

• Software evolution processes vary depending on:
❖ The type of software being maintained
❖ The development processes used in an organization
❖ The skills of the people involved
• In some organizations, evolution may be an informal process where change requests mostly
come from conversations between the system users and developers.
• In other companies, it is a formalized process with structured documentation produced at
each stage in the process.
• Below figure shows an overview of the evolution process.

The process includes the fundamental activities of:
❖ Change analysis
❖ Release planning
❖ System implementation
❖ Releasing a system to customers
• The cost and impact of these changes are assessed to see how much of the system is affected
by the change and how much it might cost to implement the change.
• If the proposed changes are accepted, a new release of the system is planned. During release
planning, all proposed changes (fault repair, adaptation, and new functionality) are considered.
• A decision is then made on which changes to implement in the next version of the system.
• The changes are implemented and validated, and a new version of the system is released.
• The process then iterates with a new set of changes proposed for the next release.



• Ideally, the change implementation stage of this process should modify the system
specification, design, and implementation to reflect the changes to the system (as shown in
below Figure).
• New requirements that reflect the system changes are proposed, analyzed, and validated.
• System components are redesigned and implemented and the system is retested.
• If appropriate, prototyping of the proposed changes may be carried out as part of the change
analysis process.

Change implementation

• Change requests sometimes relate to system problems that have to be tackled urgently. In these cases, the need to make the change quickly means that you may not be able to follow the formal change analysis process.
• Rather than modify the requirements and design, you make an emergency fix to the program to solve the immediate problem (as shown in the figure below).
• However, the danger is that the requirements, the software design, and the code become
inconsistent.
• Although you may intend to document the change in the requirements and design, additional
emergency fixes to the software may then be needed.
• These take priority over documentation. Eventually, the original change is forgotten and the
system documentation and code are never realigned.



The emergency repair process

3.9 Program Evolution Dynamics

Program evolution dynamics is the study of system change. In the 1970s and 1980s, Lehman and
Belady (1985) carried out several empirical studies of system change with a view to understanding
more about characteristics of software evolution.

The work continued in the 1990s as Lehman and others investigated the significance of feedback in evolution processes (Lehman, 1996; Lehman et al., 1998). From these studies, they proposed a set of laws concerning system change.

Lehman and Belady's laws

Figure: Lehman’s laws


3.10 Software maintenance

Software maintenance is the general process of changing a system after it has been delivered. The
term is usually applied to custom software in which separate development groups are involved before
and after delivery. The changes made to the software may be simple changes to correct coding errors,
more extensive changes to correct design errors, or significant enhancements to correct specification
errors or accommodate new requirements. Changes are implemented by modifying existing system
components and, where necessary, by adding new components to the system.

There are three different types of software maintenance:

1. Fault repairs: Coding errors are usually relatively cheap to correct; design errors are more
expensive as they may involve rewriting several program components. Requirements errors are the
most expensive to repair because of the extensive system redesign which may be necessary.
2. Environmental adaptation: This type of maintenance is required when some aspect of the
system’s environment such as the hardware, the platform operating system, or other support software
changes. The application system must be modified to adapt it to cope with these environmental
changes.
3. Functionality addition: This type of maintenance is necessary when the system requirements
change in response to organizational or business change. The scale of the changes required to the
software is often much greater than for the other types of maintenance.

Figure: Maintenance effort distribution

Above figure shows an approximate distribution of maintenance costs. The specific percentages
will obviously vary from one organization to another but, universally, repairing system faults is not

the most expensive maintenance activity. Evolving the system to cope with new environments and
new or changed requirements consumes most maintenance effort. The relative costs of maintenance
and new development vary from one application domain to another. Guimaraes (1983) found that the
maintenance costs for business application systems are broadly comparable with system development costs.

Software reengineering

Reengineering may involve redocumenting the system, refactoring the system architecture, translating programs to a modern programming language, and modifying and updating the structure and data of the system. The functionality of the software is not changed and, normally, you should try to avoid making major changes to the system architecture.
There are two important benefits from reengineering rather than replacement:
• Reduce cost
• Reduce risk

Preventative maintenance by refactoring


Refactoring is the process of making improvements to a program to slow down degradation through
change (Opdyke and Johnson, 1990). It means modifying a program to improve its structure, to
reduce its complexity, or to make it easier to understand. Refactoring is sometimes considered to be
limited to object- oriented development but the principles can be applied to any development
approach. When you refactor a program, you should not add functionality but should concentrate on
program improvement.
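As a small illustration (the code is hypothetical, not from the text), the function below is refactored from nested conditionals into a guard clause; its behavior is unchanged, only its structure improves:

```python
# Before refactoring: nested conditionals make the logic hard to follow.
def price_before(qty, member):
    if qty > 0:
        if member:
            return qty * 9   # members pay a discounted rate
        else:
            return qty * 10
    else:
        return 0

# After refactoring: a guard clause plus a single rate expression.
# Same behavior, lower nesting depth, easier to understand.
def price_after(qty, member):
    if qty <= 0:
        return 0
    rate = 9 if member else 10
    return qty * rate
```

Both versions produce identical results for every input, which is the defining property of a refactoring: no functionality is added, only the program's structure is improved.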

3.11 Legacy system management

Most organizations usually have a portfolio of legacy systems that they use, with a limited budget
for maintaining and upgrading these systems. They have to decide how to get the best return on their
investment. This involves making a realistic assessment of their legacy systems and then deciding on
the most appropriate strategy for evolving these systems.
There are four strategic options:
• Scrap the system completely: This option should be chosen when the system is not making
an effective contribution to business processes. This commonly occurs when business processes have
changed since the system was installed and are no longer reliant on the legacy system.

• Leave the system unchanged and continue with regular maintenance: This option
should be chosen when the system is still required but is fairly stable and the system users make
relatively few change requests.
• Reengineer the system to improve its maintainability: This option should be chosen when
the system quality has been degraded by change and where regular changes to the system are still
required. This process may include developing new interface components so that the original system
can work with other, newer systems.
• Replace all or part of the system with a new system. This option should be chosen when
factors, such as new hardware, mean that the old system cannot continue in operation or where off-
the-shelf systems would allow the new system to be developed at a reasonable cost.

There are four clusters of systems:

• Low quality, low business value: Keeping these systems in operation will be expensive
and the rate of the return to the business will be fairly small. These systems should be scrapped.
• Low quality, high business value: These systems are making an important business
contribution so they cannot be scrapped. However, their low quality means that it is expensive to
maintain them. These systems should be reengineered to improve their quality. They may be replaced,
if a suitable off-the-shelf system is available.
• High quality, low business value: These are systems that don’t contribute much to the
business but which may not be very expensive to maintain. It is not worth replacing these systems so
normal system maintenance may be continued if expensive changes are not required and the system
hardware remains in use. If expensive changes become necessary, the software should be scrapped.
• High quality, high business value: These systems have to be kept in operation. However,
their high quality means that you don’t have to invest in transformation or system replacement.
Normal system maintenance should be continued.
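The four clusters map directly onto the strategic options above; a minimal decision-table sketch (the function name and strategy strings are illustrative, not from the text):

```python
# Map a legacy system's quality/business-value cluster to an evolution strategy.
def legacy_strategy(quality, business_value):
    """quality and business_value are each 'low' or 'high'."""
    table = {
        ("low", "low"):   "scrap the system",
        ("low", "high"):  "reengineer (or replace with an off-the-shelf system)",
        ("high", "low"):  "continue normal maintenance while changes stay cheap",
        ("high", "high"): "keep in operation with normal maintenance",
    }
    return table[(quality, business_value)]
```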

4. PROJECT PLANNING
• Project planning is one of the most important jobs of a software project manager.
• The project plan, which is created at the start of a project, is used to communicate how the work
will be done to the project team and customers, and to help assess progress on the project.
• Project planning takes place at three stages in a project life cycle:

At the proposal stage, when there is bidding for a contract to develop or provide a software
system, a plan may be required at this stage to help contractors decide if they have the
resources to complete the work and to work out the price that they should quote to a
customer.
During the project startup phase, there is a need to plan who will work on the project, how
the project will be broken down into increments, how resources will be allocated across your
company, etc.
Periodically throughout the project, when the plan is modified, in light of experience gained
and information from monitoring the progress of the work, more information about the
system being implemented and capabilities of development team are learnt.
• This information allows to make more accurate estimates of how long the work will take.

4.1 Software Pricing


• In principle, the price of a software product to a customer is simply the cost of development
plus profit for the developer. Fig 4.1 shows the factors affecting software pricing
• It is essential to think about organizational concerns, the risks associated with the project,
and the type of contract that will be used.
• These may cause the price to be adjusted upwards or downwards.
• Because of the organizational considerations involved, deciding on a project price should
be a group activity involving marketing and sales staff, senior management, and project
managers.

Figure 4.1: Factors affecting software pricing

4.2 Plan Driven Development


→ Plan-driven or plan-based development is an approach to software engineering where the
development process is planned in detail.
→ A project plan is created that records the work to be done, who will do it, the development
schedule, and the work products.
→ Managers use the plan to support project decision making and as a way of measuring progress.
→ Plan-driven development is based on engineering project management techniques and can be
thought of as the ‘traditional’ way of managing large software development projects.
→ The principal argument against plan-driven development is that many early decisions have to
be revised because of changes to the environment in which the software is to be developed and used
4.2.1 Project Plans
→ In a plan-driven development project, a project plan sets out the resources available to the
project, the work breakdown, and a schedule for carrying out the work.
→ The plan should identify risks to the project and the software under development, and the
approach that is taken to risk management.
→ Plans normally include the following sections:
1. Introduction: Describes the objectives of the project and sets out the constraints (e.g., budget,
time, etc.) that affect the management of the project.

2. Project organization: This describes the way in which the development team is organized, the
people involved, and their roles in the team.
3. Risk analysis: This describes possible project risks, the likelihood of these risks arising, and the
risk reduction strategies that are proposed.
4. Hardware and software resource requirements: This specifies the hardware and support
software required to carry out the development. If hardware has to be bought, estimates of the prices
and the delivery schedule may be included.
5. Work breakdown: This sets out the breakdown of the project into activities and identifies the
milestones and deliverables associated with each activity.
6. Project schedule: Shows dependencies between activities, the estimated time required to reach
each milestone, and the allocation of people to activities.
7. Monitoring and reporting mechanisms: This defines the management reports that should be
produced, when these should be produced, and the project monitoring mechanisms to be used.
→ The project plan supplements are as shown in fig 4.2.

Fig 4.2: Project plan supplements


4.2.2 The Planning Process
• Project planning is an iterative process that starts when an initial project plan is created during
the project startup phase.
• Fig 4.3 is a UML activity diagram that shows a typical workflow for a project planning process.
• At the beginning of a planning process, it is necessary to assess the constraints affecting the
project.
• These constraints are the required delivery date, staff available, overall budget, available tools,
and so on.
• It is also necessary to identify the project milestones and deliverables.

• Milestones are points in the schedule against which progress can be assessed, for example, the
handover of the system for testing.
• Deliverables are work products that are delivered to the customer.
• The process then enters a loop.
• You draw up an estimated schedule for the project and the activities defined in the schedule are
initiated or given permission to continue.
• After some time (usually about two to three weeks), the progress must be reviewed and
discrepancies must be noted from the planned schedule.
• Because initial estimates of project parameters are inevitably approximate, minor slippages are
normal and thereby modifications will have to be made to the original plan.

Fig 4.3: The project planning process

4.3 Project Scheduling


• Project scheduling is the process of deciding how the work in a project will be organized as
separate tasks, and when and how these tasks will be executed.
• Here there is an estimation of the calendar time needed to complete each task, the effort
required, and who will work on the tasks that have been identified.
• It is essential to estimate the resources needed to complete each task, such as the disk space
required on a server, the time required on specialized hardware, such as a simulator, and what
the travel budget will be.

• Scheduling in plan-driven projects (Fig 4.4) involves breaking down the total work involved
in a project into separate tasks and estimating the time required to complete each task.
• Tasks should normally last at least a week, and no longer than 2 months.
• Finer subdivision means that a disproportionate amount of time must be spent on replanning
and updating the project plan.
• The maximum amount of time for any task should be around 8 to 10 weeks.
• If it takes longer than this, the task should be subdivided for project planning and scheduling.

Fig 4.4: The project scheduling process


4.3.1 Schedule Representation
→ Project schedules may simply be represented in a table or spreadsheet showing the tasks,
effort, expected duration, and task dependencies.
→ There are two types of representation that are commonly used:
1. Bar charts, which are calendar-based, show who is responsible for each activity, the expected
elapsed time, and when the activity is scheduled to begin and end. Bar charts are sometimes called
‘Gantt charts’.
2. Activity networks, which are network diagrams, show the dependencies between the different
activities making up a project.
→ Project activities are the basic planning element. Each activity has:
1. A duration in calendar days or months.
2. An effort estimate, which reflects the number of person-days or person-months to complete the
work.
3. A deadline by which the activity should be completed.
4. A defined endpoint, which represents the tangible result of completing the activity. This could
be a document, the holding of a review meeting, the successful execution of all tests, etc.
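Given task durations and dependencies of the kind shown in a bar chart, the earliest finish time of each task, and hence the minimum project duration, can be computed recursively. The task names and numbers below are hypothetical, not taken from fig 4.5:

```python
# Hypothetical tasks: duration in working days, plus dependency lists.
durations = {"T1": 10, "T2": 15, "T3": 15, "T4": 10}
dependencies = {"T2": ["T1"], "T3": ["T1"], "T4": ["T2", "T3"]}

def earliest_finish(task):
    # A task can start only when all of its dependencies have finished.
    start = max((earliest_finish(d) for d in dependencies.get(task, [])),
                default=0)
    return start + durations[task]

# Minimum project duration: the latest finish over all tasks.
project_duration = max(earliest_finish(t) for t in durations)
print(project_duration)  # T1 (10) -> T2/T3 in parallel (25) -> T4 (35)
```

Note that T2 and T3 run in parallel in this sketch, which is exactly the kind of overlap a bar chart makes visible.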
→ When planning a project, milestones must also be defined; that is, each stage in the project
where a progress assessment can be made.

→ Each milestone should be documented by a short report that summarizes the progress made
and the work done.
→ Milestones may be associated with a single task or with groups of related activities.
→ For example, in fig 4.5, milestone M1 is associated with task T1 and milestone M3 is
associated with a pair of tasks, T2 and T4.
→ A special kind of milestone is the production of a project deliverable.
→ A deliverable is a work product that is delivered to the customer. It is the outcome of a
significant project phase such as specification or design.
→ Usually, the deliverables that are required are specified in the project contract and the
customer’s view of the project’s progress depends on these deliverables.

Fig 4.5: Tasks, durations and dependencies


→ Fig 4.6 takes the information in Figure 4.5 and presents the project schedule in a graphical
format.

Fig 4.6: Activity bar chart
→ It is a bar chart showing a project calendar and the start and finish dates of tasks.
→ Reading from left to right, the bar chart clearly shows when tasks start and end.
→ The milestones (M1, M2, etc.) are also shown on the bar chart
→ In Fig 4.7, it is observed that Mary is a specialist, who works on only a single task in the
project.

Fig 4.7: Staff allocation chart
→ This can cause scheduling problems. If one project is delayed while a specialist is working on
it, this may have a knock-on effect on other projects where the specialist is also required.
→ These may then be delayed because the specialist is not available.
→ Delays can cause serious problems with staff allocation, especially when people are working
on several projects at the same time.
→ If a task (T) is delayed, the people allocated may be assigned to other work (W).
→ To complete this may take longer than the delay but, once assigned, they cannot simply be
reassigned back to the original task, T.
→ This may then lead to further delays in T as they complete W.

4.4 Estimation Techniques


→ Project schedule estimation is difficult.
→ There might be a need to make initial estimates on the basis of a high-level user requirements
definition.
→ Organizations need to make software effort and cost estimates.
→ There are two types of technique that can be used to do this:

1. Experience-based Techniques: The estimate of future effort requirements is based on the
manager’s experience of past projects and the application domain.
2. Algorithmic cost Modeling: In this approach, a formulaic approach is used to compute the
project effort based on estimates of product attributes, such as size, and process characteristics,
such as experience of staff involved.
→ During development planning, estimates become more and more accurate as the project
progresses (Fig 4.8).
→ Experience-based techniques rely on the manager’s experience of past projects and the actual
effort expended in these projects on activities that are related to software development.
→ The difficulty with experience-based techniques is that a new software project may not have
much in common with previous projects.

Fig 4.8: Estimate Uncertainty


4.4.1 Algorithmic Cost Modeling
→ Algorithmic cost modeling uses a mathematical formula to predict project costs based on
estimates of the project size; the type of software being developed; and other team, process, and
product factors.
→ An algorithmic cost model can be built by analyzing the costs and attributes of completed
projects, and finding the closest-fit formula to actual experience.
→ Algorithmic models for estimating effort in a software project are mostly based on a simple
formula:
Effort = A * Size^B * M
→ A is a constant factor which depends on local organizational practices and the type of software
that is developed.
→ Size may be either an assessment of the code size of the software or a functionality estimate
expressed in function or application points.
→ The value of exponent B usually lies between 1 and 1.5. M is a multiplier made by combining
process, product, and development attributes, such as the dependability requirements for the
software and the experience of the development team.
→ All algorithmic models have similar problems:
1. It is often difficult to estimate Size at an early stage in a project, when only the specification is
available. Function-point and application-point estimates are easier to produce than estimates of
code size but are still often inaccurate.
2. The estimates of the factors contributing to B and M are subjective. Estimates vary from one
person to another, depending on their background and experience of the type of system that is
being developed.
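The generic formula above can be sketched directly in code; the parameter values below are illustrative assumptions, not calibrated organizational data:

```python
# Generic algorithmic cost model: Effort = A * Size^B * M
def estimate_effort(size_ksloc, A=2.94, B=1.1, M=1.0):
    """Effort in person-months; size in thousands of lines of code (KSLOC).
    A, B, and M are assumed values used only for illustration."""
    return A * (size_ksloc ** B) * M
```

Because B > 1, effort grows faster than linearly with size: doubling the size of the system more than doubles the estimated effort, reflecting a diseconomy of scale in large projects.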
4.4.2 The COCOMO II Model
→ This is an empirical model that was derived by collecting data from a large number of software
projects.
→ These data were analyzed to discover the formulae that were the best fit to the observations.
→ The COCOMO II model takes into account more modern approaches to software development,
such as rapid development using dynamic languages, development by component composition,
and use of database programming.
→ COCOMO II supports the spiral model of development.
→ The sub-models (Fig 4.9) that are part of the COCOMO II model are:
1. An application-composition model: Models the effort required to develop systems that are
created from reusable components, scripting, or database programming. Software size estimates
are based on application points, and a simple size/productivity formula is used to estimate the
effort required.
2. An early design model: This model is used during early stages of the system design after the
requirements have been established.
3. A reuse model: This model is used to compute the effort required to integrate reusable
components and/or automatically generated program code. It is normally used in conjunction with
the post-architecture model.
4. A post-architecture model: Once the system architecture has been designed, a more accurate
estimate of the software size can be made. Again, this model uses the standard formula for cost
estimation discussed above.

→ The Application-Composition Model
* The application-composition model was introduced into COCOMO II to support the estimation
of effort required for prototyping projects and for projects where the software is developed by
composing existing components.
* It is based on an estimate of weighted application points (sometimes called object points),
divided by a standard estimate of application point productivity.
* The estimate is then adjusted according to the difficulty of developing each application point.
* Application composition usually involves significant software reuse.
* It is almost certain that some of the application points in the system will be implemented using
reusable components, so the size estimate is adjusted to allow for the expected reuse. The final
formula for effort computation for system prototypes is:
PM = (NAP * (1 - %reuse / 100)) / PROD
Where, PM is the effort estimate in person-months.
NAP is the total number of application points in the delivered system.
%reuse is an estimate of the amount of reused code in the development.
PROD is the application-point productivity of the developers.
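A direct transcription of the prototype-effort formula (PROD, the application-point productivity, would in practice come from a standard COCOMO productivity table; the numbers in the test below are illustrative only):

```python
# PM = (NAP * (1 - %reuse/100)) / PROD
def prototype_effort(nap, percent_reuse, prod):
    """nap: application points in the delivered system;
    percent_reuse: expected reuse as a percentage;
    prod: application points delivered per person-month."""
    return (nap * (1 - percent_reuse / 100)) / prod
```

For example, 100 application points with 20% expected reuse and a productivity of 25 points per person-month gives an estimate of about 3.2 person-months.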
→ The Early Design Model
* This model may be used during the early stages of a project, before a detailed architectural
design for the system is available.
* Early design estimates are most useful for option exploration where you need to compare
different ways of implementing the user requirements.
* The estimates produced at this stage are based on the standard formula for algorithmic models,
namely:
Effort = A * Size^B * M
* Boehm proposed that the coefficient A should be 2.94.
* The exponent B reflects the increased effort required as the size of the project increases.
* This can vary from 1.1 to 1.24 depending on the novelty of the project, the development
flexibility, the risk resolution processes used, the cohesion of the development team, and the
process maturity level of the organization.
* This results in an effort computation as follows:
PM = 2.94 * Size^B * M (where B ranges from 1.1 to 1.24)
Where M = PERS * RCPX * RUSE * PDIF * PREX * FCIL * SCED
* The multiplier M is based on seven project and process attributes that increase or decrease the
estimate.

* The attributes used in the early design model are product reliability and complexity (RCPX),
reuse required (RUSE), platform difficulty (PDIF), personnel capability (PERS), personnel
experience (PREX), schedule (SCED), and support facilities (FCIL).
→ The Reuse Model
* COCOMO II considers two types of reused code.
* ‘Black-box’ code is code that can be reused without understanding the code or making changes
to it.
* The development effort for black-box code is taken to be zero.
* ‘White-box’ code has to be adapted to integrate it with new code or other reused components.
* The reuse model also covers automatically generated code: a model (often in UML) is analyzed
and code is generated to implement the objects specified in the model.
* The COCOMO II reuse model includes a formula to estimate the effort required to integrate this
generated code:
PMAuto = (ASLOC * AT / 100) / ATPROD // Estimate for generated code
Where,
ASLOC is total number of lines of reused code, including code that is automatically generated.
AT is percentage of reused code that is automatically generated.
ATPROD is productivity of engineers in integrating such code.
* If there are a total of 20,000 lines of reused source code in a system, 30% of this is
automatically generated, and ATPROD is 2,400, then the effort required to integrate the generated
code is:
(20,000 * 30/100) / 2,400 = 2.5 person-months // Generated code
* The following formula is used to calculate the number of equivalent lines of source code:
ESLOC = ASLOC * AAM
Where, ESLOC is the equivalent number of lines of new source code.
ASLOC is the number of lines of code in the components that have to be changed.
AAM is an Adaptation Adjustment Multiplier (AAM) which adjusts the estimate to reflect the
additional effort required to reuse code.
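The two reuse-model formulas can be written out directly; the test below reproduces the worked example from the text, with ATPROD = 2,400 taken as the integration productivity implied by the arithmetic:

```python
# Effort (person-months) to integrate automatically generated code:
#   PM_auto = (ASLOC * AT/100) / ATPROD
def generated_code_effort(asloc, at_percent, atprod):
    return (asloc * at_percent / 100) / atprod

# Equivalent new-code size for adapted ('white-box') components:
#   ESLOC = ASLOC * AAM
def equivalent_sloc(asloc, aam):
    return asloc * aam
```

With 20,000 reused lines, 30% of which is automatically generated, and a productivity of 2,400 lines per person-month, the integration effort comes out at 2.5 person-months, matching the worked example above.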

4.5 Software Quality


→ Different software quality attributes are as shown below

Fig 4.12: Software quality attributes
→ A manufacturing process involves configuring, setting up, and operating the machines
involved in the process.
→ Once the machines are operating correctly, product quality naturally follows.
→ The quality of the product is measured and the process is changed until the quality level
needed is achieved.
→ Fig 4.13 illustrates this process-based approach to achieving product quality.

Fig 4.13: Process based quality

4.6 Software Standards


→ Software standards are important for three reasons:
1. Standards capture wisdom that is of value to the organization. They are based on knowledge
about the best or most appropriate practice for the company. This knowledge is often only
acquired after a great deal of trial and error.
2. Standards provide a framework for defining what ‘quality’ means in a particular setting. This
depends on setting standards that reflect user expectations for software dependability, usability,
and performance.
3. Standards assist continuity when work carried out by one person is taken up and continued by
another. Standards ensure that all engineers within an organization adopt the same practices.

→ There are two related types of software engineering standard that may be defined and used in
software quality management:
1. Product Standards:
* These apply to the software product being developed.
* They include document standards, such as the structure of requirements documents,
documentation standards, such as a standard comment header for an object class definition, and
coding standards, which define how a programming language should be used.
2. Process Standards:
* These define the processes that should be followed during software development.
* They should encapsulate good development practice.
* Process standards may include definitions of specification, design and validation processes,
process support tools, and a description of the documents that should be written during these
processes.

4.7 Reviews and Inspections


→ Reviews and inspections are QA activities that check the quality of project deliverables.
→ This involves examining the software, its documentation and records of the process to discover
errors and omissions and to see if quality standards have been followed.
→ During a review, a group of people examine the software and its associated documentation,
looking for potential problems and non-conformance with standards.
→ The review team makes informed judgments about the level of quality of a system or project
deliverable.
→ Project managers may then use these assessments to make planning decisions and allocate
resources to the development process.
→ Quality reviews are based on documents that have been produced during the software
development process.
→ The purpose of reviews and inspections is to improve software quality, not to assess the
performance of people in the development team.
→ Reviewing is a public process of error detection, compared with the more private
component-testing process.
4.7.1 The Review Process
→ Review process is structured into 3 phases:
1. Pre-Review Activities:
* These are preparatory activities that are essential for the review to be effective.
* Pre-review activities are concerned with review planning and review preparation.
* Review planning involves setting up a review team, arranging a time and place for the review,
and distributing the documents to be reviewed.
* During review preparation, the team may meet to get an overview of the software to be
reviewed.
2. The Review Meeting:
* During the review meeting, an author of the document or program being reviewed should ‘walk
through’ the document with the review team.
* The review itself should be relatively short—two hours at most. One team member should chair
the review and another should formally record all review decisions and actions to be taken.
3. Post-Review Activities:
* After a review meeting has finished, the issues and problems raised during the review must be
addressed.
* This may involve fixing software bugs, refactoring software so that it conforms to quality
standards, or rewriting documents.
4.7.2 Program Inspections
→ Program inspections are ‘peer reviews’ where team members collaborate to find bugs in the
program that is being developed.
→ Inspections may be part of the software verification and validation processes.
→ They complement testing as they do not require the program to be executed.
→ This means that incomplete versions of the system can be verified and that representations
such as UML models can be checked.
→ During an inspection, a checklist of common programming errors is often used to focus the
search for bugs.
→ This checklist may be based on examples from books or from knowledge of defects that are
common in a particular application domain.
→ Possible checks are as shown in fig 4.14.

Fig 4.14: An inspection checklist

4.8 Software Measurement and Metrics


→ Software measurement is concerned with deriving a numeric value or profile for an attribute
of a software component, system, or process.
→ The long-term goal of software measurement is to use measurement in place of reviews to
make judgments about software quality.
→ Using software measurement, a system could ideally be assessed using a range of metrics and,
from these measurements, a value for the quality of the system could be inferred.
→ Software metric is a characteristic of a software system, system documentation, or
development process that can be objectively measured.
→ Examples of metrics include the size of a product in lines of code.
→ Software metrics may be either control metrics or predictor metrics.
→ Control metrics support process management, and predictor metrics helps to predict
characteristics of the software.

→ Control metrics are usually associated with software processes.
→ Examples of control or process metrics are the average effort and the time required to repair
reported defects.
→ Predictor metrics are associated with the software itself and are sometimes known as ‘product
metrics’.
→ Both control and predictor metrics may influence management decision making, as shown in
fig 4.15.

Fig 4.15: Predictor and control measurements


→ There are two ways in which measurements of a software system may be used:
1. To assign a value to system quality attributes:
By measuring the characteristics of system components, such as their cyclomatic complexity, and
then aggregating these measurements, you can assess system quality attributes, such as
maintainability.
2. To identify the system components whose quality is substandard:
Measurements can identify individual components with characteristics that deviate from the
norm.
For example, components can be measured to discover those with the highest complexity.
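Cyclomatic complexity, mentioned above, can be approximated by counting decision points in a component and adding one. A minimal sketch using Python's standard ast module (the sample function being measured is hypothetical):

```python
import ast

def cyclomatic_complexity(source):
    """Approximate McCabe complexity: one plus the number of decision points."""
    tree = ast.parse(source)
    decision_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler)
    decisions = sum(isinstance(node, decision_nodes)
                    for node in ast.walk(tree))
    return decisions + 1

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2:
            pass
    return "done"
"""
print(cyclomatic_complexity(sample))  # 2 ifs + 1 for = 3 decisions -> V(G) = 4
```

Components whose value deviates sharply from the norm are the candidates for closer quality inspection.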
→ Fig 4.16 shows some external software quality attributes and internal attributes that could,
intuitively, be related to them.
→ The diagram suggests that there may be relationships between external and internal attributes,
but it does not say how these attributes are related.
→ If the measure of the internal attribute is to be a useful predictor of the external software
characteristic, three conditions must hold

1. The internal attribute must be measured accurately. This is not always straightforward and it
may require special-purpose tools to make the measurements.
2. A relationship must exist between the attribute that can be measured and the external quality
attribute that is of interest. That is, the value of the quality attribute must be related, in some way,
to the value of the attribute that can be measured.
3. This relationship between the internal and external attributes must be understood, validated,
and expressed in terms of a formula or model. Model formulation involves identifying the
functional form of the model (linear, exponential, etc.) by analysis of collected data, identifying
the parameters that are to be included in the model, and calibrating these parameters using
existing data.
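Condition 3 amounts to fitting a model to historical data. A least-squares sketch in pure Python, with an invented data set relating component complexity (internal attribute) to repair effort (external attribute):

```python
# Hypothetical historical data: (component complexity, repair effort in hours).
xs = [5, 10, 15, 20]
ys = [2, 4, 6, 8]

# Calibrate a linear model y = slope*x + intercept by least squares.
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# The calibrated model can now predict the external attribute (repair effort)
# from the measurable internal attribute (complexity).
def predict(x):
    return slope * x + intercept
```

With real data the fit would be noisy and would need validation against projects not used for calibration, which is exactly the point of condition 3.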

Fig 4.16: Relationships between internal and external software


4.8.1 Product Metrics
→ Product metrics are predictor metrics that are used to measure internal attributes of a software
system.
→ Examples of product metrics include the system size, measured in lines of code, or the number
of methods associated with each object class.
→ Product metrics fall into two classes:
1. Dynamic metrics, which are collected by measurements made of a program in execution. These
metrics can be collected during system testing or after the system has gone into use. An example
might be the number of bug reports or the time taken to complete a computation.

2. Static metrics, which are collected by measurements made of representations of the system,
such as the design, program, or documentation. Examples of static metrics are the code size and
the average length of identifiers used.
→ The metrics in fig 4.17 are applicable to any program, but more specific object-oriented (OO)
metrics have also been proposed.
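Two of the static metrics mentioned above, code size and average identifier length, can be collected directly from a source-code representation. The sketch below uses an invented code fragment and a simple token-based identifier extractor; both are illustrative assumptions rather than a standard measurement tool.

```python
# Sketch: collecting two static product metrics from a source-code
# representation -- code size (non-blank lines) and average identifier
# length. The sample source below is a made-up fragment.
import re

source = """
def total_price(items):
    running_sum = 0
    for item in items:
        running_sum += item
    return running_sum
"""

# Code size: count the non-blank lines of the representation.
code_size = sum(1 for line in source.splitlines() if line.strip())

# Average identifier length: extract word-like tokens, drop keywords.
keywords = {"def", "for", "in", "return"}
identifiers = [t for t in re.findall(r"[A-Za-z_]\w*", source)
               if t not in keywords]
avg_ident_len = sum(len(i) for i in identifiers) / len(identifiers)

print(f"code size (non-blank lines): {code_size}")
print(f"average identifier length: {avg_ident_len:.1f}")
```

In practice such measurements are made by automated tools over the whole code base, but the principle is the same: static metrics are computed from the representation itself, without executing the program.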

Fig 4.17: Static software product metrics


→ Fig 4.18 summarizes Chidamber and Kemerer’s suite of object-oriented metrics.
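Two of the Chidamber and Kemerer metrics, weighted methods per class (WMC) and depth of inheritance tree (DIT), can be sketched for Python classes using introspection. The classes below are invented, all method weights are taken as 1, and the DIT convention of excluding Python's implicit `object` base is an assumption of this sketch.

```python
# Sketch: computing two CK object-oriented metrics for a Python class --
# weighted methods per class (WMC, with all method weights taken as 1)
# and depth of inheritance tree (DIT). The example classes are invented.
import inspect

class Vehicle:
    def start(self): pass
    def stop(self): pass

class Car(Vehicle):
    def honk(self): pass

def wmc(cls):
    """Number of methods defined directly in the class (unit weights)."""
    return len([name for name, _ in inspect.getmembers(cls, inspect.isfunction)
                if name in cls.__dict__])

def dit(cls):
    """Depth of inheritance tree: steps up the hierarchy, excluding
    the class itself and Python's implicit `object` base."""
    return len(cls.__mro__) - 2

print(f"WMC(Car) = {wmc(Car)}, DIT(Car) = {dit(Car)}")
```

Here `Car` defines one method of its own (WMC = 1) and sits one level below `Vehicle` (DIT = 1); a deep inheritance tree or a high method count is a warning sign worth investigating, not proof of poor quality.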



4.8.2 Software Component Analysis
→ A measurement process that may be part of a software quality assessment process is shown in
fig 4.19.
→ Each system component can be analyzed separately using a range of metrics.
→ The values of these metrics may then be compared for different components and, perhaps,
with historical measurement data collected on previous projects.
→ The key stages in this component measurement process are:
1. Choose measurements to be made:
* The questions that the measurement is intended to answer should be formulated and the
measurements required to answer these questions defined.
* Measurements that are not directly relevant to these questions need not be collected.
2. Select components to be assessed:
* You may not need to assess metric values for all of the components in a software system.
* Sometimes, a representative selection of components can be chosen for measurement, allowing
you to make an overall assessment of system quality.
3. Measure component characteristics:
* The selected components are measured and the associated metric values computed.
* This normally involves processing the component representation (design, code, etc.) using an
automated data collection tool.
* This tool may be specially written or may be a feature of design tools that are already in use.
4. Identify anomalous measurements:
* After the component measurements have been made, they can be compared with each other and
with previous measurements that have been recorded in a measurement database.
5. Analyze anomalous components:
* When the components that have anomalous values for the chosen metrics have been identified,
you should examine them to decide whether or not these anomalous metric values mean that the
quality of the component is compromised.
* An anomalous metric value for complexity (say) does not necessarily mean that the component
is of poor quality; there may be some other reason for the high value, so it may not indicate a
component quality problem.
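Steps 4 and 5 above can be sketched as a comparison of each component's metric value against historical data. The complexity values, component names, and the two-standard-deviation threshold below are all illustrative assumptions; the threshold only flags candidates for closer analysis, it does not by itself establish a quality problem.

```python
# Sketch: flagging anomalous component measurements by comparing each
# value against historical data from previous projects. Components whose
# metric lies more than two standard deviations from the historical mean
# are flagged for further analysis. All numbers are hypothetical.
import statistics

historical_complexity = [8, 10, 9, 11, 10, 9, 12, 10]   # measurement database
components = {"parser": 10, "scheduler": 11, "billing": 25}

mean = statistics.mean(historical_complexity)
stdev = statistics.stdev(historical_complexity)

for name, value in components.items():
    z = (value - mean) / stdev          # distance from the historical norm
    status = "ANOMALOUS - examine further" if abs(z) > 2 else "within norms"
    print(f"{name}: complexity={value}, z={z:.1f} -> {status}")
```

With these numbers only `billing` is flagged, which triggers the step-5 analysis: an engineer then decides whether the high complexity reflects a real quality problem or has some benign explanation.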

Fig 4.19: The process of product measurement


4.8.3 Measurement Ambiguity
→ When you collect quantitative data about software and software processes, the data must be
analyzed in order to be understood.
→ It is easy to misinterpret data and to make inferences that are incorrect.
→ For example, suppose you collect data about the change requests made by the users of a
system. There are several reasons why users might make change requests:
1. The software is not good enough and does not do what customers want it to do. They therefore
request changes to deliver the functionality that they require.
2. Alternatively, the software may be very good and so it is widely and heavily used. Change
requests may be generated because there are many software users who creatively think of new
things that could be done with the software.

