
Unit-5

TESTING AND MANAGEMENT ISSUES

SYLLABUS:

Quality issues - Non-execution-based testing - Execution-based testing - Cost-benefit analysis - Risk
analysis - Improving the process - Metrics - CPM/PERT - Choice of programming language - Reuse
case studies - Portability - Planning and estimating duration and cost - Testing the project
management plan - Maintenance and the object-oriented paradigm - CASE tools for maintenance.

INTRODUCTION

 Traditional life-cycle models usually include a separate testing phase, after
implementation and before maintenance
o This cannot lead to high-quality information systems
 Testing is an integral component of the information system development process
o An activity that must be carried out throughout the life cycle
 Continual testing carried out by the development team while it performs each workflow
is essential,
o In addition to more methodical testing at the end of each workflow
 Verification
o The process of determining whether a specific workflow has been correctly
carried out
o This takes place at the end of each workflow
 Validation
o The intensive evaluation process that takes place just before the information
system is delivered to the client
o Its purpose is to determine whether the information system as a whole satisfies its
specifications
 The words verification and validation are used as little as possible in this book
o The phrase verification and validation (or V & V) implies that the process of
checking a workflow can wait until the end of that workflow
o On the contrary, this checking must be carried out in parallel with all information
system development and maintenance activities
 There are two types of testing
o Execution-based testing of an artifact means running (“executing”) the artifact on
a computer and checking the output
 However, a written specification, for example, cannot be run on a computer
o The only way to check it is to read through it as carefully as possible
o This type of checking is termed nonexecution-based testing
 (Unfortunately, the term verification is sometimes also used to mean
nonexecution-based testing. This can also cause confusion.)
 Clearly, computer code can be tested both ways
o It can be executed on a computer, or
o It can be carefully reviewed
 Reviewing code is at least as good a method of testing code as executing it on a computer

5.1 QUALITY ISSUES

5.1.1 Quality Assurance

 The quality of an information system is the extent to which it satisfies its specifications
 The term quality does not imply “excellence” in the information systems context
o Excellence is generally an order of magnitude more than what is possible with our
technology today
 The task of every information technology professional is to ensure a high-quality
information system at all times
o However, the information system quality assurance group has additional
responsibilities with regard to information system quality

5.1.2 Quality Assurance Terminology

 A fault is the standard IEEE terminology for what is popularly called a “bug”
 A failure is the observed incorrect behavior of the information system as a consequence
of the fault
 An error is the mistake made by the programmer
 In other words,
o A programmer makes an error that results in a fault in the information system that
is observed as a failure
5.1.3 Managerial Independence

 It is important to have managerial independence between
o The development team and the quality assurance group
 Serious faults are often found in an information system as the delivery deadline
approaches
o The information system can be released on time but full of faults
 The client then struggles with a faulty information system or
o The developers can fix the information system but deliver it late
 Either way the client will lose confidence in the information system development
organization
 A more senior manager should decide when to deliver the information system
o Both the manager responsible for development and the quality assurance manager
should report to this more senior manager
 The senior manager can decide which of the two choices would be in the best interest of
both the development organization and the client
 A separate quality assurance group appears to add greatly to the cost of information
system development
o The additional cost is one manager to lead the quality assurance group
 The advantage is a quality assurance group consisting of independent specialists
 In a development organization with fewer than six employees, a separate quality
assurance group is not feasible
o Ensure that each artifact is checked by someone other than the person responsible
for producing that artifact

5.2 NON-EXECUTION-BASED TESTING

 When we give a document we have prepared to someone else to check


o He or she immediately finds a mistake that we did not see
 It is therefore a bad idea if the person who draws up a document is the only one who
reviews it
o The review task must be assigned to someone other than the author of the
document
o Better still, it should be assigned to a team
 This is the principle underlying the inspection
o A review technique used to check artifacts of all kinds
o In this form of non execution-based testing, an artifact is carefully checked by a
team of information technology professionals with a broad range of skills
 Advantages:
o The different skills of the participants increase the chances of finding a fault
o A team often generates a synergistic effect
 When people work together as a team, the result is often more effective than
if the team members work independently as individuals

5.2.1 Principles of Inspections

 An inspection team should consist of 4 to 6 individuals


o Example: An analysis workflow inspection team
 At least one systems analyst
 The manager of the analysis team
 A representative of the next team (design team)
 A client representative
 A representative of the quality assurance group
 An inspection team should be chaired by the quality assurance representative
o He or she has the most to lose if the inspection is performed poorly and faults slip
through
 The inspection leader guides the other members of the team through the artifact to
uncover any faults
o The team does not correct faults
o It records them for later correction
 There are four reasons for this:
o A correction produced by a committee is likely to be lower in quality than a
correction produced by a specialist
o A correction produced by a team of (say) five individuals will take at least as
much time as a correction produced by one person and, therefore, costs five times
as much
o Not all items flagged as faults actually are incorrect
 It is better for possible faults to be examined carefully at a later time and
then corrected only if there really is a problem
o There is not enough time to both detect and correct faults
 No inspection should last longer than 2 hours
 During an inspection, a person responsible for the artifact walks the participants through
that artifact
o Reviewers interrupt when they think they detect a fault

o However, the majority of faults at an inspection are spontaneously detected by the
presenter
 The primary task of the inspection leader is
o To encourage questions about the artifact being inspected and promote discussion
 It is absolutely essential that the inspection not be used as a means of evaluating the
participants
 If that happens
o The inspection degenerates into a point-scoring session
o Faults are not detected
 The sole aim of an inspection is to highlight faults
o Performance evaluations of participants should not be based on the quality of the
artifact being inspected
o If this happens, the participant will try to prevent any faults coming to light
 The manager who is responsible for the artifact being reviewed should be a member of
the inspection team
o This manager should not be responsible for evaluating members of the inspection
team (and particularly the presenter)
o If this happens, the fault detection capabilities of the team will be fatally
weakened.

5.2.2 How Inspections are performed

 An inspection consists of five steps:
 In the first step, an overview of the artifact to be inspected is given
o Then the artifact is distributed to the team members
 In the second step, preparation, the participants try to understand the artifact in detail
o Lists of fault types found in recent inspections ranked by frequency help team
members concentrate on areas where the most faults have occurred
 In the inspection, one participant walks through the artifact with the inspection team
o Fault finding now commences
o The purpose is to find and document faults, not to correct them
o Within one day the leader of the inspection team (the moderator) produces a
written report of the inspection.
 The fourth stage is the rework
o The individual responsible for that artifact resolves all faults and problems noted
in the written report
 In the follow-up, the moderator ensures that every single issue raised has been resolved
satisfactorily
o By either fixing the artifact or
o Clarifying items incorrectly flagged as faults
o If more than 5 percent of the material inspected has been reworked, the team must
reconvene for a 100 percent re-inspection
 Input to the inspection:
o The checklist of potential faults for artifacts of that type
 Output from the inspection
o The record of fault statistics
 Recorded by severity (major or minor), and
 Fault type
 The fault statistics can be used in a number of different ways

5.2.3 Use of Fault Statistics

 The number of faults observed can be compared with averages of faults detected in
those same artifact types in comparable information systems
o This gives management an early warning that something is wrong, and
o Allows timely corrective action to be taken
 If a disproportionate number of faults of one type are observed, management can take
corrective action
 If the detailed design of a module reveals far more faults than in any other module
o That module should be redesigned from scratch
 Information regarding the number and types of faults detected at a detailed design
inspection will aid the team performing the code inspection of that module at a later stage
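As a rough sketch of how such fault statistics might be tallied and compared (the fault records, type names, and the 40 percent threshold below are all invented for illustration):

```python
from collections import Counter

# Hypothetical fault records from one design inspection: (severity, fault type)
faults = [
    ("major", "interface"), ("minor", "logic"), ("major", "interface"),
    ("minor", "documentation"), ("major", "interface"), ("minor", "logic"),
]

# Tally by type and by severity, as the moderator's written report might
by_type = Counter(ftype for _, ftype in faults)
by_severity = Counter(sev for sev, _ in faults)

# Flag any fault type contributing a disproportionate share of the total
# (the 40 percent threshold is an assumption, not a standard figure)
flagged = [t for t, n in by_type.items() if n / len(faults) > 0.40]
print(dict(by_type))   # {'interface': 3, 'logic': 2, 'documentation': 1}
print(flagged)         # ['interface'] -- management can take corrective action
```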

5.3 EXECUTION-BASED TESTING

 The first Unified Process workflow is the requirements workflow. The artifacts of this
workflow are diagrams and documents. Accordingly, the requirements artifacts have to
undergo non-execution-based testing.
 Next comes the analysis workflow. Again the artifacts of this workflow are diagrams and
documents, and again non-execution-based testing is the only alternative.
 Next is the design workflow
o The artifacts of this workflow are diagrams and documents
o Testing again has to be non-execution-based
 Why then do systems analysts need to know about execution-based testing?

5.3.1 The Relevance of Execution-Based Testing

 Not all information systems are developed from scratch


 The client’s needs may be met at lower cost by a COTS package
 In order to provide the client with adequate information about a COTS package,
o The systems analyst has to know about execution-based testing

5.3.2 Principles of Execution-Based Testing

 Claim:
o Testing is a demonstration that faults are not present
 Fact:
o Execution-based testing can be used to show the presence of faults
o It can never be used to show the absence of faults
 Run an information system with a specific set of test data
o If the output is wrong then the information system definitely contains a fault, but
o If the output is correct, then there still may be a fault in the information system
 All that is known from that test is that the information system runs
correctly on that specific set of test data
 If test data are chosen cleverly
o Faults will be highlighted
 If test data are chosen poorly
o Nothing will be learned about the information system
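A deliberately faulty function (a hypothetical sketch, not from the text) makes the point concrete: on one set of test data the output is correct and nothing is learned; on another, the fault is highlighted.

```python
def average(values):
    # Faulty implementation: // is integer division, so the result is truncated
    return sum(values) // len(values)

# Poorly chosen test data: the output happens to be correct, so all that is
# known is that the function runs correctly on this specific set
result_poor = average([2, 4, 6])     # 4, which is correct

# Cleverly chosen test data highlight the fault
result_clever = average([1, 2])      # 1, but the true average is 1.5
print(result_poor, result_clever)
```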

5.3.3 The Two Basic Types of Test Cases

 Black-box test cases:
o Drawn up by looking at only the specifications
 The code is treated as a “black box” (in the engineering sense)
 “We do not look inside the box”
 Glass-box test cases:
o Drawn up by carefully examining the code and finding a set of test cases that,
when executed, will together ensure that every line of code is executed at least
once
 These are called glass-box test cases because now we look inside the
“box” and examine the code itself to draw up the test cases

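A minimal sketch of the distinction (the discount rule and the figures are invented): black-box cases come only from the specification, while glass-box cases are drawn up by reading the code so that every line executes at least once.

```python
def order_status(total):
    # Hypothetical specification: orders of 100 or more earn a discount
    if total >= 100:
        return "discount"       # executed only when total >= 100
    return "no discount"        # executed only when total < 100

# Glass-box test cases: one case per path, so every line of code runs
glass_box_cases = [(150, "discount"), (50, "no discount")]
for total, expected in glass_box_cases:
    assert order_status(total) == expected

# A black-box case probes the boundary stated in the specification,
# without looking inside the "box"
assert order_status(100) == "discount"
```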
5.3.4 What Execution-Based Testing Should Test

 Correctness is by no means enough


 Five qualities need to be tested:
o Utility
o Reliability
o Robustness
o Performance
o Correctness

 Utility is the measure of the extent to which an information system meets the user’s needs
o Is it easy to use?
o Does it perform useful functions?
o Is it cost effective?
 Reliability is a measure of the frequency and criticality of information system failure
o How often does the information system fail?
 (Mean time between failures)
o How bad are the effects of that failure?
o How long does it take to repair the system?
 (Mean time to repair)
o How long does it take to repair the results of the failure?
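Both reliability measures are simple means over a failure log; a sketch with invented figures (all hours are assumptions):

```python
# Hypothetical failure log: (hours of operation before each failure,
# hours taken to repair the system afterward)
failure_log = [(120.0, 2.0), (200.0, 1.0), (160.0, 3.0)]

up_times = [up for up, _ in failure_log]
repair_times = [rep for _, rep in failure_log]

mtbf = sum(up_times) / len(up_times)          # mean time between failures
mttr = sum(repair_times) / len(repair_times)  # mean time to repair

# Steady-state availability follows directly from the two means
availability = mtbf / (mtbf + mttr)
print(mtbf, mttr, round(availability, 3))     # 160.0 2.0 0.988
```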
 Robustness is a measure of a number of factors including
o The range of operating conditions
o The possibility of unacceptable results with valid input, and
o The acceptability of effects when the information system is given invalid input
 Performance constraints must be met
o Are average response times met?
 (Hard real-time constraints rarely apply to information systems)
 Correctness: An information system is correct if it satisfies its specifications
 Every information system has to be correct
 But in addition, it must pass execution-based testing of
o Utility
o Reliability
o Robustness, and
o Performance

5.4 COST BENEFIT ANALYSIS
 An information system will be constructed only if it is cost-effective to do so
 A popular technique for determining this
o Compare estimated future benefits against projected future costs
o This is termed cost–benefit analysis
 Tangible benefits are easy to measure, but
o Intangible benefits can be hard to quantify directly
 A way of assigning a dollar value to intangible benefits is to make assumptions
o In the absence of data, this is the best that can be done
o Better assumptions mean better data and more accurate calculation of intangible
benefits
 The same technique can be used for intangible costs
 Cost–benefit analysis is a fundamental technique for deciding
o Whether a client should computerize his or her business and, if so
o In what way
 For each strategy, costs and benefits are computed
o Select the strategy for which the difference between benefits and costs is the
largest
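The selection rule reduces to comparing benefit-minus-cost differences; the strategies and dollar figures below are invented assumptions:

```python
# Hypothetical computerization strategies with estimated future benefits
# and projected future costs (all dollar figures are assumed)
strategies = {
    "in-house development": {"benefits": 500_000, "costs": 350_000},
    "COTS package":         {"benefits": 400_000, "costs": 180_000},
    "do nothing":           {"benefits": 0,       "costs": 0},
}

def net_gain(item):
    # Difference between estimated benefits and projected costs
    name, figures = item
    return figures["benefits"] - figures["costs"]

# Select the strategy for which the difference is the largest
best_strategy, _ = max(strategies.items(), key=net_gain)
print(best_strategy)   # COTS package (net gain 220,000 vs. 150,000 and 0)
```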

5.5 RISK ANALYSIS

 A risk is an event or condition that can cause the delivery of an information system
to be
o Canceled
o Delayed
o Over budget, or
o Not to meet its requirements
 Risks include:
o The project may not meet its time constraints
o The moving target problem can result in time and cost overruns
o The delivered information system may not meet the client’s real needs
o The developers may not have the needed expertise
o The hardware may not be delivered in time
o The CASE tools may not be available, or may not have all the needed
functionality
o A COTS package with the same functionality may be put on the market while the
project is underway
 The first step
o List the risks in a project
 Risk management is the process of
o Determining what the risks are, and then
o Attempting to mitigate them
 Minimize their impact
 Example 1:
o To mitigate the risk that part of a proposed information system will not work
 Build a proof-of-concept prototype
 Example 2:
o To mitigate the risk that the development team will not have the necessary skills
 Provide training

 Risks are like diseases
o Sometimes they go away spontaneously
o They often get better or worse without intervention
o Minor ones merely need to be watched, but
o Major ones need to be cured (mitigated)
 A risk list must therefore be maintained
 For each item on the list, the following items are recorded:
o A description of the risk
o The priority of the risk (critical, significant, or routine)
 The priority can change, in either direction
o The way the project is impacted by the risk
o The name of the person responsible for monitoring the risk
o The action to be taken if the risk materializes
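A risk-list entry can be modeled as a small record holding exactly those five items (the field names and people are illustrative; the two risks are taken from the list above):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str    # a description of the risk
    priority: str       # "critical", "significant", or "routine"
    impact: str         # the way the project is impacted by the risk
    monitor: str        # person responsible for monitoring the risk
    mitigation: str     # action to be taken if the risk materializes

risk_list = [
    Risk("Hardware may not be delivered in time", "critical",
         "Delivery date slips", "J. Ortiz", "Line up a second supplier"),
    Risk("CASE tools may lack needed functionality", "routine",
         "Manual workarounds required", "A. Patel", "Evaluate alternatives"),
]

# Priorities can change in either direction as the project proceeds
risk_list[1].priority = "significant"

critical_risks = [r.description for r in risk_list if r.priority == "critical"]
print(critical_risks)   # ['Hardware may not be delivered in time']
```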
 Risk analysis is integral to the Unified Process
 During the inception phase
o The risk list is drawn up
o Attempts are made to mitigate the critical risks
o The use cases are prioritized according to their risks
 During the elaboration phase
o The risks are monitored
o The risk list is updated
 Particularly with regard to priorities
 During the construction phase
o The risk list is again updated
 During the transition phase
o Attempts are made to find any previously unidentified risks
 Risk analysis does not terminate when the product is delivered to the client
o The risk list must be maintained through the entire life cycle of the product

5.6 IMPROVING THE PROCESS

 The global economy is critically dependent on computers


o And hence on information systems
 Many governments are concerned about the information system development process
o This includes the activities, techniques, and tools used to produce information
systems
 The Department of Defense founded the Software Engineering Institute at Carnegie
Mellon University in Pittsburgh
 A major success of the Software Engineering Institute is the Capability Maturity Model
(CMM)

5.6.1 Capability Maturity Models

 The capability maturity models of the Software Engineering Institute


o A related group of strategies for improving the process for developing
information systems
 (Maturity is a measure of the goodness of the process itself)
 The Software Engineering Institute has developed CMMs for

o Software (SW–CMM)
o Management of human resources (P–CMM; the P is for “people”)
o Systems engineering (SE–CMM)
o Integrated product development (IPD–CMM), and
o Software acquisition (SA–CMM)
 In 1997 it was decided to develop a single integrated framework for maturity models
o Capability maturity model integration (CMMI)

 SW–CMM is presented here


o SW–CMM incorporates technical and managerial aspects of the development of
an information system
 Underlying principle:
o The use of new techniques by itself cannot result in increased productivity and
profitability
o The problems are caused by the way the process is managed
o Improving the management of the process will result in
 Improvements in technique
 Better-quality information systems, and
 Fewer projects with time and cost overruns
 Improvements in the process cannot occur overnight
o The SW–CMM induces change incrementally

 Five different levels of maturity are defined


o An organization advances slowly toward the higher levels of process maturity

Maturity Level 1: Initial Level

 No information system management practices are in place


o Instead, everything is done ad hoc
o Most activities are responses to crises
o The process is unpredictable
o It is impossible to make accurate time and cost estimates
 Most development organizations worldwide are still at level 1

Maturity Level 2: Repeatable Level

 Basic information system project management practices are in place


o Planning and management techniques are based on experience
 Hence the name “repeatable”
o Measurements are taken
 The essential first step in achieving an adequate process
o Without measurements, it is impossible to detect problems before they get out of
hand
o Also, measurements taken during one project can be used to draw up realistic
schedules for future projects

Maturity Level 3: Defined Level

 The process for information system development is fully documented


o Managerial and technical aspects of the process are clearly defined

 Continual efforts are made to improve the process
 At this level, it makes sense to introduce new technology such as CASE
o In contrast, “high tech” only makes the crisis-driven level 1 process even more
chaotic
 A number of organizations have attained maturity levels 2 and 3
o Not many have reached levels 4 or 5
 For most companies the two highest levels are targets for the future

Maturity Level 4: Managed Level

 A level 4 organization sets quality and productivity goals for each project
o These two quantities are measured continually
o Action is taken if there are deviations from the goals

Maturity Level 5: Optimizing Level

 The goal of a level 5 organization is continuous process improvement


o Statistical quality and process control techniques are used to guide the
organization
 The knowledge gained from each project is utilized in future projects
o The process thus incorporates a positive feedback loop
 The five maturity levels

Figure: Maturity Levels

 To improve its process, an organization


o Attempts to understand its current process
o Formulates the intended process
o Determines and ranks in priority actions that will achieve this process
improvement
o Draws up and executes a plan to accomplish this improvement
 This series of steps then is repeated
o The organization successively improves its process
 Experience with the capability maturity model
o Advancing a complete maturity level usually takes from 18 months to 3 years
 But moving from level 1 to level 2 can take up to 5 years
 It is difficult to instill a methodical approach in a level 1 organization
 For each maturity level there are key process areas that an organization should target to
reach the next maturity level
 Example: The key process areas for level 2 include
o Configuration control

o Quality assurance
o Project planning
o Project tracking
o Requirements management
 These areas cover the basic elements of information system management:
o Determine the client’s needs (requirements management)
o Draw up a plan (project planning)
o Monitor deviations from that plan (project tracking)
o Control the pieces that make up the information system (configuration
management), and
o Ensure that the information system is fault free (quality assurance)
 A level 5 organization is far more advanced than a level 2 organization
 Example:
o A level 2 organization is concerned with quality assurance, that is, with detecting
and correcting faults
o The process of a level 5 organization incorporates fault prevention, that is,
ensuring there are no faults in the first place
 An original goal of the CMM program was to raise the quality of defense software
o By awarding contracts to only those defense contractors who demonstrate a
mature process
 The U.S. Air Force stipulated conformance to SW–CMM level 3 by 1998
o The Department of Defense as a whole subsequently issued a similar directive
 Today, the SW–CMM program is being implemented by development organizations
worldwide

5.7 METRICS

 Measurements (or metrics) are essential to detect problems early in the information
system process
o Before they get out of hand
 Metrics serve as an early warning system for potential problems
 A wide variety of metrics can be used, such as
o LOC (lines of code) measurements
o Mean time between failures
o Effort in person-months
o Personnel turnover
o Cost
o Efficiency of fault detection
 Gathering metrics data costs money
o Even when data gathering is automated, the CASE tool that accumulates the
information is not free
o Interpreting the output from the tool consumes human resources
 Numerous different metrics have been put forward
o Which metrics should an information system organization measure?
 There are five essential, fundamental metrics:
o Size (in lines of code, or better)
o Cost (in dollars)
o Duration (in months)
o Effort (in person-months), and
o Quality (number of faults detected)
 Each metric must be measured by phase or workflow
 Data from these fundamental metrics can highlight problems such as
o High fault rates during the design workflow
o Code output that is well below the industry average
 A strategy to correct these problems is then put into place
 To monitor the success of this strategy, more detailed metrics can be introduced
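A sketch of how the fundamental metrics, measured by workflow, can highlight such problems (the fault counts, averages, and 1.5x threshold are all invented):

```python
# Hypothetical faults detected per workflow on the current project
faults_by_workflow = {
    "requirements": 4, "analysis": 7, "design": 21, "implementation": 9,
}

# Hypothetical averages for comparable information systems
comparable_average = {
    "requirements": 5, "analysis": 8, "design": 10, "implementation": 11,
}

# Flag any workflow whose fault count is well above the comparable average
# (the 1.5x threshold is an assumption, not an industry figure)
problem_areas = [
    wf for wf, n in faults_by_workflow.items()
    if n > 1.5 * comparable_average[wf]
]
print(problem_areas)   # ['design'] -- a high fault rate during the design workflow
```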

5.8 CPM/PERT

 More general types of management information are also needed
 Example:
o Critical path method (CPM), otherwise known as
o Program evaluation and review technique (PERT)
 When developing an information system
o Many hundreds of activities have to be performed
o Some activities have to precede others
o Other activities can be carried on in parallel
 Example:
o Two activities are started at the same time
o They can be performed in parallel
o Both have to be completed before proceeding with the project as a whole
o The first takes 12 days, but the second needs only 3 days
 The first activity is critical
o Any delay in the first activity will cause the project as a whole to be delayed
 However, the second activity can be delayed up to 9 days without adversely impacting
the project
o There is a slack of 9 days associated with the second activity
 When using PERT/CPM, the manager inputs
o The activities
o Their estimated durations
o Any precedence relations
 The PERT/CPM package will then
o Determine which of the hundreds of activities are critical
o Compute the slack for each of the noncritical activities
o Print out a PERT chart showing the precedence relationships, and
o Highlight the critical path
 The path through the chart that consists of critical activities only
 If any activity on the critical path is delayed, then so is the project as a
whole
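The computation such a package performs can be sketched on the two-activity example above; a dummy zero-length "finish" milestone is added so that both activities precede project completion (a minimal sketch, not a real PERT tool):

```python
# Activities map name -> (duration in days, list of predecessor names),
# listed in topological order. "finish" is a dummy zero-length milestone.
activities = {
    "first":  (12, []),
    "second": (3,  []),
    "finish": (0,  ["first", "second"]),
}

# Forward pass: earliest finish time of each activity
earliest = {}
def earliest_finish(name):
    if name not in earliest:
        dur, preds = activities[name]
        earliest[name] = dur + max((earliest_finish(p) for p in preds), default=0)
    return earliest[name]

project_length = max(earliest_finish(a) for a in activities)   # 12 days

# Backward pass (in reverse topological order): latest finish time
# that does not delay the project as a whole
latest = {a: project_length for a in activities}
for name, (dur, preds) in reversed(list(activities.items())):
    for p in preds:
        latest[p] = min(latest[p], latest[name] - dur)

slack = {a: latest[a] - earliest[a] for a in activities}
critical_path = [a for a in activities if slack[a] == 0]

print(slack)           # {'first': 0, 'second': 9, 'finish': 0}
print(critical_path)   # ['first', 'finish'] -- any delay here delays the project
```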
Figure: Simple PERT chart
 There are 12 activities and 9 milestones
O a milestone is an event used to measure progress, such as the completion of an
activity or set of activities
 Starting with milestone A, activities AB, AC, and AD can be started in parallel
 Activity FJ cannot start until both BF and CF are finished
 The project as a whole is complete when activities HJ, FJ, and GJ are all complete
 Completing the whole project will take at least 25 days
 The critical path is ACGJ
o If any one of the critical activities AC, CG, or GJ is delayed in any way, the
project as a whole will be delayed
 However, if activity AD is delayed by up to 15 days, the project as a whole will not be
delayed
o There is a slack of 15 days associated with activity AD
 Now suppose that activity AD is delayed by 15 days
 The situation at day 17:
o Actual durations of completed activities are underlined

 There are now two critical paths


o Activity DG has become critical
 Simply printing out a PERT chart showing the expected durations is useless
o Data regarding actual durations must be input continually
o The PERT chart must be updated
 How is the PERT data continually updated?
o The task is too large for humans—a CASE tool is needed
o All the information system development tools must be integrated
o Information of all kinds, including
 Source code
 Designs
 Documentation
 Contracts, and
 Management information
 must be stored in a system development database
 The CASE tool that generates the PERT chart then obtains its information directly from
the database
o Thus, what is needed is a CASE environment

5.9 CHOICE OF PROGRAMMING LANGUAGE

 In what language should the information system be implemented?


o This is usually specified in the contract
 But what if the contract specifies
o The product is to be implemented in the “most suitable” programming language
 What language should be chosen?
 Example
o QQQ Corporation has been writing COBOL programs for over 25 years
o Over 200 software staff members, all with COBOL expertise
o What is “the most suitable” programming language?
 Obviously COBOL
 What happens when new language (Java, say) is introduced
o New hires are needed
o Existing professionals must be retrained
o Future products are written in Java
o However, existing COBOL products must be maintained
o Expensive software is needed, and the hardware to run it
o 100s of person-years of expertise with COBOL are wasted
 There are now two classes of programmers
o COBOL maintainers (despised)
o Java developers (paid more)
 The solution: OO-COBOL
o Object-oriented COBOL (the 2002 COBOL standard)
 QQQ Corporation trains its technical staff in
o The object-oriented paradigm in general, and
o OO-COBOL in particular
 QQQ Corporation can then
o Develop new information systems in OO-COBOL, and
o Maintain existing information systems in traditional COBOL
 Where there is no clear reason for choosing one programming language over another
o Use cost–benefit analysis
 Management must compute costs and benefits of each language under
consideration
 The language with the largest expected gain is chosen
o Alternatively, risk analysis can be used
 For each language, a list is made of the potential risks and ways of
mitigating them
 The language with the smallest overall risk is selected
 Which is the appropriate object-oriented language today?
o Twenty years ago, there was only one choice—Smalltalk
o Today the most widely used object-oriented language is C++
o Java is in second place
 C++ is popular because of its similarity to C
o C++ is essentially a superset of C
o C++ is therefore a hybrid object-oriented language

o Managers therefore often assume that a C programmer can quickly pick up C++
o However, conceptually C++ is quite different from C
 C is for the traditional paradigm
 C++ is for the object-oriented paradigm
o Before an organization adopts C++, training in the object-oriented paradigm
must be given
 Java is a pure object-oriented programming language
 Education and training are even more important with Java than with a hybrid
object-oriented language
o Such as C++ or OO-COBOL
 What of the future?
 Existing information systems
o COBOL will remain the most widely used language
 New information systems
o Will be written in object-oriented languages like
 C++
 Java
 C#

5.10 REUSE CASE STUDIES

 Instead of utilizing previously developed programs, organizations all over the world
develop their own programs from scratch
o Why do information technology professionals continually reinvent the wheel?

5.10.1 Reuse Concepts

 Reuse
o Using artifacts of one information system when developing a different
information system with different functionality
 Reusable artifacts include
o Modules
o Code fragments
o Design artifacts
o Part of manuals
o Sets of test data
o Duration and cost estimates
 Two types of reuse
o Accidental reuse (or opportunistic reuse)
 First, the information system is built
 Then, artifacts are put into the artifact database for reuse
o Planned reuse
 First, reusable artifacts are constructed
 Then, information systems are built using these artifacts
 A strength of planned (deliberate) reuse
o Artifacts specially constructed for reuse are likely to be
 Easy and safe to reuse
 Robust

 Well documented
 Thoroughly tested
 Uniform in style
 A weakness of deliberate reuse
o There can be no guarantee that such an artifact will ever be reused
 Reasons for reuse
o It is expensive to design, implement, test, and document software
o On average, only 15% of new code serves an original purpose
o Reuse of parts saves
 Design costs
 Implementation costs
 Testing costs
 Documentation costs
 So why do so few organizations employ reuse?

5.10.2 Impediments to Reuse

 There are a number of obstacles to reuse:


o The not invented here (NIH) syndrome
o Concerns about faults in potentially reusable routines
o The storage–retrieval problem
o Costs of reuse
 All these can be solved
 Legal issues can arise with a contract information system
o The information system usually belongs to the client
o Reuse of an artifact for another client constitutes theft of intellectual property

5.11 PORTABILITY

 Every information system must be portable


o That is, easily adapted to run on different hardware–operating system
combinations
 Hardware is replaced every 4 years or so
 There are several obstacles to portability

5.11.1 Hardware Incompatibilities

 Sources of incompatibilities include
o Diskette formats (PC vs. Macintosh)
o Character codes (EBCDIC vs. ASCII)
o Tape drives (different parities)
 There is an economic reason for perpetuating incompatibilities
o To force a customer to buy an expensive compatible computer
 To avoid an even more expensive conversion to a cheaper incompatible
computer
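The character-code incompatibility can be demonstrated directly. A minimal sketch in Python, using the standard cp037 codec as a stand-in for EBCDIC:

```python
# The letter "A" is byte 0x41 in ASCII but 0xC1 in EBCDIC; data written
# on one machine is gibberish on the other unless it is converted.
# (cp037 is a common EBCDIC code page shipped with Python.)
ascii_byte = "A".encode("ascii")[0]
ebcdic_byte = "A".encode("cp037")[0]
print(hex(ascii_byte), hex(ebcdic_byte))  # 0x41 0xc1

# Converting between the two codes is a decode/encode round trip:
text = b"\xc1\xc2\xc3".decode("cp037")    # EBCDIC bytes for "ABC"
print(text)
```

The conversion itself is mechanical; the portability problem is that every file and tape must be put through such a round trip when moving between incompatible machines.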

5.11.2 Operating System Incompatibilities

 An information system that runs under Windows will not run under
o Linux
o Mac OS, or
o OS/370
 Problems can arise even when upgrading the same operating system
o Example: Windows

5.11.3 Compiler Incompatibilities

 Information systems should be implemented in a widely used language such as
o COBOL
o C
o C++, or
o Java
 Using only standard features of that language

5.12 PLANNING AND ESTIMATING DURATION AND COST

5.12.1 Planning

 There are two types of planning
o Planning that, like testing, continues throughout the development and
maintenance life cycle
o Planning of the project as a whole: after the specification document has been
drawn up, duration and cost estimates are computed and a detailed plan is
produced

Planning and the Information System Life Cycle

 Ideally, the plan for the entire information system project would be drawn up at the very
beginning of the life cycle
 This is impossible
o There is insufficient information that early
 There is not enough information available at the end of the requirements workflow to
plan the system
o At that stage, the developers at best have an informal understanding of what the
client needs
 Planning has to wait until the end of the analysis workflow
o At that stage, the developers have a detailed appreciation of most aspects of the
target information system
o This is the earliest point in the life cycle at which accurate duration and cost
estimates can be determined
 Suppose that the delivered cost of an information system is found to be $1 million

 The figure shows that if a cost estimate had been made
– Midway through the requirements workflow, the relative range for the cost
estimate was 4
o The cost estimate was probably in the range ($1 million ÷ 4, $1 million ×
4), or ($0.25 million, $4 million)
– Midway through the analysis workflow, the relative range for the cost estimate
was 2
o The range of likely estimates would have shrunk to ($1 million ÷ 2, $1
million × 2), or ($0.5 million, $2 million)
– At the end of the analysis workflow, the relative range was 1.5
o The estimate was probably in the still relatively wide range of ($1 million
÷ 1.5, $1 million × 1.5), or ($0.67 million, $1.5 million)
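The arithmetic behind these ranges can be sketched in a few lines (the function name is illustrative, not from the text):

```python
def estimate_range(actual_cost, relative_range):
    """Interval of likely estimates implied by a relative range r:
    (actual / r, actual * r)."""
    return actual_cost / relative_range, actual_cost * relative_range

# The $1 million system at the three points in the life cycle:
for phase, r in [("mid-requirements", 4),
                 ("mid-analysis", 2),
                 ("end of analysis", 1.5)]:
    low, high = estimate_range(1_000_000, r)
    print(f"{phase}: ${low:,.0f} to ${high:,.0f}")
```

As the relative range shrinks from 4 to 1.5, the interval of plausible estimates narrows accordingly.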

Figure: Early estimates can be wildly inaccurate
 Cost estimation is not an exact science
– And a premature estimate is likely to be even less accurate than one made at the
correct time
 The assumption throughout the remainder of this chapter is that
– The analysis workflow has been completed, so meaningful estimating and
planning now can be carried out

5.12.2 Estimating Duration and Cost

 Before development commences, the client wants to know how much the information
system will cost
– If the development team underestimates, the development organization will lose
money on the project
– If the development team overestimates, then the client may decide against
proceeding, or
– The client may give the job to another development organization whose estimate
is more reasonable
 Accurate cost estimation is critical

 Internal cost
– The cost to the developers, including
o Salaries of the development teams, managers, and support personnel
o The cost of the hardware and software
o The cost of overhead such as rent, utilities, and salaries of senior
management
 External cost
– The cost to the client

 Sometimes
– External cost = internal cost + profit margin
 However, economic and psychological factors can affect this
– If the developers desperately need work they may charge the client the internal
cost or less
– When a contract is awarded on the basis of bids, a team may try to come up with a
bid that will be slightly lower than what they believe will be their competitors’
bids

 Estimating the duration of the project is equally important
– The client wants to know when the finished information system will be delivered
 If the project falls behind schedule
– At best the developers lose credibility
– At worst penalty clauses are invoked
 If the developers overestimate the time needed
– The client will probably go elsewhere
 It is hard to estimate duration and cost accurately
– The human factor is critical
 Experiments of Sackman
– With matched pairs, Sackman observed differences of
o 6 to 1 in information system size
o 8 to 1 in information system execution time
o 9 to 1 in development time
o 18 to 1 in coding time
o 28 to 1 in debugging time
– The best and worst performances were by two programmers, each of whom had
11 years of experience
 Human factors therefore preclude accurate estimates of duration or cost
 Differences among individuals do not tend to cancel out, even on large projects
– One or two very good (or very bad) team members can cause major deviations
from estimations
 Critical staff members can resign during the project
– Time and money then are spent
o Finding replacements and integrating them into the team, or
o Reorganizing the remaining team members
– Schedules slip and estimates become inaccurate

5.12.3 Metrics for the Size of an Information System

 The most common size metric
– Lines of code (LOC), or
– Thousand delivered source instructions (KDSI)

 There are many problems with this metric
– Source code is only a small part of the development effort
– Versions in different languages have different numbers of lines of code
– Should comments in the code be counted?
– How should changed lines or deleted lines be counted?
– What if code is not written, but rather inherited from a parent class?

– Not all the code written is delivered to the client
o Half the code may be for tools
– What if thousands of lines are generated by
o A report generator
o A screen generator, or
o A graphical user interface (GUI) generator
 Some metrics estimate size on the basis of the estimated number of lines of code
 This is doubly dangerous
 It is an uncertain input to an uncertain formula
 Alternatives to lines of code
– So-called software science
o Anything but science!
 What is required is a metric that can be computed from quantities available early in the
life cycle
– FFP metric
o Size = Number of Files + Number of Flows + Number of Processes
o Validated for medium-scale information systems
o Never extended to databases
– Function points
o Based on the number of input items, output items, inquiries, master files,
and interfaces
o The metric also incorporates the effects of 14 technical factors

 Function points and the FFP metric have the same disadvantage
– Maintenance often is inaccurately measured
– Major changes can be made without changing
» The number of files, flows, and processes or
» The number of inputs, outputs, inquiries, master files, and interfaces
» (Lines of code is no better in this respect)
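As a sketch of how such size metrics are computed — the FFP sum is as given above, while the function-point weights below are the classic Albrecht average-complexity values, an assumption not stated in the text:

```python
def ffp_size(files, flows, processes):
    """FFP metric: Size = Files + Flows + Processes."""
    return files + flows + processes

def unadjusted_function_points(inputs, outputs, inquiries,
                               master_files, interfaces):
    """Unadjusted function points, using average-complexity weights
    (an assumption: 4, 5, 4, 10, 7 are the usual Albrecht values)."""
    return (4 * inputs + 5 * outputs + 4 * inquiries
            + 10 * master_files + 7 * interfaces)

def function_points(ufp, degrees_of_influence):
    """Adjust UFP by the 14 technical factors, each rated 0..5."""
    assert len(degrees_of_influence) == 14
    technical_complexity = 0.65 + 0.01 * sum(degrees_of_influence)
    return ufp * technical_complexity
```

Both metrics can be computed from quantities available at the end of the analysis workflow, which is exactly why they are preferred to lines of code for early estimation.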

5.12.4 Techniques of Cost Estimation

 Expert judgment by analogy
– An expert compares the target information system to completed systems and notes
similarities and differences
– Different estimates from experts are reconciled using the Delphi technique
– Estimates and rationales are distributed to all the experts
– They now produce a second estimate
– Estimation and distribution continue until the experts agree within an accepted
tolerance
– No group meetings take place during the iteration process
– Estimation by a group of experts should reflect their collective experience
» If this is broad enough, the result well may be accurate
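The stopping condition of the Delphi iteration can be sketched as follows (measuring agreement as spread relative to the mean is an illustrative choice, not prescribed by the technique itself):

```python
def within_tolerance(estimates, tolerance=0.2):
    """True when the experts' estimates agree within the accepted
    tolerance, here measured as spread relative to the mean."""
    mean = sum(estimates) / len(estimates)
    return (max(estimates) - min(estimates)) / mean <= tolerance

# After each anonymous round, check whether another iteration is needed:
print(within_tolerance([400, 450, 420]))   # close enough -> True
print(within_tolerance([200, 450, 900]))   # still far apart -> False
```

Estimation and redistribution continue until this condition holds; no group meetings take place in between.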

 Algorithmic cost estimation models
– A metric is used as input to a model
– The model is then used to estimate duration and cost
 Unlike a human, a model is unbiased
– However, estimates from a model are only as good as its underlying assumptions
 Hybrid models incorporate mathematical equations, statistical modeling, and expert
judgment
– The most important hybrid model is COCOMO

5.12.4.1 COCOMO

 COCOMO estimation is done in two stages
 First, a rough estimate of the development effort is determined, based on
– The number of lines of code in the target system, and
– The level of difficulty of developing that target system
 From these two parameters, the nominal effort can be computed
 Example:
– The target information system is straightforward, and
– Estimated to be 12,000 lines of code
 The nominal effort will be 43 person-months
 Second, the nominal effort is multiplied by 15 development effort multipliers, such as
– Required software reliability, and
– Product complexity to yield the estimated effort
– The multipliers can range in value from 0.70 to 1.66
 Example:
– A network of ATMs is complex and the network has to be reliable
– According to the COCOMO guidelines, the multipliers are
– Required software reliability is 1.15, and
– Product complexity is 1.30
 The estimated effort is then used in additional formulas to determine various estimates,
including
– Dollar costs
– Development schedules
– Activity distributions
– Annual maintenance costs
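The two-stage computation can be sketched as follows. The coefficient pair (3.2, 1.05) is the intermediate-COCOMO value for a straightforward ("organic mode") system — an assumption, since the text gives only the resulting figure — and it reproduces the 43 person-months above:

```python
def nominal_effort(kdsi, a=3.2, b=1.05):
    """Intermediate COCOMO nominal effort in person-months for an
    organic-mode system of kdsi thousand delivered source instructions."""
    return a * kdsi ** b

def estimated_effort(kdsi, multipliers=(), a=3.2, b=1.05):
    """Nominal effort scaled by the chosen development effort multipliers."""
    effort = nominal_effort(kdsi, a, b)
    for m in multipliers:
        effort *= m
    return effort

# The straightforward 12,000-line system from the text:
print(round(nominal_effort(12)))                  # 43 person-months

# The ATM network: required reliability 1.15, product complexity 1.30
print(round(estimated_effort(12, (1.15, 1.30))))  # about 65 person-months
```

The sensitivity to the size input is visible here: an error in the KDSI estimate feeds directly into every figure the model produces.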
 COCOMO is a complete algorithmic cost estimation model
– It gives the user virtually every conceivable assistance in project planning
 COCOMO has proved to be the most reliable estimation method
– Actual values come within 20 percent of the predicted values about two thirds of
the time
 The major problem with COCOMO
– Its most important input is the number of lines of code in the target information
system
– If this estimate is incorrect, then every single prediction of the model may be
incorrect
 Management must monitor all predictions throughout information system development

5.12.4.2 COCOMO II

 COCOMO II is both flexible and sophisticated
– Consequently, it is much more complex than the original COCOMO
 The model is still too new to assess
– Its accuracy, and
– The extent to which it is an improvement over the original COCOMO

5.12.4.3 Tracking Duration and Cost Estimates

 While the information system is being developed
– Actual development effort must constantly be compared against predictions
 Deviations from predictions serve as early warnings that
– Something has gone wrong, and
– Corrective action must be taken
 Management must then take appropriate action to minimize the effects of
– Cost overruns, and
– Duration overruns
 Careful tracking of predictions must be done throughout the development process
– Irrespective of the prediction techniques that were used
 Detect deviations early in order to
– Take immediate corrective action
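A minimal sketch of such tracking, comparing actual effort against predictions and flagging deviations (the 20 percent threshold and the names are illustrative assumptions):

```python
def deviation_alerts(predicted, actual, threshold=0.20):
    """Return the activities whose actual effort deviates from the
    prediction by more than the threshold fraction - early warnings
    that corrective action may be needed."""
    alerts = {}
    for activity, pred in predicted.items():
        if activity in actual and abs(actual[activity] - pred) / pred > threshold:
            alerts[activity] = (pred, actual[activity])
    return alerts

predicted = {"analysis": 10, "design": 15, "implementation": 30}
actual = {"analysis": 11, "design": 22}       # design is running over
print(deviation_alerts(predicted, actual))    # {'design': (15, 22)}
```

Run against the plan at regular intervals, such a check turns the predictions into live project controls rather than one-off numbers.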

5.13 TESTING THE PROJECT MANAGEMENT PLAN

 Cost and duration estimates must be as accurate as possible
 The entire project management plan must be checked by the quality assurance group
before estimates are given to the client
o The best way to test the plan is by a plan inspection
 The plan inspection team must review the project management plan in detail
o Special attention must be paid to the duration and cost estimates
 To reduce risks even further
o As soon as the members of the planning team have determined their estimates,
duration and cost estimates should be computed independently by a member of
the quality assurance group

 This must be done irrespective of the metrics used

5.14 MAINTENANCE AND THE OBJECT ORIENTED PARADIGM

5.14.1 MAINTENANCE—DEFINITION

 Maintenance is the process that occurs when an information system artifact is modified
o Either because of a problem, or
o Because of a need for improvement or adaptation
 (International Organization for Standardization and International
Electrotechnical Commission, 1995)
 That is, maintenance occurs whenever an information system is modified
o Regardless of whether this takes place before or after installation

5.14.2 WHY MAINTENANCE IS NECESSARY

 There are three main types of maintenance:
 Corrective maintenance
o To correct a fault
 Perfective maintenance
o To improve the effectiveness of the information system
 Adaptive maintenance
o To react to changes in the environment in which the information system operates

5.14.3 DEVELOPMENT AND MAINTENANCE

 The information system life cycle can be viewed as an evolutionary process
o This is how maintenance is viewed by the Unified Process
o Maintenance is treated merely as another increment
 However, there is a basic difference between development and maintenance
o It is easier to create a new version than to modify an existing version
 Example
 Consider the similarities and differences between
o Modifications to a portrait
o Modifications to an information system
 Conclusions
o A new portrait must be painted from scratch
o The existing information system must be modified

5.14.4 MAINTENANCE AND THE OBJECT-ORIENTED PARADIGM

 The object-oriented paradigm promotes maintenance
o A class is an independent unit
o Example: Bank Card Class models every aspect of a bank card
 No aspects of a bank card are modeled by any other class
o Information hiding ensures that implementation details are not visible outside a
class
o Message passing is the only form of communication permitted

 In theory, it is easy to maintain a class
o Independence ensures it will be easy to determine which part of an information
system must be changed
o Information hiding ensures that a change made to a class will have no impact
outside that class
o This reduces regression faults
 In practice, there are obstacles specific to the maintenance of object-oriented information
systems
 Inheritance is the cause of some problems
 If a new feature is added to a class with no subclasses, there is no effect on any other
class, but
 If a class with subclasses is changed, all its subclasses are changed in the same way
 Inheritance hierarchy

Figure :Inheritance hierarchy

 A new attribute added to class Bottom Class cannot affect any other class in any way
 A new attribute added to class Top Class applies to all the classes in the diagram
o This is termed the fragile base class problem
 Inheritance can have
o A positive influence on development, but
o A negative impact on maintenance
 A second problem arises as a consequence of polymorphism and dynamic binding
 Create new subclass via inheritance
o Does not affect super class
o Does not affect any other subclass
 Modify this new subclass
o Again, no effect
 Modify a super class
o All descendent subclasses are affected
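A toy Python illustration of the point (the class names follow the figure; the attributes are invented for the example):

```python
class TopClass:
    def __init__(self):
        self.audit_trail = []        # attribute added to the base class

class MiddleClass(TopClass):
    pass

class BottomClass(MiddleClass):
    def __init__(self):
        super().__init__()
        self.local_cache = {}        # added to a leaf class: affects nothing else

# The change to TopClass silently reaches every descendant - the
# fragile base class problem in miniature:
print(hasattr(BottomClass(), "audit_trail"))   # True
print(hasattr(MiddleClass(), "local_cache"))   # False
```

A maintainer modifying TopClass must therefore re-examine every class in the hierarchy, whereas a change to BottomClass is guaranteed to stay local.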

 Key point
o Maintainers must not merely be skilled in a broad variety of areas, they must be
highly skilled in all those areas
o Specialization is impossible for the maintainer
 Maintenance is the same as development, only more so

5.14.5 Reverse Engineering

 When the only documentation is the code itself
o Start with the code
o Recreate the design artifacts
o Recreate the specification artifacts (extremely hard)
o CASE tools can help (flowcharters and other visual aids)
 This is a common problem with legacy systems
 Definitions
o Reengineering
 Reverse engineering, followed by forward engineering
o Restructuring
 Improving the information system without changing its functionality
o Examples:
 Pretty printing
 Converting code from traditional to object-oriented form
 Improving maintainability
5.14.6 Testing during Maintenance

 Maintainers view an information system as a set of loosely related code artifacts
o They were not involved in the development of the product
 Regression testing is essential
o Store test cases and their outcomes, modify as needed
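A minimal sketch of a stored regression suite (the names are illustrative): each case pairs an input with its recorded outcome, and the whole suite is rerun after every maintenance change.

```python
def run_regression_suite(function_under_test, stored_cases):
    """Rerun every stored test case after a maintenance change and
    report the ones whose outcome no longer matches."""
    failures = []
    for args, expected in stored_cases:
        actual = function_under_test(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

# Stored cases: (input arguments, expected outcome)
cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
print(run_regression_suite(lambda a, b: a + b, cases))   # [] - all still pass
```

An empty failure list means the change introduced no regression faults detectable by the stored cases; a non-empty list pinpoints exactly which behavior the modification broke.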

5.15 CASE TOOLS FOR MAINTENANCE

 Version control tools are essential
 Configuration control tools are essential
– Examples:
» Commercial: PVCS, SourceSafe
» Open source: CVS
 If no configuration control tool is available
– A build tool is required
 A fault tracking tool is also needed
– It keeps a record of reported faults that are not yet fixed
– Example:
» Open source: Bugzilla
 CASE tools can assist in reverse engineering and reengineering
– Examples
» Commercial: Battlemap, Teamwork
 Maintenance is difficult and frustrating
– Management must provide maintenance programmers with the CASE tools
needed for efficient and effective maintenance.
