
Requirements Analysis and Requirements Change in Software Development Projects

Abstract: Requirements change is something both project managers and developers have come to dread once a software development project is underway. Project-management consultants' PPT courseware, as well as technical books and tutorials on software project management, treat "requirements change" as a topic in its own right. This article explores why requirements changes occur in software development projects, how to control them, and how to respond when they arise.

I. The Annoyance of Requirements Change

As a software project manager, you may run into this problem mid-development: one phone call from the customer overturns the requirements that you, the customer, and your development team had confirmed after repeated discussion. You then have to start a new round of requirements talks with the customer and your team, talks that can seem endless, and you may even have to redesign the existing architecture.

Facing this situation, as project manager you might say: "We cannot refuse the customer, but we cannot satisfy the new requirement right away either, so I had to push it back to be completed later." Or, more extreme, some will think: the customer is never satisfied, and the customer's demands are technically impossible to achieve...

As the customer keeps raising new requirements, you begin to doubt the value of requirements confirmation. You communicated with the customer repeatedly at the beginning and received clear answers with no objections raised; yet as the development project moves along and the customer's understanding of the system gradually deepens, the customer ends up reversing the original requirements. Only then do you realize that the requirements were merely gathered, never truly confirmed.

Because the requirements change, the project is extended again and again, and the customer still says the result is not what they wanted. You keep complaining that customer requirements shift like the weather; in the end, both your complaints and the customer's changing requirements leave the development team exhausted and confused.

Before your software development projects, you and your project members may have had this thought: in software development, wouldn't it be best to simply eliminate requirements change and refuse to discuss any change at all?

First of all, that kind of thinking is wrong: changes in requirements cannot be completely eliminated from software project development. Whether project manager or developer, it is best to discard the idea before the project starts. Requirements change cannot be eliminated; it is the idea of "eliminating requirements change" that needs to be removed. All effort and thought spent on stamping out requirements change during project development is usually thankless.

In the course of project development, requirements change is inevitable.


Under normal circumstances the project manager spends great effort trying to head off tedious requirements changes, yet in the end the changes always come. That does not mean the work is pointless. The right attitude for project managers and developers, as with software testing, is to reduce the chance of requirements changes before they occur, and to cut the risk to a minimum when they do.

II. Causes of Requirements Change

In software development projects, requirements changes may come from the solution provider, the customer, or a supplier, and of course may also come from within the project team.

Looking carefully at the causes of requirements change, they amount to little more than the following:

1. The scope was delineated but never refined

Detailed refinement is the requirements analyst's job: starting from the brief descriptive summary submitted by the user, the analyst refines it, extracts functions, and writes descriptions (of both normal execution and exceptional cases). Once refinement reaches a certain depth and systematic design begins, the scope shifts, and many details of the use-case descriptions may then have to change. For example, data originally entered by hand may become information computed by the system, or something originally described as an attribute may become an entity.

2. No requirements baseline was specified

The requirements baseline is the line that determines which requirements changes are allowed.

As the project progresses, the baseline itself changes. What may be changed is determined by the contract and by cost. For example, once the overall structure of the software has been designed, changes to the scope of the requirements are no longer allowed, because the overall structure constrains the progress of the entire project and the initial cost budget. As the project advances, the baseline is set higher and higher (and the changes allowed become fewer).

3. No good software architecture to accommodate change

A component-based software architecture provides a structure that adapts quickly to requirements changes: the data layer encapsulates data-access logic, the business layer encapsulates business logic, and the presentation layer encapsulates display logic for the user.

Adaptation, however, must follow the principle of loose coupling between the layers; wherever links between layers exist, interfaces should be designed to minimize the entry parameters that would have to change. With the business logic properly encapsulated, a request from the presentation layer to add or drop some piece of information is very easy to accommodate, and if the interfaces are defined sensibly, even changes in business processes can be adapted to quickly. Within the limits the baseline permits, this reduces the cost impact of requirements changes and improves customer satisfaction.
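The layering described above can be sketched in a few lines. This is a minimal illustration, not a prescribed design: the class and method names (`OrderRepository`, `BillingService`, and so on) are invented for the example, and the point is only that the business layer depends on an abstract interface rather than on a concrete data layer.

```python
from abc import ABC, abstractmethod

# Data-access interface: the business layer depends only on this
# abstraction, never on a concrete storage mechanism.
class OrderRepository(ABC):
    @abstractmethod
    def find_total(self, order_id: str) -> float: ...

# Data layer: one concrete implementation; swapping it for a
# database-backed one would not touch the business layer at all.
class InMemoryOrderRepository(OrderRepository):
    def __init__(self) -> None:
        self._orders = {"A-1": 100.0}

    def find_total(self, order_id: str) -> float:
        return self._orders[order_id]

# Business layer: business logic behind a narrow interface, so changes
# in storage or presentation do not ripple through it.
class BillingService:
    def __init__(self, repo: OrderRepository) -> None:
        self._repo = repo

    def invoice_amount(self, order_id: str, tax_rate: float = 0.1) -> float:
        return self._repo.find_total(order_id) * (1 + tax_rate)

# Presentation layer: only formats what the business layer returns.
service = BillingService(InMemoryOrderRepository())
print(f"Invoice: {service.invoice_amount('A-1'):.2f}")
```

If a change request later asks for, say, a different tax rule, only `BillingService` changes; the repository and the display code are untouched, which is exactly the insulation the layered structure buys.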

III. Controlling Requirements Change

As mentioned above, before a software development project starts we should discard the idea that "requirements change must never be allowed to happen." When a change does occur during the project, do not complain blindly, and do not rush blindly to satisfy the customer's "new needs" either; instead, manage and control the requirements change.

1. Manage customer requirements by level

In software development projects, "the customer is always right" and "the customer is God" are not entirely true. Once the project contract has been signed, any new change or added requirement affects not only the normal progress of the project but also the customer's return on investment, so sometimes the project manager must, for the customer's own sake, push back.

Requirements on a project can be managed by classification so that changes are controlled and managed according to their level.

Level 1 requirements (or changes) are critical: if such a requirement is not met, the project cannot be delivered properly and all the preliminary work is negated. Requirements at this level must be satisfied, or all the effort of the project members counts for nothing, so rate them "Urgent". Remedial, firefighting-style debugging usually falls into this category.

Level 2 requirements (or changes) are critical to follow-on work: they do not affect delivery of the content already built, but if they are not met, new content cannot be delivered or the project cannot continue, so rate them "Necessary". Key components that serve as the basis for new modules generally fall into this category.

Level 3 requirements are important follow-on requirements: if they are not met, the value of the overall project decreases, while meeting them demonstrates the project's value and the developers' technical worth, so rate them "Needed". Development of major, valuable new modules generally falls into this category.

These three levels should all be implemented, with the timing arranged by priority.

Level 4 requirements are improvements: failing to meet them does not affect the use of existing features, but meeting them would make things better, so rate them "Better". Interface and usability requirements generally sit at this level.

Level 5 requirements are optional; they are often no more than an idea or a possibility, usually just a customer's personal preference, so rate them "Maybe".

For level 4 requirements, if time and resources allow, by all means do them. For level 5 requirements, as the rating suggests, whether to do them at all is a "Maybe".
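One minimal way to make the five levels operational is to encode them as an ordered enumeration and sort the change backlog by level. The level names mirror the ratings above; everything else (the `triage` helper, the sample backlog) is invented for illustration.

```python
from enum import IntEnum

# Lower value = higher priority, matching the five levels in the text.
class ChangePriority(IntEnum):
    URGENT = 1     # project cannot be delivered without it
    NECESSARY = 2  # blocks delivery of new content or continuation
    NEEDED = 3     # raises the overall value of the project
    BETTER = 4     # improvement; existing features work without it
    MAYBE = 5      # an idea or personal preference only

def triage(requests):
    """Order change requests so higher-priority levels come first."""
    return sorted(requests, key=lambda item: item[1])

backlog = [
    ("new report layout", ChangePriority.BETTER),
    ("fix data-loss bug", ChangePriority.URGENT),
    ("key component for new module", ChangePriority.NECESSARY),
]
for name, level in triage(backlog):
    print(level.name, "-", name)
```

Sorting puts levels 1 through 3 at the front of the queue, matching the rule that those three levels must be implemented with timing arranged by priority.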

2. Manage changes across the entire project life cycle

The life cycle of a software project of any size or type can be divided into three stages: project initiation, project implementation, and project closeout. Do not assume that requirements change management and control belongs only to the implementation stage; it runs through the entire project life cycle.

Requirements change management should proceed from an overall perspective, and change control requires an integrated approach.
(1) Preventing change in the project initiation stage

As emphasized earlier, for any software project requirements changes are inevitable and cannot be escaped; what the project manager and developers can do is respond actively, and that response should begin in the project's requirements analysis phase.

If the requirements analysis is done well, with baseline documents that define the scope in detail and without ambiguity, the chances that the user and the project manager later clash over a change are much smaller. If the requirements work is done poorly and the reference documents leave large ambiguities of scope, the customer gains plenty of room for "new demands", and the project team often has to pay a great deal of unnecessary cost.

If the requirements analysis is done well, the documents are clear, and the client has signed off, then later changes the customer requests beyond the scope of the contract can reasonably incur additional fees. At that point the project manager must argue the case; not to squeeze money out of the customer deliberately, but because allowing customers to develop a habit of frequent changes means no end of trouble.

(2) Controlling change in the project implementation stage

The difference between successful software projects and failed ones is whether the whole process is controllable.

The project manager should establish the idea that "requirements change is inevitable, controllable, and useful." Change control in the implementation stage means analyzing change requests, assessing the potential risks of a change, and modifying the baseline documents.

Step-by-step change control should note the following points:

Requirements must be tied to investment. If the cost of changes requested by the customer side is borne by the developer, requirements changes on the project become inevitable. Therefore, at the very start of the project, both the funding party and the development party must be clear on one point: when requirements change, the investment in software development changes too.

Requirements changes must be confirmed by the funding party, so that the customer associates changes with cost and treats them prudently.

Small requirements changes must also go through the formal requirements management process; otherwise they add up. In practice, people are often reluctant to put small changes through the formal process, feeling that it lowers development efficiency and wastes time. But it is precisely this attitude that gradually makes the requirements uncontrollable and leads to project failure.

Precisely defining requirements will not stop them from changing. Requirements definition and requirements change are two different dimensions: it is not the case that the more detailed the definition, the less the requirements will change, and defining requirements extremely finely has no effect on change. Requirements change is eternal; a requirement does not stop changing simply because it has been written down.

Pay attention to communication skills.

In actual project development the problems above are recognized by users and developers alike, but because requirements changes may come from the customer side or from the development side, the project manager, as the manager of the requirements, needs to use a variety of communication techniques to help the parties to the project each get what they want.

(3) Summarizing in the project closeout stage

Capability often comes not from successful experience but from the lessons of failure. Many project managers pay no attention to gathering and accumulating lessons learned; even after being badly burned in the course of a project, they merely complain about luck, the environment, or poor teamwork, rarely analyze and summarize systematically, or do not know how, and so the same problems recur.

In fact, the project summary should be treated as an important part of continuous improvement for the current project and for future ones, and also as identification and validation of the project contract, the design content, and the targets. Project summary work includes analyzing and summarizing the risks identified in advance and the unanticipated changes that occurred, along with the measures taken in response, as well as statistically summarizing the changes and problems that arose during the project.

3. Principles of requirements change management

Although requirements changes vary in content and type, the principles of managing them remain the same. Requirements change management should follow these principles:

(1) Establish a requirements baseline. The baseline is the basis against which requirements are changed. During development, once the requirements have been identified and reviewed (with users taking part in the review), the first requirements baseline can be established. After each change is made and reviewed, a new requirements baseline should be established.

(2) Create a simple, effective change control process and document it. Once the requirements baseline is established, every proposed change must follow this control process. The process is also broadly applicable: it can serve as a reference for future development and for other projects.

(3) Establish a project Change Control Board (CCB), or a similar body with the same function, responsible for deciding which changes to accept. The CCB is composed jointly of the parties involved in the project and should include users, developers, and policy- and decision-makers.

(4) A requirements change must first be applied for, then evaluated, and finally confirmed through an assessment commensurate with the size and level of the change.

(5) When requirements change, the affected software plans, products, and activities must change accordingly so as to remain consistent with the updated requirements.

(6) Keep proper documentation of every change.

IV. Handling Requirements Change

Requirements change control generally goes through four steps: change application, change assessment, decision, and response. If the change is accepted, two further steps are added, implementing the change and verifying it, and sometimes there is also a step for cancelling the change. Beyond the control process itself, several coping strategies help.
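The steps above can be sketched as a small state machine. The state and action names are assumptions chosen to mirror the text (apply, assess, decide, respond, plus implement, verify, and cancel); a real change-control tool would track far more, such as the requester, the CCB decision, and the affected baselines.

```python
# States and actions are assumptions mirroring the four steps in the
# text, plus the implement/verify/cancel steps for accepted changes.
TRANSITIONS = {
    "applied":     {"assess": "assessed"},
    "assessed":    {"accept": "accepted", "reject": "rejected",
                    "cancel": "cancelled"},
    "accepted":    {"implement": "implemented", "cancel": "cancelled"},
    "implemented": {"verify": "verified"},
}

class ChangeRequest:
    def __init__(self, title: str) -> None:
        self.title = title
        self.state = "applied"  # every change starts as an application

    def step(self, action: str) -> str:
        allowed = TRANSITIONS.get(self.state, {})
        if action not in allowed:
            raise ValueError(f"cannot {action!r} while {self.state!r}")
        self.state = allowed[action]
        return self.state

cr = ChangeRequest("add CSV export")
for action in ("assess", "accept", "implement", "verify"):
    cr.step(action)
print(cr.state)  # verified
```

Encoding the flow this way makes the rule in the text enforceable: a change cannot be implemented before it has been assessed and accepted, and a rejected or cancelled change goes no further.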

Mutual cooperation - it is hard to imagine a project succeeding against user resistance. In discussing requirements, developers and users should try to understand each other and tackle issues in a spirit of mutual cooperation as far as possible. Even when a user raises what seems to the developer an "excessive" demand, the developer should carefully analyze the reasons behind it and put forward a viable alternative.

Full communication - requirements change management is to a large extent a process of exchange between users and developers. Software developers must learn to listen carefully to the user's requirements, considerations, and ideas, and to analyze and organize them. At the same time, developers should explain to the user what impact and negative consequences further requirements changes will bring once the project has entered the design stage.

Dedicated staff for requirements change management - when the workload is heavy, developers absorbed in development easily neglect ongoing communication with users, so a dedicated requirements change manager is needed to communicate with users in a timely way.

A binding contract - the impact of requirements changes on software development is plain for all to see, so the contract with the user can include a number of relevant terms: for example, limiting the time within which the user may raise changes, providing that a change may, according to the circumstances, be accepted, rejected, or partially accepted, and stipulating that requirements changes must go through the change control process.

Distinguish new requirements from refinements - as development progresses, some users continually raise requirements that the project team had not planned for, that involve a comparatively large amount of work, or that significantly affect the project schedule. In such cases the developers can explain to the user that the project was launched on the premise of the initial baseline requirements, and that a substantial addition of new requirements (which the user may regard as mere refinements, but which in fact add new workload) will keep the project from finishing on time. If the user insists on implementing the new requirements, the developers can suggest grading them by importance and urgency, as a basis for assessing the changes. At the same time, take care to control the frequency with which new requirements are raised.

Choose an appropriate development model - a prototyping model is well suited to development projects whose requirements are unclear at the outset. The developers first build a system prototype from the user's description of the requirements, then go over it with the user. Users generally give much more detailed accounts of their requirements once they can see something concrete, and the developers can further refine the prototype from the user's explanations. Repeating this process several times brings the system prototype gradually closer to the user's final requirements and fundamentally reduces the appearance of changes. The iterative development methods now popular in the industry are likewise very effective for change control on projects with pressing schedules.

Involve users in requirements reviews - as the originators of the requirements, users naturally count among their most authoritative spokespeople, and in practice they often offer many valuable comments during the review process. A review is also the user's opportunity to give final confirmation of the requirements, which can effectively reduce the incidence of requirements changes.
Introduction
A major computer software company has retained the services of an attorney to represent it in litigation alleging that an up-
and-coming software firm has pirated its software. The copyright, patent, trade secret, and software piracy issues associated
with this litigation are complex and difficult for the attorney, the judge, and the jury to grasp. In order to adequately
represent the client, an attorney requires the assistance of a computer expert to properly assess and evaluate the complex
technical evidence. Regardless of the resolution of the matter through settlement, arbitration or litigation, a technical expert
is necessary to properly evaluate the case and to deftly reduce complex technical concepts to simple terms so that attorneys,
arbitrators, the parties, judges, and juries fully understand the issues. Technical experts are typically required to prove or
defend issues arising from patent infringement, copyright infringement, trade secret misappropriation, and software piracy.

This article examines the critical role of the computer expert. The selection of a computer expert is crucial. Computer
experts can be used by attorneys to help resolve computer-related intellectual property disputes without costly, time-
consuming litigation. If litigation proves necessary, the services of a computer expert are essential during pretrial
proceedings and at the trial itself.

What is the role of a Computer Expert?


A computer expert makes the technical aspects of a computer-related intellectual property dispute understandable to
laypersons, including lawyers and their clients. For example, a technical expert may evaluate whether a competitor's
software program is "substantially similar" to another's in a potential copyright infringement suit. At times, an expert may
conclude that a threatened claim is weak or even baseless. As a result, a party may refrain from suit and, possibly, avoid
serious Rule 11 sanctions. If a lawsuit is filed, the technical expert's assistance will be important to pre-filing preparation,
pretrial discovery and the presentation of evidence at trial. A technical expert may be essential in a non-jury trial by
presenting a case in terms understandable to the judge so that the judge can adequately assess the case.

Computer-Related Intellectual Property Disputes Which Require Technical Experts


Technical experts are typically required to prove or defend issues arising from patent infringement, copyright infringement,
trade secret misappropriation, and software piracy.

Copyright Infringement
A technical expert must first investigate exactly what the plaintiff's copyright protects and what the defendant infringed upon.
These elements are usually included within three general categories:

• An exact copy of the plaintiff's software.


• A derivative work with many elements exactly the same or similar.
• Similarity in design, which extends protection beyond copying program code. (1) Instead, protection extends to
similarities in the program structure, sequence and organization. See Whelan Assocs., Inc. v. Jaslow Dental Lab,
Inc., 797 F.2d 1222 (3d Cir. 1986), cert. denied, 479 U.S. 1031 (1987); compare Computer Assocs. Inter., Inc. v.
Altai, Inc., 982 F.2d 693 (2d Cir. 1992).

The expert must examine the original software and look for a copyright notice, as the software must clearly state that it is a
copyrighted work, who owns the work, and the creation date. Even though copyright registration is not necessary for
copyright ownership, in order to claim copyright infringement, the owner must have a valid copyright registration in the
computer software. The expert must then examine the owner's version control system. He or she must determine whether the
software was published or was in the public domain prior to the copyright date. Moreover, the expert must determine
whether the defendant had sufficient access to enable him or her to copy the plaintiff's software.

The most important element of the expert's investigation is an examination of the defendant's software. If the source code is
available, the expert should launch a full-scale investigation. Otherwise, the expert should examine the software for
similarities in the overall design, by looking at screens, reports, menus, and the software's logic hierarchy. The objective is
to determine whether probable cause exists for a copyright infringement lawsuit. Once a lawsuit is instituted, the defendant's
source code can be obtained during discovery, perhaps subject to the terms of a confidentiality order.
Patent Infringement
As in the case of copyright infringement, the technical expert must first investigate exactly what is protected by the patent
and what the defendant infringed. An examination of the patent claims and specifications is essential to this investigation.
While patent protection is more difficult to obtain, patent protection can be broader than copyright protection. A patent can
protect a:

• Process
• Device
• Methodology (in some cases)
• Format Type

In the case of computer software, a pure mathematical algorithm, without any specific end use, is not patentable. However, a
new format type (e.g., a new spreadsheet concept) could be patentable. A patentable claim could include computer software
that controls industrial processes or devices, even though such software utilizes mathematical algorithms. Diamond v.
Diehr, 450 U.S. 175 (1981).

The technical expert must examine the patent to determine specifically what the claims and specifications protect. He or she
must then look for public domain similarities to ascertain the validity of the claims. Finally, he or she must examine the
defendant's software and determine the areas of infringement. Sometimes, examination of source code is not necessary, but
it would not hurt.

Misappropriation of a Trade Secret


Trade secret law may provide the broadest protection against copying or misappropriation. This protection extends not only
to the software itself but also to any derivative work. In this case, the expert must determine whether the software was
sufficiently novel and whether it was treated by the plaintiff as a trade secret. The expert must determine whether the
defendant knew that the software was a trade secret, had access to the secret, and used the secret in an unauthorized manner.
The expert must examine the defendant's software to uncover areas of violation. However, to be thorough, the expert must
also search the public domain, because if the software exists therein through no fault of the defendant, then the defendant
did not violate the plaintiff's confidence.

Elements of Discovery Required by a Technical Expert


In order to complete a forensic investigation in an intellectual property dispute involving software piracy, a computer expert
must have access to the following information:

• Copyright, patent or trade secret information on both plaintiff's and defendant's software;
• Copies of all agreements that were entered into between plaintiff and defendant;
• All information necessary to create a complete chronology of events pertaining to the matter, including any and all
documentation created during development of plaintiff's and defendant's software;
• Complete working magnetic copies (object code, executables, and databases) of both plaintiff's and defendant's
software; and,
• Complete program source code for both plaintiff's and defendant's software.

Software Piracy
In order to establish software piracy, the computer expert must launch a full-scale forensic investigation. There are at least
seven different instances of software piracy which would usually be investigated:

• Defendant's software was created as a direct (exact) duplicate of plaintiff's object code;
• Defendant's software was created as an updated derivative of plaintiff's software from original source code using
the same programming language;
• Defendant's software was created as a direct (exact) translation from plaintiff's original source code into another
programming language;
• Defendant's software was created as an updated derivative of plaintiff's software from translated source code using
a different programming language;
• Defendant's software was copied from plaintiff's software using a fourth generation language (4GL);
• Defendant's software was created as an updated derivative of plaintiff's software which was generated using a 4GL;
and,
• Defendant created software by copying only the design of plaintiff's software.

The forensic investigation is made by the expert using both object and source code. A comparison of source code is
extremely difficult. Usually, software systems are very large. Often, a software system contains several hundred thousand
lines of source code. In these cases, locating copied sections is very time consuming. As computer-related litigation can be
very expensive, an attorney should carefully direct the expert's efforts to ensure that the expert produces the most
useful work and does not waste the client's money.

One tool that can be very useful in a forensic investigation is HIPO

(Hierarchy plus Input - Process - Output)

This is a documentation technique developed by IBM during the 1970s. It was developed as a structured analysis tool. It
was intended that HIPO diagrams be created prior to actual software development. This would impose a structure upon the
software created from these diagrams thereby insuring maintainability. However, it is possible to develop HIPO diagrams
from already existing software using the source code. The hierarchy chart shows the relationship between various programs
and modules. It appears similar to a corporate organization chart. One IPO diagram is then generated for each program or
module on the hierarchy diagram. In other words, each box on the hierarchy chart generates its own IPO diagram. The IPO
diagram shows the Input, Processing, and Output portions of each programming step within the program or module. Using
HIPO enables an expert to see the forest for the trees, and makes his or her forensic investigation more manageable.
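The idea of recovering a hierarchy chart from existing source can be illustrated with a toy example. The sketch below parses a few Python functions and records which function calls which, a simplified stand-in for HIPO's hierarchy diagram; recovering structure from large systems in other languages is considerably harder and uses purpose-built tools.

```python
import ast

# Toy source standing in for an existing system; a real recovery would
# read the actual program files.
SOURCE = """
def load(): pass
def process(): load()
def report(): pass
def main():
    process()
    report()
"""

def call_hierarchy(source: str) -> dict:
    """Map each top-level function to the functions it calls, a
    simplified stand-in for HIPO's hierarchy chart."""
    tree = ast.parse(source)
    hierarchy = {}
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            hierarchy[node.name] = [
                n.func.id for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            ]
    return hierarchy

print(call_hierarchy(SOURCE))
# {'load': [], 'process': ['load'], 'report': [], 'main': ['process', 'report']}
```

Drawing the resulting mapping as a tree, with `main` at the top, gives exactly the corporate-organization-chart view the text describes; an IPO diagram would then be written for each box.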

It is important to remember that an individual who develops software similar to existing software, even where the
functionality is similar, is not necessarily guilty of software piracy. Copyright laws do not protect computer algorithms.
Even where it can be shown that the individual had access to the original software, the new software may not have been
copied. Similar functionality may have been created merely from the marketing needs of a particular industry or profession.

What follows is a methodology that a computer expert can use to establish software piracy:

Direct (Exact) Duplication of Object Code


Direct (exact) duplication of object code is the most common form of software piracy. The program is produced by making
an exact magnetic copy of the original. It is very simple to accomplish using standard computer utilities. This type of piracy
is prevalent among personal computer users. However, this type of copying can be performed on any computer. It is so
widespread because it does not require the defendant to use the plaintiff's source code.

To establish software piracy resulting from direct duplication of object code, the technical expert would compare the file
sizes and creation dates of both the plaintiff's and defendant's programs. If they are identical, the expert then performs a
byte-by-byte comparison of the defendant's and the plaintiff's object code. If defendant's software was produced by direct
duplication, then the object files would be identical. Another clue would be to look at a character dump of both object files.
Most programmers put some character information into their programs. While object code is not usually understandable, the
character information contained therein can often be recognized. If the defendant's software was copied from plaintiff's
object code, the identifying character information should be recognizable. During discovery, source code should be
demanded, as the defendant probably cannot produce source code.
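The size, byte-by-byte, and character-dump checks described above are straightforward to mechanize. The sketch below is a minimal Python illustration; the file names are hypothetical, and a real investigation would also record timestamps and work from forensically sound copies of the media.

```python
import hashlib
import re

def identical_bytes(path_a: str, path_b: str) -> bool:
    """Size check followed by a full byte-for-byte (hash) comparison."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        data_a, data_b = fa.read(), fb.read()
    if len(data_a) != len(data_b):
        return False
    return hashlib.sha256(data_a).digest() == hashlib.sha256(data_b).digest()

def printable_strings(data: bytes, min_len: int = 4) -> list:
    """Rough equivalent of a character dump: runs of printable ASCII
    that may expose identifying text embedded in object code."""
    return re.findall(rb"[ -~]{%d,}" % min_len, data)

# Hypothetical usage: "plaintiff.obj" and "defendant.obj" stand in for
# the parties' object files.
# if identical_bytes("plaintiff.obj", "defendant.obj"):
#     print("object files are byte-for-byte identical")
```

If the two object files are identical, direct duplication is established; if not, the printable strings pulled from each file can still reveal copyright notices, author names, or other identifying text carried over from the original.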

Updated Derivative Software From Original Source Code


If the defendant has access to plaintiff's source code, he would be able to modify and improve the software to make it more
marketable. He would want to make modifications to the original software to enable it to run on a different computer or with
a different operating system. In addition, by making such modifications, he is able to disguise the software so as to make
piracy less detectable.

The defendant would produce the new software by modifying the plaintiff's original source code. Probably, the original
formats of the screens, reports and menus will also have been changed. New screens and reports will have been generated.
Often, new functions will have been added. Possibly, some of the main logic will have been modified. However, there are
limits to the logic modification, since severe modification would make a complete re-write more cost effective.

To establish this type of software piracy, a computer expert must compare source code of the defendant's software with that
of the plaintiff's software. First, since the programming languages are the same, the expert should search for copied
segments of program code (exact duplication). Next, he should examine the data file structures of both systems. They should
be identical or substantially similar. Duplication of file structure is one of the telltale indications of software piracy. The
expert should then examine the program logic. In this type of software piracy, large segments of logic would be identical.
Variables would have the same or similar names, and identical constants would be used.
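The search for copied segments of program code can be illustrated with Python's standard `difflib` module, which finds the longest runs of identical lines between two texts. The source fragments below are hypothetical; real comparisons would run over entire program files.

```python
import difflib

def copied_segments(src_a: str, src_b: str, min_lines: int = 3):
    """Return blocks of `min_lines` or more consecutive identical lines,
    a telltale sign of exact duplication between two programs."""
    a, b = src_a.splitlines(), src_b.splitlines()
    matcher = difflib.SequenceMatcher(a=a, b=b, autojunk=False)
    return [a[m.a:m.a + m.size]
            for m in matcher.get_matching_blocks() if m.size >= min_lines]

# Hypothetical fragments: the payroll routine was lifted verbatim.
plaintiff_src = "read record\ncompute gross\ncompute tax\nnet = gross - tax\nprint check\n"
defendant_src = "open file\nread record\ncompute gross\ncompute tax\nnet = gross - tax\nclose file\n"

for block in copied_segments(plaintiff_src, defendant_src):
    print(block)
```

Long identical runs surrounded by rewritten code are exactly the pattern the expert is looking for in this type of piracy.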

Direct (Exact) Translation from Original Source Code Into Another Programming
Language
Software pirates are usually very clever. Translation of the original source code into another programming language
accomplishes three things. First, it can disguise the final product. Second, it can permit the software to run more efficiently
on a different computer or operating system. Third, it spares the pirate a redesign, since working software can be produced
directly from the original source code. The translation is usually performed on a line-by-line basis.

To establish software piracy in this instance, the expert must run both the plaintiff's and defendant's software to demonstrate
identical operation. In addition, he or she must compare both plaintiff's and defendant's source code. If the defendant's
software was produced by direct translation of plaintiff's source code, then the screens, reports and menus should be
identical, the file structure should be identical, the constants should be identical, and there should be a one-to-one
correspondence between the variables across both systems. To further establish this type of software piracy, the expert
should develop HIPO charts from both plaintiff's and defendant's source code. The hierarchical charts should be identical.
The IPO charts should show that the same logic was used to create both systems.

Updated Derivative Software from Translated Source Code


After a software pirate has translated software into a new programming language, he would probably perform modifications
to the software. This would be done to further disguise the software and to improve the software following translation.
Extensive modification to translated software could make software piracy virtually undetectable. Once again, screens,
reports and menus would be changed significantly. New screens and reports as well as new functions would be added. There
might also be some modification of the main logic.

A computer expert can establish this type of software piracy both by examining the source code of the two systems and by
observing their operation. The expert should examine the file structures of both systems. They should
be identical or, at least, very similar. He or she should search the source code for constants. Many would be the same.
Finally, the expert should develop HIPO charts. He or she should find that large sections of the hierarchy are identical or
similar. In those cases where the hierarchy is identical, the expert should examine the corresponding IPO diagrams. They
should also be identical or similar.

Exact Duplication of Software Using a 4GL


The late 1970s and early 1980s witnessed the development of fourth generation software development systems. With such
4GL systems, a software analyst could potentially create entire software systems at a terminal without having to write a
single line of program code. Thus, software piracy acquired a new dimension. Rather than copying object modules or
translating original source code, the pirate could easily duplicate the exact external functionality of someone else's software.
Copying using a 4GL is much simpler than translation, and it provides ease of subsequent modification. It can be done with
or without the pirate having original source code available. If he has the source code available, he would duplicate the
original data file structure as well as the data flow. On the other hand, he could derive a new file structure without source
code that would function just as well. The pirate can copy the software merely by reading the user manuals and by
observing software operation. Essentially, he duplicates the design of the original software.

The technical expert can establish this type of software piracy both by observation of software operation and from
examination of the source code. If the plaintiff's source code was available to the defendant when he copied the software,
the data file structures should be identical or extremely similar. If the source code was not available to him, the file structure
should contain the same elements (fields) which have the same specifications, but which are in a different order. Screens,
reports and menus should be identical. This can be observed from software operation as well as source code. Finally, the
expert should prepare HIPO diagrams. They should be identical.

Updated Derivative Software from 4GL Duplication


Once a program has been duplicated using a 4GL, a software pirate would probably update and modify the software. Piracy
in the resulting software would be extremely difficult to detect. Such modifications would be simple to generate.

A technical expert would have difficulty establishing software piracy in this instance. Where sufficient modification has
been performed, the resulting software is virtually new and original. The expert should examine the source code and observe
software operation. He or she should examine the data file structures for similarities. The expert should examine the design
of the system for investigating screens, reports, menus and program logic. Finally, he or she should develop HIPO diagrams
and search for similarities in logic.

Newly Developed Software Where Only the Design Was Copied


There have been many instances of software copyright infringement where the program code was completely new (not
copied), but where there was a deliberate effort to duplicate the design of an existing system. This was usually done to
enhance the marketability of a newly developed product, especially when the original software was very popular among
consumers.

This issue demands that an expert be able to show striking similarities in structure, function and organization. Menus,
screens, reports and logic should be similar. Specific methods of accomplishing certain tasks should be identical. An
example of this would be the use of all the same function keys to accomplish specific tasks. Copyright infringement is
demonstrated by observation of software operation. Nothing would be gained by examination of source code. If the software
has been sufficiently modified, proving copyright infringement would be very difficult.

Using an Expert to Resolve Computer-Related Intellectual Property Claims Without Litigation

Most computer-related intellectual property claims never go to court. Due to the high cost of litigation and the uncertainty of
outcome, they are either settled or abandoned. A technical expert can be used to increase the chance of reaching a favorable
settlement quickly. Bringing an informed technical perspective, an expert works with the attorney to prepare a compelling
argument on the merits of the case and the weaknesses of the opposition's case. In such instances, the attorney
and the expert, working as a team, often convince the opposing side that litigation would accomplish nothing.

Use of Computer Experts in Pretrial Proceedings


Computer-related intellectual property litigation requires one or more independent technical experts. An expert's report is
usually submitted to the adversary during pretrial discovery. At a minimum, the report is required to set forth the opinions
that the expert will offer at trial and the basis for these opinions. However, many reports are more substantial. They often
become treatises that attempt to prove whether or not infringement occurred. At times, because of the overwhelming nature
of an expert's technical report, the opposition offers to settle or withdraw from the proceedings. If the attorney hopes to
settle the matter without a trial, this type of report is desirable. However, where the attorney knows that a settlement is
unlikely, only a minimal report should be produced. Why should an expert over-prepare the opposition for trial?

During pretrial preparation, an expert should review pleadings for factual accuracy and suggest changes. A computer expert
should assist in preparing the complaint or, where required, counterclaim. The expert should also help prepare
interrogatories, document requests, and requests for admission addressing the technical aspects of the case.
With intellectual property litigation, the expert should attend depositions of all technical witnesses. In preparation for such
depositions, the expert prepares questions for the attorney to ask. At depositions the expert can provide ad hoc information
to an attorney that could make the depositions more meaningful or less damaging. Normally, the expert is deposed as a part
of pretrial discovery. An expert helps to evaluate the technical reports generated by opposing technical experts. Often, an
expert is asked to investigate an opposing expert in an effort to impeach that expert's credibility.

Sometimes, just prior to a trial, an attorney will ask for a computer expert's help with jury selection. The expert assists the
attorney in preparing a list of questions to ask prospective jurors during the voir dire. Such questions should reveal a
potential juror with expert knowledge of computers so that he or she may be challenged and excluded. This is important
because other jurors would look to this expert juror for guidance during deliberations.

Use of Technical Experts in Trial Proceedings


A technical expert is essential at trial. This individual is the most important witness in the case. Establishment of software
piracy is difficult because judges and juries have insufficient knowledge of the technical elements required to prove the
case. During direct examination, the expert must educate the fact finder. He or she must effectively explain complex
technical evidence to lay people. The expert normally uses exhibits and materials prepared before trial. It is important that
exhibits be presented because they are placed in the jury room at the end of the trial, and remain as a constant explanation
and reminder to the jury during its deliberations. Sometimes, an expert witness provides a hands-on demonstration of the
software to the court during direct examination.

In any matter of this type, both litigants present expert witnesses. This is very confusing to judges and juries since their
testimony will invariably conflict. The jury does not know which one to believe. Consequently, during cross-examination,
attorneys challenge every fact and every opinion of the opposing expert. One expert will always state that a sufficient
number of similarities exist between two software products so as to establish copying or derivation. The other expert will
always testify that the first expert's investigation was inconclusive. An attorney must sort out the logic that separates these
two witnesses and create a technical position that would be clear to the fact finder. This can only be done with the expert's
assistance. A technical expert must be able to anticipate the answers of the opposing expert. This can usually be
accomplished if he or she is familiar with the opposing expert's deposition. The expert then establishes a series of questions
or question categories designed to prove the point.

Summary
Technical experts are essential in computer-related intellectual property litigation. They are needed because the complex
technical issues are beyond the knowledge and understanding of the average layperson. Initially, the expert performs a
forensic investigation. He or she helps with discovery. Establishment of infringement is only possible with expert assistance.

In order to establish software piracy, an expert must examine and analyze both the software of the plaintiff and defendant.
This software is normally very large, and similarities are very difficult to find. This article presents a methodology that
simplifies the task of the expert.

Not only are experts essential in cases that go to trial, but they can be valuable in attaining satisfactory pretrial settlements.
Experts are as important during pretrial proceedings as they are during the trial itself.

Cash flow forecasting


From Wikipedia, the free encyclopedia

Cash flow forecasting is (1) in a corporate finance sense, the modeling of a company or asset's future
financial liquidity over a specific timeframe. Cash usually refers to the company's total bank balances,
but often what is forecast is the treasury position, which is cash plus short-term investments minus short-
term debt. Cash flow is the change in cash or treasury position from one period to the next. (2) In the
context of the entrepreneur or manager, it is forecasting the cash that will come into the business or
business unit, in order to ensure that outgoings can be managed so that they do not exceed incoming cash
flow. If there is one thing entrepreneurs learn fast, it is to become very good at cash flow forecasting.

Methods (corporate finance)


The direct method of cash flow forecasting schedules the company's cash receipts and disbursements
(R&D). Receipts are primarily the collection of accounts receivable from recent sales, but also include
sales of other assets, proceeds of financing, etc. Disbursements include payroll, payment of accounts
payable from recent purchases, dividends, debt service, etc. This direct R&D method is best suited to a
short-term forecasting horizon of 30 days or so, because this is the period for which actual, as
opposed to projected, data is available (de Caux, 2005).
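The direct method amounts to a day-by-day schedule of receipts and disbursements rolled into a running cash position. The sketch below is not taken from de Caux; the figures and the 30-day horizon are hypothetical.

```python
# Direct (receipts & disbursements) method: project the closing cash
# position for each day of a short horizon.

def direct_forecast(opening_cash, receipts, disbursements, days=30):
    """`receipts` and `disbursements` map day number -> amount."""
    position, path = opening_cash, []
    for day in range(1, days + 1):
        position += receipts.get(day, 0) - disbursements.get(day, 0)
        path.append((day, position))
    return path

receipts = {5: 40_000, 20: 55_000}          # collections of receivables, etc.
disbursements = {1: 15_000, 15: 30_000}     # payroll, payables, debt service

path = direct_forecast(100_000, receipts, disbursements)
print(path[-1])   # closing position on day 30 -> (30, 150000)
```

Because every entry is an actual expected payment or collection rather than an accounting projection, the schedule is only as reliable as the short horizon over which such data exists, which is the point made above.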

The three indirect methods are based on the company’s projected income statements and balance
sheets. The adjusted net income (ANI) method starts with operating income (EBIT or EBITDA) and
adds or subtracts changes in balance sheet accounts such as receivables, payables and inventories to
project cash flow. The pro-forma balance sheet (PBS) method looks straight at the projected book cash
account; if all the other balance sheet accounts have been correctly forecast, cash will be correct, too.
Both the ANI and PBS methods are best suited to the medium-term (up to one year) and long-term
(multiple years) forecasting horizons. Both are limited to the monthly or quarterly intervals of the
financial plan, and need to be adjusted for the difference between accrual-accounting book cash and the
often-significantly-different bank balances. (Association for Financial Professionals, 2006)
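The adjusted net income idea reduces to a single adjustment rule: increases in receivables and inventory absorb cash, while increases in payables free it. The account changes and figures below are hypothetical, and this one-liner is only a sketch of the ANI logic, not the full method.

```python
def ani_cash_flow(ebitda, d_receivables, d_inventory, d_payables):
    """Adjusted net income (ANI) sketch: start from operating income and
    adjust for period-over-period changes in working-capital accounts."""
    return ebitda - d_receivables - d_inventory + d_payables

# Hypothetical quarter: EBITDA 200k, receivables up 30k,
# inventory down 10k (frees cash), payables up 15k (frees cash).
print(ani_cash_flow(200_000, 30_000, -10_000, 15_000))   # 195000
```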

The third indirect approach is the accrual reversal method (ARM), which is similar to the ANI method.
But instead of using projected balance sheet accounts, large accruals are reversed and cash effects are
calculated based upon statistical distributions and algorithms. This allows the forecasting period to be
weekly or even daily. It also eliminates the cumulative errors inherent in the direct R&D method when
it is extended beyond the short-term horizon. But because the ARM allocates both accrual reversals and
cash effects to weeks or days, it is more complicated than the ANI or PBS indirect methods. The ARM
is best suited to the medium-term forecasting horizon (Bort, 1990).

Methods (entrepreneurial)


The simplest method is to have a spreadsheet that shows cash coming in from all sources out to at least
90 days, and all cash going out for the same period. This requires that the quantity and timings of
receipts of cash from sales are reasonably accurate, which in turn requires judgement honed by
experience of the industry concerned, because it is rare for cash receipts to match sales forecasts
exactly, and it is also rare for suppliers all to pay on time. These principles remain constant whether the
cash flow forecasting is done on a spreadsheet or on paper or on some other IT system.
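The spreadsheet described above can be sketched as weekly buckets of cash in and cash out, flagging any week in which the balance would go negative. All figures are hypothetical; a real forecast would extend to at least 90 days and would be revised as actual receipts diverge from the sales forecast.

```python
# 90-day entrepreneurial cash forecast, sketched as weekly buckets.

def weekly_positions(opening, cash_in, cash_out):
    """Return (week, closing balance, shortfall?) for each week."""
    balance, weeks = opening, []
    for week, (inflow, outflow) in enumerate(zip(cash_in, cash_out), start=1):
        balance += inflow - outflow
        weeks.append((week, balance, balance < 0))
    return weeks

cash_in  = [8_000, 0, 12_000, 5_000]   # receipts rarely match sales forecasts
cash_out = [6_000, 9_000, 4_000, 5_000]

for week, balance, danger in weekly_positions(2_000, cash_in, cash_out):
    print(week, balance, "SHORTFALL" if danger else "ok")
```

In this hypothetical run, week 2 shows a shortfall even though the business is cash-positive over the whole period, which is exactly the kind of timing problem the forecast exists to expose.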

A danger of relying too heavily on theoretical corporate-finance methods when forecasting cash flow to
manage a business is that the cash flow reported under financial accounting standards can contain
non-cash items. This goes to the heart of the difference between financial accounting and
management accounting.

Uses (corporate finance)


A cash flow projection is an important input into valuation of assets, budgeting and determining
appropriate capital structures in LBOs and leveraged recapitalizations.

Uses (entrepreneurial)


The point of making the forecast of incoming cash is to manage the outflow of cash so that the business
remains solvent. The section of the spreadsheet that shows cash out is thus the basis for what-if
modeling, for instance, "what if we pay our suppliers 30 days later?"
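The "what if we pay our suppliers 30 days later?" question can be sketched by shifting the supplier column of the cash-out schedule forward one period. The figures and the single-period granularity are hypothetical simplifications.

```python
def shift_supplier_payments(cash_out_by_period, periods=1):
    """Delay every supplier payment by `periods` periods (e.g. 30 days each)."""
    shifted = [0] * (len(cash_out_by_period) + periods)
    for i, amount in enumerate(cash_out_by_period):
        shifted[i + periods] += amount
    return shifted

suppliers = [10_000, 12_000, 9_000]          # monthly supplier payments
print(shift_supplier_payments(suppliers))    # [0, 10000, 12000, 9000]
```

Re-running the cash position with the shifted column shows immediately whether the delay removes an upcoming shortfall, which is the whole point of the what-if model.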

References
• de Caux, Tony, "Cash Forecasting", Treasurer's Companion, Association of Corporate
Treasurers, 2005
• Association for Financial Professionals, "Cash Flow Forecasting", 2006
• Bort, Richard, "Medium-Term Funds Flow Forecasting", Corporate Cash Management Handbook,
Warren Gorham & Lamont, 1990

Cost estimation in software engineering



The ability to accurately estimate the time and/or cost required to bring a project to a successful
conclusion is a long-standing problem for software engineers. The use of a repeatable, clearly defined and
well understood software development process has, in recent years, shown itself to be the most
effective method of gaining useful historical data that can be used for statistical estimation. In
particular, the act of sampling more frequently, coupled with the loosening of constraints between parts
of a project, has allowed more accurate estimation and more rapid development times.

Methods
Popular methods for estimation in software engineering include:

• Parametric Estimating
• Wideband Delphi
• COCOMO
• SLIM
• SEER-SEM: parametric estimation of effort, schedule, cost and risk, with minimum-time and staffing
concepts based on Brooks's law
• Function Point Analysis
• Proxy-based estimating (PROBE) (from the Personal Software Process)
• The Planning Game (from Extreme Programming)
• Program Evaluation and Review Technique (PERT)
• Analysis Effort method
• PRICE Systems: founders of commercial parametric models that estimate the scope, cost,
effort and schedule for software projects
• Evidence-based Scheduling: refinement of typical agile estimating techniques using minimal
measurement and total time accounting
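As a concrete instance of one method in the list, the basic COCOMO model estimates effort in person-months as a * KLOC^b, using published coefficient pairs for the three project classes. The 32 KLOC project size below is hypothetical.

```python
# Basic COCOMO: effort (person-months) = a * KLOC ** b,
# with Boehm's published coefficients per project class.

COCOMO_BASIC = {
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

def cocomo_effort(kloc: float, mode: str = "organic") -> float:
    a, b = COCOMO_BASIC[mode]
    return a * kloc ** b

# Hypothetical 32 KLOC project, estimated under each class.
for mode in COCOMO_BASIC:
    print(mode, round(cocomo_effort(32, mode), 1))
```

The superlinear exponent b captures the observation that effort grows faster than code size, and the harder project classes (semidetached, embedded) scale worse than organic ones.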

Cost-benefit analysis

Cost-benefit analysis is a term that refers both to:

• helping to appraise, or assess, the case for a project, programme or policy proposal;
• an approach to making economic decisions of any kind.

Under both definitions the process involves, whether explicitly or implicitly, weighing the total
expected costs against the total expected benefits of one or more actions in order to choose the best or
most profitable option. The formal process is often referred to as either CBA (Cost-Benefit Analysis) or
BCA (Benefit-Cost Analysis).

Benefits and costs are often expressed in money terms, and are adjusted for the time value of money, so
that all flows of benefits and flows of project costs over time (which tend to occur at different points in
time) are expressed on a common basis in terms of their “present value.” Closely related, but slightly
different, formal techniques include cost-effectiveness analysis, economic impact analysis, fiscal
impact analysis and Social Return on Investment (SROI) analysis. The latter builds upon the logic of
cost-benefit analysis, but differs in that it is explicitly designed to inform the practical decision-making
of enterprise managers and investors focused on optimizing their social and environmental impacts.


Theory
Cost–benefit analysis is often used by governments to evaluate the desirability of a given intervention,
and it is heavily used in government today. It is an analysis of the cost effectiveness of different
alternatives, undertaken to see whether the benefits outweigh the costs. The aim is to gauge the efficiency
of the intervention relative to the status quo. The costs and benefits of the impacts of an intervention are
evaluated in terms of the public's willingness to pay for them (benefits) or willingness to pay to avoid
them (costs). Inputs are typically measured in terms of opportunity costs, the value in their best
alternative use. The guiding principle is to list all parties affected by an intervention and place a
monetary value on the effect it has on their welfare as it would be valued by them.

The process involves weighing the monetary value of initial and ongoing expenses against the expected
return. Constructing plausible measures of the costs and benefits of specific actions is often very difficult. In practice,
analysts try to estimate costs and benefits either by using survey methods or by drawing inferences
from market behavior. For example, a product manager may compare manufacturing and marketing
expenses with projected sales for a proposed product and decide to produce it only if he expects the
revenues to eventually recoup the costs. Cost–benefit analysis attempts to put all relevant costs and
benefits on a common temporal footing. A discount rate is chosen, which is then used to compute all
relevant future costs and benefits in present-value terms. Most commonly, the discount rate used for
present-value calculations is an interest rate taken from financial markets (R.H. Frank 2000). This can
be very controversial; for example, a high discount rate implies a very low value on the welfare of
future generations, which may have a huge impact on the desirability of interventions to help the
environment. Empirical studies suggest that in reality, people's discount rates do decline over time.
Because cost–benefit analysis aims to measure the public's true willingness to pay, this feature is
typically built into studies.

During cost–benefit analysis, monetary values may also be assigned to less tangible effects such as the
various risks that could contribute to partial or total project failure, such as loss of reputation, market
penetration, or long-term enterprise strategy alignments. This is especially true when governments use
the technique, for instance to decide whether to introduce business regulation, build a new road, or
offer a new drug through the state healthcare system. In this case, a value must be put on human life or
the environment, often causing great controversy. For example, the cost–benefit principle says that we
should install a guardrail on a dangerous stretch of mountain road if the dollar cost of doing so is less
than the implicit dollar value of the injuries, deaths, and property damage thus prevented (R.H. Frank
2000).

Cost–benefit calculations typically involve using time value of money formulas. This is usually done
by converting the future expected streams of costs and benefits into a present value amount.

Application and history


The practice of cost–benefit analysis differs between countries and between sectors (e.g., transport,
health) within countries. Some of the main differences include the types of impacts that are included as
costs and benefits within appraisals, the extent to which impacts are expressed in monetary terms, and
differences in the discount rate between countries. Agencies across the world rely on a basic set of key
cost–benefit indicators, including the following:

• NPV (net present value)


• PVB (present value of benefits)
• PVC (present value of costs)
• BCR (benefit cost ratio = PVB / PVC)
• Net benefit (= PVB - PVC)
• NPV/k (where k is the level of funds available)
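These indicators all follow mechanically from discounted benefit and cost streams. The sketch below uses hypothetical streams and a hypothetical 5% discount rate; real appraisals would use agency-specific rates and horizons.

```python
# Key cost-benefit indicators from discounted benefit and cost streams.

def pv(stream, rate):
    """Present value of a per-period stream; period 0 is undiscounted."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

benefits = [0, 60, 60, 60]     # the project pays off in later periods
costs    = [100, 10, 10, 10]   # up-front outlay plus running costs

rate = 0.05
pvb, pvc = pv(benefits, rate), pv(costs, rate)
npv = pvb - pvc                # net present value (= net benefit here)
bcr = pvb / pvc                # benefit cost ratio

print(round(pvb, 2), round(pvc, 2), round(npv, 2), round(bcr, 3))
```

A positive NPV and a BCR above 1 both say the same thing (discounted benefits exceed discounted costs); the NPV/k ratio additionally ranks projects when funds k are limited.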

The concept of CBA dates back to an 1848 article by Dupuit and was formalized in subsequent works
by Alfred Marshall. The practical application of CBA was initiated in the US by the Corps of
Engineers, after the Federal Navigation Act of 1936 effectively required cost–benefit analysis for
proposed federal waterway infrastructure.[1] The Flood Control Act of 1939 was instrumental in
establishing CBA as federal policy. It specified the standard that "the benefits to whomever they accrue
[be] in excess of the estimated costs".[2]

Subsequently, cost–benefit techniques were applied to the development of highway and motorway
investments in the US and UK in the 1950s and 1960s. An early and often-quoted, more developed
application of the technique was made to London Underground's Victoria Line. Over the last 40 years,
cost–benefit techniques have gradually developed to the extent that substantial guidance now exists on
how transport projects should be appraised in many countries around the world.

In the UK, the New Approach to Appraisal (NATA) was introduced by the then Department for
Transport, Environment and the Regions. This brought together cost–benefit results with those from
detailed environmental impact assessments and presented them in a balanced way. NATA was first
applied to national road schemes in the 1998 Roads Review but subsequently rolled out to all modes of
transport. It is now a cornerstone of transport appraisal in the UK and is maintained and developed by
the Department for Transport.[11]

The EU's 'Developing Harmonised European Approaches for Transport Costing and Project
Assessment' (HEATCO) project, part of its Sixth Framework Programme, has reviewed transport
appraisal guidance across EU member states and found that significant differences exist between
countries. HEATCO's aim is to develop guidelines to harmonise transport appraisal practice across the
EU.[12][13][3]

Transport Canada has also promoted the use of CBA for major transport investments since the issuance
of its Guidebook in 1994.[4]

More recent guidance has been provided by the United States Department of Transportation and several
state transportation departments, with discussion of available software tools for application of CBA in
transportation, including HERS, BCA.Net, StatBenCost, CalBC, and TREDIS. Available guides are
provided by the Federal Highway Administration[5][6], Federal Aviation Administration[7], Minnesota
Department of Transportation[8], California Department of Transportation (Caltrans)[9], and the
Transportation Research Board Transportation Economics Committee [10].

In the early 1960s, CBA was also extended to assessment of the relative benefits and costs of healthcare
and education in works by Burton Weisbrod.[11][12] Later, the United States Department of Health and
Human Services issued its CBA Guidebook.[13]

Net present value

In finance, the net present value (NPV) or net present worth (NPW)[1] of a time series of cash flows,
both incoming and outgoing, is defined as the sum of the present values (PVs) of the individual cash
flows. In the case when all future cash flows are incoming (such as coupons and principal of a bond)
and the only outflow of cash is the purchase price, the NPV is simply the PV of future cash flows
minus the purchase price (which is its own PV). NPV is a central tool in discounted cash flow (DCF)
analysis, and is a standard method for using the time value of money to appraise long-term projects.
Used for capital budgeting, and widely throughout economics, finance, and accounting, it measures the
excess or shortfall of cash flows, in present value terms, once financing charges are met.

The NPV of a sequence of cash flows takes as input the cash flows and a discount rate or discount
curve, and outputs a price. The converse process in DCF analysis, which takes as input a sequence of cash
flows and a price and infers as output a discount rate (the discount rate which would yield the given
price as NPV), is called the yield, and is more widely used in bond trading.
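Both directions described above can be sketched in a few lines: cash flows plus a rate yield a price, and cash flows plus a price yield a rate. The bond figures are hypothetical, and the yield is found by simple bisection rather than any standard financial routine.

```python
def npv(rate, cash_flows):
    """cash_flows[t] is the net cash flow at the end of period t+1."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def yield_rate(price, cash_flows, lo=0.0, hi=1.0, tol=1e-9):
    """Find the rate whose NPV equals `price` (NPV falls as the rate rises,
    so bisect on the difference)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

bond = [5, 5, 105]                 # coupons plus principal of a 5% bond
print(round(npv(0.05, bond), 6))   # par: price equals 100 when rate = coupon
print(round(yield_rate(100.0, bond), 6))
```

Pricing the 5% bond at a 5% rate recovers par, and feeding the par price back in recovers the 5% yield, illustrating that the two computations are inverses.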


Return On Investment - ROI

What Does Return On Investment - ROI Mean?


A performance measure used to evaluate the efficiency of an investment or to compare the efficiency of
a number of different investments. To calculate ROI, the benefit (return) of an investment is divided by
the cost of the investment; the result is expressed as a percentage or a ratio.

The return on investment formula:

ROI = (Gains from Investment - Cost of Investment) / Cost of Investment
In the above formula, "gains from investment" refers to the proceeds obtained from selling the
investment of interest. Return on investment is a very popular metric because of its versatility and
simplicity. That is, if an investment does not have a positive ROI, or if there are other opportunities
with a higher ROI, then the investment should not be undertaken.

Investopedia explains Return On Investment - ROI
Keep in mind that the calculation for return on investment, and therefore the definition, can be modified
to suit the situation; it all depends on what you include as returns and costs. The definition of the term
in the broadest sense simply attempts to measure the profitability of an investment and, as such, there is no
one "right" calculation.

For example, a marketer may compare two different products by dividing the gross profit that each
product has generated by its respective marketing expenses. A financial analyst, however, may compare
the same two products using an entirely different ROI calculation, perhaps by dividing the net income
of an investment by the total value of all resources that have been employed to make and sell the
product.

This flexibility has a downside, as ROI calculations can be easily manipulated to suit the user's
purposes, and the result can be expressed in many different ways. When using this metric, make sure
you understand what inputs are being used.
Return on Investment (ROI) analysis is one of several commonly used approaches for
evaluating the financial consequences of business investments, decisions, or actions. ROI
analysis compares the magnitude and timing of investment gains directly with the magnitude
and timing of investment costs. A high ROI means that investment gains compare favorably
to investment costs.

In the last few decades, ROI has become a central financial metric for asset purchase decisions
(computer systems, factory machines, or service vehicles, for example), approval and
funding decisions for projects and programs of all kinds (such as marketing programs, recruiting
programs, and training programs), and more traditional investment decisions (such as the
management of stock portfolios or the use of venture capital).
The ROI Concept

Most forms of ROI analysis compare investment returns and costs by constructing a ratio, or
percentage. In most ROI methods, an ROI ratio greater than 0.00 (or a percentage greater than
0%) means the investment returns more than its cost. When potential investments compete for
funds, and when other factors between the choices are truly equal, the investment (or action, or
business case scenario) with the higher ROI is considered the better choice, or the better
business decision.

One serious problem with using ROI as the sole basis for decision making is that ROI by itself
says nothing about the likelihood that expected returns and costs will appear as predicted. ROI
by itself, that is, says nothing about the risk of an investment. ROI simply shows how returns
compare to costs if the action or investment brings the results hoped for. (The same is also true
of other financial metrics, such as net present value or internal rate of return.) For that
reason, a good business case or a good investment analysis will also measure the probabilities of
different ROI outcomes, and wise decision makers will consider both the ROI magnitude and
the risks that go with it.

Decision makers will also expect practical suggestions from the ROI analyst on ways to
improve ROI by reducing costs, increasing gains, or accelerating gains.
Simple ROI for Cash Flow and Investment Analysis

Return on investment is frequently derived as the "return" (incremental gain) from an action
divided by the cost of that action. That is "simple ROI," as used in business case analysis and
other forms of cash flow analysis. For example, what is the ROI for a new marketing program that is
expected to cost $500,000 over the next five years and deliver an additional $700,000 in increased
profits during the same time?

Simple ROI = (Gains - Investment Costs) / Investment Costs
           = ($700,000 - $500,000) / $500,000 = 40%

Simple ROI is the most frequently used form of ROI and the most easily understood. With simple ROI,
incremental gains from the investment are divided by investment costs.
Simple ROI works well when both the gains and the costs of an investment are easily known and where
they clearly result from the action. In complex business settings, however, it is not always easy
to match specific returns (such as increased profits) with the specific costs that bring them (such
as the costs of a marketing program), and this makes ROI less trustworthy as a guide for decision
support. Simple ROI also becomes less trustworthy as a useful metric when the cost figures include
allocated or indirect costs, which are probably not caused directly by the action or the
investment.
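The simple-ROI arithmetic above can be sketched as follows (a minimal illustration; the function name is ours, and the figures are the ones from the marketing-program example):

```python
def simple_roi(gains, costs):
    """Simple ROI: (gains - costs) / costs."""
    return (gains - costs) / costs

# The marketing-program example: $500,000 cost, $700,000 incremental profit
roi = simple_roi(gains=700_000, costs=500_000)
print(f"Simple ROI = {roi:.0%}")  # Simple ROI = 40%
```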
Competing Investments: ROI From Cash Flow Streams

ROI and other financial metrics that take an investment view of an action or investment compare
investment returns to investment costs. However, each of the major investment metrics (ROI,
internal rate of return IRR, net present value NPV, and payback period) approaches the comparison
differently, and each carries a different message. This section illustrates ROI calculation from a
cash flow stream for two competing investments, and the next section (ROI vs. NPV, IRR, and Payback
Period) compares the differing and sometimes conflicting messages from different financial metrics.

Consider two five-year investments competing for funding, Investment A and Investment B. Which is
the better business decision? Analysts will look first at the net cash flow streams from each
investment. The net cash flow data and comparison graph appear below.

Payback period
From Wikipedia, the free encyclopedia
Payback period in capital budgeting refers to the period of time required for the return on an
investment to "repay" the sum of the original investment. For example, a $1000 investment which
returned $500 per year would have a two-year payback period. The time value of money is not taken
into account. Payback period intuitively measures how long something takes to "pay for itself."
All else being equal, shorter payback periods are preferable to longer payback periods. Payback
period is widely used because of its ease of use, despite the recognized limitations described
below.

The term is also widely used in other types of investment areas, often with respect to energy
efficiency technologies, maintenance, upgrades, or other changes. For example, a compact
fluorescent light bulb may be described as having a payback period of a certain number of years or
operating hours, assuming certain costs. Here, the return to the investment consists of reduced
operating costs. Although primarily a financial term, the concept of a payback period is
occasionally extended to other uses, such as energy payback period (the period of time over which
the energy savings of a project equal the amount of energy expended since project inception); these
other terms may not be standardized or widely used.

Payback period as a tool of analysis is often used because it is easy to apply and easy to
understand for most individuals, regardless of academic training or field of endeavour. When used
carefully or to compare similar investments, it can be quite useful. As a stand-alone tool to
compare an investment to "doing nothing," payback period has no explicit criteria for
decision-making (except, perhaps, that the payback period should be less than infinity).

The payback period is considered a method of analysis with serious limitations and qualifications
for its use, because it does not account for the time value of money, risk, financing, or other
important considerations such as the opportunity cost. While the neglect of the time value of money
can be rectified by applying a weighted average cost of capital discount, it is generally agreed
that this tool for investment decisions should not be used in isolation. Alternative measures of
"return" preferred by economists are net present value and internal rate of return. An implicit
assumption in the use of payback period is that returns to the investment continue after the
payback period. Payback period does not specify any required comparison to other investments or
even to not making an investment.
There is no closed-form formula to calculate the payback period, except in the simple and
unrealistic case of an initial cash outlay followed by constant (or constantly growing) cash
inflows. In general, an algorithm is needed to calculate the payback period; it is easily applied
in spreadsheets. The typical algorithm reduces to the calculation of cumulative cash flow and the
moment at which it turns from negative to positive.
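The cumulative-cash-flow algorithm just described can be sketched as follows (a minimal illustration; interpolating linearly within the crossing year is a common convention, not the only one):

```python
def payback_period(cash_flows):
    """Years until cumulative cash flow first turns non-negative.

    cash_flows[0] is the initial outlay (negative). Interpolates
    linearly within the crossing year; returns None if never repaid.
    """
    cumulative = cash_flows[0]
    for year, flow in enumerate(cash_flows[1:], start=1):
        previous = cumulative
        cumulative += flow
        if cumulative >= 0:
            # Whole years elapsed, plus the fraction of this year's
            # inflow needed to cover the remaining deficit
            return (year - 1) + (-previous / flow)
    return None

# The text's example: a $1000 investment returning $500 per year
print(payback_period([-1000, 500, 500, 500]))  # 2.0
```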
Additional complexity arises when the cash flow changes sign several times, i.e., when it contains
outflows in the midst or at the end of the project lifetime. The modified payback period algorithm
may be applied then. First, the sum of all of the cash outflows is calculated. Then the cumulative
positive cash flows are determined for each period. The modified payback period is calculated as
the moment at which the cumulative positive cash flow exceeds the total cash outflow.
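A sketch of the modified algorithm (the cash-flow figures are hypothetical, and "exceeds" is read here as "reaches or exceeds"):

```python
def modified_payback_period(cash_flows):
    """Modified payback: the first period in which cumulative positive
    cash flows reach the total of all cash outflows, wherever those
    outflows occur in the project lifetime."""
    total_outflow = -sum(flow for flow in cash_flows if flow < 0)
    cumulative_inflow = 0.0
    for period, flow in enumerate(cash_flows):
        if flow > 0:
            cumulative_inflow += flow
        if cumulative_inflow >= total_outflow:
            return period
    return None

# Hypothetical project with an outflow at the start and another at period 4
print(modified_payback_period([-1000, 600, 600, 600, -500]))  # 3
```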

COCOMO

The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model
developed by Barry Boehm. The model uses a basic regression formula, with parameters that are
derived from historical project data and current project characteristics.

COCOMO was first published in Barry W. Boehm's 1981 book Software Engineering Economics[1] as a
model for estimating effort, cost, and schedule for software projects. It drew on a study of 63
projects at TRW Aerospace, where Barry Boehm was Director of Software Research and Technology in 1981. The
study examined projects ranging in size from 2,000 to 100,000 lines of code, and programming
languages ranging from assembly to PL/I. These projects were based on the waterfall model of software
development which was the prevalent software development process in 1981.

References to this model typically call it COCOMO 81. In 1997 COCOMO II was developed and
finally published in 2000 in the book Software Cost Estimation with COCOMO II[2]. COCOMO II is the
successor of COCOMO 81 and is better suited for estimating modern software development projects. It
provides more support for modern software development processes and an updated project database.
The need for the new model came as software development technology moved from mainframe and
overnight batch processing to desktop development, code reusability and the use of off-the-shelf
software components. This article refers to COCOMO 81.

COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. The first level,
Basic COCOMO is good for quick, early, rough order of magnitude estimates of software costs, but its
accuracy is limited due to its lack of factors to account for differences in project attributes (Cost
Drivers). Intermediate COCOMO takes these Cost Drivers into account and Detailed COCOMO
additionally accounts for the influence of individual project phases.


Basic COCOMO


Basic COCOMO computes software development effort (and cost) as a function of program size.
Program size is expressed in estimated thousands of lines of code (KLOC).
COCOMO applies to three classes of software projects:

• Organic projects - "small" teams with "good" experience working with "less than rigid"
requirements
• Semi-detached projects - "medium" teams with mixed experience working with a mix of rigid
and less than rigid requirements
• Embedded projects - developed within a set of "tight" constraints (hardware, software,
operational, ...)

The basic COCOMO equations take the form

Effort Applied = a_b * (KLOC)^(b_b)  [man-months]
Development Time = c_b * (Effort Applied)^(d_b)  [months]
People Required = Effort Applied / Development Time  [count]

The coefficients a_b, b_b, c_b and d_b are given in the following table.

Software project    a_b    b_b    c_b    d_b
Organic             2.4    1.05   2.5    0.38
Semi-detached       3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32

Basic COCOMO is good for quick estimates of software costs. However, it does not account for
differences in hardware constraints, personnel quality and experience, use of modern tools and
techniques, and so on.
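The three equations and the coefficient table above can be sketched directly (a minimal illustration; the dictionary layout and function name are ours):

```python
# Basic COCOMO coefficients (a_b, b_b, c_b, d_b) from the table above
BASIC = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_class):
    """Effort (person-months), development time (months) and average
    staffing for a project of `kloc` thousand lines of code."""
    a, b, c, d = BASIC[project_class]
    effort = a * kloc ** b
    dev_time = c * effort ** d
    people = effort / dev_time
    return effort, dev_time, people

effort, dev_time, people = basic_cocomo(10, "organic")
print(f"{effort:.1f} PM, {dev_time:.1f} months, {people:.1f} people")
# about 26.9 PM, 8.7 months, 3.1 people
```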

Intermediate COCOMOs


Intermediate COCOMO computes software development effort as a function of program size and a set of
"cost drivers" that include subjective assessments of product, hardware, personnel and project
attributes. This extension considers four categories of cost drivers, each with a number of
subsidiary attributes:

• Product attributes
o Required software reliability
o Size of application database
o Complexity of the product
• Hardware attributes
o Run-time performance constraints
o Memory constraints
o Volatility of the virtual machine environment
o Required turnabout time
• Personnel attributes
o Analyst capability
o Software engineering capability
o Applications experience
o Virtual machine experience
o Programming language experience
• Project attributes
o Use of software tools
o Application of software engineering methods
o Required development schedule

Each of the 15 attributes receives a rating on a six-point scale that ranges from "very low" to
"extra high" (in importance or value). An effort multiplier from the table below applies to the
rating. The product of all effort multipliers results in an effort adjustment factor (EAF). Typical
values for EAF range from 0.9 to 1.4.

Cost Drivers                                     Very Low   Low    Nominal   High   Very High   Extra High
Product attributes
  Required software reliability                    0.75     0.88    1.00     1.15     1.40
  Size of application database                              0.94    1.00     1.08     1.16
  Complexity of the product                        0.70     0.85    1.00     1.15     1.30        1.65
Hardware attributes
  Run-time performance constraints                                  1.00     1.11     1.30        1.66
  Memory constraints                                                1.00     1.06     1.21        1.56
  Volatility of the virtual machine environment             0.87    1.00     1.15     1.30
  Required turnabout time                                   0.87    1.00     1.07     1.15
Personnel attributes
  Analyst capability                               1.46     1.19    1.00     0.86     0.71
  Applications experience                          1.29     1.13    1.00     0.91     0.82
  Software engineer capability                     1.42     1.17    1.00     0.86     0.70
  Virtual machine experience                       1.21     1.10    1.00     0.90
  Programming language experience                  1.14     1.07    1.00     0.95
Project attributes
  Application of software engineering methods      1.24     1.10    1.00     0.91     0.82
  Use of software tools                            1.24     1.10    1.00     0.91     0.83
  Required development schedule                    1.23     1.08    1.00     1.04     1.10

The Intermediate COCOMO formula now takes the form:

E = a_i * (KLoC)^(b_i) * EAF

where E is the effort applied in person-months, KLoC is the estimated number of thousands of
delivered lines of code for the project, and EAF is the factor calculated above. The coefficient
a_i and the exponent b_i are given in the next table.

Software project    a_i    b_i
Organic             3.2    1.05
Semi-detached       3.0    1.12
Embedded            2.8    1.20

The Development time D calculation uses E in the same way as in the Basic COCOMO.
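The intermediate formula with the EAF can be sketched as follows (the two non-nominal multipliers are illustrative choices taken from the rating table above; everything else is assumed nominal):

```python
from math import prod  # Python 3.8+

# Intermediate COCOMO coefficients (a_i, b_i) from the table above
INTERMEDIATE = {
    "organic":       (3.2, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (2.8, 1.20),
}

def intermediate_cocomo(kloc, project_class, multipliers):
    """E = a_i * (KLoC)^(b_i) * EAF, where the effort adjustment
    factor EAF is the product of the 15 cost-driver multipliers."""
    a, b = INTERMEDIATE[project_class]
    eaf = prod(multipliers)
    return a * kloc ** b * eaf

# Illustrative choice: required reliability rated High (1.15), analyst
# capability rated High (0.86), the remaining 13 drivers Nominal (1.00)
effort = intermediate_cocomo(10, "organic", [1.15, 0.86] + [1.00] * 13)
print(f"Effort = {effort:.1f} person-months")  # Effort = 35.5 person-months
```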

Detailed COCOMO


Detailed COCOMO incorporates all characteristics of the intermediate version, together with an
assessment of each cost driver's impact on each step (analysis, design, etc.) of the software
engineering process. The detailed model uses different effort multipliers for each cost driver
attribute; these phase-sensitive effort multipliers are used to determine the amount of effort
required to complete each phase.


Software Quality Factors

So far we have been discussing software quality in general: what it means for a product to be of
high quality. We also looked at the CMM in brief. We now need to know the various quality factors
against which the quality of the software produced is evaluated. These factors are given below.

The various factors which influence the software are termed software quality factors. They can be
broadly divided into two categories on the basis of measurability. The first category contains
factors that can be measured directly, such as the number of logical errors; the second category
contains factors that can be measured only indirectly, for example maintainability. In either case,
each factor must be measured to check for content and quality control. A few quality factors are
listed below.

• Correctness - extent to which a program satisfies its specification and fulfills the client's
objective.
• Reliability - extent to which a program can be expected to perform its function with the required
precision.
• Efficiency - amount of computing and code required by a program to perform its function.
• Integrity - extent to which access to software and data is denied to unauthorized users.
• Usability- labor required to understand, operate, prepare input and interpret output of a program
• Maintainability- effort required to locate and fix an error in a program.
• Flexibility- effort needed to modify an operational program.
• Testability- effort required to test the programs for their functionality.
• Portability- effort required to run the program from one platform to other or to different
hardware.
• Reusability- extent to which the program or it’s parts can be used as building blocks or as
prototypes for other programs.
• Interoperability- effort required to couple one system to another.

Considering the factors mentioned above, it becomes obvious that measuring all of them to some
discrete value is quite an impossible task. Therefore, another method was evolved to measure the
quality. A set of metrics is defined and used to develop an expression for each of the factors, as
per the following expression:

Fq = C1*M1 + C2*M2 + ... + Cn*Mn

where Fq is the software quality factor, the Cn are regression coefficients and the Mn are metrics
that influence the quality factor. The metrics used in this arrangement are mentioned below.

• Auditability- ease with which the conformance to standards can be verified.


• Accuracy- precision of computations and control
• Communication commonality- degree to which standard interfaces, protocols and bandwidth
are used.
• Completeness- degree to which full implementation of functionality required has been achieved.
• Conciseness- program’s compactness in terms of lines of code.
• Consistency- use of uniform design and documentation techniques throughout the software
development.
• Data commonality- use of standard data structures and types throughout the program.
• Error tolerance – damage done when program encounters an error.
• Execution efficiency- run-time performance of a program.
• Expandability- degree to which one can extend architectural, data and procedural design.
• Hardware independence- degree to which the software is de-coupled from its operating
hardware.
• Instrumentation- degree to which the program monitors its own operation and identifies errors
that do occur.
• Modularity- functional independence of program components.
• Operability- ease of programs operation.
• Security- control and protection of programs and database from the unauthorized users.
• Self-documentation- degree to which the source code provides meaningful documentation.
• Simplicity- degree to which a program is understandable without much difficulty.
• Software system independence- degree to which program is independent of nonstandard
programming language features, operating system characteristics and other environment
constraints.
• Traceability- ability to trace a design representation or actual program component back to initial
objectives.
• Training- degree to which the software is user-friendly to new users.
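The regression expression above is a simple weighted sum, which can be sketched as follows (the coefficient and metric values are purely illustrative assumptions, not values from any real regression):

```python
def quality_factor(coefficients, metrics):
    """Fq = C1*M1 + C2*M2 + ... + Cn*Mn (a simple weighted sum)."""
    return sum(c * m for c, m in zip(coefficients, metrics))

# Hypothetical regression coefficients and metric scores on a 0-10 scale
coefficients = [0.4, 0.35, 0.25]   # illustrative weights
metrics = [8.0, 6.5, 9.0]          # e.g. auditability, accuracy, completeness
print(quality_factor(coefficients, metrics))  # 7.725
```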

There are various "checklists" for software quality. One of them, given by Hewlett-Packard, has
been given the acronym FURPS - for Functionality, Usability, Reliability, Performance and
Supportability.

Functionality is measured via the evaluation of the feature set and the program capabilities, the
generality of the functions that are derived, and the overall security of the system.
Usability is assessed by considering human factors, overall aesthetics, consistency, and
documentation.

Reliability is evaluated by measuring the frequency and severity of failures, the accuracy of
output results, the mean time between failures (MTBF), the ability to recover from failure and the
predictability of the program.

Performance is measured in terms of processing speed, response time, resource consumption,
throughput and efficiency. Supportability combines the ability to extend the program, adaptability,
serviceability (in other terms, maintainability), and also testability, compatibility,
configurability and the ease with which a system can be installed.

Software Quality Attributes


September 7, 2006 11:39:46.490

Chapters 4 and 5 of Software Architecture in Practice are about "software quality attributes". This is
what they call non-functional requirements like performance, security, reliability, modifiability,
testability and usability. These are, in fact, the main ones they talk about (though the book says
"availability" when it should say "reliability"). The book claims that one of the main purposes of
architecture is to ensure these attributes. I go along with that, because most of these are global properties
of systems. Chapter 4 talks about how to specify these attributes, and chapter 5 talks about how to
achieve them. It does this by describing, for each attribute, tactics for achieving the attribute.

Of course, most of these are big topics. UIUC has courses on most of them. Moreover, these topics can
be specialized by problem domain. Performance means a different thing for programming scientific
applications on supercomputers than it does for distributed business systems, or real-time control
systems. So, the few pages that SAIP gives to each quality attribute is not nearly enough. Nevertheless,
what the book says is important.

The book says that patterns bundle tactics. In other words, patterns are concrete examples of how to use
a few tactics together. Certainly some patterns are like this. But I've seen people write patterns that were
the same thing as one of the tactics. I think that tactics are patterns, too.

Even though the book doesn't explain how to use any tactic, the lists that it gives should be useful for
people who want to document patterns because it gives an outline of possible patterns.


Capability Maturity Model versus ISO 9000


An assessment
John R. Snyder

March, 2003

Abstract
This paper serves as a general guideline for those who wish to implement a process improvement
model--but are unsure of which governing framework to select. How the process model has become the
dominant framework for software engineering activities is investigated, as is the distinction of a process
model from a lifecycle model.

The two dominant process improvement models in use today, Capability Maturity Model and the ISO
9000 standard, will be illustrated, contrasted, and analyzed for applicability to software development
environments. The focus is on the ISO guidelines in areas that are most relevant to software engineering,
that is, the ISO 9000:3. The recent ISO 9000:2000 updates and revisions are discussed at a high-level.
Previous publications on this topic are analyzed for relevance to today's environment.
Some general conclusions are developed as to the applicability of each of the process model standards
for different types of software development organizations and business environments.

Introduction
The two most common process models in use today for software engineering are the Capability Maturity
Model (CMM), and the International Organization for Standards (ISO) ISO 9000 standard. To embezzle
the classic "To be or not to be" phrase of the ancient thespian Shakespeare, "To CMM, or ISO--that is
the question". Indeed, choosing a process improvement framework is a daunting prospect for the
uninitiated. The "alphabet soup" of acronyms and labyrinth of clauses can be confusing to interpret.
Developing analytic comparisons between the two models can be problematic because of interpretation
issues.

Mark C. Paulk of the Software Engineering Institute, in his 1994 paper "A Comparison of ISO 9001 and
the Capability Maturity Model for Software" (Paulk, 1994) developed an analysis of the relationships
between ISO 9001, and the CMM model. However, since the publication of that document, ISO 9001
has undergone a major revision. The 1994 standard, which consisted of a twenty-clause structure, now
consists of only five clauses.

In this document, the most recent model of the ISO 9000 standard will be compared and contrasted to
the current CMM standard. This paper will attempt to analyze and assess how these two process models
compare and contrast; how applicable each respective model might be in your organization.

Process Model Defined

The Historical Perspective

Why is the topic of "process" synonymous with the creation of high-quality software by professionals?
According to Dampier and Glickstein (2000, p. 4), "The quality of a software system is highly
influenced by the quality of the process used to develop and maintain it". Software and the hardware
systems that process the software have become increasingly complex. When computers began to get a
foothold in academia and business in the 1970's, only the rare mission-critical software project was
managed with any type of methodology framework. The early computers and their software were
certainly not "simple", but perhaps more "straightforward" to manipulate. The smaller collection of
configurations, permutations, and the people who understood them made the creation and maintenance
of software easier to manage. In addition, the rudimentary nature of the tools in use through the early
1980's dictated that the pace of software construction was extremely slow and methodical--therefore less
prone to be in error. Are you old enough to remember punch cards?

In the last ten to twenty years, the advent of high-level programming languages and the personal
computer brought the ability to create software to a much larger group. The responsibilities that
accompanied this new ability were not always considered. Amateurish programming and get-rich-quick
software schemes unleashed a Pandora's Box of software issues on a naive public. Horror stories of
malfunctioning software are rampant today. Software projects are described as a "Death March"
(Yourdon, 1997) more often than not.

One could easily attribute many of the difficulties faced by software developers to the immaturity of the
science--we do not have the luxury of hundreds of years of empirical experience. However, from a
historical perspective, the emerging field of Software Engineering may be doing better than the constant
negativity in the news media would lead us to believe. Schulz, speaking to the failure rate of
Information Technology projects, states, "IT is performing just as well as other disciplines". He goes on to make the
assertion "Perhaps the problem is that IT is just newer, more active and being studied and reported more
frequently" (2000, para 2).

Complexity

Certainly, the complexity of software is one major factor that contributes to software project failures,
and products that are laden with defects. "Software entities are more complex for their size than
perhaps any other human construct" (Dorfman & Thayer, 1997, p. 14). Are quality control mechanisms
that have been successful in other genres of manufacturing applicable to the creation of software? Many in
the industry believe that the application of "engineering discipline to the development and maintenance
of software" (Paulk, 2002) would bring Total Quality Management (TQM) concepts into the software
development process. Thus, process, and "life cycle" models for software engineering were created from
the archetypes that interjected quality control into other lines of business and hard-goods manufacturing.

The Life Cycle Model

Often, there are misconceptions and confusion about what is a process model, and what constitutes a life
cycle model. Dorfman and Thayer (1997, p. 401), describe a life cycle model as "a model of the phases
or activities that start when a software product is conceived and end when the product is no longer
available for use". We know a life cycle model as the familiar events that make up software creation:
requirements, design, coding, integration, testing. Several different types of life cycle models exist, and
the procedures may take place once in sequence, in multiple iterations, or some other variation
(Dorfman & Thayer, 1997, p. 401).

A life cycle model defines how the software product is assembled, equivalent to the assembly line
in a software factory. A process model, then, describes the "sub-activities or tasks within a phase
or activity, the dependencies among them, and the conditions that must exist before the tasks can
begin" (Dorfman & Thayer, 1997, p. 402). The process model provides a framework for the life cycle
model to operate in, analogous to the operations manual for the software assembly line and the
workers who run it. Figure 1 shows the relationship between the organization-level process model
and the project-level life cycle model. Note that each software project in the organization may use
a different type of life cycle model, but at the enterprise level, the organization will only
implement one process model.

Figure 1 - Process Model Role in the Organization

The Process Model

Organization process models are usually mentioned in the context of software quality assurance
activities, or as part of an enterprise-wide Total Quality Management plan. This is appropriate as a
mechanism of the "engineering discipline" mentioned previously (Paulk, 2002). In addition, any
practitioner of software engineering will quickly confirm that testing activities alone are not enough to
introduce quality into software. Software is only one element delivered to the customer. In order for the
software to execute, the engineer must consider the hardware that it will be installed on, and the data set
that it will use when the software runs. Software is part of a system. As such, it must be viewed
systemically through all phases of development.

Similarly, the testing and quality control activities for software must be viewed as one component of a
quality assurance system. Pressman defines a quality assurance system as "the organizational structure,
responsibilities, procedures, processes, and resources for implementing quality management" (2001, p.
216). The quality system helps to objectify and quantify the activities that provide assurance to the
organization that customer expectations are being met. As stated previously, testing alone is not software
quality assurance. Software quality assurance is the planning, controlling, measuring, testing, reporting--
the improvement of quality measures throughout development activities (Pressman, 2001).

The process model used by an organization defines the umbrella quality assurance system that is used.
The definition of a process model is analogous to the description of a Total Quality Management
system: "a comprehensive set of management tools, management philosophies, and improvement
methods" (Tingey, 1997, p. 5). The essential elements of a process model, according to Tingey (1997), are:

• Customer orientation
• Empowerment of employees
• Participative management
• Data-based decisions
• Continual improvement
• "Process" orientation
• Quantitative tools for process improvement

Many variants of process models exist. Some have been created with software development in mind,
some are more slanted to hard manufacturing, and some to service-only organizations. What is the
correct process model for your organization? Should you use the CMM structure, or the ISO 9000
paradigm?

The Capability Maturity Model (CMM)

The Software Engineering Institute developed the Capability Maturity Model (CMM) for software at
Carnegie Mellon University. The Software Engineering Institute (SEI) is a federally funded research and
development center sponsored by the U.S. Department of Defense through the Office of the Under
Secretary of Defense for Acquisition, Technology, and Logistics (2003, para. 1). The U.S. Department
of Defense (DOD) recognized that in order to create and maintain the high-quality software systems that
it needs, a scientifically developed process model was required. The DOD commissioned the SEI in
1986 with selfish motivations; however, the entire software community has since benefited from the
work of this Institute. In fact, the Software CMM is probably one of the most well-known and most
widely used models worldwide that are specific to software process improvement as of this date
(Paulk, 1999).

The Capability Maturity Model for Software describes the principles and practices underlying software
process maturity and is intended to help software organizations improve the maturity of their software
processes in terms of an evolutionary path from ad hoc, chaotic processes to mature, disciplined
software processes. The Software CMM is organized into five maturity levels, described in Table 1.

Maturity Level    Description

1 - Initial       The software process is characterized as ad hoc, and occasionally even chaotic.
                  Few processes are defined, and success depends on individual effort and heroics.

2 - Repeatable    Basic project management processes are established to track cost, schedule, and
                  functionality. The necessary process discipline is in place to repeat earlier
                  successes on projects with similar applications.

3 - Defined       The software process for both management and engineering activities is
                  documented, standardized, and integrated into a standard software process for the
                  organization. All projects use an approved, tailored version of the
                  organization's standard software process for developing and maintaining software.

4 - Managed       Detailed measures of the software process and product quality are collected. Both
                  the software process and products are quantitatively understood and controlled.

5 - Optimized     Continuous process improvement is enabled by quantitative feedback from the
                  process and from piloting innovative ideas and technologies.

Table 1 - CMM Maturity Levels

With the exception of level one, the maturity levels are further refined into 18 Key Process Areas
(KPA's). Within the key process areas, there are 52 goals and 316 practices defined (Tingey, 1997). The
key practices in each subject area provide a mechanism to achieve the respective goals. The 18 KPA's
are organized by five logical groupings, called "common features", that serve to provide organization
and categorization to the practices. The common features provide infrastructure to the process, and a
high-level framework for the details of the key process areas. Here are the common features defined in
the CMM version 1.1 (Tingey, 1997):

• Commitment to Perform (CO)

Commitment to Perform describes the actions the organization must take to ensure that the
process is established and will endure. Commitment to Perform typically involves establishing
organizational policies and senior management sponsorship.

• Ability to Perform (AP)

Ability to Perform describes the preconditions that must exist in the project or organization to
implement the software process competently. Ability to Perform typically involves resources,
organizational structures, and training.

• Activities Performed (AC)

Activities Performed describes the roles and procedures necessary to implement a key process
area. Activities Performed typically involves establishing plans and procedures, performing the
work, tracking it, and taking corrective action.

• Measurement and Analysis (ME)

Measurement and Analysis describes the need to measure the process and analyze the
measurements. Measurement and Analysis typically includes examples of the measurements that
could be taken to determine the status and effectiveness of the Activities Performed.

• Verifying Implementation (VE)

Verifying Implementation describes the steps to ensure that the activities are performed in
compliance with the process that has been established. Verifying Implementation typically
encompasses reviews and audits by management and software quality assurance.

Key Process Area Example

A specific example will better illustrate how the CMM would be used in a commercial environment
from a pragmatic standpoint. The "Peer Review" KPA is a good candidate to show how the CMM would
be applied, and how the common features discussed previously provide umbrella organization to the
KPA practices in a hierarchical fashion.

For example, at the CMM level 3 "Defined" maturity level, the peer review is one of the seven key
process areas to be satisfied. Here is the text of that KPA (Carnegie Mellon University & TeraQuest
Metrics, 2002):

Description:

The purpose of Peer Reviews is to remove defects from the software work products early and
efficiently. An important corollary effect is to develop a better understanding of the software
work products and of defects that might be prevented.

Peer Reviews involve a methodical examination of software work products by the producers'
peers to identify defects and areas where changes are needed. The specific products that will
undergo a peer review are identified in the project's defined software process and scheduled
as part of the software project planning activities, as described in Integrated Software
Management.

This key process area covers the practices for performing peer reviews. The practices identifying
the specific software work products that undergo peer review are contained in the key process
areas that describe the development and maintenance of each software work product.

Goals:

1. Peer review activities are planned.


2. Defects in the software work products are identified and removed.

…and so on into each of the five common features (including examples).

This KPA maps into each of the common features, as do all of the process areas. For example,
"Commitment to Perform" is satisfied because "The project follows a written organizational policy
for performing peer reviews". "Ability to Perform" is encompassed by the requirement that "Adequate
resources and funding are provided for performing peer reviews on each software work product to be
reviewed". Each of the practices is similarly framed by one of the five common features. This
example was truncated in the interest of brevity; the CMM is a verbose guideline consisting of over
500 pages.

We see that the CMM does not dictate exactly how the implementor is to execute the procedure. For
example, it does not state how the peer review is to be planned, rather, only that it must be planned. This
flexibility is intentional, so that each organization can tailor the specification to its culture and business
goals.

The ISO 9000 Model

The International Organization for Standardization is a worldwide federation of national standards
bodies. One would assume that the abbreviation for the International Organization for
Standardization would be IOS, but that is not the case. The short form ISO was chosen instead
because it derives from the Greek "isos", meaning equal; the association wanted to convey the idea
of equality, that is, the idea that it develops standards to place organizations on an equal
footing (Praxiom, 2002).

The preparation of standards occurs through ISO technical committees of interested parties. The
United States provides input into the ISO standards through the American National Standards
Institute (ANSI). ANSI had developed its own series of quality management and quality assurance
standards, labeled ANSI/ASQ Z1.15-1979. This work was later revised and renamed to align with the
international guidelines developed by the ISO. The ANSI standards Q9000, Q9001, Q9002, and Q9004
are now closely aligned and consistent with the ISO standards of the same name (Frank, Marriott, &
Warzusen, 2002).

Adoption of the ISO standards has been ubiquitous across the globe. Over 60 countries, including
all members of the European Community, Canada, Mexico, the United States, Australia, New Zealand,
and the Pacific Rim, have adopted the standards.

After adopting the standards, a country typically permits only ISO registered companies to supply goods
and services to government agencies and public utilities. Telecommunications equipment and medical
devices are examples of product categories that must be supplied by ISO registered companies. In turn,
manufacturers of these products often require their suppliers to become registered. Private companies
such as automobile and computer manufacturers frequently require their suppliers to be ISO registered
as well. In the United States, General Motors, Ford, Chrysler, and several truck companies have
developed QS 9000, an automotive-specific variant of ISO 9000 (Quality System Requirements QS-9000,
1994). Similarly, the telecommunications industry has developed its own variant of the ISO
standard, incorporating elements specific to that industry; that standard has been coined TL-9000.
Typically, companies will put their suppliers on notice that they must register to become ISO certified or
risk the loss of a business relationship. This type of motivating force has proven to drive the ISO
standards into widespread acceptance across many types of industries--both manufacturing and service.

To become registered to one of the quality assurance system models contained in ISO 9000, a
company's quality system and operations are scrutinized by third-party auditors for compliance
with the standard and for effective operation. Upon successful registration, a company is issued a
certificate from a registration body represented by the auditors.

Like the CMM methodology, ISO 9000 describes the quality elements that must be present for a quality
assurance system to be compliant with the standard, but it does not describe how an organization should
implement these elements. (Pressman, 2001)

ISO 9000:1994 Overview

The ISO 9000/Q9000-1994 family of standards consisted of the following main categories (note
that the Q9000 standards are technically equivalent to the ISO 9000 standards):

• ISO 9000:1994- Quality management and quality assurance standards.

Fundamentals and vocabulary applicable across all categories.

• ISO 9001: 1994- Quality systems

Model for quality assurance in design, development, production, installation and service.

• ISO 9002:1994- Quality systems

Model for quality assurance in production, installation and service.

• ISO 9003:1994- Quality systems

Model for quality assurance in final inspection and test.

• ISO 9004:1994- Quality management and quality system elements.

Guidelines for performance improvements.

ISO 9000:2000 Revisions

The ISO standard underwent a major revision in the years leading up to the turn of the century,
and the result was a set of updated standards now known as ISO 9000/Q9000-2000. The major
changes included:

• The adoption of a process approach to quality management.
• Recognition of the needs of stakeholders (customer focus).
• Additional requirements for continual improvement.
• Compatibility with other management system standards.
• Connection of quality management systems to business processes.

• Simplified terminology:

subcontractor is now "supplier"

supplier is now "organization"

inspection and testing is now "product verification and validation"

quality system element is now "process"

quality system is now "interrelated processes"

One of the most obvious manifestations of the update is the fact that ISO 9002 and 9003 have
been discontinued. Here are the consolidated standards of the ISO9000:2000 series at a high-
level:

• ISO 9000:2000- Quality management systems - Fundamentals and vocabulary.

• ISO 9001:2000- Quality management systems - Requirements.

• ISO 9004:2000- Quality management systems - Guidance for performance improvement.

ISO is also working on a fourth new standard: ISO 19011. ISO 19011 will replace the old ISO
10011 quality auditing standards. The final version of this new standard is expected sometime
this year. Another milestone related to the ISO 9000: 2000 update is the looming deadline for
companies to certify to the updated standard. The cut-off date is December 15, 2003 to become
ISO 9001:2000 certified. The ISO 9001, 9002, and 9003 standards will officially expire on
December 15, 2003 (Praxiom, 2002).

ISO 9000 Relevance to Software Engineering

The ISO 9000:3 guideline provides an adaptation of the ISO 9001 standard to the field of
software engineering. ISO 9000:3 was approved as an American National Standard on August 18,
1998. The standard is listed as ANSI/ISO/ASQ Q9000-3-1997, Guidelines for the Application
of ANSI/ISO/ASQC Q9001-1994 to the Development, Supply, Installation and Maintenance of
Computer Software. A revision to match ISO 9001:2000 has been assigned to ISO/IEC
JTC1/SC7 (i.e. software engineering standards) subcommittee to make it fully compatible with
ISO 9001:2000 (Frank, et al., 2002).

The 9000:3 standard was developed to satisfy the need for guidelines for processes and
procedures that are specific to the creation and maintenance of software. ISO 9001 is included
verbatim, and guidelines for adapting the "hard" manufacturing emphasis of ISO 9001 to
software engineering are provided as required. For example, the 9000:3 requirement provides
additional articles for quality planning (Frank, et al., 2002):

o Measurable quality requirements
o Use of a life cycle model
o Criteria for starting and ending each project phase
o Identification of reviews, tests, verification and validation activities
o Identification of configuration management techniques used
o Provision for detailed planning, specific responsibilities and authorities

ISO 9000:3 Example

The ISO 9000:3 specification consists of approximately twenty main "Quality Systems
Requirements", each with several subheadings. It is beyond the scope of this paper to list each
of the requirements here, but an example will give a sense of how the specification is elaborated.

As before, the review area of this specification will be illustrated. In the 9000:3 outline, the
review is mentioned in the context of section 4.4 "Design control". Specifically, section 4.4.6
lists the heading "Design review". This requirement dictates the general approach and
methodologies that are to be used in software design reviews. From the specification section
4.4.6 (Frank, et al., 2002):

Design Review

Representatives of all functions shall be present at appropriate reviews of design results. If
required, the customer can be a part of the review meeting. These reviews may be scheduled or
unscheduled. The documented procedure should include the following details:

o Topics to be reviewed
o Chair of the review and review participants
o Records and actions of the meetings are kept
o Review methods
o Agenda setting
o Review guidelines for participants
o Review metrics
o Corrective action procedures for the meeting

All known deficiencies from the design review meeting should be resolved before permitting
project activities to proceed to the next step.

As one can see, the specification is broad enough to give individual organizations flexibility in how they
implement the requirements. For example, the ISO requirement states that review metrics must be
included in the review procedures--but does not articulate what metrics, or how they are to be collected.
It is up to each organization to interpret the requirement and make it applicable to their business goals.

Comparisons, Contrasts and Applicability

In general, the CMM and ISO 9000 address similar issues and have the common concern of quality and
process management. However, the genesis of each framework is distinctly disparate. The ISO focus is
the customer-supplier relationship, attempting to reduce a customer's risk in choosing a supplier. In
contrast, the CMM strength is the attention on the software supplier to improve its internal processes to
achieve a higher quality product for the benefit of the customer.

The ISO 9000 standard is intentionally written for a wide range of industries other than software. Hard-
goods manufacturing was the original focus for this specification. How many times have you seen the
proclamation "ISO 9001 Certified" on the banner of a rusting factory? In contrast, the CMM framework
was created from the ground-up to be specific to the software industry.

The CMM has more depth than the ISO standards. The 9001:2000 and 9000:3 criteria combined
make up only about 60 pages of text. The CMM is over 500 pages long. Verbosity in itself does not
make a standard better; however, one can get a sense of the depth of the CMM compared to the ISO by
the peer review example presented previously. The ISO standard for peer review was presented in its
entirety in this paper. It was not practical to do the same with the CMM parallel. In essence, the ISO
states that these items should be present. The CMM states this also, but identifies the purpose and
focuses on how this activity will benefit the organization. The ISO 9000’s concept is to follow a set of
standards to make success repeatable. The CMM emphasizes a process of continuous improvement.

Once an organization has met the criteria to be ISO certified by an independent audit, the next step is
only to maintain that level of certification. By definition, the CMM is an on-going process of evaluation
and improvement, moving from one level of achievement to the next. Even at the highest level of
maturity in CMM, the focus is on continuous improvement.

Conclusions

The 20 elements of the ISO 9001:1994 standard that Mark Paulk used in his original CMM to ISO
comparison paper are now gone (Paulk, 1995). However, his statement "Although the CMM does not
adequately address some specific issues, in general it encompasses the concerns of ISO 9001. The
converse is less true" is still applicable (p. 9). His more recent work, which takes into account
the revisions of the ISO 9001:2000 standard, confirms this: "A Level 2 or 3 organization should
have little difficulty in obtaining an ISO 9001 certificate" (Paulk, 2002, p. 27).

The peer review example presented here gives insight into how the ISO standard is shallow, in terms of
software engineering, compared to the CMM. The ISO standard gives brief guidelines for conducting a
review in the context of a "Design review". Although it would be possible for an organization to tailor
the ISO specification, the standard implies the waterfall lifecycle, where a review would be a
one-time occurrence for the software design. Contrast this one-time review approach with the CMM
peer review KPA, which is to be universally and liberally applied throughout an iterative lifecycle.

Conclusion: The Capability Maturity Model is better suited to organizations that are currently using, or
plan to implement, an iterative lifecycle.

Nevertheless, other business concerns may dictate the best model for your organization. For
example, your customers may want you to become ISO 9001 certified, the market you sell in may
expect that status, and your business competitors may already be ISO certified. Remember that ISO
9001 is intended to be a supplier certification vehicle, so it benefits the customer more than the
supplier.

Another aspect of the business model is the scope of your market. ISO 9000 is, by definition, an
international standard. As of December 31, 1999, ISO 9000 certificates had been issued in 150 countries
(Praxiom, 2002). So if your business model takes your products into countries where the ISO standard is
more widely recognized than CMM, the decision on which model to implement may be made for you by
the marketplace.

Conclusion: Your organization may benefit in terms of customer relations and market status by
becoming ISO 9001 certified.

When the ISO 9001:2000 revisions are placed under a microscope, it appears that the goal was to
make the standard more like the CMM; consider, for example, the recognition of the needs of
stakeholders (customer focus) and the additional requirements for continual improvement discussed
previously. It is the aspect of continual improvement that tilts the scales in favor of the CMM for
software organizations.

As was demonstrated by the peer review example, the ISO functions at a more abstract level than the
CMM. It could be viewed as a preparatory system, and once certified, your work is mostly complete.
Not true with the CMM, where even at the highest level of certification, the focus is "continuous process
improvement is enabled by quantitative feedback from the process and from piloting innovative ideas
and technologies" (Paulk, 1999). This concept is most beneficial to software organizations who are
faced with the integral component of changing technologies in their business models; must be constantly
re-inventing themselves to keep pace with that change.

Conclusion: A software organization will be better positioned to accommodate technology evolution by
embracing the CMM.

This paper has analyzed the evolution of software engineering into the complex and challenging
discipline that it is today; determined the difference between a lifecycle model and a process model--and
discovered how a process model benefits a software development organization. Two of the most
common process models in use were briefly compared and contrasted--the CMM and the ISO. Have we
answered the question of which is best for your organization?

The answer is that it is impossible to authoritatively state that one model is superior given the vast
variables in product, culture, and business environment. As demonstrated here, the content of the
process model may not be as important as the customer's expectations--the ultimate benchmark. In
addition, it is important not to focus solely on a scorecard of certification status. As Mark Paulk states
"focusing on achieving a maturity level or certification without improving process performance is a real
danger" (2002, p.30). In the end, it is up to the individual organization to make the best choice--the
CMM or ISO.

Project Risk Management – Identifying Risks and Assessment Process


Risk is one of the few certainties of life. It concerns future happenings: the risks a software
project may encounter that put its planning in jeopardy. It may involve changes concerning the
project, such as changes in customer requirements, in the development environment and technologies,
or in the opinions and actions of leading team members.


Risk involves choice and the uncertainty that is tagged with choice. These choices may concern
development tools and methods, resources, and the quality standards adopted. Risk thus includes a
combination of uncertainty, change, and choice. Risk can be defined as an event or circumstance
that threatens the execution of the project.

Identifying risks and performing assessment and analysis help the software team understand and
prevent potential problems and mitigate the degree of risk in software projects to a large extent.
Everyone on the software team, including managers, should be involved in this process. Identifying
risks beforehand helps the team either avoid them or manage them effectively. Unmanaged risk is one
of the main causes of project failure. Once the risks (what can go wrong) are identified, they can
be ranked by probability of occurrence and impact. A Risk Mitigation, Monitoring and Management
(RMMM) plan can then be developed to ensure that the high risks on the list are tackled and
contingency planning is done.

In the words of Peter Drucker, "While it is futile to try to eliminate risk, and questionable to
try to minimize it, it is essential that the risks taken be the right risks." The two main risk
strategies are reactive and proactive. The majority of software teams rely on reactive risk
strategies: the project is monitored for risks, and resources are assigned to deal with them only
when the risks turn into problems. The team then moves into fire-fighting mode to correct the
problem, and when the problem remains unresolved, the project descends into chaos. The superior
strategy is the proactive one, in which potential risks are identified, assessed, and ranked in
advance. The contingency plan developed enables the team to respond with controlled and effective
measures.

It is important to quantify the degree of uncertainty and loss associated with risks. For this,
risks can be categorized as project risks, technical risks, and business risks. Project risks are
associated with budget, schedule, personnel, customers, etc., and their impact on project delivery.
Technical risks are associated with design, implementation, interfaces, obsolescence, ambiguity in
the specification, etc. Business risks are associated with the market, understanding of the
product, losing senior management support or focus, or budgetary requirements not being met. Each
of these categories has generic and product-specific risks attached to it.

Risk identification is a systematic approach to specifying the threats in a project. One way of
doing this is with a risk item checklist. The checklist can focus on predictable risks in
categories such as the size of the product, business impact, customer requirements, the process
adopted, the environment used for development and testing, the complexity of the system, and the
knowledge and experience of the staff. The checklist can contain relevant questions and answers to
help the planner determine the impact.

Assessing overall project risk can be done using proposed checklists such as [SEI93], [KAR96], and
[KEI98], whose questions were derived from risk data obtained in surveys of project managers in
different countries. If any question has a negative answer, the manager should initiate mitigation
steps.
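A checklist of this kind is easy to mechanize. The sketch below is a minimal illustration; the questions and answers are invented, not drawn from [SEI93], [KAR96], or [KEI98]:

```python
# Illustrative risk item checklist: each entry pairs a question with the
# manager's yes/no answer. A "no" answer flags the item for mitigation.
checklist = [
    ("Are the requirements stable?", True),
    ("Does the team have experience with the technology?", False),
    ("Is senior management formally committed to the project?", True),
]

# Collect every question with a negative answer for mitigation steps.
needs_mitigation = [question for question, answer in checklist if not answer]
```

In practice the checklist would be far longer, and each flagged item would feed into the RMMM plan discussed above.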

One such method is the guidelines for software risk identification and abatement given by the US
Air Force [AFC88]. In this approach the project manager identifies the risk drivers that affect the
risk components: performance, cost, support, and schedule. The impact of each driver on a component
is categorized as catastrophic, critical, marginal, or negligible. A category-versus-component
matrix describes the characterization of potential consequences, and the impact category is decided
based on the characterization that best fits the scenario.
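The category-versus-component matrix can be represented as a simple lookup table. The sketch below is illustrative only; the consequence characterizations are placeholders, not quotations from [AFC88]:

```python
# Hypothetical sketch of a driver/component impact matrix in the style of
# [AFC88]. Each (component, category) cell holds a short characterization
# of the potential consequence; the texts here are invented placeholders.
impact_matrix = {
    ("performance", "catastrophic"): "mission cannot be accomplished",
    ("cost", "critical"): "budget overrun likely",
    ("schedule", "marginal"): "minor slip, recoverable within plan",
    # ... remaining cells would be filled in from the guideline
}

def categorize(component, scenario):
    """Pick the impact category whose characterization best fits the scenario.

    Here "best fits" is simplified to a substring match; a real assessment
    would be a judgment call against the full matrix.
    """
    for (comp, category), characterization in impact_matrix.items():
        if comp == component and characterization in scenario:
            return category
    return "negligible"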

Risk projection, or estimation, attempts to rate each risk by the probability of its occurrence and
the consequences of the problems associated with it. A project team can list all risks, no matter
how remote, along with the probability of occurrence and the impact of each. The list is sorted by
probability and impact, and a cutoff is determined to pinpoint the risks that require detailed
attention. A risk mitigation, monitoring, and management plan should be developed for the risks
falling within the cutoff. Three factors that affect the consequences of a risk occurring are its
nature, its scope, and its timing.

The overall risk exposure RE is determined by

RE = P × C

where P is the probability of occurrence and C is the cost incurred if the risk occurs. Risk
exposure needs to be computed, and its cost estimated, for each risk. The total risk exposure for
all relevant risks can be used to adjust the cost estimate of the project. Risks should be
re-evaluated periodically during the course of the project life cycle, as their probability and
impact may keep changing.
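The exposure arithmetic described above can be sketched in a few lines of Python; the risk names, probabilities, costs, and cutoff value are invented for illustration:

```python
# Minimal sketch of risk projection: each risk carries an estimated
# probability of occurrence P and a cost C incurred if it occurs.
risks = [
    {"name": "key developer leaves", "p": 0.10, "cost": 40_000},
    {"name": "requirements churn",   "p": 0.60, "cost": 25_000},
    {"name": "interface ambiguity",  "p": 0.30, "cost": 10_000},
]

# RE = P x C for each risk.
for r in risks:
    r["exposure"] = r["p"] * r["cost"]

# Rank by exposure and keep only risks above a cutoff for the RMMM plan.
CUTOFF = 5_000
ranked = sorted(risks, key=lambda r: r["exposure"], reverse=True)
to_manage = [r for r in ranked if r["exposure"] >= CUTOFF]

# The total exposure can be folded into the project cost estimate.
total_exposure = sum(r["exposure"] for r in risks)
```

Sorting by exposure rather than raw probability keeps a low-likelihood but high-cost risk visible in the ranking.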

Risk assessment can be expressed in the form [CHA89]: [rj, lj, xj], where rj is the risk, lj is the
likelihood of the risk, and xj is its impact. For the assessment to be useful, a risk referent
level must be defined; this may be represented by risk components such as performance, cost,
support, and schedule. In software risk analysis, a risk referent level has a single referent
point, or break point, at which the decision must be made to proceed with the project or terminate
it. During risk assessment we define the risk referent levels, attempt to develop a relationship
between each [rj, lj, xj] and each referent level, predict the referent points, and try to predict
how compound combinations of risks affect the referent points.
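A minimal sketch of this assessment, assuming a single cost-based referent point; the [rj, lj, xj] triples and the referent value are invented:

```python
# Each assessment is a [risk, likelihood, impact-cost] triple per [CHA89].
assessments = [
    ("schedule slip",  0.5, 30_000),
    ("staff turnover", 0.2, 50_000),
]

# Hypothetical break point: expected impact above this level would force
# a proceed-or-terminate decision for the affected component.
REFERENT = 12_000

def exceeds_referent(likelihood, impact, referent=REFERENT):
    """Compare a risk's expected impact (l x x) against the referent point."""
    return likelihood * impact > referent

flagged = [r for (r, l, x) in assessments if exceeds_referent(l, x)]
```

A fuller analysis would maintain one referent level per component (performance, cost, support, schedule) rather than the single cost referent assumed here.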

In continuing risk management, the three basic processes are monitoring identified risks,
monitoring identified assumptions, and identifying new risks. Incorporating disciplined risk
analysis and assessment techniques increases the quality of the software being produced by
minimizing or eradicating risks. Thus risk management, which in concept means making decisions
based on an evaluation of the factors that pose a threat, is critical to the overall success of a
project.

Software prototyping
From Wikipedia, the free encyclopedia


Software prototyping refers to the activity of creating prototypes of software applications, i.e.,
incomplete versions of the software program being developed. It is an activity that can occur
during software development and is comparable to prototyping as known from other fields, such as
mechanical engineering or manufacturing.

A prototype typically simulates only a few aspects of the features of the eventual program, and may be
completely different from the eventual implementation.

The conventional purpose of a prototype is to allow users of the software to evaluate developers'
proposals for the design of the eventual product by actually trying them out, rather than having to
interpret and evaluate the design based on descriptions. Prototyping can also be used by end users to
describe and prove requirements that developers have not considered, so "controlling the prototype"
can be a key factor in the commercial relationship between developers and their clients. [1] Interaction
design in particular makes heavy use of prototyping with that goal.

Prototyping has several benefits: The software designer and implementer can obtain feedback from the
users early in the project. The client and the contractor can compare if the software made matches the
software specification, according to which the software program is built. It also allows the software
engineer some insight into the accuracy of initial project estimates and whether the deadlines and
milestones proposed can be successfully met. The degree of completeness and the techniques used in
the prototyping have been in development and debate since its proposal in the early 1970s.[6]

This process is in contrast with the monolithic development cycle of the 1960s and 1970s, in which
the entire program was built first and any inconsistencies between design and implementation were
worked out afterwards, leading to higher software costs and poor estimates of time and cost. The
monolithic approach has been dubbed the "Slaying the (software) Dragon" technique, since it assumes
that the software designer and developer is a single hero who has to slay the entire dragon alone.
Prototyping can also avoid the great expense and difficulty of changing a finished software product.

The practice of prototyping is one of the points Fred Brooks makes in his 1975 book The Mythical
Man-Month and his 10-year anniversary article No Silver Bullet.
Contents

• 1 Overview
• 2 Dimensions of prototypes
o 2.1 Horizontal Prototype
o 2.2 Vertical Prototype
• 3 Types of prototyping
o 3.1 Throwaway prototyping
o 3.2 Evolutionary prototyping
o 3.3 Incremental prototyping
o 3.4 Extreme prototyping
• 4 Advantages of prototyping
• 5 Disadvantages of prototyping
• 6 Best projects to use prototyping
• 7 Methods
o 7.1 Dynamic systems development method
o 7.2 Operational prototyping
o 7.3 Evolutionary systems development
o 7.4 Evolutionary rapid development
o 7.5 Scrum
• 8 Tools
o 8.1 Screen generators, design tools & Software Factories
o 8.2 Application definition or simulation software
o 8.3 Requirements Engineering Environment
o 8.4 LYMB
o 8.5 Non-relational environments
o 8.6 PSDL
• 9 Notes

• 10 References

Overview

The process of prototyping involves the following steps:

1. Identify basic requirements

Determine basic requirements, including the input and output information desired. Details, such as
security, can typically be ignored.

2. Develop Initial Prototype

The initial prototype is developed, including only user interfaces. (See Horizontal Prototype,
below.)

3. Review

The customers, including end-users, examine the prototype and provide feedback on additions or
changes.

4. Revise and Enhance the Prototype

Using the feedback, both the specifications and the prototype can be improved. Negotiation about
what is within the scope of the contract/product may be necessary. If changes are introduced, then
a repeat of steps #3 and #4 may be needed.

Dimensions of prototypes

Nielsen summarizes the various dimensions of prototypes in his book Usability Engineering.

Horizontal Prototype

A common term for a user interface prototype is the horizontal prototype. It provides a broad view of
an entire system or subsystem, focusing on user interaction more than low-level system functionality,
such as database access. Horizontal prototypes are useful for:

• Confirmation of user interface requirements and system scope
• Demonstration version of the system to obtain buy-in from the business
• Development of preliminary estimates of development time, cost and effort.

Vertical Prototype

A vertical prototype is a more complete elaboration of a single subsystem or function. It is useful for
obtaining detailed requirements for a given function, with the following benefits:

• Refinement of database design
• Obtaining information on data volumes and system interface needs, for network sizing and performance engineering
• Clarification of complex requirements by drilling down to actual system functionality

Types of prototyping

Software prototyping has many variants. However, all the methods are in some way based on two
major types of prototyping: Throwaway Prototyping and Evolutionary Prototyping.

Throwaway prototyping

Also called close-ended prototyping. Throwaway or Rapid Prototyping refers to the creation of a model
that will eventually be discarded rather than becoming part of the final delivered software. After
preliminary requirements gathering is accomplished, a simple working model of the system is
constructed to visually show the users what their requirements may look like when they are
implemented into a finished system.

Rapid Prototyping involved creating a working model of various parts of the system at a very early stage, after a relatively short investigation. The method used in building it is usually quite informal, the most important factor being the speed with which the model is provided. The model then becomes the starting point from which users can re-examine their expectations and clarify their requirements. When this has been achieved, the prototype model is 'thrown away', and the system is formally developed based on the identified requirements.[7]

The most obvious reason for using Throwaway Prototyping is that it can be done quickly. If the users
can get quick feedback on their requirements, they may be able to refine them early in the development
of the software. Making changes early in the development lifecycle is extremely cost effective since
there is nothing at that point to redo. If a project is changed after considerable work has been done,
then small changes could require large efforts to implement since software systems have many
dependencies. Speed is crucial in implementing a throwaway prototype, since with a limited budget of
time and money little can be expended on a prototype that will be discarded.

Another strength of Throwaway Prototyping is its ability to construct interfaces that the users can test.
The user interface is what the user sees as the system, and by seeing it in front of them, it is much easier
to grasp how the system will work.

…it is asserted that revolutionary rapid prototyping is a more effective manner in which to deal with user requirements-related issues, and therefore a greater enhancement to software productivity overall. Requirements can be identified, simulated, and tested far more quickly and cheaply when issues of evolvability, maintainability, and software structure are ignored. This, in turn, leads to the accurate specification of requirements, and the subsequent construction of a valid and usable system from the user's perspective via conventional software development models.[8]

Prototypes can be classified according to the fidelity with which they resemble the actual product in
terms of appearance, interaction and timing. One method of creating a low fidelity Throwaway
Prototype is Paper Prototyping. The prototype is implemented using paper and pencil, and thus mimics
the function of the actual product, but does not look at all like it. Another method to easily build high
fidelity Throwaway Prototypes is to use a GUI Builder and create a click dummy, a prototype that looks
like the goal system, but does not provide any functionality.

Not exactly the same as Throwaway Prototyping, but certainly in the same family, is the usage of
storyboards, animatics or drawings. These are non-functional implementations but show how the
system will look.

Summary: In this approach the prototype is constructed with the idea that it will be discarded and the final system will be built from scratch. The steps in this approach are:

1. Write preliminary requirements
2. Design the prototype
3. User experiences/uses the prototype, specifies new requirements
4. Repeat if necessary
5. Write the final requirements
6. Develop the real products

Evolutionary prototyping

Evolutionary Prototyping (also known as breadboard prototyping) is quite different from Throwaway Prototyping. The main goal when using Evolutionary Prototyping is to build a very robust prototype in a structured manner and constantly refine it. The reason for this is that the Evolutionary prototype, when built, forms the heart of the new system, and the improvements and further requirements will be built on it.

When developing a system using Evolutionary Prototyping, the system is continually refined and
rebuilt.

"…evolutionary prototyping acknowledges that we do not understand all the requirements and builds only those that are well understood."[5]

This technique allows the development team to add features, or make changes that couldn't be
conceived during the requirements and design phase.

For a system to be useful, it must evolve through use in its intended operational environment. A product is never "done;" it is always maturing as the usage environment changes…we often try to define a system using our most familiar frame of reference---where we are now. We make assumptions about the way business will be conducted and the technology base on which the business will be implemented. A plan is enacted to develop the capability, and, sooner or later, something resembling the envisioned system is delivered.[9]

Evolutionary Prototypes have an advantage over Throwaway Prototypes in that they are functional
systems. Although they may not have all the features the users have planned, they may be used on an
interim basis until the final system is delivered.

"It is not unusual within a prototyping environment for the user to put an
initial prototype to practical use while waiting for a more developed version…
The user may decide that a 'flawed' system is better than no system at all."[7]

In Evolutionary Prototyping, developers can focus on developing the parts of the system that they understand instead of working on the whole system at once.

To minimize risk, the developer does not implement poorly understood features. The partial system is sent to customer sites. As users work with the system, they detect opportunities for new features and give requests for these features to developers. Developers then take these enhancement requests along with their own and use sound configuration-management practices to change the software-requirements specification, update the design, recode and retest.[10]
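That loop can be reduced to a minimal sketch (all names here are hypothetical): only well-understood features get an implementation, everything else raises until a later revision, and user requests are queued and folded in when the next revision is produced.

```python
# Hypothetical sketch of the evolutionary-prototyping loop.

class EvolutionaryPrototype:
    def __init__(self, well_understood):
        # Only the well-understood features are implemented up front.
        self.features = dict(well_understood)   # name -> implementation
        self.enhancement_requests = []          # queued for the next cycle

    def invoke(self, feature, *args):
        impl = self.features.get(feature)
        if impl is None:
            # Poorly understood features are deliberately not implemented.
            raise NotImplementedError(f"{feature} deferred to a later revision")
        return impl(*args)

    def request(self, feature, impl):
        # A user detects an opportunity; developers log the request so the
        # user can keep working.
        self.enhancement_requests.append((feature, impl))

    def next_revision(self):
        # Configuration-managed update: fold requests into the baseline.
        self.features.update(self.enhancement_requests)
        self.enhancement_requests = []

proto = EvolutionaryPrototype({"total": lambda xs: sum(xs)})
```

A call to an unimplemented feature fails loudly rather than guessing, which is the risk-minimization point of the quoted passage.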

Incremental prototyping

The final product is built as separate prototypes. At the end, the separate prototypes are merged into an overall design.

Extreme prototyping

Extreme Prototyping as a development process is used especially for developing web applications.
Basically, it breaks down web development into three phases, each one based on the preceding one.
The first phase is a static prototype that consists mainly of HTML pages. In the second phase, the
screens are programmed and fully functional using a simulated services layer. In the third phase the
services are implemented. The process is called Extreme Prototyping to draw attention to the second
phase of the process, where a fully-functional UI is developed with very little regard to the services
other than their contract.
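The second-phase idea — a fully functional UI written against a simulated services layer that honours the same contract as the real services — can be sketched like this (the service names and data are hypothetical, and Python stands in for whatever web stack is actually used):

```python
# Hypothetical sketch of Extreme Prototyping phases 2 and 3 sharing a contract.
from typing import Protocol

class OrderService(Protocol):
    # The contract: both the simulated and the real layer must honour it.
    def list_orders(self, user: str) -> list[str]: ...

class SimulatedOrderService:
    # Phase 2: canned data, no backend; the screens are fully functional
    # against this layer.
    def list_orders(self, user: str) -> list[str]:
        return [f"demo-order-for-{user}"]

class RealOrderService:
    # Phase 3: the service is actually implemented (here, a dict stands in
    # for a database).
    def __init__(self, db: dict):
        self.db = db

    def list_orders(self, user: str) -> list[str]:
        return self.db.get(user, [])

def render_orders_screen(service: OrderService, user: str) -> str:
    # The screen code never knows which phase it is running in.
    return "\n".join(service.list_orders(user))
```

Because the screen depends only on the contract, swapping the simulated layer for the real one in phase 3 requires no UI changes.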

Advantages of prototyping

There are many advantages to using prototyping in software development – some tangible, some
abstract.[11]

Reduced time and costs: Prototyping can improve the quality of requirements and specifications
provided to developers. Because changes cost exponentially more to implement as they are detected
later in development, the early determination of what the user really wants can result in faster and less
expensive software.[8]

Improved and increased user involvement: Prototyping requires user involvement and allows users to see and interact with a prototype, allowing them to provide better and more complete feedback and specifications.[7] The presence of the prototype being examined by the user prevents many misunderstandings and miscommunications that occur when each side believes the other understands what was said. Since users know the problem domain better than anyone on the development team does, increased interaction can result in a final product that has greater tangible and intangible quality. The final product is more likely to satisfy the users' desire for look, feel and performance.

Disadvantages of prototyping

Using, or perhaps misusing, prototyping can also have disadvantages.[11]

Insufficient analysis: The focus on a limited prototype can distract developers from properly analyzing
the complete project. This can lead to overlooking better solutions, preparation of incomplete
specifications or the conversion of limited prototypes into poorly engineered final projects that are hard
to maintain. Further, since a prototype is limited in functionality it may not scale well if the prototype is
used as the basis of a final deliverable, which may not be noticed if developers are too focused on
building a prototype as a model.

User confusion of prototype and finished system: Users can begin to think that a prototype, intended
to be thrown away, is actually a final system that merely needs to be finished or polished. (They are, for
example, often unaware of the effort needed to add error-checking and security features which a
prototype may not have.) This can lead them to expect the prototype to accurately model the
performance of the final system when this is not the intent of the developers. Users can also become
attached to features that were included in a prototype for consideration and then removed from the
specification for a final system. If users are able to require all proposed features be included in the final
system this can lead to conflict.

Developer misunderstanding of user objectives: Developers may assume that users share their
objectives (e.g. to deliver core functionality on time and within budget), without understanding wider
commercial issues. For example, user representatives attending Enterprise software (e.g. PeopleSoft)
events may have seen demonstrations of "transaction auditing" (where changes are logged and
displayed in a difference grid view) without being told that this feature demands additional coding and
often requires more hardware to handle extra database accesses. Users might believe they can demand
auditing on every field, whereas developers might think this is feature creep because they have made
assumptions about the extent of user requirements. If the developer has committed delivery before the
user requirements were reviewed, developers are between a rock and a hard place, particularly if user
management derives some advantage from their failure to implement requirements.

Developer attachment to prototype: Developers can also become attached to prototypes they have
spent a great deal of effort producing; this can lead to problems like attempting to convert a limited
prototype into a final system when it does not have an appropriate underlying architecture. (This may
suggest that throwaway prototyping, rather than evolutionary prototyping, should be used.)

Excessive development time of the prototype: A key property of prototyping is that it is supposed to be done quickly. If the developers lose sight of this fact, they may well try to develop
a prototype that is too complex. When the prototype is thrown away the precisely developed
requirements that it provides may not yield a sufficient increase in productivity to make up for the time
spent developing the prototype. Users can become stuck in debates over details of the prototype,
holding up the development team and delaying the final product.

Expense of implementing prototyping: The start-up costs for building a development team focused on prototyping may be high. Many companies have development methodologies in place, and changing them can mean retraining, retooling, or both. Many companies tend to just jump into prototyping without retraining their workers as much as they should.

A common problem with adopting prototyping technology is high expectations for productivity with insufficient effort behind the learning curve. In addition to training for the use of a prototyping technique, there is an often overlooked need for developing corporate and project specific underlying structure to support the technology. When this underlying structure is omitted, lower productivity can often result.[13]

Best projects to use prototyping

It has been argued that prototyping, in some form or another, should be used all the time. However,
prototyping is most beneficial in systems that will have many interactions with the users.

It has been found that prototyping is very effective in the analysis and
design of on-line systems, especially for transaction processing, where the use
of screen dialogs is much more in evidence. The greater the interaction between
the computer and the user, the greater the benefit is that can be obtained from
building a quick system and letting the user play with it.[7]

Systems with little user interaction, such as batch processing or systems that mostly do calculations,
benefit little from prototyping. Sometimes, the coding needed to perform the system functions may be
too intensive and the potential gains that prototyping could provide are too small.[7]

Prototyping is especially good for designing good human-computer interfaces. "One of the most
productive uses of rapid prototyping to date has been as a tool for iterative user requirements
engineering and human-computer interface design."[8]

Methods

There are few formal prototyping methodologies even though most Agile Methods rely heavily upon
prototyping techniques.

Dynamic systems development method

Dynamic Systems Development Method (DSDM)[18] is a framework for delivering business solutions
that relies heavily upon prototyping as a core technique, and is itself ISO 9001 approved. It expands
upon most understood definitions of a prototype. According to DSDM the prototype may be a diagram,
a business process, or even a system placed into production. DSDM prototypes are intended to be
incremental, evolving from simple forms into more comprehensive ones.

DSDM prototypes may be throwaway or evolutionary. Evolutionary prototypes may be evolved horizontally (breadth then depth) or vertically (each section is built in detail with additional iterations detailing subsequent sections). Evolutionary prototypes can eventually evolve into final systems.

The four categories of prototypes as recommended by DSDM are:

• Business prototypes – used to design and demonstrate the business processes being automated.
• Usability prototypes – used to define, refine, and demonstrate user interface design usability, accessibility, look and feel.
• Performance and capacity prototypes – used to define, demonstrate, and predict how systems will perform under peak loads as well as to demonstrate and evaluate other non-functional aspects of the system (transaction rates, data storage volume, response time, etc.).
• Capability/technique prototypes – used to develop, demonstrate, and evaluate a design approach or concept.

The DSDM lifecycle of a prototype is to:

1. Identify prototype
2. Agree to a plan
3. Create the prototype
4. Review the prototype

Operational prototyping

Operational Prototyping was proposed by Alan Davis as a way to integrate throwaway and evolutionary
prototyping with conventional system development. "[It] offers the best of both the quick-and-dirty and
conventional-development worlds in a sensible manner. Designers develop only well-understood
features in building the evolutionary baseline, while using throwaway prototyping to experiment with
the poorly understood features."[5]

Davis' belief is that to try to "retrofit quality onto a rapid prototype" is not the correct approach when
trying to combine the two approaches. His idea is to engage in an evolutionary prototyping
methodology and rapidly prototype the features of the system after each evolution.

The specific methodology follows these steps: [5]

• An evolutionary prototype is constructed and made into a baseline using conventional development strategies, specifying and implementing only the requirements that are well understood.
• Copies of the baseline are sent to multiple customer sites along with a trained prototyper.
• At each site, the prototyper watches the user at the system.
• Whenever the user encounters a problem or thinks of a new feature or requirement, the prototyper logs it. This frees the user from having to record the problem, and allows them to continue working.
• After the user session is over, the prototyper constructs a throwaway prototype on top of the baseline system.
• The user now uses the new system and evaluates it. If the new changes aren't effective, the prototyper removes them.
• If the user likes the changes, the prototyper writes feature-enhancement requests and forwards them to the development team.
• The development team, with the change requests in hand from all the sites, then produces a new evolutionary prototype using conventional methods.
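The site-visit part of those steps can be reduced to a sketch (all names and data are hypothetical): the prototyper logs every problem the user hits, layers a throwaway change on the baseline, keeps it only if the user likes it, and forwards only the kept changes as enhancement requests.

```python
# Hypothetical sketch of one operational-prototyping site session.

def operational_prototyping_session(baseline, user_events):
    """baseline: feature name -> implementation tag.
    user_events: (feature, throwaway_change, user_liked_it) triples."""
    kept, log = dict(baseline), []
    for feature, change, user_liked_it in user_events:
        log.append(feature)                       # prototyper logs it; the
                                                  # user keeps working
        trial = dict(kept, **{feature: change})   # throwaway layer on top of
                                                  # the baseline
        if user_liked_it:
            kept = trial                          # becomes an enhancement
                                                  # request for the dev team
        # otherwise the throwaway change is simply removed (dropped)
    requests = {k: v for k, v in kept.items() if k not in baseline}
    return log, requests
```

Note that the baseline itself is never modified during the session; only the forwarded requests feed the next evolutionary prototype, which matches the division of labour in the list above.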

Obviously, a key to this method is to have well trained prototypers available to go to the user sites. The
Operational Prototyping methodology has many benefits in systems that are complex and have few
known requirements in advance.

Evolutionary systems development

Evolutionary Systems Development is a class of methodologies that attempt to formally implement Evolutionary Prototyping. One particular type, called Systemscraft, is described by John Crinnion in his book Evolutionary Systems Development.

Systemscraft was designed as a 'prototype' methodology that should be modified and adapted to fit the
specific environment in which it was implemented.

Systemscraft was not designed as a rigid 'cookbook' approach to the development process. It is now generally recognised[sic] that a good methodology should be flexible enough to be adjustable to suit all kinds of environment and situation…[7]

The basis of Systemscraft, not unlike Evolutionary Prototyping, is to create a working system from the
initial requirements and build upon it in a series of revisions. Systemscraft places heavy emphasis on
traditional analysis being used throughout the development of the system.

Evolutionary rapid development

Evolutionary Rapid Development (ERD)[12] was developed by the Software Productivity Consortium, a
technology development and integration agent for the Information Technology Office of the Defense
Advanced Research Projects Agency (DARPA).

Fundamental to ERD is the concept of composing software systems based on the reuse of components, the use of software templates and on an architectural template. Continuous evolution of system capabilities in rapid response to changing user needs and technology is highlighted by the evolvable architecture, representing a class of solutions. The process focuses on the use of small artisan-based teams integrating software and systems engineering disciplines working multiple, often parallel short-duration timeboxes with frequent customer interaction.

Key to the success of the ERD-based projects is parallel exploratory analysis and development of features, infrastructures, and components with adoption of leading edge technologies enabling the quick reaction to changes in technologies, the marketplace, or customer requirements.[9]

To elicit customer/user input, frequent scheduled and ad hoc/impromptu meetings with the stakeholders
are held. Demonstrations of system capabilities are held to solicit feedback before
design/implementation decisions are solidified. Frequent releases (e.g., betas) are made available for
use to provide insight into how the system could better support user and customer needs. This assures
that the system evolves to satisfy existing user needs.

The design framework for the system is based on using existing published or de facto standards. The
system is organized to allow for evolving a set of capabilities that includes considerations for
performance, capacities, and functionality. The architecture is defined in terms of abstract interfaces
that encapsulate the services and their implementation (e.g., COTS applications). The architecture
serves as a template to be used for guiding development of more than a single instance of the system. It
allows for multiple application components to be used to implement the services. A core set of
functionality not likely to change is also identified and established.

The ERD process is structured to use demonstrated functionality rather than paper products as a way
for stakeholders to communicate their needs and expectations. Central to this goal of rapid delivery is
the use of the "timebox" method. Timeboxes are fixed periods of time in which specific tasks (e.g.,
developing a set of functionality) must be performed. Rather than allowing time to expand to satisfy
some vague set of goals, the time is fixed (both in terms of calendar weeks and person-hours) and a set
of goals is defined that realistically can be achieved within these constraints. To keep development
from degenerating into a "random walk," long-range plans are defined to guide the iterations. These
plans provide a vision for the overall system and set boundaries (e.g., constraints) for the project. Each
iteration within the process is conducted in the context of these long-range plans.
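The timebox rule above — time is fixed, so the goal set is trimmed to fit rather than the calendar stretched — can be sketched as a small planning function (the capacity and estimates are hypothetical numbers):

```python
# Hypothetical sketch of timebox planning: fix the box, trim the goals.

def plan_timebox(capacity_hours, candidate_goals):
    """capacity_hours is fixed; candidate_goals is an ordered list of
    (goal, estimated_hours) pairs. Goals that do not fit are deferred
    to a later iteration rather than expanding the timebox."""
    planned, deferred, used = [], [], 0
    for goal, estimate in candidate_goals:
        if used + estimate <= capacity_hours:
            planned.append(goal)
            used += estimate
        else:
            deferred.append(goal)   # pushed to a later iteration
    return planned, deferred
```

Deferred goals are not lost: under the long-range plans the text describes, they become candidates for the next timebox, which is what keeps iterations from degenerating into a "random walk."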

Once an architecture is established, software is integrated and tested on a daily basis. This allows the
team to assess progress objectively and identify potential problems quickly. Since small amounts of the
system are integrated at one time, diagnosing and removing the defect is rapid. User demonstrations
can be held at short notice since the system is generally ready to exercise at all times.

Scrum

Scrum is an agile method for project management. The approach was first described by Takeuchi and Nonaka in "The New New Product Development Game" (Harvard Business Review, Jan-Feb 1986).

Tools

Efficiently using prototyping requires that an organization have proper tools and a staff trained to use
those tools. Tools used in prototyping can vary from individual tools like 4th generation programming
languages used for rapid prototyping to complex integrated CASE tools. 4th generation visual
programming languages like Visual Basic and ColdFusion are frequently used since they are cheap,
well known and relatively easy and fast to use. CASE tools, supporting requirements analysis, like the
Requirements Engineering Environment (see below) are often developed or selected by the military or
large organizations. Object oriented tools are also being developed like LYMB from the GE Research
and Development Center. Users may prototype elements of an application themselves in a spreadsheet.

Screen generators, design tools & Software Factories

Also commonly used are screen generating programs that enable prototypers to show users systems that
don't function, but show what the screens may look like.[2] Developing Human Computer Interfaces can
sometimes be the critical part of the development effort, since to the users the interface essentially is
the system.

Software Factories are code generators that allow you to model the domain and then drag and drop the UI. They also enable you to run the prototype and use basic database functionality. This approach allows you to explore the domain model and make sure it is in sync with the GUI prototype. You can also use the UI controls that will later be used for real development.

Application definition or simulation software

A new class of software, called application definition or simulation software, enables users to rapidly build lightweight, animated simulations of another computer program without writing code. Application simulation software allows both technical and non-technical users to experience, test, collaborate on and validate the simulated program, and provides reports such as annotations, screenshots and schematics. As a solution specification technique, application simulation falls between low-risk, but limited, text- or drawing-based mock-ups (or wireframes), sometimes called paper-based prototyping, and time-consuming, high-risk code-based prototypes, allowing software professionals to validate requirements and design choices early on, before development begins. In doing so, the risks and costs associated with software implementations can be dramatically reduced.[3]

To simulate applications, one can also use software that simulates real-world software programs for computer-based training, demonstration, and customer support, such as screencasting software, since those areas are closely related. There are also more specialised tools.[4][5][6] Some of the leading tools in this category are iRise, LucidChart, ProtoShare, Axure, Justinmind Prototyper, and DefineIT from Borland.[7][8][9]

Requirements Engineering Environment

"The Requirements Engineering Environment (REE), under development at Rome Laboratory since
1985, provides an integrated toolset for rapidly representing, building, and executing models of critical
aspects of complex systems."[15]

Requirements Engineering Environment is currently used by the Air Force to develop systems. It is:

an integrated set of tools that allows systems analysts to rapidly build functional, user interface, and performance prototype models of system components. These modeling activities are performed to gain a greater understanding of complex systems and lessen the impact that inaccurate requirement specifications have on cost and scheduling during the system development process. Models can be constructed easily, and at varying levels of abstraction or granularity, depending on the specific behavioral aspects of the model being exercised.[15]

REE is composed of three parts. The first, called proto is a CASE tool specifically designed to support
rapid prototyping. The second part is called the Rapid Interface Prototyping System or RIP, which is a
collection of tools that facilitate the creation of user interfaces. The third part of REE is a user interface
to RIP and proto that is graphical and intended to be easy to use.

Rome Laboratory, the developer of REE, intended it to support their internal requirements-gathering methodology. Their method has three main parts:

• Elicitation from various sources (users, interfaces to other systems), specification, and consistency checking
• Analysis that the needs of diverse users taken together do not conflict and are technically and economically feasible
• Validation that requirements so derived are an accurate reflection of user needs.[15]

In 1996, Rome Labs contracted Software Productivity Solutions (SPS) to further enhance REE to create
"a commercial quality REE that supports requirements specification, simulation, user interface
prototyping, mapping of requirements to hardware architectures, and code generation…"[16] This system
is named the Advanced Requirements Engineering Workstation or AREW.

LYMB

LYMB[17] is an object-oriented development environment aimed at developing applications that require combining graphics-based user interfaces, visualization, and rapid prototyping.

Non-relational environments

Non-relational definition of data (e.g. using Caché or associative models) can help make end-user
prototyping more productive by delaying or avoiding the need to normalize data at every iteration of a
simulation. This may yield earlier/greater clarity of business requirements, though it does not
specifically confirm that requirements are technically and economically feasible in the target
production system.

PSDL

PSDL is a prototype description language to describe real-time software.

Notes

1. C. Melissa Mcclendon, Larry Regot, Gerri Akers: The Analysis and Prototyping of Effective Graphical User Interfaces. October 1996.
2. D.A. Stacy, professor, University of Guelph. Guelph, Ontario. Lecture notes on Rapid Prototyping. August 1997.
3. Stephen J. Andriole: Information System Design Principles for the 90s: Getting it Right. AFCEA International Press, Fairfax, Virginia. 1990. Page 13.
4. R. Charette: Software Engineering Risk Analysis and Management. McGraw-Hill, New York, 1989.
5. Alan M. Davis: Operational Prototyping: A New Development Approach. IEEE Software, September 1992. Page 71.
6. Todd Grimm: The Human Condition: A Justification for Rapid Prototyping. Time Compression Techn

Prototyping is the process of building a model of a system. In terms of an information system, prototypes are employed to help system designers build an information system that is intuitive and easy to manipulate for end users. Prototyping is an iterative process that is part of the analysis phase of the systems development life cycle.
During the requirements determination portion of the systems analysis phase, system analysts gather information about the organization's current procedures and business processes related to the proposed information system. In addition, they study the current information system, if there is one, and conduct user interviews and collect documentation. This helps the analysts develop an initial set of system requirements.
Prototyping can augment this process because it converts these basic, yet sometimes intangible, specifications into a tangible but limited working model of the desired information system. The user feedback gained from developing a physical system that the users can touch and see facilitates an evaluative response that the analyst can employ to modify existing requirements as well as to develop new ones.
Prototyping comes in many forms, from low-tech sketches or paper screens (PICTIVE), from which users and developers can paste controls and objects, to high-tech operational systems using CASE (computer-aided software engineering) tools or fourth-generation languages, and everything in between. Many organizations use multiple prototyping tools. For example, some will use paper in the initial analysis to facilitate concrete user feedback and then later develop an operational prototype using fourth-generation languages, such as Visual Basic, during the design stage.
Some Advantages of Prototyping:

• Reduces development time.
• Reduces development costs.
• Requires user involvement.
• Developers receive quantifiable user feedback.
• Facilitates system implementation since users know what to expect.
• Results in higher user satisfaction.
• Exposes developers to potential future system enhancements.

Some Disadvantages of Prototyping:

• Can lead to insufficient analysis.
• Users expect the performance of the ultimate system to be the same as the prototype.
• Developers can become too attached to their prototypes.
• Can cause systems to be left unfinished and/or implemented before they are ready.
• Sometimes leads to incomplete documentation.
• If sophisticated software prototypes (4th GL or CASE tools) are employed, the time-saving benefit of prototyping can be lost.
Because prototypes inherently increase the quality and amount of communication between the developer/analyst and the end user, their use has become widespread. In the early 1980s, organizations used prototyping approximately thirty percent (30%) of the time in development projects. By the early 1990s, its use had doubled to sixty percent (60%). Although there are guidelines on when to use software prototyping, two experts believed some of the rules developed were nothing more than conjecture.
In the article "An Investigation of Guidelines for Selecting a Prototyping
Strategy", Bill C. Hardgrave and Rick L. Wilson compare prototyping
guidelines that appear in information systems literature with their actual
use by organizations that have developed prototypes. Hardgrave and
Wilson sent out 500 prototyping surveys to information systems managers
throughout the United States. The represented organizations were
comprised of a variety of industries - educational, health service, financial,
transportation, retail, insurance, government, manufacturing and service.
A copy of the survey was also presented to a primary user and a key
developer of two systems that each company had implemented within the
two years preceding the survey.
There were usable survey results received from 88 organizations
representing 118 different projects. Hardgrave and Wilson wanted to find
out how many of the popular prototyping guidelines outlined in literature
were actually used by organizations and whether compliance affected
system success (measured by the user's stated level of satisfaction). It
should be noted that, although not specifically stated, the study was
based on the use of "high tech" software models, not "low tech" paper or
sketch prototypes.
Based on the results of their research, Hardgrave and Wilson found that
industry followed only six of the seventeen guidelines recommended in
information systems literature. The guidelines practiced by industry whose
adherence was found to have a statistically significant effect on system
success were:
- Prototyping should be employed only when users are able to actively
  participate in the project.
- Developers should either have prototyping experience or be given
  training.
- Users involved in the project should also have prototyping experience or
  be educated on the use and purpose of prototyping.
- Prototypes should become part of the final system only if the developers
  are given access to prototyping support tools.
- If experimentation and learning are needed before there can be full
  commitment to a project, prototyping can be successfully used.
- Prototyping is not necessary if the developer is already familiar with
  the language ultimately used for system design.
Instead of software prototyping, several information systems consultants
and researchers recommend using low-tech prototyping tools (also known
as paper prototypes or Pictive), especially for initial systems analysis
and design. The paper approach allows both designers and users to
literally cut and paste the system interface. Object commands and
controls can be easily and quickly moved to suit user needs.
Among its many benefits, this approach lowers the cost and time involved
in prototyping, allows for more iterations, and gives developers the
chance to get immediate user feedback on refinements to the design. It
effectively eliminates many of the disadvantages of prototyping: paper
prototypes are inexpensive to create, developers are less likely to
become attached to their work, users do not develop performance
expectations, and best of all, paper prototypes are usually "bug-free"
(unlike most software prototypes)!