
Reengineering the Corporation: A Manifesto for IT Evolution

Harry Sneed, Chris Verhoef

CaseConsult GmbH, Wiesbaden, Germany

Free University of Amsterdam, Department of Mathematics and Computer Science
De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands
Harry.Sneed@t-online.de, x@cs.vu.nl

Abstract
We describe the intricate relationship between business processes and
the underlying technology in an attempt to demonstrate that business
organizations have always been technology driven. That is, the business
processes have been fitted to the available hardware and software. The
technology determines the way business is conducted. With the advent
of the Internet age, business processes will change radically, not because
they are purposely reengineered, but because the Internet offers both
companies and customers radically different means of fulfilling their re-
quirements. Thus, new business processes will emerge more or less au-
tonomously, as people discover the opportunities offered to them by the
new technology. The technical possibilities of today determine business
consciousness. The current state of information and communication tech-
nology enables a migration from the information age to the age of imme-
diate answers. Once the underlying technology enables the possibilities,
the business has the opportunity to align with it.

1 Introduction
It is 5 AM, the passenger is in transit from one country to another. The airline
food was not particularly good, so he fancies a drink to get rid of the aftertaste.
The transit area has no restaurant, no shop, except a vending machine. Shoot,
no local currency. Now what? When he approaches the machine, he notices the
number. With a big smile he grabs his mobile and starts dialing. The phone
call does not last long, but immediately after he hangs up, the vending machine
dispenses him a coke.
The coke is billed to his telephone account, after which an SMS (Short
Message Service) message is sent to the vending machine, causing it to dispense
the drink. This is a revolutionary change in the business process: the entire interface
between customer and vendor is automated, currency problems are dealt with,
no shop is needed, vandalizing the vending machine has become pointless since
there is no money in it, supply management is enabled since the contents of the
machine are known at any time, and so on.
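To make the mechanism concrete, the following sketch shows in Java how such a payment-triggered dispense flow could be organized. It is a hypothetical illustration only: the class and method names are our own assumptions and are not taken from any real vending or telecom product.

    // A hypothetical sketch of the SMS-triggered dispense flow described above.
    // All class and method names are illustrative assumptions.
    public class VendingGateway {

        // Called by the telecom operator's billing system once the customer's call
        // to the machine's number has been charged to his telephone account.
        public void onPaymentConfirmed(String machinePhone, int slot) {
            sendSms(machinePhone, "DISPENSE;" + slot);   // mobile-terminated SMS
        }

        // Runs on the GSM module inside the vending machine when the SMS arrives.
        public void onSmsReceived(String payload) {
            String[] parts = payload.split(";");
            if (parts.length == 2 && parts[0].equals("DISPENSE")) {
                dispense(Integer.parseInt(parts[1]));    // release the chosen slot
                reportStock();  // mobile-originated SMS enabling supply management
            }
        }

        private void sendSms(String phoneNumber, String text) { /* GSM modem call */ }
        private void dispense(int slot)                        { /* actuate hardware */ }
        private void reportStock()                             { /* send stock levels */ }
    }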
This clever combination of available technologies enabled the business oppor-
tunity of this wireless networked coke machine. This example illustrates that a
well-established technological infrastructure is a prerequisite for a revolutionary
business change to become commonplace. A prerequisite, since we have known for
many years that networked coke machines are technically feasible: in the early
eighties you could already probe, via the Internet, a networked coke machine
located at Carnegie Mellon University. It listed the current status of the machine,
so you could see which slots were emptied last, and thus recently refilled. Useful
knowledge when you want to have the coldest bottle. Still, this idea could not
easily be turned into widespread practice: before wireless communication took off,
it made no sense for the average customer to communicate with a networked coke
dispenser in this way. However, with the appropriate infrastructure in place, it is
possible for everyone to communicate electronically with a vending machine. And
that is now the case: the introductory story is not fiction but fact. Nowadays, you
can buy low-cost hardware components designed for machine-to-machine and
machine-to-customer applications such as vending machines, routinely supporting
authentication and encryption via the GSM algorithms as well as mobile-originated
and mobile-terminated SMS, and which are GPRS ready (GSM stands for Groupe
Spécial Mobile, GPRS for General Packet Radio Service). So business can change
in a revolutionary way if the technological climate allows it. Not only in this
example, but in general, business aligns best with evolving IT. In this paper we
hope to illustrate this further.
In Section 2, we recall the fact that the revolution of business process reengi-
neering was not widely implemented, and that many efforts were doomed to
fail. In Section 3, we explain two major reasons in favor of reengineering old
economy information systems: keeping pace with technology advances and im-
proving business processes. We illustrate how traditionally business processes
always aligned with the technological waves of batch oriented information sys-
tems and client/server systems, and in the future can align with Web based
architectures. Then in Section 4 we discuss how corporations can align their
businesses to the information technology of the 21st century: by revolution or
by evolution. Subsequently we illustrate in Section 5 that when the techno-
logical infrastructure is in place, new business processes can emerge. Section 6
concludes by stressing that technology drives business change, and that business
(process) reengineering follows software reengineering.

Acknowledgements Thanks to John J. Marciniak (Editor in Chief) for his
kind invitation to contribute to the Encyclopedia of Software Engineering. We
thank Stevie Sneed for the fine graphics in this paper, and Bas Toeter (Univer-
sity of Amsterdam) for converting the graphics into the right format. We thank
the anonymous reviewers for their substantial comments.

2 Business Reengineering in the 1990s: a Failed Start

In their book Reengineering the Corporation: A Manifesto for Business Revo-
lution published in 1993, Hammer and Champy [20] called for a radical restruc-
turing of business organizations. Businesses should be converted from hierarchi-
cal centralized structures to networked decentralized business units cooperating
with one another. They should be organized around the business processes, to
react quicker and more efficiently to the demands of a changing market. This
meant delegating decision making and forming individual business units that are
as autonomous as possible. The wireless networked coke machine vision, if you will.
Later reports on the realization of their vision were discouraging. According
to the study of the Standish Group most business reengineering efforts failed
miserably and the corporations returned to their traditional way of doing busi-
ness [18, 26]. One major reason for this was the disparity between the existing
information technology and the new decentralized organization. The technology
was simply not ripe to support distributed customer oriented business processes,
as in the coke machine example. This should not have come as a big surprise,
since most large corporations of that time were still using the mainframe as the
base of their information technology and the software application systems were
themselves hierarchical and monolithic. Another major reason was that Ham-
mer and Champy proposed to discard the past, and start anew. In the words
of the reengineering gurus (quotes collected by Paul A. Strassmann [48]):
Business reengineering means starting all over, starting from scratch.
What you do with the existing structure is nuke it!
Reengineering [..] can't be carried out in small and cautious steps. It is an
all-or-nothing proposition that produces dramatically impressive results.
What they overlooked is that discarding the past really means that you
destroy possibly valuable knowledge assets determining how an organization was
run. Throwing away all knowledge, good and bad, turned out not to work
in reality. The good parts should be cherished, and from the bad parts you can
learn how to avoid them in the future. On top of that, business reengineering
was oversold. As pointed out by Paul A. Strassmann, reengineering as applied to
business excels more in packaging than in substance [49]. Strassmann writes [48,
pp. 238–239]:
The long record of miscarriages of centrally planned radical reforms
and the dismal record of reengineering as acknowledged by Hammer,
suggest that an evolutionary approach is likely to deliver better im-
provements [38]. No wonder that eighty-five percent of the 350 top
executives who have tried reengineering in their operations are dis-
satisfied with the results of their efforts [30, p. 1]. [..] These findings
are echoed in another study by the Computer Sciences Corporation,
the firm with the reputation of having the largest and most pros-
perous reengineering practice. In a study of 500 corporations with
experience in reengineering they found a dissatisfaction rate close to
50% [30, p. 14].
So, Hammer, being one of the drivers of the concept, acknowledges the failure
of reengineering. Champy writes on page 1, sentence 1, of a second book on
reengineering [13]:
Reengineering is in trouble. It's not easy for me to make this admis-
sion. I was one of the two people who introduced the concept.
The abandonment of the past seems to be one of the major reasons for
failure. Not only Hammer but also Champy acknowledges this [13, p. 60]:

What is the historical context in which you are operating? What
can you learn from the past? How can you teach from the past?

Champy seems to break completely here with their initial idea to nuke the
past. Apparently, they did not realize this when they embarked on their reengi-
neering work, since he writes [13, p. 69]:

[..] the past, paradoxically, may also be a place to go for new ideas.

So the idea that we can learn valuable things from the past seems a paradox
to him. In their book Competing on the Edge, Brown and Eisenhardt advocate
using the past in order to explore the new. They warn against abandoning the
past [12, p. 104]:

Some managers believe that their business model is so hopelessly
locked in the past that a drastic leap into new businesses is their
only option. Other managers may be so mesmerized by the lure
of exciting new technologies or markets that they are blind to the
advantages of the past. Inexperienced managers and those with
leading-edge strategies are especially vulnerable here. [..] But re-
gardless of why the past is ignored, managers begin to see the past
as the problem and the future as the hope. They gloss over the risks
of the unknown even as they dismiss the value of experience. [..]
managers who disconnect from the past make mistakes that could
have been avoided.

Not only do they warn against the so-called disconnect trap, they also draw a
conclusion about reengineering, or massive transformation as they call it [12, p. 20]:

In reality, the striking characteristic across these firms is the absence
of massive transformation. [..] Massive transformations are signals
of missed inflection points, not successes.

So their analysis of companies that successfully adapt to the ever-changing
environment is that these companies do not apply reengineering. That business
process reengineering failed as a transformation tool is also acknowledged by
Haeckel in his work on adaptive enterprises. He writes [19, p. 191]:

Reengineering, TQM, team-based structures, continuous improve-
ment, and a host of other prescriptions may produce valuable op-
erational improvements, but they achieve reformation, not transfor-
mation.

Other authors who deal with the difficult subject of innovation in organizations
share these viewpoints as well. For instance, Christensen writes in his book
on innovation strategies [14, p. 171]:

Despite beliefs spawned by popular change-management and reengi-
neering programs, processes are not nearly as flexible, or trainable
as are resources [..]

But even if you did not destroy your valuable business process knowledge,
we can safely say that, in retrospect, the failure of business reengineering
in the 1990s was caused by the lack of widely available technology to support it.
There was an Internet, but only academics, the military, and hackers inhabited
this exciting new virtual world, creating their own customer oriented networked
coke machines located in the Foo Bar. There were no Internet Service Providers
for the masses to cheaply connect customers to the corporate information sys-
tems, no request broker architecture to connect distributed objects, no work
flow management systems to guide the processes, no reliable distributed on-
line transaction monitors to monitor transactions in the network, no platform
independent languages like Java, no wireless communication, and no XML to
exchange data. This lack of supporting technology prohibited new streamlined
business processes from being put into practice. The awareness of the need was
there, but not the means of satisfying it. After all, Hammer and Champy are
graduates of a business school and not engineers. Their knowledge of informa-
tion technology was, to say the least, meager [6].

3 Motivation for Reengineering


There are two main motivations for wanting to reengineer old economy corpo-
rate information systems. One is to upgrade the underlying technology, the
other is to improve the business processes [13]. There is, of course, an intri-
cate relationship between the two issues making it difficult to deal with them
separately, but this separation of concerns is essential to understanding their
interdependence.

3.1 Upgrading Technology


First, let us look at the technology which traditional corporations (banks,
insurance companies, utility providers, retailers, etc.) employ. Some old economy
corporations are still caught up in the technology of the 1970s. A significant
portion of their business logic is coded in Assembler or some other exotic lan-
guage, their data is stored in hierarchical or networked database systems and
their business transactions are handled via 3270 terminals using line-oriented
CICS or IMS masks with a fixed layout. Their systems are maintained by a
burned-out group of mainframe programmers, who are intent on maintaining
the status quo, or they have been outsourced for remote maintenance to some
foreign software house. Such corporations are the dinosaurs of information tech-
nology. Yet they exist and must be dealt with [50]. To give the reader an idea
of whether the IT dinosaurs are almost extinct or not, here is some benchmarking
data.

- 70% of all the business critical software runs on mainframes [44, p. 13].
- 60% of all mainframe applications make use of COBOL [39].
- 75% of all production transactions on mainframes are done using COBOL [4, p. 70].
- Over 60% of all Web-access data resides on a mainframe [4, p. 70].
- COBOL mainframes process more than 83% of all transactions worldwide [4, p. 70].
- Over 95% of finance-insurance data is processed with COBOL [4, p. 70].

It is also interesting to see what the global language distribution is, to give
you an idea of how important certain languages are.

- 30% of all the software is estimated to be written in COBOL. This amounts to about 225 billion lines of code.
- 10% of the code is written in Assembler. This amounts to 140–220 billion lines of code.
- 20% of the code is C or C++. We mention them in one category, since most C++ code is not object oriented at all.
- The other 40% of the code is written in at least 700 exotic languages. Think of PL/I, Fortran, Natural, RPG, ADS-Online, Ideal, CSP, Focus, Telon, ABAP/4, ADABAS, Magic, Insight, etc.
- 30% of the US software uses mixed languages, for instance COBOL with embedded CICS and/or embedded SQL. About 10% of the US software even uses 3 languages.
- An alarming 5–10% of the business-critical software systems can no longer be (re)compiled: either the source code is completely lost, or the compilers are lost, or version management is so bad that the source code turned into so-called mystery sources. Depending on the age of the software, the risk of losing sources increases drastically: for young code the probability is about 1%; for 20-year-old code it can top 85%.

Most of these data stem from Capers Jones [28, pp. 50, 129, 295] and [27,
p. 284]. Of course, there are differences per country; for instance, based on
our experience in Germany we estimate that about half of the data processing
organizations use information systems written in Assembler, and obscure 4GLs
are still omnipresent.
All in all, the conclusion must be that mainframes, minis and their language
repertoire are to this day the vehicles for information systems. Or in plain
English: the IT dinosaurs are alive and kicking.
Thus, the bulk of corporate users managed to introduce the technology of the
1980s and haven't progressed much since. Their business logic is programmed
in COBOL, PL/I or some 4th Generation Language like Natural or ADS-Online.
They managed to convert their data to a relational database, mostly DB-2, but
their critical applications still operate on the mainframe, their user interfaces
are mainly fixed screens that are emulated on a PC-workstation and they still
use CICS as a transaction monitor under MVS.
Maybe not many of us have heard of CICS, but most of us probably use it daily
without realizing it. CICS stands for Customer Information Control System.
CICS is also very much alive: T. Scott Ankrum, for instance, writes [3] that
nine of the top ten Internet brokers use CICS, that almost all of us use CICS
whether we know it or not, that most of the big insurance companies and
financial institutions use CICS, and that when you put your card into an Automatic
Teller Machine (ATM) you may very well be starting a CICS transaction
somewhere.
These mastodons have just made a tremendous effort to upgrade their legacy
systems to be Y2K- and Euro-compatible so that their programming staffs are
tired and unwilling to take on any new tasks in the near future. They are more
interested in consolidating their applications and keeping them up and running
for a while [15].
Actually only a minority of corporate users really went over to distributed
client/server computing in the 1990s. In reality there was more talk about
client/server systems than real work in implementing them. As pointed out by
the study of the Standish Group in America, at least half of the client/server
projects in large corporations failed for whatever reasons [26]. This is also reflected
in the language benchmarks, which indicate that only a small portion of the
worldwide business logic managed to find its way into C++ or Smalltalk classes.
Even if C++ projects are not cancelled, this does not mean there are no problems:
C++ efforts are sometimes challenged. We know of one case where a C++
based information system needed to be reengineered during development [46].
Somewhat more managed to get their information systems programmed in
newer 4GL development environments like Delphi, PowerBuilder and Oracle
Forms. Many corporations learned the hard way that there is either no tool
support to aid in the maintenance phase of these 4GL systems, or the vendors
stopped servicing the product (bankruptcy, mergers, and take-overs are often the
reasons for that). As a result, we see more and more conversion projects where
4GLs are stamped out in favor of languages that are much more mainstream.
The relational databases were distributed among several database servers
and newer distributed on-line teleprocessing monitors such as ENCINA and
TUXEDO were employed to allow transactions to transcend server boundaries.
A few bold users even progressed to employ Object Request Brokers such as
ORBIX and VisiBroker to connect their distributed objects. Their users enjoyed
the luxury of having a real graphical user interface complete with pulldown
menus, radio buttons, icons and pop-up boxes. Whether this really contributed
to a productivity increase is open to debate, but it helped put more color into
an otherwise grey working environment [31].
Now we stand at the threshold of a new era in computing, the Internet era.
Already some pilot applications have been built to test the possibilities of com-
municating via the Web and using Web pages as a customer interface. Almost
all corporations, even the dinosaurs, have realized that Internet technology with
the promise of direct electronic business to business and business to customer
commerce is vital to their survival. So what could be more important than
migrating their old application systems into this new world? What is there to
impede them from doing this? [36]
The answers are evident. Their business logic is programmed in antiquated
languages unsuitable to Internet applications. Their data is stored in antiquated
databases inaccessible to modern Internet components. Their transaction pro-
cessing is locked into a monolithic mainframe environment. Their users are
accustomed to working with fixed maps where they only have to fill in the
blanks. And, last but not least, their programming staff is composed mainly
of disgruntled MVS-programmers with little desire to start a new career as a
Web page designer or a Java applet constructor. These are really not the best
prerequisites for launching an expedition into the unknown world of electronic
commerce systems requiring a whole new set of languages, databases, transac-
tion monitors, users and programming skills [47].

Figure 1: The waves of change.

Considering these enormous legacy burdens of the past, it would seem to be
difficult enough for old economy corporations to even migrate their applications
one-to-one to the new technology. If they do not succeed, competitive survival
is at stake, and corporations are flooded by the waves of technological
change (see Figure 1). It is like a diver trying to come up to the surface
with weights chained to his feet. The task becomes even more complicated
when one considers the business processes of traditional corporations and public
service departments, many of which are hopelessly behind. So let's move on to
discussing the improvement of business processes.

3.2 Improving Business Processes


Business processes should ideally be designed to fulfill the so-called essential
requirements of a business independently of the existing technology [37]. Un-
fortunately, this ideal has never been realized. Even if the business analysts
started out by defining optimal virtual processes, they always had to corrupt
them to accommodate the existing information technology. In the early 1970s
there was no on-line data entry so processes had to be constructed to cap-
ture the data on punch cards, paper tape or machine readable forms to be
submitted to a data entry department which produced a tape for sequential
processing. The software systems consisted of series of batch programs which
merged and sorted sequential transaction files to update master files or hier-
archical databases. Whole methodologies such as standard programming logic
and the Michael Jackson structured programming technique emerged to handle
this type of processing [24]. It would have been convenient to design business
processes with direct interaction between the end user and the system but there
was no technology available to support that. Consequently, a lot of time and
effort was lost in preparing data entry error reports which went back to the
data originator to be corrected and recycled. As a result, data correction and
reentry became an essential part of the business process, just as did the manual
distribution of computer printouts to the parties affected, who had to check
them for errors before passing the information on [21]. For instance, an entire
subindustry in the USA emerged on the concept of lockbox communication.
This is a proprietary post box that can be locked; customers use it for paper
based communication with a corporation.

Figure 2: A host centered business process.

In a sense, the early batch oriented data processing systems were wrapped
behind a layer of data submission and information output checking clerks acting
as a firewall between the real end users and the information systems. The
business processes of that time were based upon these constraints. In Figure 2
we illustrate a typical mainframe host centered business process. The actors
that needed to interact with the information systems were not only customers,
but also auditors, other businesses and managers. Behind the organizational
wall the data clerks interacted with the systems, operating a batch oriented
host computer.
In the late 1970s, on-line data entry became widespread as the first transac-
tion processing monitors became available. Data entry clerks could now submit
data in a fixed form and have it checked immediately, eliminating the data
correction loop. However, the database systems of that time were not capable
of making on-line updates without a lot of additional programming effort to
handle deadlocks and to commit or rollback transactions. This was expressed
in the early software cost estimation approaches. Even to this day there are
still influence factors in the Function Point analysis method for project cost
estimation which consider whether or not data has to be updated on-line. In
fact most of the influence factors in the Function Point analysis methods used
today are based upon the technology constraints of that era where data was
submitted on-line to be checked and corrected [2]. However, the real database
updates took place at night in batch mode. This meant that even in the most
optimal situation, business processes were geared to a delay of up to 24 hours
between the submission of a transaction and the time that the results of that
transaction went into effect. This time delay in transactions is abbreviated by
using the notation T + n, where T stands for Transaction and the number n in-
dicates the delay in days. It is not unusual to have a T + 5 transaction system:
it takes 5 business days to settle some business transaction. It goes without
saying that this time delay had a profound effect upon the way business could
be done. So the quest for migrating to T + 4, . . . , T + 1 and even to T + 0 is
the next mass change (after Y2K and the Euro) that needs to be made to lots of
data processing systems. For only in the real-time transaction situation will we
really enter the age of immediate answers, something the Web already makes
possible for a selected number of services, but not at all in the case of batch
systems, where even a T + 1 system is considered an achievement. Such systems
need to be rearchitected to comply with (or simulate) real-time settlement of
transactions. Realizing that a direct-connect, high-throughput architecture is
an absolute necessity for the real-time, zero-latency connectivity requirements
of today's customer-driven enterprises is not enough. If there is simply no T + 0
functionality you cannot reengineer your business processes to genuine real-time
transaction processing.
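For readers unfamiliar with the T + n notation, the following small Java sketch makes it concrete by computing the settlement date of a transaction submitted on date T with a delay of n business days. It is purely illustrative and assumes a simple calendar without public holidays.

    import java.time.DayOfWeek;
    import java.time.LocalDate;

    public class Settlement {

        // Add n business days (skipping weekends) to the transaction date T.
        static LocalDate settlementDate(LocalDate t, int n) {
            LocalDate d = t;
            int added = 0;
            while (added < n) {
                d = d.plusDays(1);
                if (d.getDayOfWeek() != DayOfWeek.SATURDAY
                        && d.getDayOfWeek() != DayOfWeek.SUNDAY) {
                    added++;
                }
            }
            return d;
        }

        public static void main(String[] args) {
            LocalDate t = LocalDate.of(2001, 3, 1);                // a Thursday
            System.out.println("T + 0: " + settlementDate(t, 0));  // same day: 2001-03-01
            System.out.println("T + 5: " + settlementDate(t, 5));  // 2001-03-08
        }
    }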
Of course, there have always been cases where it was not possible to wait. In
those cases special systems were developed to handle the volume of transactions
and to update the databases on-line. These systems were designed to support a
very special kind of business process: real-time transaction processing [33]. One
example is TPF (Transaction Processing Facility), which is an operating sys-
tem to support high-volume transaction processing systems. Originally it was
called ACP, which stood for Airline Control Program, and was developed jointly
by IBM and the first airlines that needed such a system (among them American
Airlines). TPF is perhaps the most underestimated and misunderstood of all
major operating systems. For those of you not familiar with TPF: it is older
than Unix, and it is capable of connecting many mainframes, CPUs, and databases
into one large transaction cluster. TPF systems run reliably 24 hours a day,
365 days a year, with performance of up to 4,000 transactions per second.
Moreover, TPF systems support networks in which you can hook up 30,000 ter-
minals while the response time is still within seconds. Indeed, when you think
of it, the classical example of a system that needs all these properties is a global
airline reservation system. The SABRE system of American Airlines is indeed
using TPF. But adopting TPF in the banking industry, where high-volume
transaction processing is also becoming more and more of a necessity, is not
without risk. For instance, the once famous Bank of America hired an executive
of American Airlines who had been responsible for the implementation of the
SABRE system to construct a high-volume real-time transaction facility (using
TPF) at Bank of America. This effort was not successful [22].
It was not until the mid 1980s that it became possible to update databases in
real-time with sufficient reliability. This technology breakthrough paved the way
for a whole new type of business process, one in which end users could submit
data directly and query the results of that data entry seconds later. They could
also produce random reports at any time. This was the high water mark of
4th Generation Languages and CASE tools, which prompted James Martin to
declare the end of the programming profession. In the future it would be possible
for end users to implement their own business processes on demand [34].

Figure 3: The client/server business process.
In contrast, by the end of the 1980s most commercial business processes
were still built around paper forms which were faxed or mailed from one de-
partment to another until they came to a clerk trained in interacting with the
host computer who then submitted the transaction via a fixed data screen and
had reports produced. The reports were printed out by central mass printers
and distributed through the organization. This is not marginal: about 70% of
all computer input could have been the output of another computer [23], if only
that output had not been printed on paper. How much does that cost? For
instance, paper-based purchase orders at General Electric used to cost about
55 US dollars each. When this was done via Electronic Data Interchange (EDI),
the price dropped to 2.50 US dollars per order [42], a saving of more than 95%
per order.
The client/server technology of the 1990s offered the opportunity for dis-
tributing business processes. Rather than having the data submitted to one
central location and also having reports produced there, it became possible
for every department to have its own departmental computer equipped with
workstations and local printers (see Figure 3). Now end users could submit
data on-line, query the data base and produce their own reports without going
through the central IT department. Moreover, they could even customize the
software to meet their local requirements. User interfaces could be customized
to best suit the task at hand and output reports could be formatted according
to the whims of the department [7].
Where client/server technology was applied, it had a profound effect upon
the business processes. It placed more decision making power in the hands of the
end user and weakened the control of the central organization, a reason for many
corporations to reject it. Scores of client/server projects failed, but mostly because
of the inability of organizations to adapt to the technology. Corporate users were
too accustomed to centralized processes that prescribed to them how to
think. Individual employees in the old economy were seldom keen on assuming
responsibility for their particular work segment, even though the technology
offered them the possibility. The result was a structure clash or mismatch
between the legacy business processes and the new technology [16].

Figure 4: The Web based business process.

Now we have the Internet which puts the customer in direct contact with
the system, automating the interface completely (see Figure 4). Provided the
technology is applied correctly, there is no longer an organizational entity be-
tween the customer and the data. What used to be the organizational wrapping
(see Figure 2) is now taken care of by the Web. A customer is able to invoke
money transfers, withdraw funds, order articles and query the database without
going through any other person. The clerks who once filtered all of the data
coming into the system as well as all of the information going out of the system
are promoted to the role of a call center consultant who can coach customers
through their electronic transactions. Queries can be made by both customers
and managers. Managers can request their own reports whenever they have a
need for them. Decision support systems are available for decision makers and
customer relationship systems are available to advise customers. There is no
longer any organizational staff between users and their data, only software [41].
The wireless networked coke machine vision is feasible with this technological
infrastructure in place.
Of course, this revolution in the relationship between users and computers
calls for a revision of the legacy business processes in use. A large segment of
jobs has become superfluous: all those which exist between the user and the
systems. On the other hand, new jobs have been created: all those for testing
the systems and consulting the users, and of course new programmers. The fact
that the greater part of the business transactions is now inside the system has
changed the nature of the game.
So here again we have an example of how technology affects the business
processes. It is not the technology which is adapted to the way we work, but
the way we work which is adapted to the technology. It was none other than Karl
Marx who documented this truism some 130 years ago. Business processes must
change in order to keep pace with the ever-changing technology [35].

4 Business Alignment to Internet Technology


How can user organizations whose application systems and legacy business
processes are still stuck in the 1980s align their business to the new technology?
What do they have to do to enter the 21st century? The answer is obvious. Since
business processes are technology driven, they first must reengineer their infor-
mation systems. There are two ways to achieve this, one is through revolution
and the other through evolution [40].

4.1 The Revolutionary Approach

Figure 5: Productivity levels for development and evolution.

The revolutionary approach is to create a new IT organization parallel to the
existing one with the responsibility of developing new Internet systems. This
organization is free to employ the latest techniques: languages, tools and mid-
dleware. Their sole constraint is the data. They must either develop gateways
to access the existing databases, maintained by the legacy applications or they
must migrate the data to their own databases and provide the legacy systems
access to it. A temporary solution is to maintain the new and old databases
parallel to one another by means of copy management or data propagation [1].

Size (FP)   early projects (%)   on-time projects (%)   late projects (%)   cancelled projects (%)
1                 6.00                 92.00                  1.00                  1.00
10                8.00                 89.00                  2.00                  1.00
100               7.00                 80.00                  8.00                  5.00
1,000             6.00                 60.00                 17.00                 17.00
10,000            3.00                 23.00                 35.00                 39.00
100,000           1.00                 15.00                 36.00                 48.00
Average           5.17                 59.83                 16.50                 18.50

Table 1: Information System Schedule adherence (1999).

Another constraint associated with the revolutionary approach is that of
knowledge acquisition. The business logic is buried in the code of the legacy
programmers. To get to it, the developers of the new software have to extract
it from the code and the heads. The former alternative is a technical challenge
which has to be solved by applying reverse engineering technology. The old
programs are processed and their logical content post documented in a readable
form. This can be done either manually or by means of automated tools [17].
Executives often think that IT revolution, that is, restarting from scratch,
is a viable option for replacing their aging legacy applications. Popular belief
is that looking in existing code to find out how to make an extension takes
much longer than redoing it from scratch. Indeed, if the programs are small
this is true. But as the programs increase in size, the productivity levels start
to coincide. In Figure 5 we depict productivity levels that are typical for
information systems [29]. You can clearly see that under 100 Function Points
(think of 10,000 LOC of COBOL, or 13,000 LOC of C code) maintenance
productivity is indeed lower than development productivity. The smaller
the systems, the larger the difference becomes. But above 100 Function Points,
the dotted and solid lines approach each other and coincide with further size
increases. Moreover, the productivity levels of both development and maintenance
drop below the productivity level of maintenance of small systems. People with
little large-scale programming experience often extrapolate from their experience
in the 1 function point range. There they observed that maintenance productivity
is much lower than development productivity. In Figure 5 these levels are at
7 FP/month and 40 FP/month respectively. In addition, these people have in
practice presumably seen the same productivity levels for maintenance. There
are two reasons for that:

- most of the information systems are in their maintenance phase;
- the majority is about 5,000 FP in size, which has a maintenance productivity of about 7 FP/month.

If we add this to the fact that many people tend to think in linear terms, we
end up with the horizontal dashed lines in Figure 5 that are in most people's
minds: a 7 FP/month horizontal line for maintenance, and a 40 FP/month line
for development. In reality there is a totally different correlation between size
and productivity (as Figure 5 clearly indicates). The most important difference
is that maintenance and development productivity coincide as system size
increases, and that both indicators drop sharply below the 10 FP/month
range as the size increases.
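A small back-of-the-envelope sketch may make this estimation error concrete. The 100-LOC-of-COBOL-per-Function-Point ratio and the naive 40 FP/month (development) and 7 FP/month (maintenance) rates are taken from the discussion above; the program itself and the 5,000 FP example size are only an illustration of the flawed linear extrapolation, not a real estimation model.

    public class NaiveEstimate {

        static final double LOC_PER_FP_COBOL   = 100.0;  // 100 FP is roughly 10,000 LOC COBOL
        static final double DEV_FP_PER_MONTH   = 40.0;   // dashed "development" line in Figure 5
        static final double MAINT_FP_PER_MONTH = 7.0;    // dashed "maintenance" line in Figure 5

        public static void main(String[] args) {
            double sizeFp = 5000.0;  // a typical information system
            System.out.printf("Size: %.0f FP (~%.0f LOC COBOL)%n",
                    sizeFp, sizeFp * LOC_PER_FP_COBOL);
            System.out.printf("Naive redevelopment estimate: %.0f person-months%n",
                    sizeFp / DEV_FP_PER_MONTH);    // about 125 person-months
            System.out.printf("Naive maintenance estimate:   %.0f person-months%n",
                    sizeFp / MAINT_FP_PER_MONTH);  // about 714 person-months
            // In reality both curves have converged and dropped below 10 FP/month
            // at this size, so the apparent advantage of starting from scratch vanishes.
        }
    }

Under the naive linear assumption, redevelopment looks almost six times cheaper than maintenance; Figure 5 shows that at this size both productivity levels have already converged and dropped below 10 FP/month.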
We think that this erroneous thought pattern causes many (corporate) man-
agers to assume that it is always better to start from scratch. But as indicated,
this strongly depends on the size of the system. Many business critical
information systems are over 10,000 FP, so the idea that a fresh start will solve
the problems is doubtful in those cases.
Apart from this productivity aspect, another dimension comes into play
when the size of systems increases: risk. Table 1 summarizes schedule adherence
statistics for information systems projects [29]. As you can easily see, projects
above 10,000 FP are significant risk factors: a little less than half of them
are too late or cancelled. This data is consistent with other benchmarking
reports such as the earlier mentioned Standish report [18].
All this data implies that you can forget about a revolutionary technology
upgrade if you want a low-risk, low-cost solution. It would be the same as
bombing an Egyptian pyramid and trying to rebuild it from scratch, only to
find out that you can't construct 2000-year-old mummies. Hence the case for
IT evolution in the title of this paper.

4.2 The Evolutionary Approach


The evolutionary approach is somewhat more humane, at least to the
organizational staff. The goal here is to achieve a gradual transition to the new
technology using the existing staff [11]. Of course, the staff has to be reinforced
by experts specialized in the new technology, persons who can program the Web
user interfaces and who can design interfaces to the middleware. Also the staff
has to deal in a diligent way with resistance to change: if the message to the
programmers is that they are no longer needed when this evolutionary step is
taken, be prepared for firm resistance, sabotage, and other evasive actions in a
quest to defend their existence. However, the idea here is to reuse the legacy
software within the framework of the new technology. This is referred to as
Component Oriented Reengineering [44].
In Component Oriented Reengineering (CORE) an architectural framework
is first established with five levels (see Figure 6).

- a user interface level;
- a process control level;
- a transaction distribution level;
- a business logic level;
- a data access level.

Figure 6: A component oriented system architecture.

At the user interface level the new Web page interfaces are designed and
implemented, using the old masks as a template. Either special custom code
running on the host computer takes care of redirecting 3270 data streams in
the right format to a PC, or standard packages such as IBM's Java-to-CICS
Internet gateway are used (sometimes the code needs restructuring first [10]).
At the process control level a new work flow logic is implemented to govern
the sequence of business transactions and to handle exception conditions.
The latter alternative is a psychological challenge which has to be solved by
applying pressure on the legacy programmers to relinquish their knowledge to
their successors in the new organization. We recall that these persons will often
resist, fearing loss of their positions within the organization. Therefore, they
are going to have to be given some perspective on their future. In some cases,
legacy programmers are retrained in the new technology, in other cases they are
converted to testers. In any case it entails a longer period of retraining and a
change of status. In the worst case, the entire part responsible for this flow logic
is wrapped (including the programmers) so that others can access this layer
without too much trouble.
One way or another, once the new Internet systems are up and running, the
old host centered systems can be phased out. Depending on the nature of the
systems and their programmers, this may take several years. Some end users
will go on working in the traditional manner as long as there are systems present
to support them. So to expedite the transition, a cross over deadline has to be
set at which point users have no other choice but to work with the Web enabled
applications. At this point, the staff maintaining the legacy systems can be
dissolved.

At the transaction distribution level, middleware is employed to direct the
transactions to the appropriate application server and to distribute the load. A
whole subindustry emerged that provides middleware solutions exactly to satisfy
these needs.
At the business logic level, the business rules are applied to produce a re-
sponse to the user requests. But one should realize that it is not easy to recover
the exact business rules from existing legacy applications. These are design ar-
tifacts persisting in the code while the rationale for them died a long time ago.
Thus it can happen that services that are no longer needed are still lurking in
the code, that obsolete tax and legal regulations are still around. On top of that,
the average information system easily tops 30% dead code. So IT evolution is
not a panacea either.
Finally, at the data access level, the data server provides access to the
databases and creates the objects to be processed at the business logic level.
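As a minimal illustration, the five levels could be represented in code as in the following Java sketch; all type and method names are our own assumptions and do not correspond to any particular middleware product.

    // A minimal sketch of the five-level CORE architecture described above.
    public class CoreFramework {

        interface UserInterface {            // Web pages replacing the old 3270 masks
            String handleRequest(String httpRequest);
        }

        interface ProcessControl {           // work flow logic: transaction sequencing
            String nextStep(String businessTransaction);   // and exception handling
        }

        interface TransactionDistribution {  // middleware: routing and load balancing
            String dispatch(String businessTransaction);   // to the application servers
        }

        interface BusinessLogic {            // the (possibly wrapped) legacy business rules
            String apply(String request);
        }

        interface DataAccess {               // the data server creating the business
            Object fetch(String key);        // objects used by the business logic level
        }
    }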
Once this architectural framework has been established and tested, it is
possible to start migrating the existing programs to it. This is done either by
wrapping, converting or reimplementing them.
Wrapping means hiding them behind an access shell so that they can remain
basically as they are. The access shell converts the incoming data into a format
understandable to the old program and then invokes it. The results of the old
program are picked up and converted back into a format understandable to the
requesters. One of the means to do that is to use the XML data exchange
language [45]. However, XML is text-based: there is no compression to a binary
format and no so-called term sharing [9], so there is a significant blow-up
factor. If the data becomes bulky, the XML streams can clog the network.
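The following sketch illustrates the wrapping idea in Java under simplifying assumptions: the LegacyProgram class, the XML tags and the fixed-length record layout are all hypothetical, and a real wrapper would use a proper XML parser rather than string searching.

    // A sketch of an access shell: XML in, fixed-length record to the old
    // program, fixed-length record out, XML back to the requester.
    public class AccountWrapper {

        private final LegacyProgram legacy = new LegacyProgram();  // unchanged old code

        public String handle(String xmlRequest) {
            // 1. Convert the (verbose) XML request to the legacy input format.
            String account = extract(xmlRequest, "account");
            String amount  = extract(xmlRequest, "amount");
            String inputRecord = pad(account, 10) + pad(amount, 12);

            // 2. Invoke the old program as-is.
            String outputRecord = legacy.call(inputRecord);

            // 3. Convert the fixed-length result back to XML for the requester.
            String balance = outputRecord.substring(0, 12).trim();
            return "<response><balance>" + balance + "</balance></response>";
        }

        private static String pad(String s, int width) {
            StringBuilder b = new StringBuilder(s);
            while (b.length() < width) b.append(' ');
            return b.toString();
        }

        // Naive extraction of a tag value; illustrative only.
        private static String extract(String xml, String tag) {
            int start = xml.indexOf("<" + tag + ">") + tag.length() + 2;
            int end = xml.indexOf("</" + tag + ">");
            return xml.substring(start, end);
        }
    }

    // Stand-in for the wrapped legacy program.
    class LegacyProgram {
        String call(String inputRecord) { return "      100.00"; }
    }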
Converting entails an automatic or semiautomatic transformation of the old
programs to a new language with a new design. An example of this would be
the transformation of C to C++ or COBOL to object COBOL. The transformed
program has to be retrofitted into the new environment [43]. However, language
conversion is not a simple matter and has its own difficulties and pitfalls [51].
Reimplementation means manually rewriting equivalent programs in a new
language. The new version of the program replicates the old business logic but
is designed to fit into the new architecture from the start. Only the business
rules are carried over into the specification of the new software [52].
Actually, all of the above-named reengineering techniques can be combined
to migrate existing components into the new technology. Furthermore, it is
possible to enhance these existing components with new components developed
to fulfill new tasks which come up in using the new technology.
This evolutionary approach to introducing new technology is also not easy to
realize, but it is less expensive, lower risk, and less wasteful of human resources.
The key objective is to migrate the IT staff together with the software and data
they are responsible for.
A good comparison between revolution and evolution is the way cities evolve
in America and in Europe. In America entire new satellite cities are built
up from scratch in outlying areas and populated with incoming higher income
groups as well as by a few upwardly mobile families from the old city. The old
city is left to wither away and eventually becomes uninhabitable. In Europe
individual buildings are renovated within the old city, so that the old city is
slowly renewed one building at a time until it begins to look more like a new
city. However, many old buildings remain forever, so that the city is a mixture
of old and new edifices located next to one another. Exactly this is the ideal
of IT evolution: the coexistence of old and new processes interacting with one
another [25].

5 Emergence of New Business Processes


Once the new technology has been introduced, new business processes will follow
of their own accord. If it is more convenient to communicate with the system
directly, users and customers will do so. There will be no great problem in
restructuring the business processes to match the supporting technology. It is
mainly a question of assessing the possibilities offered by the technology and
taking advantage of them, like the entrepreneur who used the current technology
infrastructure to sell a stranded passenger a drink from a wireless networked
coke machine.

Figure 7: The adaptation of business processes to the system architecture.

The idea of first designing business processes independently of the underlying
technology and then fitting the technology to them is not feasible [49].
Human beings have always adapted themselves to technology, whether it be the
automobile, the telephone or the computer. The same applies to organizations.
They are structured to exploit the technology available to them. This is the
primary goal of business process reengineering. Given these realities of business
reengineering one could better speak of business alignment. Legacy business
processes are customized to fit the new technology. Only by aligning the business
processes with the prevailing information technology is it possible to take
full advantage of it [8].

In the case of the Internet, that means putting data controls, information
filtering, decision making and even prompting into the software. Users of the
systems, that is, customers, must learn to converse with the system and not with
other human beings in order to do business (see Figure 7). The other human
beings are only there to help them if they get stuck, much like the yellow angels
on the motor way. The objective of modern business process reengineering is
to eliminate the human element in the chain, that is, to automate the interface
between the customer and the enterprise. This implies that business processes
should take place within the computer network. The flow of information is from
one program to another. The role of the human is to simply monitor the business
processes and make sure they are operating correctly. In effect, the persons who
up until now have been actors in the business process now become testers of
those processes [5].
We are in no way implying that once the technological infrastructure allows
for Web enabled business applications, the world will be a better place. A new
problem now is that people happen to dislike impersonal interfaces and want
humanoid interfaces, as in the old days. For provocative enlightenment in that
area we refer you to the Cluetrain Manifesto [32].

6 Technology Drives Business

Figure 8: Technology drives business.

With the advent of Internet technology in society, we have come another big
step forward in the direction of automating business processes. The challenge
we face is to get the technology working properly and reliably. The business
side will follow of its own accord and at its own speed, just as it has always done.
Just think of the fact that it took about 15 to 20 years between the time the
first coke machine went on-line and the time a wireless networked coke machine
was to be found at an airport. In Figure 8 we illustrate our hypothesis that the
realities of
the technology of today drive us to a fully networked society that will radically
change business processes.
This hypothesis is borne out by the short but turbulent history of information
technology. Competition between companies and end-user demands force
businesses to constantly introduce newer and better ways of using the computer
to do business, e.g., ATMs. Some organizations, especially in the public
sector or in business sectors with little competition, may hang on to an obsolete
technology for a while, but eventually they too will be pressured into changing.
If the newer technology is attractive and really offers better conditions and more
opportunities for end users, they will expect the IT department to supply it.
By reengineering the computer processes to adapt to the new technology, the
old ways of working with the computer are rendered obsolete. The business
processes always adapt to the new computer processes. For the Internet, this
means eliminating certain jobs (like bank tellers), and giving end users and cus-
tomers direct access to their data and to the services provided by the system.
This again has an effect on the way businesses are organized. In conclusion,
it can be stated that software reengineering, business process reengineering and
business reengineering are densely intertwined and highly interdependent. When
the software is reengineered, the business processes will follow, and sooner or
later so will the organization. IT evolution is a prerequisite to reengineering
the corporation.

References
[1] P. Aiken. Data Reverse Engineering: Slaying the Legacy Dragon. McGraw-Hill, New York, NY, 1996.
[2] A.J. Albrecht. Measuring application development productivity. In Proceedings of the Joint SHARE/GUIDE/IBM Application Development Symposium, pages 83–92, 1979.
[3] T.S. Ankrum. The Evolution of CICS: 30 Years Old and Still Modern, 2001. http://www.cobolreport.com/columnists/tscott/ (Current February 2001).
[4] E. Arranga, I. Archbell, J. Bradley, P. Coker, R. Langer, C. Townsend, and M. Weathley. In Cobol's Defense. IEEE Software, 17(2):70–72, 75, 2000.
[5] J. Bach. Rethinking the Role of Testing for the e-Business Era. Cutter IT Journal, 13(4):40–42, April 2000.
[6] F.R. Bleakley. The best laid plans: Many companies try management fads, only to see them flop. The Wall Street Journal, pages A1 and A6, July 6, 1993.
[7] B. Boar. Implementing Client/Server Computing: A Strategic Perspective. McGraw-Hill, New York, NY, 1993.
[8] S. Brady. A New Thread: Work Trends in the 21st Century. Cutter IT Journal, 13(1):23–28, January 2000.
[9] M.G.J. van den Brand, H.A. de Jong, P. Klint, and P.A. Olivier. Efficient annotated terms. Software: Practice and Experience, 30:259–291, 2000.
[10] M.G.J. van den Brand, M.P.A. Sellink, and C. Verhoef. Control Flow Normalization for COBOL/CICS Legacy Systems. In P. Nesi and F. Lehner, editors, Proceedings of the Second Euromicro Conference on Maintenance and Reengineering, pages 11–19, 1998. Available at http://www.cs.vu.nl/x/cfn/cfn.html.
[11] M.L. Brodie and M. Stonebraker. Migrating Legacy Systems: Gateways, Interfaces and the Incremental Approach. Morgan Kaufmann Publishers, San Francisco, CA, 1995.
[12] S.L. Brown and K.M. Eisenhardt. Competing on the Edge: Strategy as Structured Chaos. Harvard Business School Press, 1998.
[13] J. Champy. Reengineering Management: The Mandate for New Leadership. Harper Business, New York, NY, 1995.
[14] C.M. Christensen. The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Harvard Business School Press, 1997.
[15] T.H. Davenport. Saving IT's Soul: Human-Centered Information Management. Harvard Business Review, 72(2):119–131, March–April 1994.
[16] L.B. Eliot. Critical Success Factors for Implementing Client/Server Applications. American Programmer, 7(11):10–16, November 1994.
[17] I.S. Graham. Migrating to Object Technology. Addison-Wesley, Wokingham, G.B., 1995.
[18] The Standish Group. CHAOS, 1995. Retrievable via http://standishgroup.com/visitor/chaos.htm (Current February 2001).
[19] S.H. Haeckel. Adaptive Enterprise: Creating and Leading Sense-and-Respond Organizations. Harvard Business School Press, 1999.
[20] M. Hammer and J. Champy. Reengineering the Corporation: A Manifesto for Business Revolution. Harper Business, New York, NY, 1993.
[21] D.F. Heany. Development of Information Systems: What Management Needs to Know. Ronald Press Company, New York, NY, 1968.
[22] G. Hector. Breaking the Bank: The Decline of Bank of America. Little, Brown and Company, Boston, MA, 1988.
[23] M.M. Heidkamp. Reaping the Benefits of Financial EDI. Management Accounting, pages 39–43, May 1998.
[24] M.A. Jackson. System Development. Prentice Hall, Englewood Cliffs, New Jersey, US, 1983.
[25] Y. Jayachandra, G.J. Melkote, and P. Medina-Mora. Re-Engineering the Networked Enterprise. McGraw-Hill, New York, NY, 1994.
[26] J. Johnson. Chaos: The dollar drain of IT project failures. Application Development Trends, 2(1):41–47, January 1995.
[27] C. Jones. Estimating Software Costs. McGraw-Hill, 1998.
[28] C. Jones. The Year 2000 Software Problem: Quantifying the Costs and Assessing the Consequences. Addison-Wesley, 1998.
[29] C. Jones. Software Assessments, Benchmarks, and Best Practices. Information Technology Series. Addison-Wesley, 2000.
[30] J. King. Re-engineering slammed. Computerworld, June 13, 1994.
[31] R.L. Koelmel. Implementing Application Solutions in a Client/Server Environment: Concepts and Failures. John Wiley & Sons, New York, NY, 1995.
[32] C. Locke, R. Levine, D. Searls, and D. Weinberger. The Cluetrain Manifesto: The End of Business as Usual. Perseus Books, 2001.
[33] J. Martin. Design of Real-Time Computer Systems. Prentice Hall, Englewood Cliffs, New Jersey, US, 1967.
[34] J. Martin. Programming without Programmers: A Manifest for the Information Society. Prentice Hall, Englewood Cliffs, New Jersey, US, 1984.
[35] K. Marx. Das Kapital. Otto Meissner, Hamburg, Germany, 1867. In German.
[36] M. McDonald. From Manual Commerce to e-Commerce: Business Practices for Building Relationships. Cutter IT Journal, 13(4):12–24, April 2000.
[37] S. McMenamin and J. Palmer. Essential System Analysis. Yourdon Press/Prentice Hall, New York, NY, 1984.
[38] J. Moad. Does reengineering really work? Datamation, August 1, 1993.
[39] Ovum. Report on the status of programming languages in Europe. Technical report, Ovum Ltd., 1997.
[40] L. Putnam and W. Myers. Without Software, no MegaTrends. Cutter IT Journal, 13(1):18–22, January 2000.
[41] J. Scott. The e-Business Hat Trick: Adaptive Enterprises, Adaptable Software, Agile IT Professionals. Cutter IT Journal, 13(4):7–12, April 2000.
[42] C. Sliwa. Net used to extend EDI's reach. Computerworld, page 1, Feb 12, 1998.
[43] H.M. Sneed. Object-Oriented COBOL Recycling. In Proceedings of the Third Working Conference on Reverse Engineering, pages 169–178. IEEE Computer Society, 1996.
[44] H.M. Sneed. Objektorientierte Softwaremigration. Addison-Wesley, 1998. In German.
[45] H.M. Sneed. Encapsulation of Legacy Software: A Technique for Reusing Legacy Software Components. Annals of Software Engineering, 9:293–313, March 2000.
[46] H.M. Sneed and T. Dombovari. Comprehending a Complex, Distributed, Object-Oriented Software System: A Report from the Field. In D. Smith and S.G. Woods, editors, Proceedings of the Seventh International Workshop on Program Comprehension, pages 218–225. IEEE Computer Society Press, 1999.
[47] A.E. Stevenson. Experienced Drivers Wanted: IT Innovation and Organizational Maturity. Cutter IT Journal, 12(12):13–20, December 1999.
[48] P.A. Strassmann. The Politics of Information Management: Policy Guidelines. The Information Economics Press, New Canaan, Connecticut, 1995.
[49] P.A. Strassmann. The Roots of Business Process Reengineering. American Programmer, 8(6):37, June 1995.
[50] T. Tamai. The software evolution process over generations: findings from a software replacement survey. American Programmer, 6(11):35–41, 1993.
[51] A.A. Terekhov and C. Verhoef. The realities of language conversions. IEEE Software, 17(6):111–124, November–December 2000. Available at http://www.cs.vu.nl/x/cnv/s6.pdf.
[52] I. Warren. The Renaissance of Legacy Systems. Springer, London, 1999.
