
Chapter 5

What Went Wrong?

Our task now is not to fix the blame for the past, but to fix the course for the future.
John F. Kennedy

5.1 The Problem


The previous chapters detailed the considerable progress that has been made. So why
name this chapter, “What Went Wrong?” To answer this question, we ask another: After
so much “progress,” why does “Newton’s law of consultants,” which states that
For every expert there is an equal and opposite expert.

still remain in force? Recall that in Chapter 1, approaches to manufacturing management were broken down into three basic trends:
1. Efficiency trend: This approach started in the early 20th century with the
scientific management movement and the first attempts at modeling
manufacturing processes. Scientific management was dead by the 1920s, but
the efficiency trend persisted quietly for decades until it experienced a huge
resurgence in international attention when the just-in-time (JIT) movement,
having been developed in Japan over the previous three decades, burst on the
corporate scene in the late 1970s. While JIT shared a focus on efficiency with
scientific management, it tended to deemphasize modeling in favor of a focus
on overall philosophy and shop floor methods. Today the efficiency movement
continues under the labels of lean manufacturing and the Toyota production
system (TPS), and it is characterized by an emphasis on visual management,
smooth flow, and low inventory. Like JIT, lean and TPS tend to utilize
benchmarking and imitation rather than mathematical models or computers.
2. Quality trend: This approach dates back to the work of Shewhart, who in the
1930s introduced statistical methods into quality control. From the 1950s to the
1980s the movement grew in scope and influence under Juran and Deming, but
it remained rooted in statistics and was largely ignored by American industry.
However, quality rode to prominence on the coattails of the JIT movement and,
in the 1980s, became known as total quality management (TQM). Although the

approach was very successful, it was oversold to the point where TQM reduced
quality to a clichéd buzzword, causing a backlash that almost drove the quality
trend from the corporate landscape during the early 1990s. However, it was
resurrected in the latter half of the decade, under the banner of Six Sigma, when ABB, Allied Signal, General Electric, and others began
implementing a methodology that had originally been developed at Motorola in
the mid-1980s. Six Sigma reconnected the quality trend to its statistical origins,
by focusing on variance identification and reduction. But it has gone on to
evolve into a broader systems analysis framework that seeks to place quality in
the larger context of overall efficiency.
3. Integration trend: This approach began in the 1960s with the introduction of
the computer into manufacturing. Although large-scale organizations had
clearly been “integrated” prior to this, it was only with the advent of material
requirements planning (MRP) that formal integration of flows and functions
became a focus for improving productivity and profitability. In the 1970s, MRP
enjoyed the publicity spotlight that the American Production and Inventory
Control Society (APICS) generated with its “MRP crusade.” But, though
software packages continued to sell, the success of JIT in the 1980s temporarily
eclipsed MRP as a movement. It re-emerged in the 1990s, in expanded form, as
enterprise resources planning (ERP). ERP promised to use the emerging
technology of distributed processing to integrate virtually all business processes
in a single IT application. A “re-engineering” fad and fears of the Y2K bug
fueled demand for these extremely expensive systems. However, once the
millennium passed without disaster, companies started to realize that ERP had
been oversold. Undaunted, ERP vendors (sometimes literally overnight)
transformed their ERP software into supply chain management (SCM) systems.
They continue to offer their vision of computer integration, now under the
banner of SCM.

Today, the current incarnations of these trends all have strong followings. Lean, Six
Sigma, and SCM are each being sold as the solution to productivity problems in both
manufacturing and services, as well as in other sectors such as construction and health
care. The resulting competition between different approaches has fostered excessive
zeal on the part of their proponents. But, as history has shown repeatedly, excessive
zeal tends to result in programs being oversold to the point at which they degenerate into
meaningless buzzwords. The periodic tendency of these trends to descend into marketing
hype is one sign that there is a problem in the state of manufacturing management. The
separation of the three trends into competing systems, a state of affairs that keeps good
ideas from being disseminated, is another indication of this problem.
The crisis can be traced to a single common root—the lack of a scientific framework
for manufacturing management. The resulting confusions are numerous:
1. There is no universally accepted definition of the problem of improving
productivity.
2. There is no uniform standard for evaluating competing policies.
3. There is little understanding of the relations between financial measures (such
as profit and return on investment) and manufacturing measures (such as work
in process, cycle time, output rates, capacity, utilization, variability, etc.).
4. There is little understanding of how the manufacturing measures relate to each
other.

Hence, there is no system for distinguishing the good from the bad in terms of concepts,
methods, and strategies. Moreover, since the barriers to entry are so low, the manufactur-
ing field is awash with consultants—and since the above relations are so little understood,
the consultants are left relying on buzzwords and “war stories” about how such-and-
such technique worked at company so-and-so. Thus, companies inevitably fall back on
management by imitation and cheerleading. It is no wonder that most of the people
working “in the trenches” sigh knowingly at each new fad, well aware that “this too shall
pass.”
But the practice of operations management need not be dominated by warring per-
sonalities battling over ill-defined buzzwords. In other fields this is not the case. For
instance, consider an engineer designing a circuit with a circuit breaker. The voltage
is 120 volts and the resistance is 60 ohms. What size breaker is needed? The answer
is simple. Since the engineer knows that V = I R (this is Ohm’s law, a fundamental
relationship of electrical science) the breaker must be 120 volts/60 ohms = 2 amperes.
No buzzwords or “experts” required.
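Written out as a displayed calculation (standard notation, with I the current, V the voltage, and R the resistance):

$$ I = \frac{V}{R} = \frac{120 \text{ volts}}{60 \text{ ohms}} = 2 \text{ amperes} $$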
Of course, this is a very simple case. But even in more complex environments,
a scientific framework can help to guide decisions. For example, building a highway bridge is not an exact science, but there are many well-known principles that can be used to make the process smoother and less risky. We know that concrete is very strong in compression but weak in tension, while steel is strong in tension. Consequently, engineers long ago developed reinforced concrete, which combines the best of both materials.
Likewise, we know that all stresses must be supported and that there can be no net
torques. Therefore, for a short bridge, a good design is a curved beam that transmits
the stress of the middle of the bridge to the supporting structures. A longer bridge may
require a larger superstructure or even a suspension mechanism. Insights like these do
not completely specify how a given bridge should be built, but they provide a sufficient
body of knowledge to prevent civil engineers from resorting to faddish trends.
In manufacturing, the lack of an accepted set of principles has opened the way
for “experts” to compete for the attention of managers solely on the basis of rhetoric
and personality. In this environment, catchy buzzwords and hyperambitious book titles
are the surest way to attract the attention of busy professionals. For example, Orlicky
(1975) subtitled his book on MRP “The New Way of Life in Production and Inventory
Management,” Shingo (1985) titled his book on SMED (single-minute exchange of die)
“A Revolution in Manufacturing” and Womack and Jones (1991) gave their book on
lean manufacturing the grand title of “The Machine That Changed the World.” While
buzzword campaigns often have good ideas behind them, they almost always oversim-
plify both problems and solutions. As a result, infatuation with the trend of the moment
quickly turns to frustration as the oversold promises fail to materialize, and the disap-
pointed practitioners move on to the next fad.
This characterization of manufacturing management thought as a procession of
trendy fads is certainly a simplification of reality, but only a slight one. Searching for the
phrase “lean manufacturing” or “lean production” on Amazon.com brings up 1,700 titles,
while “supply chain” yields 5,218 and “Six Sigma” brings up 1,484. This glut of books
is marked by volumes distinguished only for their superficial ideas and flashy writing.
While the three major trends identified above show faint signs that “buzzword man-
agement” may eventually be relegated to the manufacturing history books, they have
yet to take a clear step in that direction. In their current incarnations, each of the trends
contains fundamental flaws, which prevent them from serving as comprehensive man-
agement frameworks.

First we consider the efficiency trend and lean manufacturing. Although this ap-
proach once stressed science and modeling, the current tools of lean are largely quite
simple. For instance, most lean consultants will start by drawing up an elementary value
stream map to identify the muda in the system, then project where the system could go
by preparing a future state map, and finally attempt to bridge the gap by applying a set
of rather standard kaizen events (setup reduction, "5S," visual controls, kanban, etc.). If
the situation is similar enough to another in which these practices have already been
successfully employed, and if the practitioner is clever enough to recognize the analo-
gous behavior, such efforts are likely to be successful. But in situations that are new, an
experience-based approach like this is probably not going to be innovative enough to
yield a useful solution.
We next consider the quality trend and Six Sigma. In contrast to the efficiency
trend, which evolved from complex to simple, the quality movement has migrated from
simple to sophisticated. In the 1980s, TQM practices (e.g., “quality circles”) were often
criticized as superficial and overly simplistic. But current Six Sigma training places
heavy emphasis on the requisite statistical tools, imparting them over the course of four
36-hour-week “waves” and accompanying the teaching with meaningful improvement
projects. In addition, the architects of Six Sigma training have taken some lessons from
marketing and motivation experts and designated graduates of their programs “black
belts” and their management counterparts “champions.” These catchy labels, as well as
an across-the-board commitment from upper management to fund a massive training
initiative, have contributed much to the success of Six Sigma.
Unfortunately, while Six Sigma provides training in advanced statistics, it does not
offer education on how manufacturing systems behave. For instance, consider a plant with
high inventory levels, poor customer service, and low productivity. The Six Sigma black
belt would approach this scenario by trying to better define the problem (a laudable goal);
performing some measurements; analyzing the data using some kind of experimental
design to determine the “drivers” of the excess in inventory, the poor customer service,
and the low productivity; implementing some changes; and, finally, instituting some
controls. While this process might eventually yield good results, it would be tedious
and time-consuming. In a fast-moving, competitive industry there simply isn’t time to
rediscover the causes of generic problems like high WIP and poor customer service.
Managers and engineers need to be able to invoke known principles for addressing
these. Unfortunately, though they do indeed exist, these principles are not currently a
part of the Six Sigma methodology.
Finally, we consider the integration trend and supply chain management. Because
of its historical connection to the computer, this movement has always tended to view
manufacturing in IT terms: if we collect enough data, install enough hardware, and
implement the right software, the problem will be solved. Whether the software is called
MRP, ERP, or SCM, the focus is on providing accurate and timely information about the
status and location of orders, materials, and equipment, all in order to promote better
decision making.
Unfortunately, what is usually left unsaid when the manufacturing management
problem is described in IT terms is that any software package is going to rely on some
sort of model. And in the case of SCM, as with ERP, MRP II, and all the way back to
MRP, the underlying model has almost always been wrong! Hence, in practice, most
development effort has been directed toward ensuring that the database is not corrupted,
the transactions are consistent, and the user interface is easy (but not too easy) to use. The
validity of the underlying model has received very little scrutiny, since this is not viewed
as an IT issue. As a result, SAP R/3 uses the same underlying model for production
logistics that was used by Orlicky in the 1960s (i.e., a basic input-output model with
fixed lead times and infinite capacity). Of course, most modern SCM and ERP systems
have added functions to detect the problems caused by the flawed model, but these efforts
are really too little, too late.

5.2 The Solution


The three major trends in manufacturing management contain valuable elements of an
integrated solution, specifically:
1. Six Sigma offers an improvement methodology that involves both upper
management and lower-level information workers. It also recognizes that
improvements are difficult to accomplish (despite the “rah rah” of most
buzzwords) and that there is a body of knowledge that must be taken into
account before one can be effective. Finally, it provides a detailed training
program, along with the expectation that advanced knowledge is required for
success.
2. The lean philosophy promotes the right incentives: focus on the customer,
forget unit cost, look for obviously wasteful practices and eliminate them, and
modify and improve the environment.
3. IT (e.g., SCM and ERP) systems provide the data needed to make rational
manufacturing management decisions.
But there is a key component missing from all of this—a scientific framework that
can make sense of the underlying manufacturing operations. Unlike designers of
electronic circuits and roadway bridges, most Six Sigma black belts, lean practitioners,
and SCM software salespersons simply do not have enough knowledge of the basic
workings of manufacturing systems. These include relationships between cycle time,
production rate, utilization, inventory, work in process, capacity, variability in demand,
variability in the manufacturing process, and so on. Without this knowledge, they are
forced to do one of the following:
1. Analyze the system statistically to determine cause and effect, then implement
certain changes and install controls (this is the Six Sigma approach).
2. Imitate what has been done somewhere else and hope it works again (this is the
lean approach).
3. Install a new software application (this is the IT approach).
Not surprisingly, the success rate of firms using these approaches has been decidedly
mixed. The reason is that practitioners must rely on luck or local genius to make the right
choices for their system. Luck does sometimes come through, but it doesn’t last forever,
so it is rarely a source of sustained success. An in-house genius (e.g., Ohno at Toyota) is
a much more reliable asset, but true geniuses are rare. The rest of us need some kind of
framework to enable us to select and adapt concepts from the major (and minor) trends
and create an effective management system for a given environment.
In this book, we use the term Factory Physics to refer to the necessary framework.
In Part II we will describe the fundamentals of Factory Physics. In Part III we will show
how these fundamentals can be practically applied to improve and control a wide range
of manufacturing systems. In the remainder of this chapter, we will pick up the historical
thread from Chapters 1–4 and discuss the virtues and shortcomings of past approaches
from a factory physics perspective.

5.3 Scientific Management


Frederick W. Taylor, like many others in the late 19th and early 20th century, placed great
faith in science. Indeed, in view of the remarkable progress made during the previous two
centuries, some felt that all the basic concepts of science had already been established.
In 1894, Albert Michelson stated
The more important fundamental laws and facts of physical science have all been discovered,
and these are now so firmly established that the possibility of their ever being supplanted in
consequence of new discoveries is exceedingly remote . . . . Our future discoveries must be
looked for in the sixth place of decimals.
Lord Kelvin agreed, saying in 1900, “There is nothing new to be discovered in physics
now. All that remains is more and more precise measurement.”
Of course, we know that the entire edifice of physics would come crashing down
within 20 years with the discoveries of relativity and quantum mechanics. But at the
turn of the century, when Taylor and others were promoting scientific management,
it appeared to many that physics had been a complete and unqualified success. The
new sciences of psychology and sociology were expected to follow a similar pattern.
Therefore, it was both plausible and popular to propose that science could bring the same
kind of success to management that it had brought to physics.
In hindsight, however, it appears that scientific management had more in common with today's buzzword approaches to management than with the scientific fields of the
time. Like modern buzzwords, “scientific management” was a very popular term, and
it gave consultants a “scientific” mandate to sell their services. It was only vaguely
defined, and could therefore be promoted as “the” solution to virtually all management
problems. And, unlike a modern scientist, Taylor made many measurements but did little
experimentation. He developed formulas, but did not unify them in any kind of general
theory. Indeed, neither Taylor nor any of his contemporaries ever posed the descriptive
question of how manufacturing systems behave. Instead, they focused on the immediate
prescriptive question of how to improve efficiency.
As a result, the entire stream of work spawned by the original scientific management
movement followed the same frameless, prescriptive approach used by Taylor. Instead of
asking progressively deeper questions about system behavior, researchers and practition-
ers simply addressed the same worn problem of efficiency with ever more sophisticated
tools. For example, in 1913, Harris published his original EOQ paper and established
a precise mathematical standard for efficiency research with his famous “square root
formula” for the lot-sizing problem. While elegant, this formula relied on assumptions
that—for many real-world production systems—were highly questionable. As we dis-
cussed in Chapter 2, these unrealistic assumptions included
• A fixed, known setup cost
• Constant, deterministic demand
• Instantaneous delivery (infinite capacity)
• A single product or no product interactions
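Under these assumptions, and written in the usual notation (with A the fixed setup cost, D the demand rate, and h the cost to hold one unit in inventory per unit time), Harris's result is the familiar square root formula for the optimal lot size:

$$ Q^* = \sqrt{\frac{2AD}{h}} $$

Note that capacity, product mix, and variability appear nowhere in this expression, which is precisely the point of the critique that follows.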
Because of these assumptions, EOQ makes much more sense applied to purchasing
environments than to the production environments for which Harris intended it. In a
purchasing environment, setups (i.e., purchase orders) may adequately be characterized
with a constant cost. However, in manufacturing systems, setups cause all kinds of other
problems (e.g., product mix implications, capacity effects, variability effects), as we will
discuss in Part II. The assumptions of EOQ completely gloss over these important issues.

Even worse than the simplistic assumptions themselves was the myopic perspective
toward lot sizing that the EOQ model promoted. By treating setups as exogenously
specified constraints to be worked around, the EOQ model and its successors blinded
operations management researchers and practitioners to the possibility of deliberately
reducing the setups. It took the Japanese, approaching the problem from an entirely
different perspective, to fully recognize the benefits of setup reduction.
In Chapter 2 we discussed similar aspects of unrealism in the assumptions behind
the Wagner–Whitin, base stock, and (Q, r ) models. In each case, the flaw of the model
was not that it failed to begin with a real problem or a real insight. Each of them did. As
we have noted, the EOQ insight into the trade-off between inventory and setups sheds
light on the fundamental behavior of a plant. So does the (Q, r ) insight into the trade-off
between inventory (safety stock) and service. However, with the fascination for all things
scientific, the insights themselves were rapidly sidelined by the mathematics. Realism
was sacrificed for precision and elegance. Instead of working to broaden and deepen the
insights by studying the behavior of different types of real systems, experts turned their
focus to faster computational procedures for solving the simplified problems. Instead
of working to integrate disparate insights into a strategic framework, they concentrated
on ever smaller pieces of the overall problem in order to achieve neat mathematical
formulas. Such practices continued and flourished for decades under the heading of
operations research.
Fortunately, by the late 1980s, stiff competition from the Japanese, Germans, and
others drove home to academics and practitioners alike that a change was necessary.
Numerous distinguished voices called for a new emphasis on operations. For instance,
professors from Harvard Business School stressed the strategic importance of operational
details (Hayes, Wheelwright, and Clark 1988, 188):
Even tactical decisions like the production lot size (the number of components or subassem-
blies produced in each batch) and department layout have a significant cumulative impact on
performance characteristics. These seemingly small decisions combine to affect significantly
a factory’s ability to meet the key competitive priorities (cost, quality, delivery, flexibility,
and innovativeness) that are established by its company’s competitive strategy. Moreover,
the fabric of policies, practices, and decisions that make up the manufacturing system cannot
easily be acquired or copied. When well integrated with its hardware, a manufacturing system
can thus become a source of sustainable competitive advantage.

Their counterparts across town at the Massachusetts Institute of Technology agreed, calling for operations to play a larger role in the training of managers (Dertouzos, Lester, and Solow 1989, 161):
For too long business schools have taken the position that a good manager could manage
anything, regardless of its technological base. . . . Among the consequences was that courses
on production or operations management became less and less central to business-school
curricula. It is now clear that this view is wrong. While it is not necessary for every manager to
have a science or engineering degree, every manager does need to understand how technology
relates to the strategic positioning of the firm . . .

But while observations like these led to an increased consensus that operations
management was important, they did not yield agreement on what should be taught
or how to teach it. The old approach of presenting operations solely as a series of
mathematical models has today been widely discredited. The pure case study approach
is still in use at some business schools and may be superior because cases can provide
insights into realistic production problems. However, covering hundreds of cases in a
short time only serves to strengthen the notion that executive decisions can be made
with little or no knowledge of the fundamental operational details. The factory physics
approach in Part II is our attempt to provide both the fundamentals and an integrating
framework. In it we build upon the past insights surveyed in this section and make use of
the precision of mathematics to clarify and generalize these insights. Better understanding
builds better intuition, and good intuition is a necessity for good decision making. We are
not alone in seeking a framework for building practical operations intuition via models
(see Askin and Standridge 1993, Buzacott and Shanthikumar 1993, and Suri 1998 for
others). We take this as a hopeful sign that a new paradigm for operations education is
finally emerging.
Ironically, the main trouble with the scientific management approach is that it is
not scientific. Science involves three main activities: (1) observation of phenomena, (2)
conjecture of causes, and (3) logical deduction of other effects. The result of the proper
application of the scientific method is the creation of scientific models that provide a
better understanding of the world around us. Purely mathematical models, not grounded
in experiment, do not provide a better understanding of the world. Fortunately, we may
be turning a corner. Practitioners and researchers alike have begun to seek new methods
for understanding and controlling production systems.

5.4 The Rise of the Computer


Because the stream of models spawned by the scientific management movement did not
provide practical solutions to real-world management problems, it was only a matter of
time before managers turned to another approach. The emergence of the digital computer
provided what seemed like a golden opportunity. The need of manufacturing managers
for better tools intersected with the need of computer developers for applications, and
MRP was born.
From at least one perspective, MRP was a stunning success. The number of MRP
systems in use by American industry grew from a handful in the early 1960s to 150 in
1971 (Orlicky 1975). The American Production and Inventory Control Society (APICS)
launched its MRP crusade to publicize and promote MRP in 1972. By 1981, claims were
being made that the number of MRP systems in America had risen as high as 8,000 (Wight
1981). In 1984 alone, 16 companies sold $400 million in MRP software (Zais 1986). In
1989, $1.2 billion worth of MRP software was sold to American industry, constituting
just under one-third of the entire American market for computer services (Industrial
Engineering 1991). By the late 1990s, ERP had grown to a $10 billion industry—ERP
consulting did even bigger business—and SAP, the largest ERP vendor, was the fourth-
largest software company in the world (Edmondson and Reinhardt 1997). After a brief
lull following the Y2K nonevent, ERP sales picked up, exceeding $24 billion in revenue
in 2005. So, unlike many of the inventory models we discussed in Chapter 2, MRP was,
and still is, used widely in industry.
But has it worked? Were the companies that implemented MRP systems better off
as a result? There is considerable evidence that suggests not.
First, from a macro perspective, American manufacturing inventory turns remained
roughly constant throughout the 1970s and 1980s, during and after the MRP crusade
(Figure 5.1). (Note that inventory turns did increase in the 1990s, but this is almost cer-
tainly a consequence of the pressure to reduce inventory generated by the JIT movement,
and not directly related to MRP.) On the other hand, it is obvious that many firms were
not using MRP during this period; so while it appears that MRP did not revolutionize
Figure 5.1 Inventory turns from 1960 to 2003. [Plot omitted: inventory turns (vertical axis, roughly 5.00 to 10.00) versus year (horizontal axis, 1940 to 2000).]

the efficiency of the entire manufacturing sector, these figures alone do not make a clear
statement about MRP’s effectiveness at the individual firm level.
At the micro level, early surveys of MRP users did not paint a rosy picture either.
Booz, Allen, and Hamilton, in a 1980 survey of more than 1,100 firms, reported that
much less than 10 percent of American and European companies were able to recoup
their investment in an MRP system within 2 years (Fox 1980). In a 1982 APICS-funded
survey of 679 APICS members, only 9.5 percent regarded their companies as being class
A users (Anderson et al. 1982).1 Fully 60 percent reported their firms as being class C
or class D users. To appreciate the significance of these responses, we must note that
the respondents in this survey were all both APICS members and materials managers—
people with strong incentive to see MRP in as good a light as possible! Hence, their
pessimism is most revealing. A smaller survey of 33 MRP users in South Carolina arrived
at similar numbers concerning system effectiveness; it also reported that the eventual total
average investment in hardware, software, personnel, and training for an MRP system
was $795,000, with a standard deviation of $1,191,000 (LaForge and Sturr 1986).
Such discouraging statistics and mounting anecdotal evidence of problems led many
critics of MRP to make strongly disparaging statements. They declared MRP the “$100
billion mistake," stating that "90 percent of MRP users are unhappy" with it and that "MRP perpetuates such plant inefficiencies as high inventories" (Whiteside and Arbose 1984).
This barrage of criticism prompted the proponents of MRP to leap to its defense.
While not denying that it was far less successful than they had hoped when the MRP
crusade was first launched, they did not attribute this lack of success to the system itself.
The APICS literature (e.g., Orlicky as quoted by Latham 1981) cited a host of reasons for most MRP system failures but never questioned the system itself. John Kanet, a former materials manager for Black & Decker who wrote a glowing account of its MRP system in 1984 (Kanet 1984) but who had turned sharply critical of MRP by 1988, summarized the excuses for MRP failures as follows.
For at least ten years now, we have been hearing more and more reasons why the MRP-based
approach has not reduced inventories or improved customer service of the U.S. manufacturing
sector. First we were told that the reason MRP didn’t work was because our computer records
were not accurate. So we fixed them; MRP still didn’t work. Then we were told that our

1 The survey used four categories proposed by Oliver Wight (1981) to classify MRP systems: classes A, B, C, and D. Roughly, class A users represent firms with fully implemented, effective systems. Class B users have fully implemented but less than fully effective systems. Class C users have partially implemented, modestly effective systems. And class D users have marginal systems providing little benefit to the company.

master production schedules were not “realistic.” So we started making them realistic, but
that did not work. Next we were told that we did not have top management involvement; so
top management got involved. Finally we were told that the problem was education. So we
trained everyone and spawned the golden age of MRP-based consulting (Kanet 1988).

Because these efforts still did not make MRP effective, Kanet and many others
concluded that there must be something more fundamentally wrong with the approach.
The real reason for MRP’s inability to perform is that MRP is based on a flawed model.
As we discussed in Chapter 3, the key calculation underlying MRP is performed by using
fixed lead times to “back out” releases from due dates. These lead times are functions
only of the part number and are not affected by the status of the plant. In particular, lead
times do not consider the loading of the plant. An MRP system assumes that the time for
a part to travel through the plant is the same whether the plant is empty or overflowing
with work. As the following quote from Orlicky’s original book shows, this separation
of lead times from capacity was deliberate and basic to MRP (Orlicky 1975, 152):
An MRP system is capacity-insensitive, and properly so, as its function is to determine what
materials and components will be needed and when, in order to execute a given master
production schedule. There can be only one correct answer to that, and it cannot therefore
vary depending on what capacity does or does not exist.

But unless capacity is infinite, the time for a part to get through the plant does depend
on the loading. Since all plants have finite capacity, the fixed-lead-time assumption is
always, at best, only an approximation of reality. Moreover, because releasing jobs too
late can destroy the desired coordination of parts at assembly or cause finished products
to come out too late, there is strong incentive to inflate the MRP lead times to provide a
buffer against all the contingencies that a part may have to contend with (waiting behind
other jobs, machine outages, etc.). But inflating lead times lets more work into the plant,
increases congestion, and increases the flow time through the plant. Hence, the result is
yet more pressure to increase lead times. The net effect is that MRP, touted as a tool to
reduce inventories and improve customer service, can actually make them worse. It is
quite telling that the flaws Kanet pointed out more than 20 years ago are still present in
most MRP and ERP systems.
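To make the point concrete, the following minimal sketch (ours, not the logic of any particular commercial package; the part names, quantities, and lead times are hypothetical) performs the basic MRP calculation of netting gross requirements against on-hand inventory and backing out planned order releases by a fixed lead time. Notice that the loading of the plant never enters the computation.

    from datetime import date, timedelta

    # Hypothetical planning data: fixed lead times (in days) per part,
    # specified independently of how heavily the plant is loaded.
    LEAD_TIME_DAYS = {"gearbox": 10, "housing": 5}
    ON_HAND = {"gearbox": 20, "housing": 0}

    def plan_releases(part, gross_requirements):
        """Basic MRP logic: net requirements, then offset due dates by a fixed lead time.

        gross_requirements: list of (due_date, quantity) pairs, earliest first.
        Returns a list of (release_date, quantity) planned order releases.
        """
        available = ON_HAND[part]
        offset = timedelta(days=LEAD_TIME_DAYS[part])
        releases = []
        for due_date, quantity in gross_requirements:
            net = max(quantity - available, 0)      # net against current inventory
            available = max(available - quantity, 0)
            if net > 0:
                # Capacity-insensitive step: release date = due date - fixed lead time,
                # regardless of how much other work is competing for the same machines.
                releases.append((due_date - offset, net))
        return releases

    if __name__ == "__main__":
        demands = [(date(2024, 6, 10), 50), (date(2024, 6, 24), 50)]
        for release_date, quantity in plan_releases("gearbox", demands):
            print(f"Release {quantity} gearboxes on {release_date}")

In a congested plant the actual flow time will exceed the fixed lead time, which is exactly what tempts planners to inflate the lead-time parameters and set off the feedback loop described above.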
This flaw in MRP’s underlying model is so simple, so obvious, that it seems incred-
ible we could have come this far without noticing (or at least worrying about) it. Indeed,
it is a case in point of the dangers of allowing the mathematical model to replace the
empirical scientific model in manufacturing management. But to some extent, we must
admit that we have the benefit of 20/20 hindsight. Viewed historically, MRP makes per-
fect sense and is, in some ways, the quintessential American production control system.
When scientific management met the computer, MRP was the result. Unfortunately, the
computer that scientific management met was the computer of the 1960s and had very
limited power. Consequently, MRP is poorly suited to the environment and computers
of the 21st century.
As we pointed out in Chapter 3, the original, laudable goal of MRP was to explicitly
examine dependent demand. The alternative to treating all demands as independent and using reorder point methods for lower-level inventories required performing a bill-of-material explosion and netting demands against current inventories, both tedious data processing tasks in systems with complicated bills of material. Hence there was strong
incentive to computerize.
The state-of-the-art in computer technology in the mid-1960s, however, was an
IBM 360 that used “core” memory with each bit represented by a magnetic doughnut
about the size of the letter o on this page. When the IBM 370 was introduced in 1971,
integrated circuits replaced the core memory. At that time a 1/4-inch-square chip would typically hold less than 1,000 characters. As late as 1979, a mainframe computer with
more than 1,000,000 bytes of RAM was a large machine. With such limited memory,
performing all the MRP processing in RAM was out of the question. The only hope for
realistically sized systems was to make MRP transaction-based. That is, individual part
records would be brought in from a storage medium (probably tape), processed, and then
written back to storage. As we pointed out in Chapter 3, the MRP logic is exquisitely
adapted to a transaction-based system.
Thus, if one views the goal as explicitly addressing dependent demands in a
transaction-based environment, MRP is not an unreasonable solution. The hope of the
MRP proponents was that through careful attention to inputs, control, and special cir-
cumstances (e.g., expediting), the flaw of the underlying model could be overcome and
MRP would be recognized as representing a substantial improvement over older pro-
duction control methods. This was exactly the intent of MRP II modules like CRP and
RCCP. Unfortunately, these were far from successful, and MRP II was roundly criticized
in the 1980s, while Japanese firms were strikingly successful by going back to methods
resembling the old reorder point approach. JIT advocates were quick to sound the death
knell of MRP.
But MRP did not die, largely because MRP II handled important nonproduction
data maintenance and transaction processing functions, jobs that were not replaced by
JIT. So MRP persisted into the 1990s, expanded in scope to include other business
functions and multiple facilities, and was rechristened ERP. Simultaneously, computer
technology advanced to the point where the transaction-based restriction of the old MRP
was no longer necessary. A host of independent companies emerged in the 1990s offering
various types of finite-capacity schedulers to replace basic MRP calculations. However,
because these were ad hoc and varied, many industrial users were reluctant to adopt
them until they were offered as parts of comprehensive ERP packages. As a result, a
host of alliances, licensing agreements, and other arrangements between ERP vendors
and application software developers emerged.
There is much that is positive about the recent evolution of ERP systems. The
integration and connectivity they provide make more data available to decision makers in
a more timely fashion than ever before. Finite-capacity scheduling modules are promising
as replacements for old MRP logic in some environments. However, as we will discuss in
Chapter 15, scheduling problems are notoriously difficult. It is not reasonable to expect
a uniform solution for all environments. For this reason, ERP vendors are beginning
to customize their offerings according to “best practices” in various industries. But the
resulting systems are more monolithic than ever, often requiring firms to restructure their
businesses to comply with the software. Although many firms, conditioned by the business process re-engineering (BPR) movement to think in revolutionary terms, seem willing to do this, it may be a dangerous
trend. The more firms conform to a uniform standard in the structure of their operations
management, the less they will be able to use it as a strategic weapon, and the more
vulnerable they will be to creative innovators in the future.
By the late 1990s, more cracks had begun to appear in the ERP landscape. In 1999, SAP AG, the largest ERP supplier in the world, was stung by two well-publicized implementation glitches: one at Whirlpool Corp., which delayed appliance shipments to many customers, and one at Hershey Foods Corp., which left the shelves of candy retailers empty just before Halloween. Meanwhile, several companies decided to pull the plug on SAP installations costing between $100 million and $250 million (Boudette
1999). Finally, a survey by Meta Group of 63 companies revealed an average return on
investment of a negative $1.5 million for an ERP installation (Stedman 1999).

However, once the millennium passed without any major software issues, enterprise resources planning almost immediately became passé. Software vendors swiftly relabeled their ERP offerings as supply chain management (SCM) packages.
The speed and readiness with which they made the switch leads one to wonder just how
much the software was actually altered. Nevertheless, SCM became the hot new fad in
manufacturing. Many firms created VP level positions to “manage” their supply chains.
Legions of consultants offered support. And software sales continued.
As with past manufacturing “revolutions,” hopes were high for SCM. The popular
press was full of articles prophesying that SCM would revolutionize industry by coordi-
nating suppliers, customers, and production facilities to reduce inventories and improve
customer service. But SCM, just like past revolutions, failed to deliver on the promises made on its behalf, and ERP returned alongside SCM, thereby further confusing the issue. During the 1990s and early 2000s, a number of surveys showed improvement in the implementation of information technology, although performance remained below expectations.
A survey conducted by the Standish Group in 1995, now known as the "Chaos Report," showed that over 31 percent of all IT projects were canceled before completion and that almost 53 percent cost 189 percent of their original estimates. Moreover, only 16 percent of software projects were completed on time and on budget. More recently, the Robbins-Gioia Survey (2001) indicated that 51
percent of the companies surveyed thought their ERP implementation was unsuccessful.
The main problem appears to be the inherent complexity of such systems and the lack
of understanding of exactly what the system is supposed to do.
The well-publicized spate of finger-pointing and recriminations that flared between
Nike and i2 in 2001 illustrated the frustration experienced by managers who felt they had
again been denied a solution to their long-standing coordination problem (Koch 2007).
The situation even reached the attention of the top levels of government, as evidenced
when Federal Reserve Chairman Alan Greenspan testified to Congress in mid-February 2001 that an unanticipated buildup in inventories had occurred, in spite of the advances in supply-chain automation.
Despite all this, the original insight of MRP—that independent and dependent de-
mands should be treated differently—remains fundamental. The hierarchical planning
structure central to the construct of MRP II (and to ERP/SCM systems as well) provides
coordination and a logical structure for maintaining and sharing data. However, making
effective use of the data processing power and scheduling sophistication promised by
ERP/SCM systems of the future will require tailoring the information system to a firm’s
business needs, and not the other way around. This would require a sound understanding
of core processes and the effects of specific planning and control decisions on them.
The evolution from MRP to ERP/SCM represented an impressive series of advances
in information technology. However, as in scientific management, the flaw in MRP has
always been the lack of an accurate scientific model of the underlying material flow
processes. The ultimate success of the SCM movement will depend far more on the
modeling progress it promotes than on additional advances in information technology.

5.5 Other “Scientific” Approaches


Neither operations research nor MRP fully succeeded in reaching a systematic solution
to the manufacturing management problem raised by the original scientific management
movement. But this did not put a stop to the search. Over the years a variety of movements
have attempted to assume the mantle of scientific management, with varying degrees of
success. Below we summarize some of the ones that have had a significant impact on current
practice.

5.5.1 Business Process Re-engineering


At its core, business process re-engineering (BPR) was systems analysis applied to
management.2 But in keeping with the American proclivity for the big and the bold,
emphasis was placed heavily on radical change. Leading proponents of BPR defined
it as “the fundamental rethinking and radical redesign of business processes to achieve
dramatic improvements in critical, contemporary measures of performance, such as cost,
quality, service, and speed” (Hammer and Champy 1993). Because most of the redesign
schemes spawned by BPR involved eliminating jobs, it soon became synonymous with
downsizing.
As a buzzword, BPR fell out of favor as quickly as it arose. By the late 1990s it
had been banished from most corporate vocabularies. Still, it left a lasting legacy. The
layoffs of the 1990s, during bad times and good, certainly had a positive effect on labor
productivity. But because the layoffs affected both labor and middle management to an
unprecedented degree, they undermined worker loyalty.3 Moreover, BPR represented
an extreme backlash against the placid stability of the golden era of the 1960s; radical
change was not only no longer feared, it was sought. This paved the way for more
revolutions. For example, it is hard to imagine management embracing the ERP systems
of the late 1990s, which required fundamental restructuring of processes to fit software
as opposed to the other way around, without first having been conditioned by BPR to
think in revolutionary terms.

5.5.2 Lean Manufacturing


Although BPR disappeared quickly, the systems analysis perspective it engendered lived
on. Lean manufacturing practitioners evolved a version of systems analysis—value
stream mapping (VSM)—that had much in common with BPR. Value stream map-
ping is really a variation of an older procedure known as “process flow mapping,” which
provides a visual representation of the process to be studied or improved. VSM starts
by making a “current state map” that identifies a characteristic flow of parts through the
plant and then compares the “value-added” time with the total cycle time of the part. The
results are often stunning, with value-added time being less than 1 percent of the total.
The practitioner then goes on to create a “future state map,” showing how the system
will look once all the improvements are in place. However, while VSM is a useful first
step, it is not a fully developed systems analysis paradigm for the following reasons:
1. There is no exact definition of “value-added,” an omission that frequently leads
to a great deal of wasted time spent arguing about what is value added and what
is not.
2. Value-added time is frequently so short that it does not offer a reasonable target
for cycle time.

2 Systems analysis is a rational means–ends approach to problem solving in which actions are evaluated in terms of specific objectives. We discuss it in greater detail in Chapter 6.
3 The enormous popularity of "Dilbert" cartoons, which poke freewheeling fun at BPR and other management fads, tapped into the growing sense of alienation felt by the workforce in corporate America. Ironically, some companies actually responded by banning them from office cubicles.

3. VSM does not provide a means for diagnosing the causes of long cycle times.
4. Even though VSM collects capacity and demand data, it does not compute utilization and therefore never discovers when a process is facing demand beyond its capacity (a simple check is sketched just after this list).
5. There is no feasibility check for the “future state.”
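A minimal version of the missing check, stated in generic terms rather than VSM notation, is to compare the demand rate with the effective capacity at each process:

$$ u = \frac{\text{demand rate}}{\text{effective capacity}} $$

If u is greater than or equal to 1 at any process, the mapped future state is infeasible as drawn; and, as Part II develops, even values of u approaching 1 imply long queues and cycle times once variability is taken into account.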
This is not to say that VSM is not useful—it is. Any improvement project should start
with an assessment of the current state. Hundreds of companies have discovered signif-
icant opportunities by simply using VSM to carefully examine their current processes.
However, once the low-hanging fruit has been harvested, VSM does not offer a means
for identifying further improvements. Taking this further step requires a model that sys-
tematically connects policies to performance. Nothing in the current lean manufacturing
movement is poised to provide such a model.

5.5.3 Six Sigma


Since it purports to be based on the scientific method, Six Sigma has roots in the sci-
entific management movement. But, as we noted earlier, Six Sigma emphasizes only
the experimentation aspect of the scientific method. Lacking an underlying model, it
treats each production system as a “black box” and does not retain the discoveries ob-
tained during experiments. In this regard, Six Sigma specialists are like scientists who
throw away their old data each time they make new observations. Of course, this kind
of approach is ludicrous, from a scientific perspective. But it remains the norm, because
manufacturing practitioners are not scientists. So, instead of promulgating their results
and slowly building an edifice of theory, they view each situation as new and unique.
Little is retained from previous experiences and even less is shared between companies.
Nonetheless, Six Sigma, which started as a means for identifying and reducing
variability in processes, now offers its own systems analysis approach, and it uses some
very sophisticated methods. The method is called DMAIC:
Define the problem.
Measure performance and quantify the problem.
Analyze using mostly statistical techniques.
Improve the situation.
Control, as in “keep the process in control.”
The DMAIC approach is extremely useful in addressing problems that Six Sigma was
originally designed to handle. It shows its quality control roots in the analyze and control
steps. However, like value stream mapping, DMAIC is not a substitute for a general
systems analysis paradigm.
To see why, consider applying the DMAIC approach to the problem of reducing
cycle time (the time for a job to go through the factory). Once some data have been
collected on current cycle times and a quantitative goal has been set, the process calls for
the analyze step to begin. But invariably someone on the improvement team will claim
that the measure step is not complete because insufficient data have been collected.
The reason this occurs is that measuring and analyzing always go together, and cannot
be arbitrarily separated into completely different processes. It simply isn’t possible to
collect all necessary data up front. Instead, measurement and analysis proceed iteratively, as each analyze step leads to more questions.

Another problem with the analyze step in DMAIC is that it typically uses statistical
methods to determine cause and effect. The authors have observed the dangers of this
while teaching basic Factory Physics to a group of Six Sigma black belt candidates,
who had just undergone 2 weeks of conventional Six Sigma training, including design
of experiments and analysis of variance. The group spent 4 days studying the basic
behavior of manufacturing systems (i.e., Part II of this book), and then, on the last day
of the course, the members were assigned a case study in reducing cycle time. In spite
of our efforts to show them the root causes of long cycle times through Factory Physics
theory, every member of the class resorted to designing an experiment to determine the
cause of long cycle times. Evidently, strict devotion to the DMAIC approach had prevented the group from seeing why cycle times were long.
Ironically, it appears that BPR and Six Sigma, movements that have their roots in
the ultrarational field of systems analysis, may actually have left many manufacturing
professionals more vulnerable to irrational buzzword fads than ever before.

5.6 Where to from Here?


In Part I of the book, and particularly in this last chapter, we have made the following
points:
1. Scientific management has become mathematical management in that it has
reduced the manufacturing management problem to analytically tractable
subproblems, often making use of unrealistic modeling assumptions that
provide little useful guidance from an overall perspective. The mathematical
methods and some of the original insights can certainly still be useful, but we
need a better framework for applying these in the context of an overall business
strategy.
2. Information technology without a suitable model of the flow process is
fundamentally flawed. For instance, MRP is flawed not in the details, but in the
basics, because it uses an infinite-capacity, fixed-lead-time approach to control
work releases. “Patches,” such as MRP II and CRP, may improve the system in
small ways, but they cannot rectify this basic problem. Moreover, the
fundamental flaw of MRP has been carried over into ERP and SCM.
3. Other “scientific” approaches, such as business process re-engineering, have
typically exhorted managers to rethink their processes without providing a
framework for doing so. These approaches ended up becoming too closely identified with exclusively radical solutions and downsizing to provide a balanced alternative.
4. Lean manufacturing provides many useful tools for improving operations.
But the methodology is one of imitation and, as such, does not offer a general
approach for improving any operation, nor does it offer a comprehensive
systems analysis paradigm. Value stream mapping provides a good start in that
direction but does not go far enough to provide solutions to many real problems
(e.g., practical lot sizing and stock setting).
5. Six Sigma is based on the scientific method, particularly the experimentation
step. However, Six Sigma does not provide a paradigm for organizing and
retaining the knowledge obtained from experiments. Moreover, while the
DMAIC procedure is useful in determining causes and implementing variability
controls, it does not offer a comprehensive tool set.

In practice, it is true that the morphing of JIT and TQM into lean and Six Sigma
has resulted in more robust methods, but what we are left with are still sets of tech-
niques rather than comprehensive systems. Despite many grandiose claims, none of
these methods has reduced Toyota’s successes of the 1980s to a cookbook of rules.
The many creative insights of the JIT and TQM founders need a subtler, more com-
plex framework to be fully understood and applied correctly. They need a science of
manufacturing.
The historical trends contain many of the components needed in a science of manu-
facturing, but not all of them. Indeed, if scientific management had included more science
with its math, if information technology had added a scientific model to its data models, if
“re-engineering” had relied on a full-fledged systems paradigm rather than an overhyped
downsizing program, if lean manufacturing had developed an understanding that went
beyond previous experience, and if Six Sigma had constructed a paradigm on which to hang the results of its experiments, any of these movements might have led to the emergence
of a true science. But since all of them seem to have been stymied by periodic detours
into “buzzword blitzes,” it is left to the manufacturing research and practice community
to step back and apply genuine science to the vital problem of managing manufacturing
systems.
We have no illusions that this will be easy. Americans seem to have a stubborn
faith in the eventual emergence of a swift and permanent solution to the manufactur-
ing problem. Witness the famous economist John Kenneth Galbraith’s echoing of Lord
Kelvin’s overconfidence about physics, Galbraith stating that we had “solved the prob-
lem of production” and could move on to other things (Galbraith 1958). Even though
it quickly became apparent that the production problem was far from solved, faith in
the possibility of a quick fix remained unshaken. Each successive approach to manu-
facturing management—scientific management, operations research, MRP, JIT, TQM,
BPR, ERP, SCM, lean, Six Sigma, and so on—has been sold as the solution. Each one
has disappointed us, but we continue to look for the elusive “technological silver bullet”
which will save American manufacturing.
Manufacturing is complex, large scale, multiobjective, rapidly changing, and highly
competitive. There cannot be a simple, uniform solution that will work well across
a spectrum of manufacturing environments. Moreover, even if a firm can come up
with a system that performs extremely well today, failure to continue improving is
an invitation to be overtaken by the competition. Ultimately, each firm must depend
on its own resources to develop an effective manufacturing strategy, support it with
appropriate policies and procedures, and continue to improve these over time. As
global competition intensifies, the extent to which a firm does this will become not
just a matter of profitability, but one of survival. Factory Physics provides a frame-
work for understanding these core processes and the relationships between performance
measures.
Will we learn from history or will we continue to be diverted by the allure of a
quick fix? Will we bring ideas together within a rational scientific framework or will we
simply continue to play the game of making up new buzzwords? Or, worse, will we grow tired and begin simply to concatenate existing buzzwords, as in the case of "lean sigma"?
We believe that the era of science in manufacturing is at hand. As we will show
in Part II, many basic principles governing the behavior of manufacturing systems are
known. If we build on these to provide a framework for the many ideas inherent in the
management trends we have discussed above, we may finally realize the promise of
scientific management: namely, scientific management based on science!

Discussion Points
1. Consider the following quote referring to the two-machine minimize-makespan scheduling
problem:
At this time, it appears that one research paper (that by Johnson) set a wave of research in
motion that devoured scores of person-years of research time on an intractable problem
of little practical consequence. (Dudek, Panwalkar, and Smith 1992)
(a) Why would academics work on such a problem?
(b) Why would academic journals publish such research?
(c) Why didn’t industry practitioners either redirect academic research or develop effective
scheduling tools on their own?
2. Consider the following quotes:
An MRP system is capacity-insensitive, and properly so, as its function is to determine
what materials and components will be needed and when, in order to execute a given
master production schedule. There can be only one correct answer to that, and it cannot
therefore vary depending on what capacity does or does not exist. (Orlicky 1975)
For at least ten years now, we have been hearing more and more reasons why the MRP-
based approach has not reduced inventories or improved customer service of the U.S.
manufacturing sector. First we were told that the reason MRP didn’t work was because
our computer records were not accurate. So we fixed them; MRP still didn’t work. Then
we were told that our master production schedules were not “realistic.” So we started
making them realistic, but that did not work. Next we were told that we did not have
top management involvement; so top management got involved. Finally we were told
that the problem was education. So we trained everyone and spawned the golden age of
MRP-based consulting. (Kanet 1988)
(a) Who is right? Is MRP fundamentally flawed, or can its basic paradigm be made to work?
(b) What types of environment are best suited to MRP?
(c) What approaches can you think of to make an MRP system account for finite capacity?
(d) Suggest opportunities for integrating JIT concepts into an MRP system.

Study Questions
1. Why have relatively few CEOs of American manufacturing firms come from the
manufacturing function, as opposed to finance or accounting, in the past half century? What
factors may be changing this situation now?
2. In what way did the American faith in the scientific method contribute to the failure to
develop effective OM tools?
3. What was the role of the computer in the evolution of MRP?
4. In which of the following situations would you expect MRP to work well? To work poorly?
(a) A fabrication plant operating at less than 80 percent of capacity with relatively stable
demand
(b) A fabrication plant operating at less than 80 percent of capacity with extremely lumpy
demand
(c) A fabrication plant operating at more than 95 percent of capacity with relatively stable
demand
(d) A fabrication plant operating at more than 95 percent of capacity with extremely lumpy
demand
(e) An assembly plant that uses all purchased parts and highly flexible labor (i.e., so that
effective capacity can be adjusted over a wide range)
(f) An assembly plant that uses all purchased parts and fixed labor (i.e., capacity) running at
more than 95 percent of capacity
5. Could a breakthrough in scheduling technology make ERP the perfect production control
system and render all JIT ideas unnecessary? Why or why not?
6. What is the difference between romantic and pragmatic JIT? How may this distinction have
impeded the effectiveness of JIT in America?
7. Name some JIT terms that may have served to cause confusion in America. Why might such
terms be perfectly understandable to the Japanese but confusing to Americans?
8. How long did it take Toyota to reduce setups from three hours to three minutes? How
frequently have you observed this kind of diligence to a low-level operational detail in an
American manufacturing organization?
9. How would history have been different if Taiichi Ohno had chosen to benchmark Toyota
against the American auto companies of the 1960s instead of using other sources (e.g.,
Toyota Spinning and Weaving Company, American supermarkets, and the ideas of Henry
Ford expressed in the 1920s)? What implications does this have for the value of
benchmarking in the modern environment of global competition?
