What Went Wrong?: 5.1 The Problem
Our task now is not to fix the blame for the past, but to fix the course for the future.
John F. Kennedy
Chapter 5 What Went Wrong? 177
approach was very successful, it was oversold to the point where TQM reduced
quality to a clichéd buzzword, causing a backlash that almost drove the quality
trend from the corporate landscape during the early 1990s. However, it was
resurrected in the latter half of the decade, under the banner of Six Sigma, when
ABB, Allied Signal, General Electric, and others began
implementing a methodology that had originally been developed at Motorola in
the mid-1980s. Six Sigma reconnected the quality trend to its statistical origins,
by focusing on variance identification and reduction. But it has gone on to
evolve into a broader systems analysis framework that seeks to place quality in
the larger context of overall efficiency.
3. Integration trend: This approach began in the 1960s with the introduction of
the computer into manufacturing. Although large-scale organizations had
clearly been “integrated” prior to this, it was only with the advent of material
requirements planning (MRP) that formal integration of flows and functions
became a focus for improving productivity and profitability. In the 1970s, MRP
enjoyed the publicity spotlight that the American Production and Inventory
Control Society (APICS) generated with its “MRP crusade.” But, though
software packages continued to sell, the success of JIT in the 1980s temporarily
eclipsed MRP as a movement. It re-emerged in the 1990s, in expanded form, as
enterprise resources planning (ERP). ERP promised to use the emerging
technology of distributed processing to integrate virtually all business processes
in a single IT application. A “re-engineering” fad and fears of the Y2K bug
fueled demand for these extremely expensive systems. However, once the
millennium passed without disaster, companies started to realize that ERP had
been oversold. Undaunted, ERP vendors (sometimes literally overnight)
transformed their ERP software into supply chain management (SCM) systems.
They continue to offer their vision of computer integration, now under the
banner of SCM.
Today, the current incarnations of these trends all have strong followings. Lean, Six
Sigma, and SCM are each being sold as the solution to productivity problems in both
manufacturing and services, as well as in other sectors such as construction and health
care. The resulting competition between different approaches has fostered excessive
zeal on the part of their proponents. But, as history has shown repeatedly, excessive
zeal tends to result in programs being oversold to the point at which they degenerate into
meaningless buzzwords. The periodic tendency of these trends to descend into marketing
hype is one sign that there is a problem in the state of manufacturing management. The
separation of the three trends into competing systems, a state of affairs that keeps good
ideas from being disseminated, is another indication of this problem.
The crisis can be traced to a single common root—the lack of a scientific framework
for manufacturing management. The resulting confusions are numerous:
1. There is no universally accepted definition of the problem of improving
productivity.
2. There is no uniform standard for evaluating competing policies.
3. There is little understanding of the relations between financial measures (such
as profit and return on investment) and manufacturing measures (such as work
in process, cycle time, output rates, capacity, utilization, variability, etc.).
4. There is little understanding of how the manufacturing measures relate to each
other.
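Such relations do in fact exist; one of the simplest, central to Part II of this book, is Little's Law, which links work in process (WIP), throughput (TH), and cycle time (CT) as WIP = TH × CT. A minimal sketch, with hypothetical numbers:

```python
# Little's Law: WIP = TH * CT (long-run averages).
# Given any two of the three measures, the third is determined.

def throughput(wip, cycle_time):
    """Average output rate implied by Little's Law."""
    return wip / cycle_time

# A line holding 120 jobs with a 40-hour average cycle time
# must be producing 3 jobs per hour on average.
print(throughput(wip=120, cycle_time=40))  # 3.0
```

The point is not the arithmetic but that the three measures cannot be set independently of one another.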
178 Part I The Lessons of History
Hence, there is no system for distinguishing the good from the bad in terms of concepts,
methods, and strategies. Moreover, since the barriers to entry are so low, the manufactur-
ing field is awash with consultants—and since the above relations are so little understood,
the consultants are left relying on buzzwords and “war stories” about how such-and-
such technique worked at company so-and-so. Thus, companies inevitably fall back on
management by imitation and cheerleading. It is no wonder that most of the people
working “in the trenches” sigh knowingly at each new fad, well aware that “this too shall
pass.”
But the practice of operations management need not be dominated by warring per-
sonalities battling over ill-defined buzzwords. In other fields this is not the case. For
instance, consider an engineer designing a circuit with a circuit breaker. The voltage
is 120 volts and the resistance is 60 ohms. What size breaker is needed? The answer
is simple. Since the engineer knows that V = I R (this is Ohm’s law, a fundamental
relationship of electrical science) the breaker must be 120 volts/60 ohms = 2 amperes.
No buzzwords or “experts” required.
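The engineer's calculation can be written out directly; this sketch simply restates the arithmetic from the text:

```python
# Ohm's law: V = I * R, so the current the breaker must carry is I = V / R.

def breaker_current(voltage, resistance):
    """Current drawn by the circuit, in amperes."""
    return voltage / resistance

print(breaker_current(voltage=120, resistance=60))  # 2.0 amperes
```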
Of course, this is a very simple case. But even in more complex environments,
a scientific framework can help to guide decisions. For example, building a highway
bridge is not an exact science, but there are many well-known principles that can be
used to make the process smoother and less risky. We know that concrete is very
strong in compression but not in tension. Steel, on the other hand, is strong in
tension but not in compression. Consequently, long ago, engineers
designed “reinforced concrete,” something that combines the best of both materials.
Likewise, we know that all stresses must be supported and that there can be no net
torques. Therefore, for a short bridge, a good design is a curved beam that transmits
the stress of the middle of the bridge to the supporting structures. A longer bridge may
require a larger superstructure or even a suspension mechanism. Insights like these do
not completely specify how a given bridge should be built, but they provide a sufficient
body of knowledge to prevent civil engineers from resorting to faddish trends.
In manufacturing, the lack of an accepted set of principles has opened the way
for “experts” to compete for the attention of managers solely on the basis of rhetoric
and personality. In this environment, catchy buzzwords and hyperambitious book titles
are the surest way to attract the attention of busy professionals. For example, Orlicky
(1975) subtitled his book on MRP “The New Way of Life in Production and Inventory
Management,” Shingo (1985) titled his book on SMED (single-minute exchange of die)
“A Revolution in Manufacturing” and Womack and Jones (1991) gave their book on
lean manufacturing the grand title of “The Machine That Changed the World.” While
buzzword campaigns often have good ideas behind them, they almost always oversim-
plify both problems and solutions. As a result, infatuation with the trend of the moment
quickly turns to frustration as the oversold promises fail to materialize, and the disap-
pointed practitioners move on to the next fad.
This characterization of manufacturing management thought as a procession of
trendy fads is certainly a simplification of reality, but only a slight one. Searching for the
phrase “lean manufacturing” or “lean production” on Amazon.com brings up 1,700 titles,
while “supply chain” yields 5,218 and “Six Sigma” brings up 1,484. This glut of books
is marked by volumes distinguished only for their superficial ideas and flashy writing.
While the three major trends identified above show faint signs that “buzzword man-
agement” may eventually be relegated to the manufacturing history books, they have
yet to take a clear step in that direction. In its current incarnation, each of the trends
contains fundamental flaws that prevent it from serving as a comprehensive management
framework.
First we consider the efficiency trend and lean manufacturing. Although this ap-
proach once stressed science and modeling, the current tools of lean are largely quite
simple. For instance, most lean consultants will start by drawing up an elementary value
stream map to identify the muda in the system, then project where the system could go
by preparing a future state map, and finally attempt to bridge the gap by applying a set
of rather standard kaizen events (setup reduction, “5s,” visual controls, kanban, etc.). If
the situation is similar enough to another in which these practices have already been
successfully employed, and if the practitioner is clever enough to recognize the analo-
gous behavior, such efforts are likely to be successful. But in situations that are new, an
experience-based approach like this is probably not going to be innovative enough to
yield a useful solution.
We next consider the quality trend and Six Sigma. In contrast to the efficiency
trend, which evolved from complex to simple, the quality movement has migrated from
simple to sophisticated. In the 1980s, TQM practices (e.g., “quality circles”) were often
criticized as superficial and overly simplistic. But current Six Sigma training places
heavy emphasis on the requisite statistical tools, imparting them over the course of four
36-hour-week “waves” and accompanying the teaching with meaningful improvement
projects. In addition, the architects of Six Sigma training have taken some lessons from
marketing and motivation experts and designated graduates of their programs “black
belts” and their management counterparts “champions.” These catchy labels, as well as
an across-the-board commitment from upper management to fund a massive training
initiative, have contributed much to the success of Six Sigma.
Unfortunately, while Six Sigma provides training in advanced statistics, it does not
offer education on how manufacturing systems behave. For instance, consider a plant with
high inventory levels, poor customer service, and low productivity. The Six Sigma black
belt would approach this scenario by trying to better define the problem (a laudable goal);
performing some measurements; analyzing the data using some kind of experimental
design to determine the “drivers” of the excess in inventory, the poor customer service,
and the low productivity; implementing some changes; and, finally, instituting some
controls. While this process might eventually yield good results, it would be tedious
and time-consuming. In a fast-moving, competitive industry there simply isn’t time to
rediscover the causes of generic problems like high WIP and poor customer service.
Managers and engineers need to be able to invoke known principles for addressing
these. Unfortunately, though they do indeed exist, these principles are not currently a
part of the Six Sigma methodology.
Finally, we consider the integration trend and supply chain management. Because
of its historical connection to the computer, this movement has always tended to view
manufacturing in IT terms: if we collect enough data, install enough hardware, and
implement the right software, the problem will be solved. Whether the software is called
MRP, ERP, or SCM, the focus is on providing accurate and timely information about the
status and location of orders, materials, and equipment, all in order to promote better
decision making.
Unfortunately, what is usually left unsaid when the manufacturing management
problem is described in IT terms is that any software package is going to rely on some
sort of model. And in the case of SCM, as with ERP, MRP II, and all the way back to
MRP, the underlying model has almost always been wrong! Hence, in practice, most
development effort has been directed toward ensuring that the database is not corrupted,
the transactions are consistent, and the user interface is easy (but not too easy) to use. The
validity of the underlying model has received very little scrutiny, since this is not viewed
as an IT issue. As a result, SAP R/3 uses the same underlying model for production
logistics that was used by Orlicky in the 1960s (i.e., a basic input-output model with
fixed lead times and infinite capacity). Of course, most modern SCM and ERP systems
have added functions to detect the problems caused by the flawed model, but these efforts
are really too little, too late.
Even worse than the simplistic assumptions themselves was the myopic perspective
toward lot sizing that the EOQ model promoted. By treating setups as exogenously
specified constraints to be worked around, the EOQ model and its successors blinded
operations management researchers and practitioners to the possibility of deliberately
reducing the setups. It took the Japanese, approaching the problem from an entirely
different perspective, to fully recognize the benefits of setup reduction.
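The EOQ logic criticized here is easy to state. Under the classical assumptions (constant demand rate D, fixed setup cost A, unit holding cost h), the cost-minimizing lot size is Q* = sqrt(2AD/h). A sketch with hypothetical parameter values, showing the sensitivity that the model itself never exploits:

```python
import math

def eoq(demand, setup_cost, holding_cost):
    """Classical economic order quantity: Q* = sqrt(2 * A * D / h)."""
    return math.sqrt(2 * setup_cost * demand / holding_cost)

# Treating the setup cost as a fixed constraint to work around...
print(eoq(demand=10_000, setup_cost=100.0, holding_cost=2.0))  # 1000.0
# ...hides the fact that halving the setup cost shrinks the optimal lot:
print(eoq(demand=10_000, setup_cost=50.0, holding_cost=2.0))   # about 707.1
```

In the model, setup cost appears only as an input the planner cannot touch, which is precisely the myopia the text describes.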
In Chapter 2 we discussed similar aspects of unrealism in the assumptions behind
the Wagner–Whitin, base stock, and (Q, r ) models. In each case, the flaw of the model
was not that it failed to begin with a real problem or a real insight. Each of them did. As
we have noted, the EOQ insight into the trade-off between inventory and setups sheds
light on the fundamental behavior of a plant. So does the (Q, r ) insight into the trade-off
between inventory (safety stock) and service. However, with the fascination for all things
scientific, the insights themselves were rapidly sidelined by the mathematics. Realism
was sacrificed for precision and elegance. Instead of working to broaden and deepen the
insights by studying the behavior of different types of real systems, experts turned their
focus to faster computational procedures for solving the simplified problems. Instead
of working to integrate disparate insights into a strategic framework, they concentrated
on ever smaller pieces of the overall problem in order to achieve neat mathematical
formulas. Such practices continued and flourished for decades under the heading of
operations research.
Fortunately, by the late 1980s, stiff competition from the Japanese, Germans, and
others drove home to academics and practitioners alike that a change was necessary.
Numerous distinguished voices called for a new emphasis on operations. For instance,
professors from Harvard Business School stressed the strategic importance of operational
details (Hayes, Wheelwright, and Clark 1988, 188):
Even tactical decisions like the production lot size (the number of components or subassem-
blies produced in each batch) and department layout have a significant cumulative impact on
performance characteristics. These seemingly small decisions combine to affect significantly
a factory’s ability to meet the key competitive priorities (cost, quality, delivery, flexibility,
and innovativeness) that are established by its company’s competitive strategy. Moreover,
the fabric of policies, practices, and decisions that make up the manufacturing system cannot
easily be acquired or copied. When well integrated with its hardware, a manufacturing system
can thus become a source of sustainable competitive advantage.
But while observations like these led to an increased consensus that operations
management was important, they did not yield agreement on what should be taught
or how to teach it. The old approach of presenting operations solely as a series of
mathematical models has today been widely discredited. The pure case study approach
is still in use at some business schools and may be superior because cases can provide
insights into realistic production problems. However, covering hundreds of cases in a
short time only serves to strengthen the notion that executive decisions can be made
with little or no knowledge of the fundamental operational details. The factory physics
approach in Part II is our attempt to provide both the fundamentals and an integrating
framework. In it we build upon the past insights surveyed in this section and make use of
the precision of mathematics to clarify and generalize these insights. Better understanding
builds better intuition, and good intuition is a necessity for good decision making. We are
not alone in seeking a framework for building practical operations intuition via models
(see Askin and Standridge 1993, Buzacott and Shanthikumar 1993, and Suri 1998 for
others). We take this as a hopeful sign that a new paradigm for operations education is
finally emerging.
Ironically, the main trouble with the scientific management approach is that it is
not scientific. Science involves three main activities: (1) observation of phenomena, (2)
conjecture of causes, and (3) logical deduction of other effects. The result of the proper
application of the scientific method is the creation of scientific models that provide a
better understanding of the world around us. Purely mathematical models, not grounded
in experiment, do not provide a better understanding of the world. Fortunately, we may
be turning a corner. Practitioners and researchers alike have begun to seek new methods
for understanding and controlling production systems.
[Figure: Inventory turns by year, 1940–2000 (vertical axis: inventory turns, 5.00 to 8.00).]
the efficiency of the entire manufacturing sector, these figures alone do not make a clear
statement about MRP’s effectiveness at the individual firm level.
At the micro level, early surveys of MRP users did not paint a rosy picture either.
Booz, Allen, and Hamilton, in a 1980 survey of more than 1,100 firms, reported that
much less than 10 percent of American and European companies were able to recoup
their investment in an MRP system within 2 years (Fox 1980). In a 1982 APICS-funded
survey of 679 APICS members, only 9.5 percent regarded their companies as being class
A users (Anderson et al. 1982).1 Fully 60 percent reported their firms as being class C
or class D users. To appreciate the significance of these responses, we must note that
the respondents in this survey were all both APICS members and materials managers—
people with strong incentive to see MRP in as good a light as possible! Hence, their
pessimism is most revealing. A smaller survey of 33 MRP users in South Carolina arrived
at similar numbers concerning system effectiveness; it also reported that the eventual total
average investment in hardware, software, personnel, and training for an MRP system
was $795,000, with a standard deviation of $1,191,000 (LaForge and Sturr 1986).
Such discouraging statistics and mounting anecdotal evidence of problems led many
critics of MRP to make strongly disparaging statements. They declared MRP the “$100
billion mistake,” stating that “90 percent of MRP users are unhappy” with it and that “MRP
perpetuates such plant inefficiencies as high inventories” (Whiteside and Arbose 1984).
This barrage of criticism prompted the proponents of MRP to leap to its defense.
While not denying that it was far less successful than they had hoped when the MRP
crusade was first launched, they did not attribute this lack of success to the system itself.
The APICS literature (e.g., Orlicky as quoted by Latham 1981) cited a host of reasons for
most MRP system failures but never questioned the system itself. John Kanet, a former
materials manager for Black & Decker who wrote a glowing account of its MRP system
in 1984 (Kanet 1984), but had by 1988 turned sharply critical of MRP, summarized the
excuses for MRP failures as follows.
For at least ten years now, we have been hearing more and more reasons why the MRP-based
approach has not reduced inventories or improved customer service of the U.S. manufacturing
sector. First we were told that the reason MRP didn’t work was because our computer records
were not accurate. So we fixed them; MRP still didn’t work. Then we were told that our
master production schedules were not “realistic.” So we started making them realistic, but
that did not work. Next we were told that we did not have top management involvement; so
top management got involved. Finally we were told that the problem was education. So we
trained everyone and spawned the golden age of MRP-based consulting (Kanet 1988).

1 The survey used four categories proposed by Oliver Wight (1981) to classify MRP systems: classes A,
B, C, and D. Roughly, class A users represent firms with fully implemented, effective systems. Class B users
have fully implemented but less than fully effective systems. Class C users have partially implemented,
modestly effective systems. And class D users have marginal systems providing little benefit to the company.
Because these efforts still did not make MRP effective, Kanet and many others
concluded that there must be something more fundamentally wrong with the approach.
The real reason for MRP’s inability to perform is that MRP is based on a flawed model.
As we discussed in Chapter 3, the key calculation underlying MRP is performed by using
fixed lead times to “back out” releases from due dates. These lead times are functions
only of the part number and are not affected by the status of the plant. In particular, lead
times do not consider the loading of the plant. An MRP system assumes that the time for
a part to travel through the plant is the same whether the plant is empty or overflowing
with work. As the following quote from Orlicky’s original book shows, this separation
of lead times from capacity was deliberate and basic to MRP (Orlicky 1975, 152):
An MRP system is capacity-insensitive, and properly so, as its function is to determine what
materials and components will be needed and when, in order to execute a given master
production schedule. There can be only one correct answer to that, and it cannot therefore
vary depending on what capacity does or does not exist.
But unless capacity is infinite, the time for a part to get through the plant does depend
on the loading. Since all plants have finite capacity, the fixed-lead-time assumption is
always, at best, only an approximation of reality. Moreover, because releasing jobs too
late can destroy the desired coordination of parts at assembly or cause finished products
to come out too late, there is strong incentive to inflate the MRP lead times to provide a
buffer against all the contingencies that a part may have to contend with (waiting behind
other jobs, machine outages, etc.). But inflating lead times lets more work into the plant,
increases congestion, and increases the flow time through the plant. Hence, the result is
yet more pressure to increase lead times. The net effect is that MRP, touted as a tool to
reduce inventories and improve customer service, can actually make them worse. It is
quite telling that the flaws Kanet pointed out more than 20 years ago are still present in
most MRP and ERP systems.
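The fixed-lead-time "back out" calculation at the heart of MRP can be sketched in a few lines (the part numbers and lead times here are hypothetical):

```python
# MRP backward scheduling: planned release = due date - fixed lead time.
# The lead time depends only on the part number; the calculation never
# looks at how much work is already in the plant.
LEAD_TIMES = {"gear": 3, "shaft": 5}  # planned lead times in weeks

def planned_release(part, due_week):
    """Back out the release week from the due week."""
    return due_week - LEAD_TIMES[part]

print(planned_release("gear", due_week=20))   # 17, empty plant or full
print(planned_release("shaft", due_week=20))  # 15
```

The capacity-insensitivity Orlicky defends in the quote above is visible at a glance: plant loading appears nowhere in the calculation.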
This flaw in MRP’s underlying model is so simple, so obvious, that it seems incred-
ible we could have come this far without noticing (or at least worrying about) it. Indeed,
it is a case in point of the dangers of allowing the mathematical model to replace the
empirical scientific model in manufacturing management. But to some extent, we must
admit that we have the benefit of 20/20 hindsight. Viewed historically, MRP makes
perfect sense and is, in some ways, the quintessential American production control system.
When scientific management met the computer, MRP was the result. Unfortunately, the
computer that scientific management met was the computer of the 1960s and had very
limited power. Consequently, MRP is poorly suited to the environment and computers
of the 21st century.
As we pointed out in Chapter 3, the original, laudable goal of MRP was to explicitly
examine dependent demand. The alternative, treating all demands as independent and
using reorder point methods for lower-level inventories, required performing a bill-of-
material explosion and netting demands against current inventories—both tedious data
processing tasks in systems with complicated bills of material. Hence there was strong
incentive to computerize.
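The bill-of-material explosion and netting described above can be sketched as a recursive pass over the product structure (the parts, quantities, and inventories are hypothetical):

```python
# BOM explosion with netting: net requirement = max(gross - on hand, 0),
# then the net requirement is exploded down to each component.
BOM = {"chair": {"leg": 4, "seat": 1}, "leg": {}, "seat": {}}
ON_HAND = {"chair": 10, "leg": 30, "seat": 0}

def explode(part, gross, net_reqs):
    """Accumulate net requirements for `part` and everything below it."""
    net = max(gross - ON_HAND.get(part, 0), 0)
    net_reqs[part] = net_reqs.get(part, 0) + net
    for component, qty_per in BOM.get(part, {}).items():
        explode(component, net * qty_per, net_reqs)
    return net_reqs

# Demand for 50 chairs: net 40 chairs, hence 160 legs gross (130 net)
# and 40 seats net.
print(explode("chair", 50, {}))  # {'chair': 40, 'leg': 130, 'seat': 40}
```

Tedious by hand for deep bills of material, but trivial for a computer; hence the strong incentive to computerize.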
The state-of-the-art in computer technology in the mid-1960s, however, was an
IBM 360 that used “core” memory with each bit represented by a magnetic doughnut
about the size of the letter o on this page. When the IBM 370 was introduced in 1971,
integrated circuits replaced the core memory. At that time a 1/4-inch-square chip would
typically hold less than 1,000 characters. As late as 1979, a mainframe computer with
more than 1,000,000 bytes of RAM was a large machine. With such limited memory,
performing all the MRP processing in RAM was out of the question. The only hope for
realistically sized systems was to make MRP transaction-based. That is, individual part
records would be brought in from a storage medium (probably tape), processed, and then
written back to storage. As we pointed out in Chapter 3, the MRP logic is exquisitely
adapted to a transaction-based system.
Thus, if one views the goal as explicitly addressing dependent demands in a
transaction-based environment, MRP is not an unreasonable solution. The hope of the
MRP proponents was that through careful attention to inputs, control, and special cir-
cumstances (e.g., expediting), the flaw of the underlying model could be overcome and
MRP would be recognized as representing a substantial improvement over older pro-
duction control methods. This was exactly the intent of MRP II modules like CRP and
RCCP. Unfortunately, these were far from successful, and MRP II was roundly criticized
in the 1980s, while Japanese firms were strikingly successful by going back to methods
resembling the old reorder point approach. JIT advocates were quick to sound the death
knell of MRP.
But MRP did not die, largely because MRP II handled important nonproduction
data maintenance and transaction processing functions, jobs that were not replaced by
JIT. So MRP persisted into the 1990s, expanded in scope to include other business
functions and multiple facilities, and was rechristened ERP. Simultaneously, computer
technology advanced to the point where the transaction-based restriction of the old MRP
was no longer necessary. A host of independent companies emerged in the 1990s offering
various types of finite-capacity schedulers to replace basic MRP calculations. However,
because these were ad hoc and varied, many industrial users were reluctant to adopt
them until they were offered as parts of comprehensive ERP packages. As a result, a
host of alliances, licensing agreements, and other arrangements between ERP vendors
and application software developers emerged.
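In contrast to MRP's fixed lead times, a finite-capacity scheduler lets planned completion dates respond to plant loading. A deliberately minimal greedy sketch (the capacity and job data are hypothetical, and real schedulers are far more elaborate):

```python
# Greedy finite-capacity loading: each job's hours are placed in the
# earliest weeks that still have capacity, so the planned completion
# week of a job depends on the jobs loaded before it.
CAPACITY = 40  # available hours per week

def schedule(jobs):
    """jobs: list of (name, hours) pairs. Returns {name: completion week}."""
    load = {}  # week -> hours already committed
    done = {}
    for name, hours in jobs:
        week, remaining = 1, hours
        while remaining > 0:
            used = min(CAPACITY - load.get(week, 0), remaining)
            load[week] = load.get(week, 0) + used
            remaining -= used
            if remaining > 0:
                week += 1
        done[name] = week
    return done

# Alone, job B would finish in week 1; loaded behind job A, it spills
# into week 2.
print(schedule([("A", 30), ("B", 30)]))  # {'A': 1, 'B': 2}
```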
There is much that is positive about the recent evolution of ERP systems. The
integration and connectivity they provide make more data available to decision makers in
a more timely fashion than ever before. Finite-capacity scheduling modules are promising
as replacements for old MRP logic in some environments. However, as we will discuss in
Chapter 15, scheduling problems are notoriously difficult. It is not reasonable to expect
a uniform solution for all environments. For this reason, ERP vendors are beginning
to customize their offerings according to “best practices” in various industries. But the
resulting systems are more monolithic than ever, often requiring firms to restructure their
businesses to comply with the software. Although many firms, conditioned by the BPR
movement to think in revolutionary terms, seem willing to do this, it may be a dangerous
trend. The more firms conform to a uniform standard in the structure of their operations
management, the less they will be able to use it as a strategic weapon, and the more
vulnerable they will be to creative innovators in the future.
By the late 1990s, more cracks had begun to appear in the ERP landscape. In
1999, SAP AG, the largest ERP supplier in the world, was stung by two well-publicized
implementation glitches: one at Whirlpool Corp., which delayed appliance shipments
to many customers, and one at Hershey Foods Corp., which left the shelves of candy
retailers empty just before Halloween. Meanwhile, several companies decided to
pull the plug on SAP installations costing between $100 million and $250 million (Boudette
1999). Finally, a survey by Meta Group of 63 companies revealed an average return on
investment of a negative $1.5 million for an ERP installation (Stedman 1999).
However, once the millennium passed without any major software issues, enterprise
resources planning quickly became passé. Software vendors replaced their ERP
offerings with supply chain management (SCM) packages almost overnight.
The speed and readiness with which they made the switch leads one to wonder just how
much the software was actually altered. Nevertheless, SCM became the hot new fad in
manufacturing. Many firms created VP level positions to “manage” their supply chains.
Legions of consultants offered support. And software sales continued.
As with past manufacturing “revolutions,” hopes were high for SCM. The popular
press was full of articles prophesying that SCM would revolutionize industry by coordi-
nating suppliers, customers, and production facilities to reduce inventories and improve
customer service. But SCM, just like past revolutions, failed to deliver on the promises
made on its behalf, and ERP returned alongside SCM, further confusing the
issue. During the 1990s and early 2000s, a number of surveys showed improvement in
the implementation of information technology, although performance remained below
expectations.
A survey, now known as the “Chaos Report,” conducted by the Standish Group
in 1995, showed that over 31 percent of all IT projects were canceled before
completion and that almost 53 percent cost 189 percent of their
original estimates. Moreover, only 16 percent of software projects were completed on
time and on budget. More recently, the Robbins-Gioia Survey (2001) indicated that 51
percent of the companies surveyed thought their ERP implementation was unsuccessful.
The main problem appears to be the inherent complexity of such systems and the lack
of understanding of exactly what the system is supposed to do.
The well-publicized spate of finger-pointing and recriminations that flared between
Nike and i2 in 2001 illustrated the frustration experienced by managers who felt they had
again been denied a solution to their long-standing coordination problem (Koch 2007).
The situation even reached the attention of the top levels of government, as evidenced
when Federal Reserve Chairman Alan Greenspan testified to Congress in mid-February
2001 that a buildup in inventories was anticipated, in spite of the advances in supply-chain
automation.
Despite all this, the original insight of MRP—that independent and dependent de-
mands should be treated differently—remains fundamental. The hierarchical planning
structure central to the construct of MRP II (and to ERP/SCM systems as well) provides
coordination and a logical structure for maintaining and sharing data. However, making
effective use of the data processing power and scheduling sophistication promised by
ERP/SCM systems of the future will require tailoring the information system to a firm’s
business needs, and not the other way around. This would require a sound understanding
of core processes and the effects of specific planning and control decisions on them.
The evolution from MRP to ERP/SCM represented an impressive series of advances
in information technology. However, as in scientific management, the flaw in MRP has
always been the lack of an accurate scientific model of the underlying material flow
processes. The ultimate success of the SCM movement will depend far more on the
modeling progress it promotes than on additional advances in information technology.
have attempted to assume the mantle of scientific management, with varying degrees of
success. Below we summarize some of the ones that have had a significant impact on current
practice.
2 Systems analysis is a rational means–ends approach to problem solving in which actions are evaluated
in terms of specific objectives. We discuss it in greater detail in Chapter 6.
3 The enormous popularity of “Dilbert” cartoons, which poke freewheeling fun at BPR and other
management fads, tapped into the growing sense of alienation felt by the workforce in corporate America.
Ironically, some companies actually responded by banning them from office cubicles.
3. VSM does not provide a means for diagnosing the causes of long cycle times.
4. Even though VSM collects capacity and demand data, it does not compute
utilization and therefore never discovers when a process is facing demand
beyond its capacity.
5. There is no feasibility check for the “future state.”
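Points 4 and 5 can be made concrete with a small sketch of the check that VSM omits: computing utilization from the capacity and demand data already collected, and flagging any station that cannot keep up. The station names and numbers below are hypothetical:

```python
# Utilization check absent from VSM: utilization = demand rate x process time
# divided by available machine-hours. Values >= 1 mean the station faces
# demand beyond its capacity, so the "future state" is infeasible.
# Station names and numbers are hypothetical.

stations = {
    # name: (process time per part, hours; parallel machines; hours/week each)
    "cut":      (0.05, 1, 80),
    "weld":     (0.20, 2, 80),
    "assemble": (0.08, 1, 80),
}

demand_per_week = 900  # parts

def utilization(stations, demand):
    """Return {station: utilization}; values >= 1.0 indicate overload."""
    return {
        name: demand * t / (m, hours := (machines, avail))[1] if False else
              demand * t / (machines * avail)
        for name, (t, machines, avail) in stations.items()
    }

for name, u in utilization(stations, demand_per_week).items():
    flag = "OVERLOADED" if u >= 1.0 else "ok"
    print(f"{name}: u = {u:.2f} ({flag})")
```

Here the weld station works out to a utilization of 1.125 and is flagged, even though a current-state map would record its capacity and demand data without ever performing the division.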
This is not to say that VSM is not useful—it is. Any improvement project should start
with an assessment of the current state. Hundreds of companies have discovered signif-
icant opportunities by simply using VSM to carefully examine their current processes.
However, once the low-hanging fruit has been harvested, VSM does not offer a means
for identifying further improvements. Taking this further step requires a model that sys-
tematically connects policies to performance. Nothing in the current lean manufacturing
movement is poised to provide such a model.
Another problem with the analyze step in DMAIC is that it typically uses statistical
methods to determine cause and effect. The authors have observed the dangers of this
while teaching basic Factory Physics to a group of Six Sigma black belt candidates,
who had just undergone 2 weeks of conventional Six Sigma training, including design
of experiments and analysis of variance. The group spent 4 days studying the basic
behavior of manufacturing systems (i.e., Part II of this book), and then, on the last day
of the course, the members were assigned a case study in reducing cycle time. In spite
of our efforts to show them the root causes of long cycle times through Factory Physics
theory, every member of the class resorted to designing an experiment to determine the
cause of long cycle times. Evidently, strict devotion to the DMAIC approach had blinded
the group to why cycle times were long.
Ironically, it appears that BPR and Six Sigma, movements that have their roots in
the ultrarational field of systems analysis, may actually have left many manufacturing
professionals more vulnerable to irrational buzzword fads than ever before.
In practice, it is true that the morphing of JIT and TQM into lean and Six Sigma
has resulted in more robust methods, but what we are left with are still sets of tech-
niques rather than comprehensive systems. Despite many grandiose claims, none of
these methods has reduced Toyota’s successes of the 1980s to a cookbook of rules.
The many creative insights of the JIT and TQM founders need a subtler, more com-
plex framework to be fully understood and applied correctly. They need a science of
manufacturing.
The historical trends contain many of the components needed in a science of manu-
facturing, but not all of them. Indeed, if scientific management had included more science
with its math, if information technology had added a scientific model to its data models, if
“re-engineering” had relied on a full-fledged systems paradigm rather than an overhyped
downsizing program, if lean manufacturing had developed an understanding that went
beyond previous experience, and if Six Sigma had constructed a paradigm on which to hang
the results of its experiments, any of these movements might have led to the emergence
of a true science. But since all of them seem to have been stymied by periodic detours
into “buzzword blitzes,” it is left to the manufacturing research and practice community
to step back and apply genuine science to the vital problem of managing manufacturing
systems.
We have no illusions that this will be easy. Americans seem to have a stubborn
faith in the eventual emergence of a swift and permanent solution to the manufacturing
problem. Witness the famous economist John Kenneth Galbraith echoing Lord Kelvin's
overconfidence about physics when he declared that we had "solved the problem of
production" and could move on to other things (Galbraith 1958). Even though
it quickly became apparent that the production problem was far from solved, faith in
the possibility of a quick fix remained unshaken. Each successive approach to manu-
facturing management—scientific management, operations research, MRP, JIT, TQM,
BPR, ERP, SCM, lean, Six Sigma, and so on—has been sold as the solution. Each one
has disappointed us, but we continue to look for the elusive “technological silver bullet”
which will save American manufacturing.
Manufacturing is complex, large scale, multiobjective, rapidly changing, and highly
competitive. There cannot be a simple, uniform solution that will work well across
a spectrum of manufacturing environments. Moreover, even if a firm can come up
with a system that performs extremely well today, failure to continue improving is
an invitation to be overtaken by the competition. Ultimately, each firm must depend
on its own resources to develop an effective manufacturing strategy, support it with
appropriate policies and procedures, and continue to improve these over time. As
global competition intensifies, the extent to which a firm does this will become not
just a matter of profitability, but one of survival. Factory Physics provides a frame-
work for understanding these core processes and the relationships between performance
measures.
Will we learn from history or will we continue to be diverted by the allure of a
quick fix? Will we bring ideas together within a rational scientific framework, or will we
simply continue to play the game of making up new buzzwords? Or, worse, will we grow
tired and begin simply to concatenate existing buzzwords, as in the case of "lean
sigma"?
We believe that the era of science in manufacturing is at hand. As we will show
in Part II, many basic principles governing the behavior of manufacturing systems are
known. If we build on these to provide a framework for the many ideas inherent in the
management trends we have discussed above, we may finally realize the promise of
scientific management: namely, scientific management based on science!
192 Part I The Lessons of History
Discussion Points
1. Consider the following quote referring to the two-machine minimize-makespan scheduling
problem:
At this time, it appears that one research paper (that by Johnson) set a wave of research in
motion that devoured scores of person-years of research time on an intractable problem
of little practical consequence. (Dudek, Panwalkar, and Smith 1992)
(a) Why would academics work on such a problem?
(b) Why would academic journals publish such research?
(c) Why didn’t industry practitioners either redirect academic research or develop effective
scheduling tools on their own?
2. Consider the following quotes:
An MRP system is capacity-insensitive, and properly so, as its function is to determine
what materials and components will be needed and when, in order to execute a given
master production schedule. There can be only one correct answer to that, and it cannot
therefore vary depending on what capacity does or does not exist. (Orlicky 1975)
For at least ten years now, we have been hearing more and more reasons why the MRP-
based approach has not reduced inventories or improved customer service of the U.S.
manufacturing sector. First we were told that the reason MRP didn’t work was because
our computer records were not accurate. So we fixed them; MRP still didn’t work. Then
we were told that our master production schedules were not “realistic.” So we started
making them realistic, but that did not work. Next we were told that we did not have
top management involvement; so top management got involved. Finally we were told
that the problem was education. So we trained everyone and spawned the golden age of
MRP-based consulting. (Kanet 1988)
(a) Who is right? Is MRP fundamentally flawed, or can its basic paradigm be made to work?
(b) What types of environment are best suited to MRP?
(c) What approaches can you think of to make an MRP system account for finite capacity?
(d) Suggest opportunities for integrating JIT concepts into an MRP system.
Study Questions
1. Why have relatively few CEOs of American manufacturing firms come from the
manufacturing function, as opposed to finance or accounting, in the past half century? What
factors may be changing this situation now?
2. In what way did the American faith in the scientific method contribute to the failure to
develop effective OM tools?
3. What was the role of the computer in the evolution of MRP?
4. In which of the following situations would you expect MRP to work well? To work poorly?
(a) A fabrication plant operating at less than 80 percent of capacity with relatively stable
demand
(b) A fabrication plant operating at less than 80 percent of capacity with extremely lumpy
demand
(c) A fabrication plant operating at more than 95 percent of capacity with relatively stable
demand
(d) A fabrication plant operating at more than 95 percent of capacity with extremely lumpy
demand
(e) An assembly plant that uses all purchased parts and highly flexible labor (i.e., so that
effective capacity can be adjusted over a wide range)
(f) An assembly plant that uses all purchased parts and fixed labor (i.e., capacity) running at
more than 95 percent of capacity
5. Could a breakthrough in scheduling technology make ERP the perfect production control
system and render all JIT ideas unnecessary? Why or why not?
6. What is the difference between romantic and pragmatic JIT? How may this distinction have
impeded the effectiveness of JIT in America?
7. Name some JIT terms that may have served to cause confusion in America. Why might such
terms be perfectly understandable to the Japanese but confusing to Americans?
8. How long did it take Toyota to reduce setups from three hours to three minutes? How
frequently have you observed this kind of diligence to a low-level operational detail in an
American manufacturing organization?
9. How would history have been different if Taiichi Ohno had chosen to benchmark Toyota
against the American auto companies of the 1960s instead of using other sources (e.g.,
Toyota Spinning and Weaving Company, American supermarkets, and the ideas of Henry
Ford expressed in the 1920s)? What implications does this have for the value of
benchmarking in the modern environment of global competition?