
Published by

World Scientific Publishing Co. Pte. Ltd.


5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

Library of Congress Cataloging-in-Publication Data


Names: Lei, Lei, 1954– author.
Title: Managing supply chain operations / by Lei Lei (Rutgers Business School - Rutgers,
The State University of New Jersey), Leonardo DeCandia (Rutgers Business School - Rutgers,
The State University of New Jersey, Johnson & Johnson), Rosa Oppenheim (Rutgers Business
School - Rutgers, The State University of New Jersey), Yao Zhao (Rutgers Business School -
Rutgers, The State University of New Jersey).
Description: New Jersey : World Scientific, [2017] | Includes bibliographical references.
Identifiers: LCCN 2016036915 | ISBN 9789813108790 (hc : alk. paper)
Subjects: LCSH: Business logistics--Management.
Classification: LCC HD38.5 .L45 2017 | DDC 658.5--dc23
LC record available at https://lccn.loc.gov/2016036915

British Library Cataloguing-in-Publication Data


A catalogue record for this book is available from the British Library.

Copyright © 2017 by World Scientific Publishing Co. Pte. Ltd.


All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means,
electronic or mechanical, including photocopying, recording or any information storage and
retrieval system now known or to be invented, without written permission from the publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright
Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to
photocopy is not required from the publisher.

Desk Editors: Suraj Kumar/Sandhya Venkatesh

Typeset by Stallion Press


Email: enquiries@stallionpress.com

Printed in Singapore
Dedicated to our families:

T.J., Alycia, and Robert Wang


Susan, Christopher, and Alexandra DeCandia
Alan Oppenheim, Adam Oppenheim and Seanna Brown,
David Oppenheim and Emily Boling,
and Maddie, Ellie, Lilly, and Edie Oppenheim
The Zhao Family
Preface

In developing a text that differentiates itself in the constantly evolving science of Supply Chain Management, the authors were challenged to strike an ideal balance between the hard science and the practical application of that science for the future supply chain professional. Our task as collaborators was to design and build a bridge for the modern student, who lives in a world of evolving technologies and megatrends: to introduce and translate the science of supply chain for the future in order to solve real-world problems and create innovative solutions. For us, it was an effort of passion and inspiration,
testing our abilities to break down complex issues into a handful of simple
core levers of focus. In developing this text, the authors debated,
researched, investigated, and leveraged personal experiences in all the key
principles of supply chain management to define a text that would represent
the critical core components for any supply chain application in any
industry. Our unique approach is the product of combining decades of
academic experience with industry experience, introducing the student to an
understanding of the basic toolbox of the supply chain professional, which
should and must include the following fundamental practices that are
covered in the text:
• Forecasting and Demand Management
• Sales and Operations Planning
• Inventory Management
• Project Scheduling and Management
• Service Management
In addition to a basic understanding of these practices, we have
included in each chapter technical case studies along with clinical case
discussions. The goal and expectation of this collaborative effort is to
introduce knowledge and insights to build the student’s ability to translate
fundamental supply chain concepts and challenges into quantitative and
qualitative opportunities. Our hope is that the student will internalize the
fundamental capabilities of analysis and problem-solving along with the
leadership skills of influencing and shaping strategies. The future will
continue to be driven by the needs and expectations of a global customer,
whose unique requirements can only be serviced by those organizations
with excellence in supply chain management. The winners will be those
with a supply chain that exhibits the ability to provide customization,
innovation, speed, and agility in the most cost-effective way. These winning
organizations will employ those supply chain professionals who have
mastered the science of the fundamental practices with those leadership
qualities to shape and lead future supply chain solutions in a complex
world.
This industry/academia collaboration is the result of a 5-year project in
which we authors challenged ourselves to ensure that we stayed true to our
goals and intentions. Surviving a few job changes and many time conflicts,
we remained patiently committed to our goal of producing the best product
of our collective efforts.
Lei Lei, Ph.D.
Dean, Rutgers Business School
Professor and Former Chair
Supply Chain Management, Rutgers Business School
Leonardo (Len) DeCandia, P.E.
Chief Procurement Officer
Johnson & Johnson
Founder and Chair
Supply Chain Industry Board, Rutgers Business School
Rosa Oppenheim, Ph.D.
RBS Distinguished Service Professor and Vice-Chair
Supply Chain Management, Rutgers Business School
Yao Zhao, Ph.D.
Professor
Supply Chain Management, Rutgers Business School
About the Authors

Lei Lei, Ph.D.


Dean, Rutgers Business School
Professor and Former Chair, Supply Chain Management,
Rutgers Business School
Lei Lei is Dean of Rutgers Business School, and
previously served as Founding Chair and Professor of
the Department of Supply Chain Management and
Marketing Sciences. She received her Ph.D. degree in Industrial
Engineering from University of Wisconsin (Madison) with a minor in
Computer Sciences. Her research expertise includes supply chain network
design and optimization, operations planning, scheduling, and process
recovery after disruptions, demand–supply planning, and resource
allocation optimization. She has over 50 refereed publications in leading
operations management journals and has served as a member of several
editorial boards and as co-guest editor and Associate Editor, as well as a
member of the National Science Foundation review panel. She has served
as dissertation adviser for numerous Ph.D. students, and has been Principal
Investigator on major government-funded research projects. She is a
recipient of numerous best teacher awards at Rutgers Business School; was
named one of the two Most Popular Business Professors at Rutgers
University by Business Week; and was selected as one of the Top 50 Women
in Business in 2015 by NJBIZ.

Leonardo (Len) DeCandia, P.E.


Chief Procurement Officer — Johnson & Johnson
Founder and Chair, Supply Chain Industry Board,
Rutgers Business School
Leonardo (Len) DeCandia has over 30 years of
experience in the Pharmaceutical/Health Care/Consumer
Products Industries with expertise in engineering,
manufacturing, procurement and end to end supply chain
management. He is currently the Chief Procurement
Officer for the Johnson & Johnson Family of Companies
based in New Brunswick, New Jersey, and has previously held supply chain
leadership roles at Roche Pharmaceuticals and the Estée Lauder
Companies. He has spent the majority of his career leading the deployment
of advanced practices in supply chain and organizational transformations.
He is a founder and has served as Chair of the Rutgers Business School
Supply Chain Management Center for many years and is currently a
member of the Rutgers Business School Advisory Board, where he is also
an adjunct professor and teaches a graduate course in Innovation
Management. He resides with his family in Princeton, NJ; he is also a
Licensed Professional Engineer in the State of New Jersey.

Rosa Oppenheim, Ph.D.


Professor and Vice-Chair, Supply Chain Management,
Rutgers Business School
Rosa Oppenheim is a Professor of Supply Chain
Management and Vice Chair of the Department of
Supply Chain Management. She was previously
Executive Vice Dean and Acting Dean at Rutgers
Business School, and Interim Chair of the Department of Supply Chain
Management. She received her Ph.D. degree in Operations Research,
Master of Science degree in Operations Research, and Bachelor of Science
degree in Chemical Engineering from Polytechnic Institute of Brooklyn
(now NYU Tandon School of Engineering); she also holds Master of Arts
degrees in English and in Liberal Studies, both from Rutgers University.
Her research interests are in statistical process control, total quality
management, and six sigma management. She is the author of several texts
and many articles, and has conducted training programs for major
corporations. She is the recipient of numerous awards for excellence in
teaching and was recently named Rutgers Business School Distinguished
Service Professor.

Yao Zhao, Ph.D.


Professor, Supply Chain Management, Rutgers Business
School
Yao Zhao is a Professor and former Vice Chair of Supply
Chain Management at Rutgers Business School. He
received his Ph.D. degree from Northwestern University
in Industrial Engineering and Management Sciences.
Prior to joining Rutgers University, he taught at Northwestern University
School of Engineering, and has been a Visiting Scholar at the
Massachusetts Institute of Technology Operations Research Center and a
Visiting Professor at Duke Fuqua School of Business. His research focuses
on supply chain analytics, supply chain and project management interfaces,
applications to the pharmaceutical industry, and socially responsible
operations. He has published extensively in leading operations management
journals, has served as dissertation adviser for numerous Ph.D. students,
and is the recipient of the National Science Foundation Career Award for
project-driven supply chains. He has also received numerous teaching,
research, and service awards at Rutgers Business School, and has
collaborated with many corporations in supply chain management
applications and case studies.
Contents

Preface
About the Authors

1. Introduction to Supply Chain Management


1.1 Challenges of Supply Chain Management
1.2 Operations Management vs. Supply Chain Management
1.3 Major Drivers Affecting Supply Chain Performance
1.3.1 Demand forecasting
1.3.2 Demand–supply planning
1.3.3 Inventory management
1.3.4 Project management
1.3.5 Service management
Endnotes

2. Forecasting and Demand Management


2.1 Introduction to Forecasting
2.2 Fundamentals of Time Series
2.3 Models for Predicting Stationary Series
2.3.1 Arithmetic mean
2.3.2 Last period value model
2.3.3 Moving average (MA(N)) model
2.3.4 Exponential smoothing (α) model
2.4 Models for Predicting a Trend
2.4.1 Simple linear regression model
2.4.2 Holt’s trend (α, β) model
2.5 Models for Predicting a Seasonal Series
2.5.1 Naïve or regression model with seasonal adjustment
2.5.2 Winter’s (α, β, γ) model
2.6 Demand Categorization and Management Strategies
2.7 Collaborative Planning, Forecasting, and Replenishment
2.8 Case Studies
2.8.1 Stay Warm Call Center
2.8.2 Xenon Products Company
2.8.3 ACT — The demand–supply mismatch problem
2.9 Exercises
Appendix
A2.1: Derivation of Regression Coefficients for the Simple Linear
Regression Model
Endnotes

3. Sales and Operations Planning


3.1 Sales and Operations Planning in Practice
3.2 Fundamentals of Linear Programming Modeling
3.3 Modeling with Integer and Binary Variables
3.4 Using Microsoft Excel Solver for Demand–Supply Planning
3.5 Demand and Supply Planning Strategies
3.6 Case Studies
3.6.1 EnergyBoat, Inc.
3.6.2 Air Champion outsourcing
3.6.3 PowerZoom Energy Bar
3.7 Exercises
Appendix
A3.1: How to install and access Microsoft Excel Solver
A3.2: Fundamentals of LP sensitivity analysis
Endnotes

4. Inventory Management
4.1 Introduction to Inventory Management
4.2 Characteristics of an Inventory System
4.3 Economies of Scale — Cycle Stock
4.3.1 Classical EOQ model
4.3.2 The mixed SKU strategy — joint ordering strategy
4.3.3 Quantity discount model
4.3.4 EOQ model with planned shortages
4.3.5 EOQ model with finite delivery rate
4.4 Managing Uncertainty for Short Life Cycle Items
4.4.1 The Newsvendor Model
4.5 Managing Uncertainty for Durable Items — Safety Stock Model
4.5.1 The continuous-review batch size — reorder point (Q–R) model
4.5.2 The periodic-review base-stock model
4.5.3 Risk pooling effect
4.6 Case Studies — Economies of Scale — Cycle Stock
4.6.1 Office Supplies, Inc.
4.6.2 Mountain Tent Company
4.6.3 De-Icier
4.7 Case Study — Managing Uncertainty for Durable Items —
Safety Stock Model
4.7.1 ImportHome LLC
4.8 Exercises
Endnotes

5. Project Scheduling and Management


5.1 Introduction to Project Management
5.1.1 Project management — basic concepts
5.1.2 Network representation
5.2 Critical Path Method
5.3 Time–Cost Analysis
5.3.1 Crashing activity times — a linear programming model
5.3.2 Cost vs. benefit of expediting activity time(s)
5.4 Program Evaluation and Review Techniques
5.5 Human Factors
5.6 Project Management Software — Microsoft Project
5.7 Case Study — Product Launch Process
5.7.1 PDS Company
5.8 Exercises
Endnotes

6. Service Management
6.1 Introduction to Service Management
6.1.1 Service management economics
6.2 Waiting Line Management
6.2.1 Causes of congestion
6.2.2 Characteristics of waiting lines
6.2.3 M/M/s queueing models
6.2.4 Monte Carlo simulation
6.2.5 Strategies for managing waiting lines
6.3 Capacity Management
6.3.1 Strategies for capacity management
6.3.2 Quantitative tools for staff planning and scheduling
6.4 Case Studies
6.4.1 Hillcrest Bank — staffing and scheduling
6.4.2 Brier Health Systems — Centralized Customer Contact
Center
6.5 Exercises
Endnotes

Index
Chapter 1

Introduction to Supply Chain Management

Supply chains cannot tolerate even 24 hours of disruption. So if you lose your place in the supply chain because of wild behavior you could lose a lot. It would be like pouring cement down one of your oil wells.

Thomas Friedman

1.1 Challenges of Supply Chain Management


Industry megatrends such as digitalization and globalization have been forcefully transforming business processes and continually introducing new challenges for supply chain operations managers. From the Google AlphaGo program, which claimed the final game in March 2016 for a sweeping 4-1 series victory over the top South Korean Go grandmaster Lee Sedol, to Amazon’s Prime Air drones that will fly packages directly to your doorstep in 30 minutes, to the over $1.5 trillion in US trade with Asian markets in 2015, the world has become increasingly complex, and so have the challenges of managing the operations of a global supply chain. Examples of these challenges to the
strategic moves of a company include migrating from an outdated
purchasing order handling process to a new enterprise resource planning
(ERP)/SAP system to streamline the information flow along the supply
chain process; relocating manufacturing facilities in multiple offshore
locations to nearshore regions such as Mexico to reduce order lead time and
to respond more quickly to market dynamics; and expanding North
American sales to emerging markets in Brazil, Russia, India, China, and
South Africa (BRICS) to be near the customers that generate about 70% of
the world’s purchasing power. Even more challenges arise during the
implementation of such strategic moves, such as those encountered during
market demand forecasting, demand–supply planning, inventory
positioning, managing relocation projects, shipping and warehousing, and
fulfilling customer orders and services.

1.2 Operations Management vs. Supply Chain Management
Operations Management refers to the transformation process which takes a
set of inputs and converts them to outputs for the customer. It includes the
planning, scheduling, and control of the activities that transform these
inputs into finished goods and services. Figure 1.1 depicts this relationship.
In a manufacturing operation, for example, raw material arrives as an
input to the plant, perhaps by ship and then truck or train car; machines and
people unload these materials and move them to the plant floor, where other
people and equipment are used to fabricate necessary parts and assemble
the desired product. More people and machines handle packaging,
warehousing, and transportation to customers. They are managed by people
who use information systems to plan and schedule activities; this
information includes demand forecasts, intelligence on the quality and
availability of raw materials, and intangibles like legal advice and market
research. The transformation converts these various inputs into the desired
outputs: goods and services which fulfill customer needs.

Figure 1.1: The Operations Management Transformation Process.


Figure 1.2: Example of a Supply Chain.

All of these operations require coordination with other business functions, including engineering, marketing, and human resources.
However, the primary focus in operations management is on the activities
the individual organization must perform in managing its operations.
Managers must also understand how the company is linked to the
operations of its suppliers, distributors, and customers as well — this is
what we refer to as the supply chain:
A supply chain consists of all stages involved, directly or indirectly, in
fulfilling a customer request, or a network of manufacturers and service
providers who work together to convert and transform goods from the raw
materials stage through to the end user, as shown in Figure 1.2.
A supply chain is not just a product moving from supplier to manufacturer to distributor to retailer to customer; it is also the flow of information, capital, and product along both directions of the chain. Nor is there just one player at each stage: a manufacturer may receive materials from several suppliers and then supply several distributors. The objective is to maximize the overall value generated, measured by supply chain profitability, the difference between the revenue generated from the customer and the overall cost across the supply chain. Supply chain management refers specifically to an integrative approach to managing the flow of product, capital, and information between and among the stages in a supply chain to maximize total profitability.
Responsiveness refers to the ability to satisfy customer demand when and
where it occurs, resilience refers to the capability of a supply chain to
respond to unexpected disruptions, and efficiency refers to minimizing the
cost across the supply chain.
The ability to quickly identify the issues, to develop business solutions,
and to adapt to the fast changes which can occur along the supply chain
becomes increasingly important for the success of supply chain operations
managers. As an example of the need for resilience in the face of
unanticipated disasters, earthquakes in southern Japan in 2016 led to the
temporary shutdown of 26 car assembly lines due to production halts by
Toyota’s suppliers. While Toyota’s manufacturing systems are considered to be models of efficiency, its reliance on just-in-time inventory systems, holding minimal levels of inventory, means that if suppliers and/or transportation are affected by a disaster, the company can quickly run out of parts and incur very significant additional costs.1
As another illustration, consider the pharmaceutical supply chain
process shown in Figure 1.3.
The actual supply chain for pharmaceutical products is far more
complex than the one depicted in Figure 1.3; in fact, just in terms of the
Research and Development (R&D) process for certain drugs, hundreds of
operational steps may be required to produce the first-step active ingredient.
Because patient demand is difficult to predict, accurately forecasting supply requirements is a significant challenge. Additionally, demand can be impacted by seasonal influences, new innovations, and competitor activities. In terms of other supply chain complexities, this
industry has very long manufacturing cycle times, about 1 year for most
pharmaceutical products. Given the difficulty in forecasting demand and the
long production cycle time, the responsiveness of the pharmaceutical
supply chain is limited and must be managed through flexible and advanced
inventory management practices. In terms of logistics and product
deployment, another unique aspect of the pharmaceutical supply chain is
the regulatory operating environment. Each country normally has its own
requirements relating to product approval, labeling, and distribution; these
custom requirements must be factored into advanced strategies for
inventory management, warehousing, and logistics.
Figure 1.3: Pharmaceutical Supply Chain.

The most significant investment for the pharmaceutical supply chain is in capital assets. Manufacturing sites require complex and expensive
technologies, and the facilities are very advanced in isolating production
processes to ensure sterility and avoid product contamination. In optimizing
the cost, utilization, and product quality of these capital assets, most
pharmaceutical products are manufactured at one global site. The
customization of products for the many unique market needs typically
occurs at the packaging and downstream fulfillment logistics stages. Once
packaged, these products are then dedicated to the market and hence limit
the flexibility of supply. This stage of dedicated inventory for unique local
markets has a limited life due to product expiry. Product expiry also
requires advanced practices in reverse logistics relative to returns from
forward customer locations and proper product handling and disposal. This
need for custom packaging localizes the exposure to demand–supply
balancing challenges. The use of global processes in forecasting and
planning with local deployment creates opportunities for a more flexible
and responsive pharmaceutical supply chain.
New challenges and opportunities to optimize the pharmaceutical
supply chain have been introduced as industry innovations have evolved
towards greater use and deployment of biologics. These advanced products
are manufactured through a multi-step engineered process. Manufactured in
a dedicated global location, biologics usually require special handling due
to the need to maintain product stability. This has introduced the need for
better controls for temperature exposure in product storage and handling;
these high-value products are much more sensitive to extreme exposure to
temperature variation and also have shorter expiry cycles than traditional
pharmaceutical products. Managing a complex mix of traditional products
along with the introduction of new, next-generation products and
technologies has elevated the contribution to value and company
performance of the supply chain management function. The simple act of a
physician writing a prescription anywhere in the world requires one of the
most complex and sophisticated supply chains to ensure that the patient’s
needs are fulfilled.

1.3 Major Drivers Affecting Supply Chain Performance
In order to make decisions which will optimize supply chain profitability,
we must consider a number of important drivers which affect performance
across the network, including demand forecasting, demand–supply
planning, inventory management, project management, and service
management.

1.3.1 Demand forecasting


One of the most important considerations in supply chain management is
the ability to accurately forecast demand. Very simply, if we can better
forecast demand, we can better plan and execute management of the supply
chain. All decisions pertaining to the supply chain, including inventory,
production, scheduling, facility location and design, workforce planning,
outsourcing contract development and negotiation, and distribution and
marketing strategies (such as promotional sales), require accurate estimates of
customer demand in the present and future. Forecasting is always uncertain,
since we cannot know the future. However, with a combination of
qualitative techniques, based on expert judgment and opinion and past
experience, and quantitative techniques, based on rigorous mathematical
theory, as well as adequate measures of forecast accuracy and flexible
procedures which can adapt to changing conditions, we are able to establish
reliable estimates of demand which provide input to subsequent decisions
for the supply chain.
In Chapter 2, we consider various quantitative approaches for
forecasting demand, including techniques for stationary behavior (where
there is no trend), like the arithmetic mean, last period value, moving
averages, and exponential smoothing; for trends, like linear regression and
Holt’s Method; and for seasonal behavior, like seasonal indices and
Winter’s Method. We also discuss the value of collaborative planning,
forecasting, and replenishment (CPFR) in improving the accuracy of
forecasting by means of information-sharing among the trading partners
along the supply chain; this allows for continuous updating of inventory and
upcoming requirements, making the end-to-end supply chain process more
efficient. In Chapter 2, as in all chapters, we include case studies illustrating
the decision-making process and exercises demonstrating the application of
the techniques presented.
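To make two of these stationary-series techniques concrete, the following sketch computes an MA(N) forecast and a simple exponential smoothing forecast. The function names and demand figures are ours, invented for illustration; the formulas themselves are the standard ones described above.

```python
def moving_average(series, n):
    """MA(N): the forecast for the next period is the mean of the last n observations."""
    return sum(series[-n:]) / n

def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: each updated forecast blends the latest
    observation with the previous forecast, weighted by the constant alpha."""
    forecast = series[0]          # a common choice: seed with the first observation
    for demand in series:
        forecast = alpha * demand + (1 - alpha) * forecast
    return forecast

demand = [102, 98, 105, 110, 108, 112]    # illustrative monthly demand
print(moving_average(demand, 3))           # → 110.0 (mean of last 3 periods)
print(exponential_smoothing(demand, 0.3))
```

A larger alpha makes the exponential smoothing forecast react faster to recent demand but also pass more noise through, a trade-off explored in Chapter 2.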

1.3.2 Demand–supply planning


Balancing demand and supply is considered to be one of the greatest
challenges to any supply chain management team. Demand–supply planning is the business process that, given a planning horizon (e.g., 6 months) and either confirmed customer orders to be fulfilled or accurate forecasts of expected demand, along with the supplies needed to address market dynamics resulting from promotional sales and competitors’ actions over that horizon, aims to allocate supplies to meet customer demand in the most cost-effective manner subject to various practical constraints. Examples of such constraints are limitations on
production and logistics capacity, suppliers’ availability, shipping and
warehousing bottlenecks, changing priorities of the company, and customer
specifications. Effective demand–supply planning has always been a great
challenge to supply chains; even companies that have fully implemented
modern ERP systems, such as Oracle and SAP, may suffer supply chain
breakdowns and face the pressures of crisis management because of the
lack of effective demand–supply strategies. In practice, effective demand–
supply planning, evolved from traditional aggregate planning, helps to
integrate and optimize the raw material supply processes, manufacturing
operations, human resource requirements, inventory operations, and
distribution processes over the planning horizon.
An important decision support tool and mathematical optimization
technique that has been widely used to guide better demand–supply
planning of supply chains across industries is linear programming (LP).
Many real-life demand–supply planning problems can be described in terms
of LP models. While there exist many powerful computer-based LP solvers,
such as Gurobi2 and IBM ILOG CPLEX Optimization Studio,3 our focus in
Chapter 3, Sales and Operations Planning, will be on the mathematical
modeling of LP problems and their solution using the Microsoft Excel
Solver Add-in and on developing appropriate strategies for this business
process.
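Purely to preview the ingredients of such a model (decision variables, constraints, and an objective), here is a deliberately tiny, hypothetical two-period production plan. All numbers are invented, and the brute-force grid search stands in for a true LP solver such as the Excel Solver used in Chapter 3, which would handle continuous variables directly.

```python
# Hypothetical data: per-unit production cost, per-unit holding cost,
# production capacity in each period, and demand that must be met.
cost, hold = 4.0, 1.0
capacity = [100, 100]
demand = [80, 120]

best = None
# Decision variables: units produced in periods 1 and 2 (integer grid search).
for p1 in range(capacity[0] + 1):
    for p2 in range(capacity[1] + 1):
        inv1 = p1 - demand[0]            # inventory carried into period 2
        inv2 = inv1 + p2 - demand[1]     # ending inventory
        if inv1 < 0 or inv2 < 0:
            continue                     # constraint: demand met in each period
        total = cost * (p1 + p2) + hold * inv1   # objective: production + holding
        if best is None or total < best[0]:
            best = (total, p1, p2)

print(best)  # → (820.0, 100, 100): produce at capacity early to cover period-2 demand
```

Because period-2 demand exceeds period-2 capacity, the optimal plan pre-builds 20 units in period 1 and pays one period of holding cost on them, exactly the kind of trade-off an LP formalizes.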

1.3.3 Inventory management


The determination of an optimal inventory policy, or how much and how
often to order needed supplies and materials, is critical to supply chain
management. In the supply chain network, inventories are often held as
buffers: while ideally shipments arrive exactly when and where they are
needed at the required quality level, often we need to use stored inventory
to balance supply with demand. In Chapter 4, we introduce the components
of and assumptions necessary for various inventory models and address the
trade-off between economies of scale vs. inventory holding costs in
managing inventory and determining the optimal inventory policy.
These models include the economic order quantity (EOQ) model,
appropriate to long life cycle products with known constant demand, for the
cases where demand must be met when it occurs, as well as those where we
may consider the more flexible alternative of planned shortages, or
backorders, and those where a quantity discount may be offered. In cases
where multiple stock-keeping units (SKUs) are carried, joint ordering
strategies offer a cost-effective operating policy by saving fixed ordering
costs, but at the possible expense of sub-optimal ordering cycles; we
demonstrate how to determine and evaluate the relative advantage of a
mixed SKU strategy under these circumstances.
Most real supply chains involve uncertain demand. Products with a
short life cycle are those in which no replenishment is possible during the
selling season. This would include, for example, perishable items, or
holiday items like Christmas trees, Thanksgiving turkeys, Valentine’s Day
flowers, and fashion apparel. In cases where products are purchased from
low-cost manufacturing countries, transportation issues can result in
extensive lead times, making replenishment infeasible during the season.
The Newsvendor Model addresses the trade-off between lost sales and
markdown costs in determining order quantities and managing inventory in
such environments.
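The newsvendor trade-off reduces to a critical ratio of the two mismatch costs; the sketch below applies it to a small, invented set of equally likely demand scenarios (the function names and data are ours).

```python
def critical_ratio(underage, overage):
    """Newsvendor critical ratio: the optimal in-stock probability is
    Cu / (Cu + Co), where Cu is the cost per unit of lost sales and
    Co the markdown cost per leftover unit."""
    return underage / (underage + overage)

def order_quantity(demand_scenarios, underage, overage):
    """Smallest quantity whose empirical in-stock probability reaches the ratio."""
    scenarios = sorted(demand_scenarios)
    target = critical_ratio(underage, overage)
    for i, q in enumerate(scenarios, start=1):
        if i / len(scenarios) >= target:
            return q
    return scenarios[-1]

# Hypothetical: $6 lost profit per unit short, $2 loss per unmarked-down leftover.
print(critical_ratio(6, 2))                     # → 0.75
print(order_quantity([80, 90, 100, 110], 6, 2)) # → 100
```

A high lost-sale cost relative to the markdown cost pushes the critical ratio, and hence the order quantity, upward.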
Durable items, which account for a majority of retail goods and
international trade commodities, typically have a long life cycle and long
shelf-life, as well as long lead times from the time an order is placed until it
is delivered. We can use safety stock as a buffer against forecasting errors,
when actual demand exceeds the amount forecasted and planned for, or
when uncertainty in suppliers’ inventory availability results in a mismatch
between supply and demand. We consider a number of mathematical
models which balance safety stock cost with service requirements under
demand uncertainty.
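One common such model sets safety stock from a service-level factor z applied to demand variability over the lead time; the numbers below are invented, and z = 1.65 is assumed to correspond to roughly a 95% service level for normally distributed demand.

```python
from math import sqrt

def safety_stock(z, sigma_daily, lead_time_days):
    """Safety stock against demand uncertainty over the lead time:
    z * sigma_d * sqrt(L), assuming independent daily demands."""
    return z * sigma_daily * sqrt(lead_time_days)

def reorder_point(mean_daily, lead_time_days, ss):
    """Reorder point: expected lead-time demand plus safety stock."""
    return mean_daily * lead_time_days + ss

# Hypothetical: 20 units/day mean demand, std. dev. 10 units/day, 9-day lead time.
ss = safety_stock(1.65, 10, 9)
print(ss)                        # → 49.5
print(reorder_point(20, 9, ss))  # → 229.5
```

Note that safety stock grows with the square root of the lead time, which is one reason shortening lead times reduces inventory investment.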

1.3.4 Project management


While many of the drivers discussed above pertain to decision-making and
planning of repetitive operations, project management refers to strategies
and techniques for managing large-scale innovations or revisions in
organizations and processes by planning, control, and scheduling so that
deadlines are met at a minimum cost. Examples include major construction
projects like housing developments, assembly plants, and university
facilities; the design and installation of telecommunications systems,
accounting systems, and manufacturing systems; and R&D on new
products. In Chapter 5, we discuss and illustrate some of the techniques
used in these systems including human factors as well as project
management software.
Where activity times for all tasks in the project are assumed to be known, the critical path method (CPM) identifies those project activities which must be completed on time in order for the entire project to meet its
deadline. In some situations, task activity times and sequencing
requirements make it infeasible to complete the entire project by the
required deadline. In such cases, time–cost analysis (TCA) can be used to
expedite activity times, at an additional cost, in order to meet the deadline.
Two approaches are presented: an LP model for determining the optimal
allocation of additional resources to expedite selected activities; and a
technique for balancing the trade-off between the cost of expediting activity
times and the benefit of completing the project prior to the due date.
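The CPM calculation itself is a forward pass (earliest finish times) followed by a backward pass (latest finish times); activities with zero slack are critical. The following sketch runs both passes on a toy four-activity network of our own invention, and it assumes the activities are listed with every predecessor appearing before its successors.

```python
# Toy network: activity -> (duration, predecessors); data is invented.
activities = {
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
}

# Forward pass: earliest finish of each activity.
earliest = {}
for name in activities:                     # predecessors listed first (assumed)
    dur, preds = activities[name]
    earliest[name] = dur + max((earliest[p] for p in preds), default=0)

project_length = max(earliest.values())

# Backward pass: latest finish of each activity without delaying the project.
latest = {}
for name in reversed(list(activities)):
    successors = [s for s, (_, ps) in activities.items() if name in ps]
    latest[name] = min((latest[s] - activities[s][0] for s in successors),
                       default=project_length)

critical = [n for n in activities if earliest[n] == latest[n]]
print(project_length, critical)  # → 12 ['A', 'B', 'D']
```

Activity C has two days of slack, so only the path A–B–D is critical; expediting C would not shorten the project.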
For the more general case where there is uncertainty in the completion
time for project activities, the program evaluation and review technique
(PERT) provides a more generalized methodology for estimating expected
activity times based on subjective expert opinion in order to determine the
likelihood that the project can be completed within a given required
duration with a given level of certainty. Based upon a combination of
theoretical and empirical research, this allows us to more accurately plan,
control, and schedule the tasks involved.

1.3.5 Service management


The service sector represents an increasingly significant proportion of the
world economy. In the US, for example, services which did not produce
material goods (including government activities, communications,
transportation, and finance) accounted for 77.7% of the 2014 gross
domestic product (GDP); industry (including mining, manufacturing,
energy production, and construction) accounted for 20.7%; and agriculture
(including farming, fishing, and forestry) accounted for 1.6%.4
Since services cannot be “inventoried”, capacity must be matched to
demand on a real time basis, leading to challenges in service operations that
require unique strategies and techniques for management. In Chapter 6, we
consider the design and operation of service systems in terms of queueing,
or waiting line management. We identify causes of congestion in such
systems, as well as provide tools to measure waiting times and queue sizes,
and develop demand management techniques for improving critical
operating characteristics like waiting time, total time, number of customers
waiting, and staff utilization. We consider mathematical models in which
customers arrive randomly, line up for service, are served according to first-
come-first-served (FCFS) priority, and leave the system randomly; this is
typical of many retail operations, like fast food restaurants and banks. We
also discuss more general systems, in which arrivals may be scheduled or
follow some other probability distribution, service completions are not
random, and the order of service may not be FCFS; such systems can be
addressed by the use of Monte Carlo simulation. Emergency rooms and
overbooking in airline yield management are examples of service
management issues that can be solved using this technique.
Strategies for capacity management enable us to incorporate the costs
associated with mismatched supply and demand: when capacity is greater
than demand, we have the cost of underutilized capacity, such as idle staff,
equipment, or facilities; when capacity is smaller than demand, we have the
cost of overutilized capacity, such as overtime salaries, long waiting times
for customers who may opt to leave for other providers, and loss of morale
among overworked employees. Models which balance the economic trade-
off between the cost of service (or underutilization cost) and the cost of
waiting (or overutilization cost), enable us to determine the service capacity,
or number of servers, which yields a minimum total cost of operation. We
also introduce an LP model for staff planning and scheduling given
expected demand for services.
The techniques discussed in the following chapters, along with the
illustrative case studies and extensive computer output, are designed to
present a clear view of the relevant issues in supply chain management in a
variety of business environments, as well as provide techniques for
managing the interrelationships between all segments of the supply chain in
order to optimize supply chain profitability in today’s rapidly-changing
world.

Endnotes
1. “Japan Earthquakes Rattle Toyota’s Vulnerable Supply Chain,” The Wall Street Journal, April 19,
2016.
2. Gurobi Optimizer. Available at: http://www.gurobi.com/.
3. CPLEX Optimizer, IBM ILOG CPLEX Optimization Studio. Available at: http://www-
01.ibm.com/software/commerce/optimization/cplex-optimizer/.
4. “List of Countries by GDP Sector Composition,” StatisticsTimes.com, 2015. Available at:
http://statisticstimes.com/economy/countries-by-gdp-sector-composition.php.
Chapter 2

Forecasting and Demand Management

IT’S TOUGH TO MAKE PREDICTIONS, ESPECIALLY ABOUT THE FUTURE.

Yogi Berra

FORECASTING IS LIKE TRYING TO DRIVE A CAR BLINDFOLDED AND
FOLLOWING DIRECTIONS GIVEN BY A PERSON WHO IS LOOKING OUT
THE BACK WINDOW.

Anonymous

2.1 Introduction to Forecasting


In Chapter 1, we defined Supply Chain Management as an integrative
approach to maximizing value, or, more specifically, the management of
flow of product, capital, and information between and among the stages in a
supply chain to maximize total profitability. In this chapter, we will be
focusing on one of the major drivers which affect the performance of the
supply chain, forecasting demand. The importance of this is simple: if you
can better forecast demand, you can better plan and execute your supply
chain. Balancing demand and supply is considered the greatest challenge of
any supply chain management team. As a starting point, we will consider
naïve and smoothing models for forecasting stationary series (with no trend
or seasonality); simple regression models and Holt’s method for forecasting
series with a trend; and seasonal factors and Winters’ method for
forecasting series with both trend and seasonal components.
Forecasting is one of the most important tasks we undertake in
managing supply chains. A 2011 study in the UK listed demand
management and forecasting as the number two concern of supply chain
management consultants, following inventory management and planning (to
be discussed in Chapter 4).1 At a recent presentation at the Supply Chain
Directions Summit, Greg Aimi, Research Director at Gartner, listed
reducing forecast issues/better demand visibility as the third of the 10 most
important supply chain issues. Simon Bragg, of ARC Advisory Group, said
of forecasting: “‘One number planning’ is a sign of excellence. Poor
companies produce a sales forecast that is a ‘tough but achievable’ target
for the sales force. Production doesn’t believe this forecast, so it generates
its own, which usually means large batches to reduce manufacturing costs.
Logistics is left to deliver whatever sales manages to sell. Finance doesn’t
believe the production or the sales forecast, so it creates its own to manage
the cash. Excellent companies generate a single forecast, and gain
agreement, feedback, and coordination from sales, production, suppliers,
and customers.”2
Of course there is no forecasting technique that allows us to know the
future with 100% certainty. While a forecast is never perfect due to the
dynamic nature of the external business environment and marketplaces, it is
critical for all levels of functional planning, strategic planning, and
budgetary planning among the companies along a supply chain. Most
organizations operate from the principle of an annual forecasting or
budgeting cycle. The entire organization is engaged in this process, which
begins with predicting the annual sales revenues for the upcoming fiscal
year. Usually that forecast will rely on previous year sales data and then
introduce elements such as new products or services. Additionally,
discontinued products or services are factored along with competitor
activities, economic and geopolitical conditions and more predictable
internal promotional programs. Once the forecasted sales or demand figure
has been determined, the supply chain team develops the plans and
investments to support the demand plan. Supply contracts, capacity
planning and resource plans are put in place for the upcoming fiscal year.
When all this has been finalized or “frozen,” the fiscal plan is monitored
and managed dynamically throughout the year. Forecasting tools are the
critical foundation to this important process, and we will see that we have
many different models we can use, with different assumptions about the
underlying process. We will see also that these different assumptions often
result in models which produce different results for the same data or the
same systems, and how we can address this dilemma. One of the biggest
problems in forecasting is that even with good methods that are appropriate
to the data, the future doesn’t always behave like the past. For example, risk
models developed in the financial services industry resulted in tremendous
losses because the economic collapse in the first decade of the 21st century
wasn’t anticipated.3 Similarly, in the telecommunications industry,
predictions were made in 2000 of 15% growth in equipment purchases for
2001; instead, they fell 7%. Nortel predicted adding 9,600 employees
during the same period; instead 30,000 positions were cut.4 As
organizations have become more globally complex, as in the most recent
economic meltdown, very sophisticated models fail to take into account that
the future might be nothing like the past.5

The laws of forecasting


• Forecasting is always wrong.
• Shorter-term forecasting is relatively easier than longer range
forecasting.
• Predict the end product, not its components.
• Aggregate forecasting is more accurate.
• The bullwhip effect is always there: the farther away we are from the
market end product, the higher the level of variation in purchasing
orders.

Qualitative methods
Qualitative methods are used:
• When data are not available.
• When no mathematical relationship or algebraic model exists to
describe the data.
• When human expertise is important to incorporate.
Commonly used qualitative methods in industries include:
— Market Surveys
— Build-up Forecasts (for example, asking multiple channels for their
sales forecasts/predictions, and combining them)
— Historical Analogies (using historical like-patterns to predict future
product trends. For example, using black and white TV demand
patterns to predict color TV demand patterns. Especially used on new
products that may not have their own historical patterns, and on
products/services in introductory stage)
— The Delphi Method (a structured technique for reaching consensus on
a forecast by utilizing a panel of experts)

Quantitative (or objective) forecasting methods


Where we have data available, we can talk about two types of quantitative
forecasting models, time series models and causal models.
Time series models. Here, we are predicting future values of some variable
(e.g., stock price, quarterly sales of Claritin, etc.) based only on previous
values of that variable. For example, in the Stay Warm Call Center Case
(Section 2.8.1), we wish to predict the number of calls based on previous
observations of the number of calls, so the data is just the time series itself.
Causal models. Here, we are predicting future values of some variable
based on other explanatory variables. For example, we wish to predict sales
of shirts each month at a retail chain like Walmart or Target as a function of
the amount spent on advertising each month, so we need monthly data on
sales as well as advertising expenditures in prior months. Econometric
models are large-scale models which explain economic phenomena, like
Gross National Product, as a function of explanatory variables about the
economy, like the Consumer Price Index, the Dow Jones average, the rate
of inflation, etc.
In operations analysis, we are most often concerned with time series
data — predicting values of some variable, like demand, for the future. We
will focus on those techniques here.

Supply Chain Forecasting in Practice — Walmart’s Retail Link®

Walmart’s Retail Link® is the bridge between Walmart and its
vendors. More and more Walmart/Sam’s Clubs are relying on their
suppliers to manage their own business, which greatly improves the
demand–supply planning of the giant retail chain. Walmart created,
maintains and constantly enhances Retail Link® to provide its vendors
with the information needed to forecast demand for their products.
This allows the vendors to forecast which products to stock at what
time and how many. Suppose that, for example, you are a NJ
distribution and sales manager of Procter and Gamble Tide Ultra
Liquid Detergent. Part of your daily responsibility is to predict how
many gallons of Tide Ultra you will need to produce and when to ship
them to each of the nearly 60 Walmart stores in New Jersey. The use
of Walmart Retail Link® will allow you to view the historical sales of
Tide Ultra at these locations to help you plan ahead; as this data
becomes more “real time” you can better adjust your supply plans
within the execution cycle times to better control inventory levels and
minimize stock-outs.
Source:
http://www.imsresultscount.com/resultscount/2013/03/walmarts-
secret-sauce-how-the-largest-survives-thrives.html

2.2 Fundamentals of Time Series

What is a time series?


A time series is a set of data obtained by making observations at equally-
spaced points in time. The notation Yt denotes the value of the time series
variable Y at time t. In the construction of time series models, the selection
of the model form is very important; we want to determine how we select a
model form which describes the underlying patterns in the data, called the
time series components.

Time series components


A real life time series typically consists of at least one of the following:
stationary behavior, a strong trend component, and/or a seasonal
component.
Stationary time series. A stationary series refers to a stationary (horizontal)
pattern observed over time, which does not contain any obvious trend
and/or seasonal components. In Figure 2.1, showing Apple’s revenue and
net profit since the introduction of the iPhone, the net profit between the
third quarter of 2007 and the third quarter of 2009 was stationary. In a
stationary system, we have random fluctuation around a mean level, ȳ, or y-
bar. Note that while the values vary around a constant mean level, not all
the values are exactly the same. We would like a model that characterizes
the type of variation from the mean that we observe in our data. We will
consider three models for stationary data: naïve models, simple moving
average models, and exponential smoothing models.
Figure 2.1: Apple’s Revenue and Net Profit since the Introduction of the iPhone (in billions).
Source: http://revenuesandprofits.com/apple-revenues-and-profits-2000-to-2015-pre-and-post-
iphone/

Time series with a strong trend component. Often we observe a stable
pattern of growth or decline over time, which can be linear or nonlinear. For
example, in Figure 2.1, Apple’s revenue between the third quarter 2007 and
the fourth quarter of 2011 shows a steady upward trend.
An example of a series with a downward trend is shown by the median
home values in New Jersey between 2006 and 2014 in Figure 2.2.
We will apply regression analysis and Holt’s method to the analysis of
time series with a trend component.
Time series with a seasonal component. Real life data often exhibits
fluctuations that repeat during specific portions of each year (e.g., monthly
or seasonally), as shown in Figure 2.3, where total non-farm hires peak in
June and reach a minimum in December of each year.
We will apply seasonal factors to the analysis of time series with a
seasonal component.
Figure 2.2: New Jersey Median Home Values between 2005 and 2014.
Source:
http://www.nj.com/news/index.ssf/2015/09/nj_home_values_may_have_finally_bottomed_after_year
s_of_losses_map.html

Figure 2.3: Monthly Total Non-farm Hires between 2001 and 2015.

Source: https://research.stlouisfed.org/fred2/graph/?id=JTUHIL
Figure 2.4: Cyclic Pattern of a Time Series.

Cycles. Many real life time series also exhibit fluctuations reflecting long-
term business cycles and/or economic market conditions (e.g., recessions or
inflations), often in combination with trend and/or seasonality, as shown in
Figure 2.4.
Cyclical behavior refers to the longer-term fluctuations that can coexist
with trend and/or with seasonality. Unlike seasonal behavior, cyclical
behavior often moves unpredictably. During periods of economic
expansion, the business cycle lies above the trend line; during periods of
economic recession, the cycle lies below the trend line. We often develop
econometric models to describe cyclical behavior.
Before exploring the approaches for forecasting in detail, let us
examine the techniques that measure the accuracy of a forecast.

Forecasting error
Given a time series exhibiting one or more of the components described
above, the residual, or random, error is the deviation between the actual
and predicted values of the dependent variable y which remains after trend,
seasonality, and cyclical behavior has been “explained” by the time series
model. In forecasting, we are seeking the model that minimizes this residual
error, or the model which explains as much of the behavior of y as possible.
In using different forecasting methods, we want to be able to calculate some
measure of forecast accuracy — so that we can see how good a particular
method is, and also so that we can compare methods to see which model is
better. There are several measures we can use — all are a function of the
residual error term, which we want to minimize.
Notation. We will use the following algebraic notation:
Dt = Observed value of demand during time period t (t ≥ 1);
Ft = Forecast made for period t at the end of period t − 1 (given D1, D2,
… , Dt−1) = one-step-ahead forecast;
et = Forecast error = residual error = difference between forecasted value
for period t and actual demand for period t, as shown in Equation
(2.1):

et = Ft − Dt   (2.1)

A one-step-ahead forecast implies that we are forecasting demand for
period t at the end of period t−1 (that is, when we know what the last
demand value was). Multiple-step-ahead forecasts can be made as well, and
we will discuss these when we introduce some of the forecasting techniques
referred to above.
Mean absolute deviation (MAD). One measure of forecast accuracy, the
MAD, is the average of the absolute deviations between each forecasted
value and the corresponding actual value:

MAD = (1/n) Σ |Ft − Dt|   (2.2)

where the index


t = period number,
n = total number of periods,
Ft = forecast in period t,
Dt = demand in period t.
Mean squared error (MSE). Another measure of forecast accuracy, the
MSE, is the average of the squared deviations between each forecasted
value and the corresponding actual value:

MSE = (1/n) Σ (Ft − Dt)²   (2.3)
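As a minimal sketch of how Equations (2.2) and (2.3) might be computed, consider the short Python functions below; the function names and the four-period forecast and demand lists are illustrative, not from the text:

```python
def mad(forecasts, demands):
    """Mean absolute deviation: average of |F_t - D_t| over all periods."""
    errors = [abs(f - d) for f, d in zip(forecasts, demands)]
    return sum(errors) / len(errors)

def mse(forecasts, demands):
    """Mean squared error: average of (F_t - D_t)**2 over all periods."""
    errors = [(f - d) ** 2 for f, d in zip(forecasts, demands)]
    return sum(errors) / len(errors)

F = [10, 10, 10, 10]   # hypothetical forecasts for periods 1-4
D = [9, 12, 10, 11]    # hypothetical actual demands

print(mad(F, D))  # (1 + 2 + 0 + 1) / 4 = 1.0
print(mse(F, D))  # (1 + 4 + 0 + 1) / 4 = 1.5
```

Note that the MSE penalizes a single large error more heavily than the MAD does, which is why the two measures can rank forecasting methods differently.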

We will demonstrate the calculations of MAD and MSE in detail, and
will also compare the information obtained to answer the question we pose
when we analyze the Stay Warm Call Center Case in Section 2.8.1: How
can we compare different forecasting methods? However, before we begin
our detailed computational discussion, we need to consider the validity of
forecasting into the future.
Measures of forecast accuracy such as the MAD and MSE tell us how
well the forecasting model is able to describe historical values of the time
series. That is, they are a measure of the deviation between the actual, or
observed, values and the corresponding values predicted by the model. To
use the same method to forecast for the future, we must assume that the
historical pattern will continue into the future. In determining how good
these models are at predicting the future, we need to be very careful: the
forecasts we generate are based on past data — they are only valid if the
future behaves like the past. We will see later that forecasts are only valid
within the range of the data used to generate the model, which in the case of
time series data is only through the present — so forecasts into the future
will always be suspect and will require special care before taking action
based on their results.
As a dramatic example, consider the Challenger space shuttle, which
tragically exploded in 1986 after an O-ring seal failed at liftoff. It later
emerged that although the temperature on the morning of takeoff was 18°
Fahrenheit, no data had ever been collected on performance at temperatures
below 40° Fahrenheit. The strength of the O-rings and other components
was thus being assumed at a temperature outside the range of the data
previously collected.
As other examples of the danger of extrapolation outside the range of
the data, in a 2008 study of obesity, forecasts of the percentage of
overweight Americans were extrapolated from existing data, with ludicrous
results, including that in 40 years, every American would be overweight.6
Similarly, a study in which three points over 30 years were fit to a linear
model and projected 40 years into the future concluded that 13 out of every
10 Americans would no longer have landlines! The dangers of predicting
population far into the future are well described in this 2005 article.7

2.3 Models for Predicting Stationary Series

Naïve models
Naïve models are often used when no trend is apparent in the data. They are
descriptive techniques which generate only a point estimate for the forecast,
not a confidence interval reflecting a range of values and an associated level
of confidence. More sophisticated methods, like regression analysis, allow
us to specify a level of precision (the range) and reliability (the level of
confidence) for forecasts generated. The naïve models we will examine in
detail are the arithmetic mean and the last period value.

2.3.1 Arithmetic mean


Given a set of data, our first step is to plot a scatter diagram to see what the
pattern looks like. In the example in Table 2.1, we have 12 consecutive
weeks of data on demand for a commodity in units of 1,000. A scatter
diagram, or graph of demand (the dependent variable) as a function of time,
appears in Figure 2.5.
The pattern, we see, is horizontal, or stationary, so one of the naïve
models may be appropriate. The arithmetic mean model assumes that
the mean demand is the best estimate of the next value of demand and
therefore assumes that deviations between the observed demand and the
mean are random. It is calculated as:

Ft = ȳ = (D1 + D2 + ⋯ + Dn)/n   (2.4)

So ȳ = 10 and Ft = 10 for all time periods. The plot in Figure 2.6 shows the
actual vs. forecasted values.
Note that the arithmetic mean in the plot “smooths out” all fluctuations
— that is, in using the arithmetic mean as a forecast model, our assumption
is that all deviations of the actual values from the mean are random.

Table 2.1: Demand Data.

Figure 2.5: Scatter Diagram.

Figure 2.6: Arithmetic Mean Model.

2.3.2 Last period value model


When we use the arithmetic mean, we give equal weight to all past values
and we assume all deviations from the mean are due to random chance.
However, we may believe that more recent values contain more information
than earlier values, so that variations from the mean are not necessarily
random; using the last period value is another naïve model which gives full
weight only to most recent value, as shown in Equation (2.5).

Ft = Dt−1   (2.5)

The last period value assumes that the most recent value is the best estimate
of the next value. As an example, consider the 12 consecutive weeks of
demand shown earlier in Figure 2.5. Calculating the last period value
from Equation (2.5), we have the forecasts shown in Table 2.2.
In Figure 2.7, we show a graph of actual demand and forecasted
demand.
We see that the forecasted demand lags actual demand by one period.
To compare the two forecasting methods for this data, we can calculate the
MAD and MSE. Recall from Equations (2.2) and (2.3):

Table 2.2: Last Period Value.

Figure 2.7: Last Period Value Model.


Since the computation of the forecast using last period demand must begin
with the data in the first period (that is, there is no forecast generated for the
first period), for comparison purposes both the MAD and MSE calculations
for both the arithmetic mean and last period demand models are based on
the forecasted values from week 2 through week 12. For the arithmetic
mean model, Table 2.3 shows the calculation of the MAD and MSE.
Table 2.4 shows the calculation of the MAD and MSE for the last
period value forecasts.

Comparison between arithmetic mean and last period value


Since the MAD and MSE measure forecast accuracy in two different ways,
our calculation of the two measures are not comparable to each other. That
is, we can compare the MAD for two different forecasting methods, or we
can compare the MSE for two different forecasting methods, but we cannot
compare the MAD to the MSE. Note that in this example, both the MAD
(1.55 for the arithmetic mean; 2.63 for the last period value) and the MSE
(3.18 for the arithmetic mean; 9.36 for the last period value) are lower for
the arithmetic mean model, indicating that it is a better forecasting method
than the last period value model. If we examine Figure 2.6, we see that
fluctuations from the mean really do appear to be random — the data is
horizontal, or stationary, and there does not seem to be any reason to give
undue weight to the deviations between the actual values and the mean. As
we pointed out, averaging all the values “smooths” the data, eliminating
these deviations. However, the last period value gives all weight to the last
observation, hence “overreacting” to fluctuations which may, as in this case,
be random. In such cases, the arithmetic mean is a better measure, as our
calculations of the MAD and MSE have demonstrated quantitatively.
Table 2.3: Calculation of MAD and MSE for Arithmetic Mean Model.

Table 2.4: Calculation of MAD and MSE for Last Period Value Model.
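The comparison just described can be sketched in Python. The 12-week demand list below is hypothetical (the text’s Table 2.1 data is not reproduced here), but it fluctuates randomly around a mean of 10, so the same qualitative conclusion should hold:

```python
# Hypothetical stationary demand series, weeks 1-12.
demand = [9, 12, 10, 8, 9, 11, 10, 12, 9, 10, 11, 9]

# As in the text, compare forecasts for weeks 2 through 12.
mean_forecasts = [sum(demand) / len(demand)] * (len(demand) - 1)
last_period_forecasts = demand[:-1]   # F_t = D_{t-1}, Equation (2.5)
actuals = demand[1:]

def mad(F, D):
    """Mean absolute deviation, as in Equation (2.2)."""
    return sum(abs(f - d) for f, d in zip(F, D)) / len(D)

def mse(F, D):
    """Mean squared error, as in Equation (2.3)."""
    return sum((f - d) ** 2 for f, d in zip(F, D)) / len(D)

# On truly stationary data, the arithmetic mean smooths the random
# noise and scores better on both measures.
print(mad(mean_forecasts, actuals), mad(last_period_forecasts, actuals))
print(mse(mean_forecasts, actuals), mse(last_period_forecasts, actuals))
```

Running this on the hypothetical series gives a lower MAD and a lower MSE for the arithmetic mean model, mirroring the comparison in the text.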

So for data that is actually stationary and fluctuates randomly around
its arithmetic mean, the arithmetic mean will always be a better forecasting
model than the last period value. Suppose, instead, that we are measuring
sales over time and during the period over which we are collecting data, we
institute a new advertising promotion, which results in a shift in the mean,
as shown in Figure 2.8.
If we calculate the arithmetic mean, we will give equal weight to all
values, including the older, pre-promotion, no-longer-relevant, data. In this
case, the last period value would be a better forecasting method. The same
is true when the data is non-stationary — that is, when there is a trend or
other non-random pattern. A simple example is weather forecasts. If we
want to predict tomorrow’s temperature, using today’s temperature will
clearly provide a more accurate forecast than the arithmetic average of all
daily temperatures throughout the year.

Figure 2.8: Shift in the Mean.

So for non-stationary data such as a trend, a shift in the mean, or other
non-horizontal patterns which do not fluctuate randomly around an
arithmetic mean, the arithmetic mean, or average of all the historical data, is
much slower in reflecting a shift to a new level, as in our example in Figure
2.8; the last period value is a better (but not necessarily optimal)
forecasting model for data that has a trend or other pattern, or horizontal
data shifting to a new level.
Generally, as we’ll see when we discuss regression analysis, the
arithmetic mean is used primarily as a basis for comparison with other
methods. Occam’s Razor (also known as the law of parsimony, or the law of
succinctness) is a principle that can be summarized as “the simplest
explanation is the best one.” If we have two methods that yield comparable
forecasts, the method that makes the fewer assumptions is preferable. So in
applying more complex forecast methods, we will want to know if the
results are actually better than using the simpler model, the arithmetic
mean, to forecast.
2.3.3 Moving average (MA(N)) model
We have seen that where the data values are truly stationary, the best
forecast model is one which “smooths” out the random fluctuations. Moving
average and exponential smoothing models provide stable forecasts where
there is a shift in the mean, such as the example of the advertising
promotion initiated during the period over which the data is collected, but
they are not designed to capture trend, seasonal, or cyclical behavior.
Recall, Ft denotes the forecast made for period t at the end of period
t−1 (given D1, D2, . . . , Dt−1), or the one-step-ahead forecast. A moving
average of order N is the arithmetic average of the N most recent time series
values.

Ft = (Dt−1 + Dt−2 + ⋯ + Dt−N)/N   (2.6)

That is, the arithmetic mean of the N most recent observations is used
as the forecast for the next period, giving zero weight to previous
observations. Recall our earlier example, in Table 2.1, showing horizontal
data over a 12-week period.

The scatter diagram in Figure 2.5 showed that the data is horizontal, so
using a 4-week moving average, we can find the one-step-ahead forecasts
for weeks 5–12, using Equation (2.6):

For a 4-week moving average, N = 4, so we take the average of the first
four demands for the one-step-ahead forecast of demand in week 5:
For week 6, the one-step-ahead forecast becomes:

Table 2.5: 4-Week Moving Average (MA(4)) Forecasts.

Note that in this particular example, we are dropping a 9 (D1) and adding a
9 (D5), so the numerical value for F6 is identical to that of F5. Similarly, for
week 7, the one-step-ahead forecast is:

Table 2.5 shows the 4-week moving average values for weeks 5–12 and the
associated errors for each forecast, as calculated from Equation (2.1).
Recall from Equation (2.2) that the MAD is calculated as the average of
the absolute errors |Ft − Dt| over the forecasted periods.
Recall from Tables 2.3 and 2.4 that the MAD for the last period value
was 2.63 and the MAD for the arithmetic mean forecast was 1.55; the 4-
week moving average forecast has an MAD in between the two. Not
surprisingly, since this is truly horizontal data, the arithmetic mean is the
best, and simplest, forecasting model. If we look at a graph of the actual vs.
forecasted values, as shown in Figure 2.9, we see that the moving average
has indeed “smoothed out” the fluctuations, yielding a series with much less
“up and down” behavior.

Figure 2.9: 4-Week Moving Average (MA(4)) Model.

To predict future demand for week 13, we use Equation (2.6) again:

However, this is only a valid and reliable predictor if we assume that the
future will behave like the past. While this may be accurate over the short
term, it is important to realize that this is not always the case. In a 2005
study of predicting hurricanes, the best forecast was found to be a 5-year
moving average.8
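As a sketch, the one-step-ahead MA(N) forecast of Equation (2.6) can be written as a short Python function; the weekly demand list below is hypothetical, for illustration only:

```python
def moving_average_forecast(demands, N):
    """Return one-step-ahead MA(N) forecasts for periods N+1 .. n+1.

    The last element is the forecast for the first future period,
    computed from the N most recent observations.
    """
    return [sum(demands[t - N:t]) / N for t in range(N, len(demands) + 1)]

demand = [9, 12, 10, 8, 9, 11, 10, 12, 9, 10, 11, 9]   # weeks 1-12
ma4 = moving_average_forecast(demand, 4)

# ma4[0] is the forecast for week 5: (9 + 12 + 10 + 8) / 4 = 9.75;
# ma4[-1] is the forecast for week 13, the average of weeks 9-12.
print(ma4[0], ma4[-1])
```

Because the series is assumed stationary, a forecast for week 14 or beyond made at the end of week 12 would simply reuse the last value in the list.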

Multiple-step-ahead moving average


If we are interested in forecasting beyond one-step-ahead (for example, if
we wish to forecast for week 14 based on the 12 weeks of data), because the
data is assumed to be stationary, the moving average forecast made after
week 12 for any future week will be the same. So the forecast for week 14
will be the same numerical value as for week 13 (and for week 15 and for
week 100):
That is, the multiple-step-ahead and one-step-ahead forecasts are
numerically identical, but clearly the one-step-ahead forecast will be most
accurate since it is closer to the actual time at which data was collected.

Moving average — Choice of N


Let us consider what happens if we change the period of the moving
average forecast. For the same data, using a 6-week moving average, we
can find the one-step-ahead forecast for weeks 7–12. From Equation (2.6):

So N = 6 and

Table 2.6 shows the calculated MA(6) forecasts and the error terms. Figure
2.10 shows a graph of the actual vs. forecasted values; note that because
this forecast averages a larger number of demand values for each forecast,
the data is smoothed out even more than in the 4-week moving average
calculation graphed in Figure 2.9.
From this, we can calculate the MAD as before from Equation (2.2).

Note that this MAD is approximately the same as that calculated for the
MA(4) forecasts. So how do we choose N? In general, N is chosen to be
large enough to include enough observations to smooth out random
fluctuations, but small enough not to give weight to irrelevant past
information. Hence, the more weight we would like to give to more recent
demands, the smaller N should be; correspondingly, the more past values
are considered relevant, the larger N should be. So while the data is
assumed to be stationary (so all values would be relevant and N should be
large), we may feel that the system is changing, in which case we would
want to ignore past data (and N should be small).

Table 2.6: 6-Week Moving Average (MA(6)) Forecasts.

Figure 2.10: 6-Week Moving Average (MA(6)) Model.

To illustrate, consider the example in Table 2.7, showing eight periods of demand data, and three-period moving average forecasts for periods 4–8.

Recall our earlier example in Figure 2.11, in which the data reflected
sales during a time period over which we institute a new advertising
promotion, resulting in a shift in the mean. In this case, a lower N would
detect the shift more quickly; since it uses only recent values, it would
detect the shift N periods after it occurs.
That is, smaller N’s will track shifts in the level of a time series more
quickly, while larger N’s are more effective in smoothing out random
fluctuations over time. In other words, a smaller N reacts more violently to
fluctuations, while a larger N smooths out those fluctuations. This may be
illustrated by considering the two limiting cases, or the smallest N (N = 1)
vs. the largest N (N = the number of data points in the time series). When N
= 1, the MA(1) forecast is calculated by averaging only the last observed value, which is simply that value itself, so Equation (2.6) is identical to Equation (2.5), and the MA(1) forecast is the same as the last period value forecast. As we have already
pointed out, this forecast responds violently to random fluctuation, and does
not smooth the data.

Table 2.7: 3-Period Moving Average (MA(3)) Forecasts.

When N equals the number of data points in the time series, the moving
average forecast is calculated by taking the average of all the observed
values, which is exactly the same as calculating the arithmetic mean, so
Equation (2.6) is identical to Equation (2.4), and the moving average
forecast is the same as the arithmetic mean forecast. As we have seen, in
this case, all random fluctuations are smoothed out. For actual data, we can
use a trial and error procedure to determine what value of N will be optimal;
that is, what value of N will yield forecasts with minimum forecast error, as
measured by either the MAD or MSE. These calculations are, of course,
more easily done by computer, as we describe in Section 2.3.4.
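The trial-and-error choice of N can be sketched in Python as follows (the demand values below are hypothetical, not those of the text's Table 2.1):

```python
def moving_average_forecasts(demand, n):
    """One-step-ahead MA(N) forecasts (Equation (2.6)): the forecast for
    period t is the average of the N most recent demands. Returns the
    forecasts for periods n+1 through len(demand)+1."""
    return [sum(demand[t - n:t]) / n for t in range(n, len(demand) + 1)]

def mad(actuals, forecasts):
    """Mean absolute deviation between actual and forecasted values."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

# Hypothetical 12 weeks of stationary demand.
demand = [9, 8, 9, 12, 9, 12, 11, 7, 13, 9, 11, 10]

for n in (4, 6):
    f = moving_average_forecasts(demand, n)
    # f[:-1] pairs with the actual demands for periods n+1 .. 12; the last
    # entry of f is the one-step-ahead forecast for week 13.
    print(n, round(mad(demand[n:], f[:-1]), 3))
```

Trying each candidate N and keeping the one with the smallest MAD (or MSE) is exactly the trial-and-error procedure described above.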

2.3.4 Exponential smoothing (α) model


In an exponential smoothing model, we use a series of decreasing weights,
giving the most weight to the most recent observation. Unlike the moving
average model, every data point, not just the last N data points, gets a
weight, with more recent points getting more weight and the weights
decreasing (geometrically) with increasing age of the data, as we move
back in time. We can demonstrate this algebraically by considering
Equation (2.7), the exponential smoothing model.

Ft = αDt−1 + (1 − α)Ft−1,    (2.7)

where
Ft = forecast of the time series for period t,
Dt−1 = actual value of the time series for period t − 1,
α = smoothing constant alpha (0 < α ≤ 1).
That is, the forecast at time t is a weighted average of the forecast at
time t − 1 and the value of demand at time t − 1. To show that Ft is actually
a weighted function of all previous values of demand, we can expand
Equation (2.7), substituting for Ft−1 the value given by Equation (2.7) and
then algebraically rearranging:

Ft = αDt−1 + (1 − α)[αDt−2 + (1 − α)Ft−2] = αDt−1 + α(1 − α)Dt−2 + (1 − α)²Ft−2.
We can continue substituting — next, for Ft−2, then for Ft−3, etc., so that the
expression for Ft becomes the sum of decreasing fractions times each of the
prior demands, as shown in Equation (2.8).

Ft = αDt−1 + α(1 − α)Dt−2 + α(1 − α)²Dt−3 + α(1 − α)³Dt−4 + ⋯,    (2.8)

where the weights multiplying each demand are decreasing non-negative fractions which sum to 1. So the most recent demand gets the most weight,
but all prior demands are included, at decreasing weights. We can further
rearrange the terms in Equation (2.7), recalling from Equation (2.1) that the
error, et, is the difference between the forecasted and the actual values in
week t. Thus, Equation (2.7) becomes:

Ft = Ft−1 − αet−1.
That is, the forecast in week t can be expressed as the forecast in week t − 1
minus some fraction (alpha) times the error in week t − 1. This means that
if we forecast high in week t − 1, the error et−1 will be positive, so the
forecast in week t (the following week) will be somewhat lower (i.e., we
will “smooth” the values). Similarly, if we forecast low in week t − 1, the
error et−1 will be negative and our forecast in week t will be somewhat
higher. Computationally, we calculate the forecast not from Equation (2.8),
which would require using all the previous demand values, but more
efficiently by using the original relationship as expressed in Equation (2.7),
where Ft is a function only of Dt−1 and Ft−1. However, while this is
computationally simpler, since there are only two terms to be calculated,
our problem is getting the recursive process started: at time t = 1, we cannot
calculate F1 from Equation (2.7) because there are no values for D0 and F0.
Without a value for F1, we cannot compute the next period forecast, F2.
Hence, one way to start the calculation is by assuming F1 = D1.
(Sometimes, when we have relevant information on past history, we may
choose to use a different value for F1, as described below.)
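The equivalence of the recursive form of Equation (2.7) and the error-correction form described above can be checked numerically; a quick sketch with made-up demand values:

```python
# Two algebraically equivalent forms of the exponential smoothing update:
#   F_t = alpha * D_{t-1} + (1 - alpha) * F_{t-1}          (Equation (2.7))
#   F_t = F_{t-1} - alpha * e_{t-1}, where e_{t-1} = F_{t-1} - D_{t-1}
alpha = 0.1
demand = [9, 8, 9, 12, 9]            # made-up demand values
f_direct = f_error = demand[0]       # seed both recursions with F1 = D1
for d in demand:
    f_direct = alpha * d + (1 - alpha) * f_direct
    f_error = f_error - alpha * (f_error - d)   # error-correction form
```

Both recursions produce the same sequence of forecasts, up to floating-point rounding.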
We can summarize the computational procedure as follows:
• Select α between 0 and 1;
• Assume F1 = D1 (or, alternatively, given sufficient historical data, if we
wish to avoid placing too much weight on the early data, we can start
with the average of early demands as the initial forecast);
• Use the recursive formula in Equation (2.7), Ft = αDt−1 + (1 − α)Ft−1, to calculate the one-step-ahead forecasts.
Recalling our earlier example of 12 weeks of demand values in Table
2.1, we have already seen, in Tables 2.3 and 2.4, the calculation of the 4-
week and 6-week, respectively, moving average forecasts. For this
stationary data, let us now illustrate the calculation of an exponentially
smoothed forecast, using α = 0.1. We begin by assuming F1 = D1 = 9 and
then use Equation (2.7) to generate F2, F3, etc.
Ft = αDt−1 + (1 − α)Ft−1,
F1 = D1 = 9,
F2 = (0.1)(9) + (0.9)(9) = 9,
F3 = (0.1)(8) + (0.9)(9) = 8.9,
F4 = (0.1)(9) + (0.9)(8.9) = 8.91.
Note how close (smooth) these forecasts are; we will see shortly that this is
because of the value of alpha that we chose.
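As a check, the recursion is easy to program; a minimal Python sketch, seeded with F1 = D1:

```python
def exp_smoothing_forecasts(demand, alpha):
    """One-step-ahead exponential smoothing forecasts (Equation (2.7)),
    seeded with F1 = D1. Returns [F1, ..., F_{n+1}]; the last entry is
    the forecast for the first unobserved period."""
    forecasts = [demand[0]]                                # F1 = D1
    for d in demand:
        # F_t = alpha * D_{t-1} + (1 - alpha) * F_{t-1}
        forecasts.append(alpha * d + (1 - alpha) * forecasts[-1])
    return forecasts
```

With the first three demands of the example (9, 8, 9) and α = 0.1, this reproduces F1 = 9, F2 = 9, F3 = 8.9, and F4 = 8.91.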
In Table 2.8, we show the exponentially smoothed forecasts and error
terms, calculated from Equation (2.1), and Figure 2.11 shows a graph of the actual vs. forecasted values.

Table 2.8: Exponential Smoothing Forecasts (α = 0.1).

Figure 2.11: Exponential Smoothing Model (α = 0.1).


We can now calculate the MAD as a measure of forecast accuracy from
Equation (2.2).

To predict future demand, we must remember that, for reliable results, we are assuming that the future will behave like the past. Under this assumption, we can use Equation (2.7) to generate the next forecast.

Multiple-step-ahead exponential smoothing


Just as in the moving average method, if we are interested in forecasting
beyond one-step-ahead (for example, if we wish to forecast for week 14
based on the 12 weeks of data), because the data is assumed to be
stationary, the exponentially smoothed forecast made after week 12 for any
future week will be the same. So the forecast for week 14 will be the same
numerical value as for week 13 (and for week 15 and for week 100).

That is, the multiple-step-ahead and one-step-ahead forecasts are numerically identical, but clearly the one-step-ahead forecast will be most accurate since it is closer to the actual time at which data was collected.
If the data is not actually stationary, the multiple-step-ahead forecasts
will not be accurate (that is, planning for holiday demand in June by
assuming stationary behavior ignores the seasonal component and will
hence be invalid). We will see later in this chapter how we incorporate
trends and seasonality.

Exponential smoothing — Choice of α


We saw earlier that the choice of N in a moving average model reflected the
weight given to recent data. Similarly, in an exponentially smoothed model,
large values of α give greater weight to more recent data (like small values
of N in the moving average model, in which only the most recent N data
values are used to calculate the forecast), and hence exhibit greater
sensitivity to variation. That is, using large values of α yields forecasts
which react quickly to shifts in the demand pattern, but exhibit more
variation (less smoothing) from period to period.
On the other hand, small values of the smoothing constant α give
greater weight to historical data (like large values of N in the moving
average model), and hence exhibit relatively little sensitivity to variation.
Using small values of α yields forecasts which are more stable (smoother)
and exhibit less variation from period to period. In our example, our value
of α = 0.1 is small, so we expect that the exponentially smoothed forecasts
will show little fluctuation from period to period. We see from Figure 2.11
that there is much less variation in the forecasts than in the original data.
We will see when we illustrate the use of Microsoft Excel that the opposite
result occurs with a larger α.
Like N in the moving average model, α can be chosen via trial and
error. Using the first half of the available historical time series to determine
the best α and then testing that value on the second half of the data series,
we are able to determine the α which minimizes the deviation between the
actual and the exponentially smoothed values (measured by the MAD
and/or the MSE).
As a rule of thumb, for production applications we often use values of α between 0.1 and 0.2, or sometimes up to 0.3. We have seen that large values of N in the moving average model
correspond to small values of α and vice versa, in terms of the relative
weights given to recent and past observations. For values of α given by
Equation (2.9) the exponential smoothing model and the N-period moving
average models give similar forecasts.

α = 2/(N + 1).    (2.9)
For example, we used N = 4 in our moving average example, so applying
Equation (2.9), the exponential smoothing model with α = 2/5 = 0.4 should
give results close to MA(4). We also used N = 6 in the same moving
average example, which yields an α of 0.29 using Equation (2.9). We will
illustrate these cases using Microsoft Excel.
In our exponential smoothing computation for the same data we used α
= 0.1. Again using Equation (2.9) to solve for the corresponding N, we find
N = 19 (clearly the maximum possible N would be the size of the sample,
12); as we have seen, a larger N means more smoothing, just as a smaller α
does. Rather than a trial and error approach, we can find an optimal α by
solving the nonlinear optimization problem of determining the α which
minimizes the MSE.
We will be discussing the use of Microsoft Excel Solver for
optimization in more detail in Chapter 3, but we demonstrate briefly here
how it can be used to determine the optimal value of α, the smoothing
constant of the exponential smoothing method, for a given data sample. The
Microsoft Excel spreadsheet in Figure 2.12 shows this process, where we
are given weekly sales data for a 14-week period. Using α = 0.1 to predict this series resulted in a sum of squared errors of 136.3640. Using Solver, we obtained the optimal value of α = 0.867993, which reduced the sum of squared errors to 57.6322. Note that α = 0.867993 is optimal only for the
given data sample. When the time series changes as the marketing
conditions change, α = 0.867993 may no longer be optimal.
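The same one-dimensional optimization that Solver performs can be sketched in Python with a simple grid search over α (the demand values in the test below are hypothetical; a numerical optimizer such as scipy.optimize.minimize_scalar could replace the grid):

```python
def sse(demand, alpha):
    """Sum of squared one-step-ahead errors for a given alpha (F1 = D1)."""
    f, total = demand[0], 0.0
    for prev, actual in zip(demand, demand[1:]):
        f = alpha * prev + (1 - alpha) * f   # F_t from Equation (2.7)
        total += (f - actual) ** 2
    return total

def best_alpha(demand, step=0.001):
    """Grid search for the alpha in (0, 1] that minimizes the SSE."""
    candidates = [i * step for i in range(1, int(round(1 / step)) + 1)]
    return min(candidates, key=lambda a: sse(demand, a))
```

As the text notes, the α found this way is optimal only for the given data sample; it should be re-estimated as the series evolves.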
As we have seen, naïve methods, moving average methods, and
exponential smoothing methods are most appropriate for stationary series.
In particular, the arithmetic mean smooths out all variation from the mean;
the last period value accentuates any deviation from the mean and lags
behind the actual values by one period; the moving average allows us to
give weight to only the most recent observations; and exponential
smoothing allows us to weight all observations, but with larger weights for
the most recent, and decreasing weights as we go further back in time.
When we wish to forecast trends in our data, these methods are
inadequate; the arithmetic mean will ignore the trend completely in
weighing all observations equally; and the last period value, moving
average and exponential smoothing forecasts will all lag behind a trend if
one exists. Whenever an obvious trend exists in the time series, we should
consider the forecasting techniques in the following section.

Figure 2.12: Computation of Optimal α Using Microsoft Excel Solver.

2.4 Models for Predicting a Trend


The two most commonly used forecasting methods for predicting a series
with a trend are simple regression analysis and Holt’s trend model.

2.4.1 Simple linear regression model


Regression analysis is a methodology for measuring the relationship
between variables in order to predict the value of a variable. Specifically,
the term refers to the development of a statistical model to predict a
dependent (response) variable as a function of one or more independent
(explanatory) variables. For example, a supplier’s product quality, price,
and services could be the major factors (or independent variables) that
impact the sales of the supplier, which is the dependent variable in this case.
The simple linear regression model is a particular type of regression
model, in which we are working with a data sample containing n pairs of
observations (x1, y1), (x2, y2), (x3, y3), … , (xn, yn) where n is the sample
size, xi is the ith observation of the value of independent variable x, and yi is
the ith observation of the value of the dependent variable y. We are
interested in using the information in this data sample to construct a linear
statistical model of the form:

ŷ = α + βx,    (2.10)

which, for any given value of x, predicts the corresponding value of y. In Equation (2.10), parameters α and β represent the intercept and the slope,
respectively, of the linear prediction model. (Note that this α is not the same
α used in the exponential smoothing model.)
For example, if we wish to predict sales of jeans as a function of
dollars spent on TV advertising, sales would be the dependent variable (y),
and the dollars spent on TV advertising would be the explanatory, or
independent variable (x). If we wish to predict profit in a retail business as a
function of store space, in hundreds of square feet, then profit would be the
dependent variable (y), and store space would be the independent variable
(x). If we are developing a model to predict demand for a soft drink as a
function of time, in months, then demand is the dependent variable (y), and
time is the independent variable (x). ŷ is the forecasted value of the random
variable y.
Recall our earlier distinction between causal and time series models. In
the case of causal (also called cross-sectional) models, the independent
variable might be the amount spent on advertising, and so we would want to
predict demand in the current month as a function of advertising
expenditure in the previous month. Often, in the case of a time series, the
single explanatory variable is time. For example, given a set of data on 24
monthly demand values for the last 2 years, if we wish to predict the
demand in month 25, we would construct a time series model of demand as
a linear function of the independent variable time.

Computation of regression coefficients


When using a simple regression model to predict a time series, our data sample becomes

(1, D1), (2, D2), … , (n, Dn),

where t is the independent variable, time, and Dt is the dependent variable, demand. We would like to describe this behavior by the linear equation
which minimizes the total deviation between the actual and forecasted
values. Recall from our earlier discussion that the MSE is such a measure of
total deviation. For our sample data, we would like to find estimates of the
model parameters α and β which minimize the MSE. Using calculus to
derive the result (shown in Appendix 2.1), Equation (2.11) enables us to
construct (or fit) the model, or to estimate the model parameters α (the y-
intercept) and β (the slope) by the least squares estimates a and b.

b = [nΣ(t × Dt) − (Σt)(ΣDt)]/[nΣt² − (Σt)²],  a = D̄ − b × t̄,    (2.11)

where t̄ and D̄ are the sample means of t and Dt.

Table 2.9: Regression on Demand for DVD Players.

t Dt t × Dt
1 44 44
2 43 86
3 44 132
4 45 180
5 46 230
6 46 276
7 49 343
8 51 408
Sample size n = 8

For example, suppose that an international supplier of a new wireless DVD player observed that its sales in Maryland stores (in units of 1,000) exhibit an increasing trend. The sales data is shown in Table 2.9. We wish to
construct a simple regression model to help the supplier to plan for
shipments to Maryland stores in periods t = 9, 10, and 11, upon which the
contract with a third party shipping company will be based.

Applying Equation (2.11) with n = 8, Σt = 36, ΣDt = 368, Σ(t × Dt) = 1,699, and Σt² = 204, we obtain b = 344/336 ≈ 1.024 and a = 46 − 1.024 × 4.5 ≈ 41.392. Thus, our simple linear regression model becomes

Ft = 41.392 + 1.024 × t.

For t = 9, we have F9 = 41.392 + 1.024 × 9 = 50.608. Similarly, for t = 10 and 11, we have F10 = 41.392 + 1.024 × 10 = 51.632, and F11 = 41.392 + 1.024 × 11 = 52.656, respectively.
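The coefficient computation can be reproduced in a few lines of Python directly from the data in Table 2.9 (the small difference from the text's forecasts comes from rounding the coefficients):

```python
def least_squares(ts, ys):
    """Least squares estimates (a, b) for the line y = a + b * t,
    using the closed-form formulas of Equation (2.11)."""
    n = len(ts)
    sum_t, sum_y = sum(ts), sum(ys)
    sum_ty = sum(t * y for t, y in zip(ts, ys))
    sum_t2 = sum(t * t for t in ts)
    b = (n * sum_ty - sum_t * sum_y) / (n * sum_t2 - sum_t ** 2)
    a = sum_y / n - b * sum_t / n
    return a, b

# DVD player sales from Table 2.9 (units of 1,000).
periods = [1, 2, 3, 4, 5, 6, 7, 8]
sales = [44, 43, 44, 45, 46, 46, 49, 51]
a, b = least_squares(periods, sales)
f9 = a + b * 9   # one-step-ahead forecast for period 9, about 50.61
```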

Computation of regression coefficients using Microsoft Excel


Microsoft Excel provides several functions that allow us to perform
regression analysis. The example in Figure 2.13 shows demand data for a
sample of n =10 months. The output in cell B31 is a, the least squares
estimate of the y-intercept α, and the output in cell B32 is b, the least
squares estimate of the slope, β, so that the linear regression model is ŷ = a + bx.
Simple linear regression models are used widely in many business applications. For example, Figure 2.14 exhibits a time series of percentage
of smokers in the UK, showing a downward trend between 1974 and 2013,
and suggesting the use of a simple linear regression model for its future
predictions.

Figure 2.13: Regression Analysis Using Microsoft Excel.


Figure 2.14: Percentage of Smokers in the UK between 1974 and 2013.
Source:
https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/healthandlifeexpectanci
es/compendium/opinionsandlifestylesurvey/2015-03-19/adultsmokinghabitsingreatbritain2013

Note that the use of a simple regression model could potentially become problematic when the slope of a time series keeps changing (e.g.,
the P/E ratio series of a company).

Forecast accuracy of regression models


Recall, we use the MSE to evaluate forecast error. Since we have calculated
our regression parameters as those that minimize the sum of the squared
errors, the MSE for the resulting equation will be the minimum possible for
any line through the data. For linear regression, because we are estimating
two parameters (the y-intercept α and the slope β), we lose two degrees of
freedom, and the mean square error is the sum of the squared residuals
divided by n − 2. Thus, for simple linear regression, the MSE is given in
Equation (2.12):

MSE = SSResiduals/(n − 2).    (2.12)
Looking at the analysis of variance (ANOVA) section of the Microsoft Excel output in Figure 2.13, the Sum of the Squared Residuals, SSResiduals, is
given in cell C27, or 1466.752. From Equation (2.12), since there are n = 10
data points in this example, if we divide this by n−2, or 8 degrees of
freedom (df Residuals, as shown in cell B27), the MSE is 1466.752/8 =
183.3439. Note, however, that this value is already calculated in the
Microsoft Excel output and shown as MSResiduals, in cell D27.
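The arithmetic behind Equation (2.12) for this example can be verified in a couple of lines (the SSResiduals value and sample size are taken from the text's example):

```python
def regression_mse(ss_residuals, n, p=2):
    """Equation (2.12): mean square error of a fitted regression, dividing
    the sum of squared residuals by n - p degrees of freedom (p = 2 for
    simple linear regression: intercept and slope)."""
    return ss_residuals / (n - p)

# Values read from the ANOVA table in the text's regression example.
mse = regression_mse(1466.752, 10)   # 1466.752 / 8 = 183.344
```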

Cause and effect


Often we are investigating relations between variables that are assumed to
reflect causality, like increased advertising “causing” higher sales, or
increased smoking “causing” a higher incidence of lung cancer. It is
important to recognize that a significant linear regression does not imply
cause and effect, but indicates only that a linear trend exists. While high
correlation (or a highly significant regression result) may result because the
explanatory variable does reflect an underlying cause, it can also be the
result of random chance (spurious correlation) or from the effects of other
variables. This must be determined from a knowledge of the variables and
the mechanisms involved in the behavior of the system, and not from the
regression values.
As an example of spurious correlation, consider the predictive powers
of Paul the Octopus during the 2010 World Cup. Paul correctly predicted the outcome of eight games: Germany's results in seven matches (five wins, two losses) and Spain's final win. On a
probabilistic basis, given that there are two possible outcomes on each
match, the probability of randomly guessing eight game outcomes correctly
would be (1/2)⁸, or 1/256. While this is a small probability, it is not zero, and Paul's remarkable prophecies were clearly no more than coincidence, and not causality (remember, Paul was an octopus!!).9
As an example of correlation which results from the effects of other
variables, early studies of the mid-20th century polio epidemic in the US
observed an increased incidence in the summer, leading public health
officials to conjecture that the high correlation between ice-cream
consumption and increased incidence of the disease might be due to dietary
effects, and to recommend a low-dairy diet for children. It quickly became
apparent that the causal factor, a virus easily transmitted in swimming
pools, was quite different; the correlation between incidence of polio and
ice-cream consumption occurred because both varied with temperature, not
because one caused the other.
A 2015 study purported to show that eating chocolate can help one lose
weight. The study, which turned out to be a hoax, was done to highlight the
fallacy of concluding that a strong linear relationship implies causality
when the sample size is too small and there are too many contributing
factors, making it more likely that some random factor (like eating
chocolate) would be a statistically significant factor in weight loss.10
Another recent example of incorrectly assuming correlation implies
causality is the often-observed relationship between music and academic
success and the assumption that early music education causes success.
However, there are alternative explanations for this relationship (such as
parents who can afford music lessons may also be inclined to read to their
children rather than allow them to spend their time in front of a TV; the
ability to memorize historical facts may facilitate better retention of chord
progressions, etc.), and the only way to determine the actual cognitive
effects of music education is not with correlations and anecdotes, but with
randomized controlled trials, where children are randomly assigned to
either music lessons or other rigorous lessons, or to a control of no lessons
and then tested afterwards. In fact, such randomized trials have failed to
support the cognitive benefits of music lessons.11
As these examples illustrate, it is very important that causality not be
inappropriately assigned to all situations where there is a strong relationship
between variables. The evidence of such a relationship is a signal to look
for appropriate causes, if they exist, based upon a sound analysis of the
underlying mechanisms involved.

Multiple regression models


While simple linear regression is limited to the consideration of one
explanatory variable, it is possible to predict the value of a dependent
variable, Y, as a function of more than one independent variable, X1, X2, … , Xp. For example, we can predict sales of jeans as a function of dollars
spent on TV advertising, and years of experience of store managers; or the
demand for heavy-duty car clutches as a function of time and the retail price
of the car. The computation and interpretation of multiple regression models is an extension of the procedure for simple regression models; it is beyond the scope of this text, but multiple regression is an important tool in forecasting.

2.4.2 Holt’s trend (α, β) model


Holt’s trend model,12 which is also sometimes called the adjusted or double
exponential smoothing model, is shown in Equation (2.13).

Ft,t+1 = St + Gt,    (2.13)

where St is the base level of the time series at time t, and Gt is the slope
estimated at time t. The values of St and Gt are computed after the value of
Dt is observed, as shown in Equations (2.14) and (2.15).

St = αDt + (1 − α)(St−1 + Gt−1),    (2.14)

Gt = β(St − St−1) + (1 − β)Gt−1.    (2.15)

In Equations (2.14) and (2.15), parameter α is the data smoothing factor, 0 < α < 1, and parameter β is the trend smoothing factor, 0 < β < 1.
(Note that α and β are not the parameters of either the simple linear
regression model or the exponential smoothing model.) We can use Holt’s
trend model to make a k-step-ahead forecast, as shown in Equation (2.16).

Ft,t+k = St + kGt.    (2.16)

Figure 2.15: Data for Holt’s Trend Model Example.

For example, if α = 0.10 and β = 0.20 as shown in Figure 2.15, suppose the initial base level S0 = 13,208, which was the actual monthly demand for
December 2011. Suppose that the slope estimate (based on the simple
regression) is 84.5 and suppose that the trend is believed to continue into
year 2012. We wish to apply Holt’s trend model to predict the monthly
demand in January–March of 2012.
Assume that t = 0 refers to December 2011, and thus t = 1 refers to
January 2012.

At the end of t = 0 (i.e., the end of December 2011), we have, from Equation (2.16), the forecasts F0,1 = S0 + G0 = 13,292.5, F0,2 = S0 + 2G0 = 13,377, and F0,3 = S0 + 3G0 = 13,461.5.
Now suppose that by the end of t = 1 (i.e., the end of January 2012), we
observed the actual demand D1 = 12,600. With this information, we can
update the model and revise the former forecasts: S1 = 0.1 × 12,600 + 0.9 × (13,208 + 84.5) = 13,223.25, G1 = 0.2 × (13,223.25 − 13,208) + 0.8 × 84.5 = 70.65, and the revised forecasts become F1,2 = S1 + G1 = 13,293.9 and F1,3 = S1 + 2G1 = 13,364.55.
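A minimal Python sketch of this example, assuming the standard Holt update equations (base St = αDt + (1 − α)(St−1 + Gt−1) and slope Gt = β(St − St−1) + (1 − β)Gt−1):

```python
def holt_update(s_prev, g_prev, demand, alpha, beta):
    """One Holt's trend update after observing demand D_t."""
    s = alpha * demand + (1 - alpha) * (s_prev + g_prev)  # new base level
    g = beta * (s - s_prev) + (1 - beta) * g_prev         # new slope
    return s, g

alpha, beta = 0.10, 0.20
s0, g0 = 13208.0, 84.5          # December 2011 base level and slope

# Forecasts made at the end of t = 0 for Jan-Mar 2012 (k = 1, 2, 3):
forecasts = [s0 + k * g0 for k in (1, 2, 3)]    # 13292.5, 13377.0, 13461.5

# After observing D1 = 12,600, update the state and re-forecast:
s1, g1 = holt_update(s0, g0, 12600.0, alpha, beta)
revised = [s1 + k * g1 for k in (1, 2)]         # revised forecasts for t = 2, 3
```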

Using Microsoft Excel Solver, we can optimize the values of the smoothing constants α and β, similarly to the way we did for the simple
exponential smoothing factor α. Figure 2.16 shows an example of this
process, where we are given demand data observed over a 12-week period, and the initial parameters α = β = 0.1 resulted in an MSE of 13.95601. After
optimizing the values of these parameters, the value of MSE is reduced to
4.003589, as shown in Figure 2.17.

2.5 Models for Predicting a Seasonal Series


Recall that another major component of a time series is the seasonality, or
periodic behavior. Any regularly repeated pattern in a time series should be
identified and modeled so that this characteristic is incorporated into our
forecasts. Figure 2.18 is the time series for new privately owned housing
units started between November 2009 and November 2015, which shows a
strong seasonality, with a peak in every summer and a valley in every
winter.
For any seasonal data with a cyclic pattern that repeats every M periods
(where M is greater than or equal to 3), we need to identify M, the number
of time periods in a cycle, or the length of the season. In our example, since
the pattern appears to repeat every 12 months, we assume M = 12.

Figure 2.16: Holt’s Trend Model: Initial Estimates of α and β.

Forecasting models that have been commonly used for predicting seasonal series are naïve or regression models with seasonal adjustment and
Winter’s model. Both techniques first predict the “unadjusted” or
“deseasonalized” demand for a future time period, Ft, and then modify this
predicted unadjusted demand by multiplying by a seasonal factor, Ct. By
“unadjusted” or “deseasonalized” demand, we mean the predicted demand
before the seasonal component in a series is incorporated. If the data is
stationary, this unadjusted or deseasonalized forecast of demand usually
comes from the arithmetic mean. On the other hand, if the data has a trend,
then the unadjusted or deseasonalized forecast of demand often comes from a simple linear regression.

Figure 2.17: Holt’s Trend Model: Final Estimates of α and β.

To illustrate the concept of a seasonal adjustment factor (the computation will be described in the next section), suppose that four quarterly seasonal factors for a time series are:
Figure 2.18: New Privately Owned Housing Starts between 2009 and 2015.
Source: https://fred.stlouisfed.org/series/HOUST1FQ

This means, for example, that demand during the first quarter is about
70% of the yearly average, while the demand during the third quarter is
about 35% above the yearly average.

2.5.1 Naïve or regression model with seasonal adjustment


Step 1. Collect the time series data from at least two past cycles.

Step 2. Compute the “unadjusted” forecast using either the arithmetic mean
(if the data is stationary, exhibiting no trend in the series) or simple
regression (if there is a trend in the series, where model parameters α and β
are estimated by a and b from Equation (2.11)).
Step 3. Compute the ratio Dt/Ft of the actual demand to the unadjusted forecast for each period t in the sample.
Step 4. Estimate the value of seasonal factors by averaging the individual ratios computed in Step 3.

Ct = [(Dt/Ft) + (Dt+M/Ft+M) + (Dt+2M/Ft+2M) + ⋯]/(number of cycles),    (2.17)
where the number of cycles in the data series is at least 2, as indicated in Step 1. That is, Ct is the seasonal factor for time periods t, t + M, t + 2M, … and is equal to the average of the individual ratios computed in Step 3 for time periods t, t + M, t + 2M, … .
Step 5. Compute seasonally-adjusted forecasts AFt for future period t, using
Equation (2.18).

AFt = Ct × Ft,    (2.18)

where Ct is the corresponding seasonal factor for period t found in Step 4, and Ft is the unadjusted forecast using either the arithmetic mean (for a stationary series) or a simple linear regression (for a time series with a trend). That is, the adjusted forecast is AFt = Ct × Ft.
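The five-step procedure can be sketched in Python for a stationary series, where the unadjusted forecast is simply the arithmetic mean (the quarterly data below is hypothetical, chosen to contain two complete cycles of M = 4):

```python
def seasonal_factors(demand, m):
    """Steps 2-4 for a stationary series: the unadjusted forecast is the
    arithmetic mean, and each seasonal factor C_i is the average of the
    demand/mean ratios for like periods across cycles."""
    mean = sum(demand) / len(demand)          # Step 2: unadjusted forecast
    ratios = [d / mean for d in demand]       # Step 3: D_t / F_t
    cycles = len(demand) // m
    return [sum(ratios[i + c * m] for c in range(cycles)) / cycles
            for i in range(m)]                # Step 4: average like periods

# Two complete cycles (M = 4) of hypothetical quarterly demand.
demand = [70, 95, 135, 100, 74, 105, 129, 92]
factors = seasonal_factors(demand, 4)
mean = sum(demand) / len(demand)
adjusted = [c * mean for c in factors]        # Step 5: AF_t = C_t * F_t
```

For a series with a trend, the mean in Step 2 would be replaced by the fitted regression values, exactly as described above.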

As an example of predicting a time series using regression with a seasonal adjustment, in Table 2.10 we have eight quarterly values of
demand, whose scatter diagram in Figure 2.19 shows a clear upward trend
and seasonal behavior every four quarters.

Table 2.10: Demand Data for Eight Quarters.


Figure 2.19: Scatter Diagram for Quarterly Demand.

In Figure 2.20, we compute the seasonally adjusted forecasts by first calculating the unadjusted regression forecasts Ft (in column D); the ratio of
actual demand to unadjusted forecast (in column E); the seasonal factors Ct
(in column F) and the seasonally adjusted forecasts AFt (in column G).
Comparing the mean square error for the unadjusted regression
forecasts (cell I15, or 535.155) with that for the seasonally adjusted
forecasts (cell J15, or 19.2579) we see that by incorporating the seasonal
behavior of the system, we have reduced the MSE dramatically.

2.5.2 Winter’s (α, β, γ) model


Winter’s method is an iterative procedure similar to Holt’s method for time series with a trend. Given initial seasonal factors (C1, C2, … , CM), base level S0, and slope G0, after Dt is observed, we update the model parameters using Equations (2.19)–(2.21).

St = α(Dt/Ct) + (1 − α)(St−1 + Gt−1),    (2.19)

Gt = β(St − St−1) + (1 − β)Gt−1,    (2.20)

Ct+M = γ(Dt/St) + (1 − γ)Ct.    (2.21)
Figure 2.20: Seasonally Adjusted Regression Model.

We then use the new updated model parameters to predict future demand
from Equation (2.22).

Ft,t+k = (St + kGt)Ct+k.    (2.22)
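A hedged sketch of one update step, assuming the standard multiplicative Holt-Winters recursions (all numerical values below are hypothetical):

```python
def winters_update(s_prev, g_prev, c_t, demand, alpha, beta, gamma):
    """One multiplicative Winter's update after observing D_t."""
    s = alpha * demand / c_t + (1 - alpha) * (s_prev + g_prev)  # new base
    g = beta * (s - s_prev) + (1 - beta) * g_prev               # new slope
    c_next = gamma * demand / s + (1 - gamma) * c_t             # factor for t + M
    return s, g, c_next

# Hypothetical state: base 100, slope 2, current seasonal factor 0.9;
# observed demand 95, with alpha = 0.2, beta = 0.1, gamma = 0.3.
s, g, c = winters_update(100.0, 2.0, 0.9, 95.0, 0.2, 0.1, 0.3)
# A k = 1 step-ahead forecast would then be (s + g) * C_{t+1}, using the
# stored seasonal factor for period t + 1.
```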

2.6 Demand Categorization and Management Strategies

“Demand forecasting is essentially a linear process of translating input
assumptions into a forecast of expected sales; demand management, by
contrast, is a highly iterative process that involves driving to a revenue and
profit target through prioritization of customers, channels, products,
geographies, and the demand stimulation programs available to the
enterprise.”13
Demand management is a more proactive approach than its
predecessors. It uses supply chain strategies and coordinates/controls all
sources of demand so the entire business process can operate more
profitably and more effectively. Integrating the supply chain through the
dynamic process of balancing demand and supply is an important procedure
that requires the full engagement of all functions and sales regions. In advanced organizations, supply chain leaders are tasked with driving and managing this process, and success is measured through sales growth and profit optimization, achieved by minimizing stock-outs, obsolete inventory, and overall inventory levels. Demand categorization is one such supply chain management strategy and is designed to address the following phenomena in a business process:
• Excessive levels of inventory created by unpredictable demand;
• Poor forecasting accuracy because of fast changes in the marketplace;
• Increasing issues with obsolescence due to short life cycles;
• Poor service due to customers demanding shorter lead times.
The solution by demand categorization is to classify the demand, and
then manage the resulting categories differently, as illustrated in Figure
2.21.

Figure 2.21: Demand Categorization.


Source: http://www.slideshare.net/kelly12504/solving-the-supplydemand-mismatch
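A common way to operationalize a categorization like the one in Figure 2.21 is to score each item on volume and on demand variability (for example, the coefficient of variation), and then route each resulting category to a different planning policy. The thresholds and category labels below are illustrative assumptions, not taken from the figure:

```python
def categorize(mean_demand, std_demand, volume_cutoff=1000.0, cv_cutoff=0.5):
    """Classify an item by volume and demand variability.

    The cutoffs are hypothetical; real ones would come from the
    firm's own demand history and planning policies.
    """
    cv = std_demand / mean_demand if mean_demand > 0 else float("inf")
    high_volume = mean_demand >= volume_cutoff
    stable = cv < cv_cutoff
    if high_volume and stable:
        return "high volume / predictable"   # e.g., make-to-stock, lean replenishment
    if high_volume:
        return "high volume / volatile"      # e.g., safety stock plus close monitoring
    if stable:
        return "low volume / predictable"    # e.g., simple periodic review
    return "low volume / volatile"           # e.g., make-to-order or postponement
```

Each category then gets its own forecasting method, inventory policy, and service-level target, which is the essence of managing the resulting categories differently.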

2.7 Collaborative Planning, Forecasting, and Replenishment
Collaborative planning, forecasting and replenishment (CPFR) is a business
process that aims to improve the accuracy of forecasting by information-
sharing among the trading partners along a supply chain. It allows for
continuous updating of inventory and upcoming requirements, making the
end-to-end supply chain process more efficient. Collaboration among the
trading partners is required to effectively deploy this advanced method of
supply chain management. The assumption is that all supply chain actions
are driven at the point of sale; as that event takes place, transparency and
predefined actions along the supply chain trigger replenishment or supply
of the replacement item.
As an example, a supplier of packaging components to a consumer
goods company will be able to see, through data-sharing, exactly how many
of its components are being consumed by its consumer goods customer, as
well as the future forecasted demand. Understanding demand risk and
production cycle times, the trading partners previously put a contract in
place with predefined inventory levels at the customer for the various types of
packaging components. The contract determines a minimum “on hand”
inventory or replenishment level at the customer which triggers an
automatic reorder of those components from the supplier. The supplier will
execute plans, understanding and managing the production and shipping
cycle to coincide with the customer’s reorder point to ensure the “right”
inventory is available to meet the demand. Advanced supply chain
organizations have this model in place for all elements of their production
needs. Demand forecasting complemented by an integrated and
collaborative supply chain creates efficiency through decreased expenditures
for merchandising, procurement, inventory and safety stock, logistics, and
shipping across all supply chain partners, because information-sharing tells
people what and how much will be needed by when.
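The replenishment trigger described above — reorder when the shared on-hand inventory at the customer falls to the contractual minimum — can be sketched in a few lines; the reorder-point and order-up-to values in the example are hypothetical:

```python
def check_replenishment(on_hand, reorder_level, order_up_to):
    """Return the reorder quantity triggered by shared inventory data.

    on_hand       : current customer inventory visible to the supplier
    reorder_level : contractual minimum on-hand level
    order_up_to   : target level the supplier replenishes to
    """
    if on_hand <= reorder_level:
        return order_up_to - on_hand   # automatic reorder fires
    return 0                           # inventory still above the minimum

# Hypothetical contract values: reorder at 400 units, replenish up to 1,000.
qty = check_replenishment(on_hand=350, reorder_level=400, order_up_to=1000)
```

The supplier then schedules production and shipping so that the quantity arrives in step with the customer's reorder point, which is the coordination the contract is meant to guarantee.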
In business today, CPFR has evolved into a web-based tool used to
coordinate demand forecasting, production and purchase planning, and
inventory replenishment between supply chain trading partners.
2.8 Case Studies
2.8.1 Stay Warm Call Center
Stay Warm is a call center providing technical support for software
designed to access home heating systems remotely. Currently the center is
in its fourth year of operation and is preparing its staffing plan for the
upcoming quarter, based on a forecast of demand for that period.
Demand data are available for each of the four quarters of the
preceding 3 years and for the first two quarters of the current year. The data
presented in Table 2.11 and plotted in Figure 2.22 are for the number of
calls to the center. The center administrator has in the past tried using the
last period demand and has also tried using the average of all past demand
to predict the next period’s demand for the center. Neither of these two
techniques has proven satisfactory. The use of the last period demand as a
predictor of the next period’s demand produced erratic forecasts. For
example, using this method, the administrator predicted (and staffed and
scheduled for) a demand of 350 calls for the second quarter of the first year
when 800 calls actually resulted. (Overtime reached a peak during this
quarter.) The administrator then predicted 800 calls for the third quarter
when only 550 calls materialized. Clearly, this method could not sort out
the fluctuations in the demand data early in the center’s operation, and was
therefore deemed unsatisfactory.

Table 2.11: Call Center Demand for Stay Warm.


Figure 2.22: Scatter Diagram.

The administrator then turned to using the average of all demand data
to predict the next period’s demand. For the fourth quarter of the first year,
the administrator predicted a demand of 567 [i.e., (350 + 800 + 550)/3]
when 1,000 actually occurred, and for the 10th period forecast a demand of
567 (i.e., the sum of the first nine periods’ demand divided by 9), and 950
occurred. The administrator recognized that this averaging method
produced forecasts that smoothed out the fluctuations, but did not
adequately respond to any growth or reduction in the demand trend. As a
matter of fact, the averaging method performed progressively worse as the
amount of data increased. This was because each new piece of demand data
had to be averaged with all of the old data from the first period to the
present, and therefore each new data point had less overall impact on the
average.
Questions for Stay Warm Call Center Case Discussion:
• Calculate the forecasts for quarter 15 using (a) arithmetic mean; (b) last
period value; (c) moving average with N = 4; (d) exponential smoothing
with α = 0.1 and 0.6; (e) simple regression analysis; and (f) regression
analysis with seasonal adjustments.
• Compare the results of the different forecasts generated.
• What is your recommendation?
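The forecasting methods listed in the questions can be prototyped in a few lines. The sketch below uses only the three demand values quoted in the case narrative (quarters 1–3 of the first year); the full series is in Table 2.11:

```python
def last_period(demand):
    """Naive forecast: next period equals the most recent observation."""
    return demand[-1]

def arithmetic_mean(demand):
    """Average of all history; smooths noise but lags any trend."""
    return sum(demand) / len(demand)

def moving_average(demand, N):
    """Average of the most recent N observations."""
    return sum(demand[-N:]) / N

def exp_smoothing(demand, alpha, F0):
    """One-step-ahead exponential smoothing starting from forecast F0."""
    F = F0
    for D in demand:
        F = alpha * D + (1 - alpha) * F
    return F

calls = [350, 800, 550]               # quarters 1-3 of year one, from the case
print(last_period(calls))             # 550: erratic, as the administrator found
print(round(arithmetic_mean(calls)))  # 567: matches the Q4 forecast in the text
```

Running each method over the full table and comparing error measures (MAD, MSE) is exactly what the discussion questions ask for.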

2.8.2 Xenon Products Company


Lighting products have evolved considerably from Edison’s first bulb over
100 years ago. The Xenon Products Company has been in the lighting
business since the late 1940’s. First started by a returning World War II
veteran, Xenon has grown to become a public company with a very broad
product range for the consumer market, mostly serving North American
consumers. Its products cover interior lighting for homes as well as external
landscape and security lighting. Recent developments in innovation have
been focused on energy efficient and long lasting products. Xenon Products
has positioned itself to be a mid-level priced product, with reliable quality
and broad product range. The majority of its products are manufactured in
Mexico, where there are competitive labor rates and optimal logistics for
their North American distribution. Recently Xenon’s market share has been
challenged by global competitors who have not only matched its
performance in innovation and quality, but have also been able to compete
on price. In order to maintain share in the short term, the Xenon
Management Team has had to sacrifice margin through pricing. Knowing
that this is an unsustainable practice and feeling pressure from shareholders
at the most recent quarterly earnings call, the Management Team asked its
Head of Supply Chain, Edward Shaw, to quickly evaluate and develop
options for a better, longer-term approach for competitive differentiation
and success.
Edward Shaw has been with Xenon Products for about 5 years. Prior to
joining Xenon, he held similar positions with other consumer products
companies based in North America. Edward enjoyed the fast pace of the
dynamic product technology in the lighting business, with its short product
life cycles and multiple launches. As a supply chain professional, he was
also concerned with unpredictability of demand and the very traditional
approach Xenon deployed in forecasting. Edward was having a difficult
time in his tenure achieving key targets in inventory levels and customer
service, along with minimizing the annual obsolete inventory writeoffs.
Given the current competitive situation and this history of performance of
the Xenon supply chain, Edward knew that he would need to investigate a
better approach to demand forecasting in order to better optimize the supply
costs. In the past the majority of the Company’s cost management strategies
included low-cost sourcing, automation, and low-cost labor production
sites. Feeling that the current operations in Mexico have optimized those
returns, Edward assembled his staff and asked them to think differently
about developing a solution to the Xenon Products Company challenge.
The Supply Chain Team from Xenon consisted of the key functional
leaders associated with areas such as procurement, planning,
manufacturing, logistics, and distribution. In the past, their cost
management strategies were zero-based, starting from the first point in the
value chain with the sourcing of raw materials. The Team would analyze all
the items on the bills of materials, the transportation costs to final assembly,
the labor and overhead costs at their sites, and finally the distribution and
warehousing costs. In the case of their current challenge, Edward asked the
team to work differently, starting from the customer and working its way
back to the supplier. This approach required the Team to think about its
distribution channels and better understand the demand that drives the
supply activity. The current processes at Xenon Products for managing the
supply chain were put in place about 10 years earlier with the first
installation of an enterprise resource planning (ERP) system. Under the
current processes, various product line sales are forecasted using modeling
tools based upon historical sales, seasonality and forecasted sales for new
products or promotional activities. Driven by the forecast, all the supply
activities are designed to build stock inventory to support these forecasted
sales. Given the recent dynamics of the global competitive pressures and
shorter product innovation cycles, the Supply Chain Team felt that the
concept of CPFR may represent a viable solution to improving the
efficiency and competitiveness of the supply chain.
In beginning the analysis of understanding demand, the Team pulled
together a profile of its customers and their demand needs over the past 5
years. What they found was that a significant portion of its sales (70%) was
concentrated with their top 10 customers. These customers were the large
home improvement stores (e.g., Home Depot, Lowe's), and a few large
national and regional grocery chains (e.g., Wegmans, Wakefern). Many of
these customers had their own central warehousing capabilities that
managed replenishment to their local stores from various regional locations.
The remaining 30% of sales was across a much broader base of customers
with smaller volumes and multiple locations serviced from Xenon’s
distribution centers. Currently the supply chain transaction relationships
with all of Xenon’s customers are managed in the same way. The customers
place an order directly with the Xenon Customer Service Team and the
order is filled by the closest predetermined Xenon warehouse. In better
understanding the profile of their customers and the various capabilities for
each of these customers, the Xenon Supply Chain Team was confident that
the concept of CPFR should be further investigated.
Questions for Xenon Case Discussion:
• What elements of the customer demand channel led the Xenon Team to
conclude that further investigation is required for the concept of CPFR?
• Where do you see the biggest opportunity to explore supply chain
efficiencies for the Xenon value chain? Any value to unit costs,
inventory carrying cost, and obsolete inventory costs?
• Do you see efficiencies not only for Xenon, but for its customers as
well? Where and why?
• Is there value beyond supply chain efficiencies in CPFR for Xenon in
this very competitive market?
• Should all Xenon’s customers operate on a CPFR model? Where would
traditional models suffice and why?

2.8.3 ACT — The demand–supply mismatch problem14


Advanced Clinical Technologies (ACT), Incorporated is a medium-sized
specialty medical device manufacturer, making and distributing various
critical and semi-critical items, such as oral endotracheal tubes, airway
devices, mucous membranes suction catheters, nebulizers, metered dose
inhaler spacers, and infant oxygen sensors, to meet the needs of hospitals,
clinics, doctors’ offices, and home patients. The company has 17 domestic
suppliers (three of them contracted manufacturers) who typically require a
lead time of about 6 weeks, two Canadian suppliers with 8 weeks of lead
time, and five Asian suppliers with 12 weeks of lead time, on average. The
deliveries to its major customers are handled by a third-party logistics
company, a strong business partner of ACT for decades, which now ensures
a 3-day accurate delivery for all destinations within the US, and online
delivery tracking for ACT.
A major challenge for ACT is the difficulty in matching its supply to
customer demand in a timely and cost-effective manner, due to long
supplier lead times (6–12 weeks), the dynamics of the healthcare industry,
health insurance policies which specify the coverage of home-use medical
devices, competition, and short life cycles of specialty device technologies.
The management team has tried several approaches, including monthly
forecasts based upon market trends and experience with similar products
(i.e., past sales), and allowing the monthly forecasts to be adjusted every 2
weeks based on the actual outbound shipments. This 2-week lead time for
adjusting the forecasts is needed in order to revise a given monthly
production schedule.
In the US market, ACT serves hospitals, clinics, and distributors, who
in turn supply doctors’ offices. One of the major areas of forecasting
disruption occurs with the hospital branch of ACT’s customer base. One of
their long-term partners has been North East Corridor Hospital Association
(NECHA), which has a number of hospitals located from Washington to
Boston. These are mid-sized hospitals with about 10 areas of defined care.
In addition, these facilities have a mix of Extreme Care, including NICU,
ICU, CCICU, and Hospice Care. These facilities also have traditional
laboratory, testing, and treatment centers. Each unique service unit has its
own demand fluctuations, which are not currently visible to ACT.
In recent years, more and more options have arisen in the healthcare
industry to improve inventory control and reduce costs. Some of these
options are defined below.
• Current Situation At NECHA — In the present situation, inventory is
sold directly to NECHA, and is held in a Central Controlled Stock
Room. Inventory is replenished at the multiple Care Centers within the
hospital via requisition. Inventory is owned by the hospital, then
transferred to the Care Center when requisitions are filled. Financial
transfer is done at the time of consumption, as a part of billing
reconciliation. The experience of NECHA is that the Care Centers do
not trust inventory replenishment and therefore hoard frequently-used
items, thus destroying inventory visibility. Each Care Center is holding
excessive safety stock, which could account for as much as 6 months’
supply.
• Vendor Managed Inventory (VMI) — In some hospitals, the vendor
monitors inventory, but only at the Central Stock Room level. Once
inventory is transferred to a Care Center, visibility is lost, just as in the
current system.
• Consignment With Modified VMI — In this system, inventory is owned
by ACT while it is held in the Central Stock Room. The Stock Room is
managed by an ACT employee and ACT is reimbursed for Payroll and
Benefits. The hospital only assumes ownership at the point of delivery
to a Care Center. At that time the financial and physical ownership is
assigned directly to the Care Center. This method maintains Visible
Chain of Control, up to the point of delivery.
• Point of Sale (POS) Replenishment — In this method, replenishment is
generated by consumption in the Care Center. It is reactionary, and does
not facilitate any seasonality, trend or other forecasting benefits. This
method requires immediate response time due to a lack of inventory at
the Care Center to fulfill immediate needs.
• Distributor-Based Control — This is identical to the above methods, but
merely has a middleman controlling inventory and maintaining interim
ownership.
It is also beneficial to understand the status of medical inventory. In
order to protect against health risks and contamination, medical devices
have internal and external safety standards. A product is considered safe
and useable within the hospital environment based on the standard
procedures of each of the Care Centers. It cannot, however, be returned for
redistribution once it has lost Visible Chain of Control.
If, however, Visible Chain of Control is maintained, then reverse
logistics can reclaim products which have been replaced. These products,
based on rating, can be resold to clinics, other locations, and in the last
scenario, donated to charitable medical practices in the US and abroad,
allowing for recovery of tax dollars.
With the fast pace of innovation, there are issues of obsolescence and
destruction of good inventory. The inventory levels of NECHA have been
historically high, due to uncertainty, poor replenishment reliability, and lack
of visibility based on hoarding.
NECHA and ACT are engaged in a strategic initiative to identify
opportunities for improvement. Reviewing the above information and the
data listed below, the challenge is to:
• Lower inventory holding costs at ACT and customers;
• Reduce hospital obsolescence losses;
• Increase visibility internally and with ACT;
• Increase reliability of ACT;
• Facilitate lower cost and more efficient upgrade transitions.
Figure 2.23 shows the demand and supply records of a sample product
(ZXII_B3). Ever since the product was introduced into the market in 2011,
its sales have been increasing continuously. In addition to hospitals, major
national distributors of healthcare products have also started to order
ZXII_B3. One of the distributors discontinued its contract on a similar
product from a competitor of ACT, and switched over to ZXII_B3, which
led to a spike in the shipping quantity of ACT during March 2013. To meet
this additional demand, ACT adjusted its production schedule and
significantly increased its outbound shipments. However, this change in
production volume depleted the raw material inventory, which then forced
the production line to shut down until the inbound inventory was
replenished weeks later. Such mismatch problems have led to many
complaints by distributors and hospitals, especially those healthcare
providers involved in spring 2013 tornado relief operations.

Figure 2.23: Demand–Supply Mismatch (ZXII_B3).


The January–July demand/shipment data for ZXII_B3 is shown in
Figure 2.24. The current forecasting errors and the errors from using simple
regression are summarized in Figure 2.25. While simple regression was
able to reduce the error by 74% on the given data set, one can see that if
the switch-over time of the national distributor had been negotiated and
planned in advance through closer supply chain collaboration, this error
could have been reduced further.

Figure 2.24: Sample Demand/Shipment Data.


The demand–supply mismatch problem occurs often, especially for
products at the very beginning or end of their life cycle, as well as for
products with strong competition. When sales growth is strong, as may be
the case early in the life cycle, we are likely to experience a shortage in
supply and/or the ability to deliver services to match demand; late in the life
cycle, when demand is weaker, or when we are faced with strong
innovation by our competitors, we are likely to have inventory (supply) in
excess of our demand.

Figure 2.25: The Current Forecasting Errors vs. Forecasting Errors by Simple Regression.

Questions for ACT Case Discussion:


• Why did ACT face such demand–supply mismatch problems?
• What should ACT do to address its current supply chain issues?
• How does the current scenario impact the current performance? Do you
think that long supplier lead times are also a key factor?
• What would you recommend as a forward-moving inventory control
strategy?
• How can forecast accuracy be improved? What methods and models
would you use?
• Define causal events that you would monitor to improve forecast and
replenishment accuracy.

2.9 Exercises
1. Two forecasting methods have been used to evaluate the same
economic time series. The results are shown in Table 2E1.1:
Calculate and compare the MAD and the MSE for the two methods.
Do each of these measures of forecasting accuracy indicate that the
same forecasting technique is best? If not, why?

Table 2E1.1: Comparison of Two Forecast Methods.


Forecast From Forecast from Actual Value of Time
Method 1 Method 2 Series
223 210 256
289 320 340
430 390 375
134 112 110
190 150 225
550 490 525
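The two error measures in this exercise are direct to compute; a sketch using the Table 2E1.1 values, averaging over all six periods, is below:

```python
def mad(actual, forecast):
    """Mean absolute deviation of the forecast errors."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mse(actual, forecast):
    """Mean squared error; penalizes large misses more heavily than MAD."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

actual   = [256, 340, 375, 110, 225, 525]
method_1 = [223, 289, 430, 134, 190, 550]
method_2 = [210, 320, 390, 112, 150, 490]

print(round(mad(actual, method_1), 2), round(mse(actual, method_1), 2))  # 37.17 1523.5
print(round(mad(actual, method_2), 2), round(mse(actual, method_2), 2))  # 32.17 1599.17
```

Note that MAD favors Method 2 while MSE favors Method 1, because Method 1's errors are more uniform while Method 2 has one large miss — which is the point of the question.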

2. The data in Table 2E2.1 shows demand for an automotive part stocked
at a supply depot last year:

Table 2E2.1: Monthly Demand for Automotive Part.


(a) Determine the one-step-ahead forecast for the demand for January
of the current year using the arithmetic mean, and comment on
the advisability of using this method.
(b) Determine the one-step-ahead forecast for the demand for January
of the current year using the last period value, and comment on
the advisability of using this method.
(c) Determine the one-step-ahead forecasts for the demand for
January of the current year using 3-, 6-, and 12-month moving
averages.
(d) Using a 4-month moving average, determine the one-step ahead
forecasts for July–December of last year.
(e) Using a 4-month moving average, determine the two-step-ahead
forecast for July–December of last year.
(f) Compute the MAD for the forecasts obtained in parts (a)–(e).
Which method gave the best results? Based on forecasting theory,
which method should have given the best results?
3. The Lilly Company produces a solar-powered electronic calculator that
has experienced the monthly sales history shown in Table 2E3.1 for the
first 4 months of the year in thousands of units:
(a) If the forecast for January was 25, determine the one-step-ahead
forecasts for February–May using exponential smoothing with a
smoothing constant of α = 0.15.
(b) Repeat part (a) for a value of α = 0.40. What difference in the
forecasts do you observe?

Table 2E3.1: Monthly Sales of Solar-Powered Electronic Calculators at the Lilly Company.

(c) Compute the MSE for the forecasts obtained in parts (a) and (b)
for February–April. Which value of α gave more accurate
forecasts, based on the MSE?
4. Elizabeth Children’s Outdoor Museum in Illinois has kept records on
the number of visitors since its opening in January of last year. For the
first 6 months of operation, these numbers are shown in Table 2E4.1.
(a) Draw a scatter diagram of the data. What model does this
suggest?
(b) Determine the least squares equation for this data.
(c) What are the forecasts obtained for July–December of last year
from the regression equation in part (b)?
(d) Comment on the results in part (c). Specifically, how confident
would you be about the accuracy of the forecasts that you
obtained?

Table 2E4.1: Attendance at Elizabeth Children’s Outdoor Museum.

5. The Illinois Department of Parks must project the total use of
Elizabeth Children's Outdoor Museum for this year.
(a) Determine the forecast for the total number of visitors this year
based on the regression equation in problem (4).
(b) Determine the forecast for the total number of visitors this year
using a 6-month moving average.
6. Sales of walking shorts at Maddie’s Department Store appear to
exhibit a seasonal pattern. The proprietor has kept careful records of
several of his popular items, including walking shorts. Table 2E6.1
shows monthly sales of the shorts during the last 2 years:
Assuming no trend in shorts sales over the 2 years:
(a) use the arithmetic mean to obtain the unadjusted forecasts for this
2-year period,
(b) obtain estimates for the monthly season factors,
(c) calculate the seasonally adjusted forecasts.

Table 2E6.1: Monthly Sales of Walking Shorts at Maddie’s Department Store.

7. There has been a steadily increasing number of smartphone shipments
worldwide between 2009 and 2014, as shown in Table 2E7.1.
(a) Construct a simple regression model to predict global shipments
for 2015–2018.
(b) Let α = β = 0.2, S0 = 1,300.4, and G0 = the slope estimate from
part (a). Predict sales for 2015 using Holt’s trend model.
(c) Actual shipments for 2015 were slightly lower than the forecasts
in parts (a) and (b). Further slowdowns in growth are expected as
the market matures. What are your recommendations for
forecasting future shipments?

Table 2E7.1: Global Smartphone Shipments from 2009 to 2014 (in million units).
Year Global Smartphone Shipments in Million Units
2009 173.5
2010 304.7
2011 494.5
2012 725.3
2013 1019.7
2014 1300.4
Source: IDC.com, December 2015:
http://www.statista.com/statistics/263441/global-smartphone-
shipments-forecast/
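Parts (a) and (b) of this exercise can be sketched as follows, coding the time periods as t = 1 for 2009 through t = 6 for 2014 (other codings shift the intercept but give the same forecasts):

```python
shipments = [173.5, 304.7, 494.5, 725.3, 1019.7, 1300.4]  # 2009-2014, Table 2E7.1
n = len(shipments)
t = list(range(1, n + 1))

# Least-squares slope and intercept (closed form; see Appendix A2.1).
t_bar = sum(t) / n
d_bar = sum(shipments) / n
Sxy = sum(ti * di for ti, di in zip(t, shipments)) - n * t_bar * d_bar
Sxx = sum(ti ** 2 for ti in t) - n * t_bar ** 2
b = Sxy / Sxx              # estimated annual growth in shipments
a = d_bar - b * t_bar

forecast_2015_reg = a + b * 7   # part (a): t = 7 corresponds to 2015

# Part (b): Holt's trend model with the given initial values S0 = 1300.4
# and G0 = the regression slope. With no 2015 observation yet, the
# one-step-ahead forecast is simply S0 + G0.
S0, G0 = 1300.4, b
forecast_2015_holt = S0 + G0
```

Both forecasts come out near 1.5 billion units; part (c) then asks how to temper them as market growth slows, e.g., by lowering the trend-smoothing constant or switching to a damped-trend model.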

8. The data in Table 2E8.1 show Nike’s quarterly cash dividend between
2000 and 2015. Predict the cash dividend for the four quarters of 2016
using a simple regression model.

Table 2E8.1: Quarterly Dividends for Nike Corporation Between 2000 and 2015.

Source: http://www.nasdaq.com/symbol/nke/dividend-history
9. Monthly sales volume during 2015 for a seasonal clothing item is
shown in Table 2E9.1. Use a regression model with a seasonal
adjustment to predict the monthly sales volumes for 2016.

Table 2E9.1: Monthly Sales


Volume During 2015.
Month Number of Units Sold
1 5,700
2 5,110
3 1,422
4 8,937
5 7,654
6 7,455
7 1,800
8 9,458
9 8,500
10 8,900
11 1,975
12 11,550
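One workable sketch for this exercise (an assumed variant of the method, since several exist): fit a linear trend to the 2015 data, estimate each month's seasonal factor as the ratio of its actual sales to the trend value, and then reseasonalize the extended trend for 2016:

```python
sales = [5700, 5110, 1422, 8937, 7654, 7455, 1800, 9458, 8500, 8900, 1975, 11550]
n = len(sales)
t = list(range(1, n + 1))

# Linear trend via the closed-form least-squares solution.
t_bar = sum(t) / n
d_bar = sum(sales) / n
b = (sum(ti * di for ti, di in zip(t, sales)) - n * t_bar * d_bar) / \
    (sum(ti ** 2 for ti in t) - n * t_bar ** 2)
a = d_bar - b * t_bar

# Seasonal factor for each month: actual sales over the trend value.
factors = [d / (a + b * ti) for ti, d in zip(t, sales)]

# 2016 forecast: extend the trend to t = 13..24, then reseasonalize.
forecast_2016 = [(a + b * ti) * factors[ti - 13] for ti in range(13, 25)]
```

With only one year of history, each factor rests on a single observation; a second year of data would allow averaging the factors month by month, as in the walking-shorts exercise above.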

Appendix
A2.1: Derivation of Regression Coefficients for the
Simple Linear Regression Model
We wish to find the coefficients a and b in the simple linear regression
model Y = a + bX which will minimize the sum of the squared error terms:

$$f(a, b) = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \quad (A2.1.1)$$

Substituting the fitted value a + bx_i for each predicted value, this can be written as

$$f(a, b) = \sum_{i=1}^{n} (y_i - a - bx_i)^2 \quad (A2.1.2)$$

So f(a, b) is the sum of the squares of the distances from the regression line
to the actual data points, or the quantity we wish to minimize.
To minimize a function of two variables, a and b in this case, we must
calculate the partial derivative with respect to each of the variables and set
each partial derivative equal to zero:

$$\frac{\partial f}{\partial a} = -2\sum_{i=1}^{n}(y_i - a - bx_i) = 0, \qquad \frac{\partial f}{\partial b} = -2\sum_{i=1}^{n} x_i (y_i - a - bx_i) = 0$$
Rearranging these equations algebraically, we can solve for a and b in


terms of the data values xi and yi. To simplify the resulting equations, we
denote the computational expressions derived in terms of the xi’s and yi’s by
Sxy and Sxx and derive the set of Equations (A2.1.3):

$$b = \frac{S_{xy}}{S_{xx}} = \frac{\sum_{i=1}^{n} x_i y_i - n\bar{x}\bar{y}}{\sum_{i=1}^{n} x_i^2 - n\bar{x}^2}, \qquad a = \bar{y} - b\bar{x} \quad (A2.1.3)$$

If we evaluate the second partial derivatives, we find that

$$\frac{\partial^2 f}{\partial a^2} = 2n > 0, \qquad \frac{\partial^2 f}{\partial b^2} = 2\sum_{i=1}^{n} x_i^2 > 0,$$

and the determinant of the Hessian matrix is positive, so that these results
minimize the sum of the squared error terms.
In the special case where the independent variable X is time and the
dependent variable, Y, is demand, or D, the summations can be simplified.
That is,

$$\sum_{t=1}^{n} t = \frac{n(n+1)}{2}, \qquad \sum_{t=1}^{n} t^2 = \frac{n(n+1)(2n+1)}{6}, \qquad \bar{t} = \frac{n+1}{2}.$$

So the regression equations can be written as Equation (A2.1.4), which is
equivalent to Equation (2.11) in this chapter:

$$b = \frac{\sum_{t=1}^{n} t\,D_t - \frac{n(n+1)}{2}\,\bar{D}}{\frac{n(n+1)(2n+1)}{6} - \frac{n(n+1)^2}{4}}, \qquad a = \bar{D} - b\,\frac{n+1}{2} \quad (A2.1.4)$$
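The closed-form solution derived above translates directly into code; a minimal sketch:

```python
def fit_simple_regression(x, y):
    """Least-squares fit of y = a + b*x using the closed-form solution
    b = Sxy/Sxx, a = y_bar - b*x_bar derived in this appendix."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    Sxy = sum(xi * yi for xi, yi in zip(x, y)) - n * x_bar * y_bar
    Sxx = sum(xi ** 2 for xi in x) - n * x_bar ** 2
    b = Sxy / Sxx
    a = y_bar - b * x_bar
    return a, b

# A perfectly linear series recovers its own coefficients exactly.
a, b = fit_simple_regression([1, 2, 3, 4], [5, 7, 9, 11])   # y = 3 + 2x
```

Passing time periods 1, 2, …, n as x reproduces the special case of Equation (A2.1.4).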

Endnotes
1. Kevin Scarpati, “Top 10 Supply Chain Concerns of 2011,” Supply Chain Digital, November 9,
2011. Available at: http://www.supplychaindigital.com/global_logistics/top-10-supply-chain-
concerns-of-2011.
2. “10 Symptoms of Poor Supply Chain Performance,” ARC Insights 2002-26E, June 20, 2002.
Available at: http://www.idii.com/wp/arc_sc_perf.pdf.
3. “Stormy Models Foil Bets By Firms Based on Models,” Wall Street Journal, September 27, 2002.
4. “‘Lousy’ Sales Forecasts Helped Fuel the Telecom Mess,” Wall Street Journal, July 9, 2001.
5. “The Minds Behind the Meltdown,” Wall Street Journal, January 22, 2010.
6. “Obesity Study Looks Thin,” Wall Street Journal, August 15, 2008.
7. “A Look at the Globe, 45 Years Out,” Wall Street Journal, March 4, 2005.
8. “In Hurricane Forecasting, Science is Far From Exact,” Wall Street Journal, June 8, 2005.
9. “The Oracle of Oberhausen,” New York Times, July 12, 2010.
10. “Why A Journalist Scammed The Media Into Spreading Bad Chocolate Science,” npr.org, May
28, 2015.
11. “Music and Success,” New York Times, December 20, 2013.
12. Charles C. Holt, “Forecasting Seasonals and Trends by Exponentially Weighted Averages,”
Office of Naval Research Memorandum 52, 1957. Reprinted in Holt, Charles C., “Forecasting
Seasonals and Trends by Exponentially Weighted Averages,” International Journal of
Forecasting 20(1), January–March 2004: 5–10.
13. “Demand Management: Driving Business Value Beyond Forecasting: A Demand Management
Benchmark Study.” ©2004 Aberdeen Group, Inc.
14. Professor Thomas York of Rutgers Business School contributed to the development of this case
study.
Chapter 3

Sales and Operations Planning

PLANS ARE NOTHING; PLANNING IS EVERYTHING.

Dwight D. Eisenhower

3.1 Sales and Operations Planning in Practice


An effective supply chain must have a highly integrated demand and supply
planning process. In practice, this planning process is commonly known as
the Sales and Operations Planning (S&OP) Process. Oliver Wight
Americas, Inc., a leading supply chain management consulting firm, defines
the S&OP process as follows:
Sales & Operations Planning is a process led by senior management
that, on a monthly basis, evaluates revised, time-phased projections for
supply, demand, and the resulting financials. It’s a decision-making process
that ensures that tactical plans in all business functions are aligned and in
support of the business plan. The objective of S&OP is to reach consensus
on a single operating plan that allocates the critical resources of people,
capacity, materials, time, and money to most effectively meet the
marketplace in a profitable way.1
The outcome of the S&OP process offers a business executive
guidelines on how to use resources such as skilled labor, safety stock,
contracted manufacturers, finished goods inventories, facility locations, and
suppliers’ capacity and capability, and business options, such as
outsourcing, subcontracting, third parties, inventory control, and planned
shortages, to meet the expected market demand. This process endeavors to
balance the cost, customer service levels, opportunities, constraints, and
profit of both a business organization and its supply chain partners. A
highly integrated S&OP process consists of two major planning
components: demand planning and supply planning.

Demand planning
As we have seen in Chapter 2, demand planning is driven by sales and
revenue targets, and is a multi-step process used to create reliable forecasts
to guide business decisions. Primary factors that affect the quality of
demand planning include forecasting accuracy, responsiveness of the
business process, effectiveness of customer order management, inventory
management policies, and order cycle time.
Key steps in a demand planning process include analysis of historical
demand data, including sales, request for services, demand for resources,
etc., creating statistical forecasts, collaborating with customers, suppliers,
and other supply chain trading partners to jointly finalize the forecasts, and
sharing the final forecasts with key supply chain stakeholders, such as
contract manufacturers and suppliers.
Effective demand planning can help users to improve the accuracy of
revenue forecasts, align inventory levels with peaks and troughs in demand,
improve customer service levels, and enhance profitability for a given
channel or product. Effective demand planning is also critical to reducing
the risk of supply chain disruptions and to matching an organization's
staffing levels to its workload. Figure 3.1 shows
the nurses of a community hospital protesting an unacceptable nurse
(supply) to patient (demand) ratio.
As another example of a demand–supply mismatch, Figure 3.2 shows
US crude oil production and consumption between 1960 and 2015. During
periods of excess supply, prices decline, and during periods of insufficient
supply, prices rise.
Figure 3.1: Demand–Supply Mismatch at a Local Hospital.
Source: http://www.nationalnursesunited.org/news/entry/watsonville-community-hospital-nurses-
picket-in-a-demand-for-more-staff/

Figure 3.2: US Crude Oil Production and Consumption between 1960 and 2015.
Source: http://www.investing.com/analysis/oil-supply-demand-suggests-
pain-not-over-261940

Supply planning
Supply planning is primarily concerned with procurement, supply
capability, and supply capacity. Supply planning is driven by profit, and is
mainly affected by purchasing price, supply quality, supplier’s capacity and
performance, capability, responsiveness, and willingness to collaborate. The
goal of supply planning is to ensure an uninterrupted supply to meet the
demand in the most cost-effective manner.
In the era of the internet, globalization, and e-commerce, the market
has been changing and customers have become increasingly demanding,
expecting ever-higher levels of product and services to meet their individual
needs. Unlike the situation 20–30 years ago, a quality product alone is no
longer sufficient as a competitive advantage. Product quality now serves
only as a qualifying factor that makes a firm one option among many
roughly equal alternatives in the eyes of customers. Customers are evaluating
their options not just in products themselves, but also in the delivery of the
products and services that are packaged together. The new four “C’s”:
Consumer, Cost, Convenience, and Communications, are replacing the
traditional four “P’s” of marketing: Product, Price, Place, and Promotion.
The concept of the customer service index has emerged; it is defined by three measures:
• On-time delivery: percentage of orders delivered on time;
• Order completeness: percentage of orders delivered complete; and
• Error and damage free: percentage of clean invoices (without
adjustments or credit notes).
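These three measures can be computed directly from order records. The sketch below is a minimal illustration in plain Python; the order data are made up for the example.

```python
# Illustrative calculation of the three customer service index measures.
# Each hypothetical record: (delivered_on_time, delivered_complete, invoice_clean)
orders = [
    (True, True, True),
    (True, False, True),
    (False, True, True),
    (True, True, False),
]
n = len(orders)
on_time = sum(o[0] for o in orders) / n      # on-time delivery rate
complete = sum(o[1] for o in orders) / n     # order completeness rate
error_free = sum(o[2] for o in orders) / n   # clean-invoice rate
print(f"On-time delivery:   {on_time:.0%}")
print(f"Order completeness: {complete:.0%}")
print(f"Error/damage free:  {error_free:.0%}")
```

With these four sample orders, each measure works out to 75%.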
Michael Dell has commented “If I’ve got 11 days of inventory and my
competitor has 80 and Intel comes out with a new 450-megahertz chip, that
means I’m going to get to market 69 days sooner. In the computer industry,
inventory can be a pretty massive risk because if the cost of materials is
going down 50% a year and you have two or three months of inventory vs.
eleven days, you’ve got a big cost disadvantage. And you’re vulnerable to
product transitions, when you can get stuck with obsolete inventory.”2
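The arithmetic behind the quote can be checked directly. The sketch below is an assumption-laden illustration: it treats the 50% annual price decline as continuous, so older inventory carries a purchase-price premium relative to inventory bought more recently.

```python
# With component prices falling 50% per year, stock bought earlier was
# bought at a higher price. Compare holdings of 80 days vs. 11 days.
annual_decline = 0.50

def purchase_premium(days_held):
    # price paid `days_held` days ago, relative to today's price
    return (1 - annual_decline) ** (-days_held / 365)

dell, competitor = 11, 80
print(competitor - dell)                      # 69-day time-to-market gap
gap = purchase_premium(competitor) / purchase_premium(dell) - 1
print(f"material cost disadvantage ~ {gap:.1%}")
```

Under this continuous-decline assumption, the competitor's materials cost roughly 14% more than Dell's, consistent with the "big cost disadvantage" in the quote.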
To continuously improve a company’s customer service index value,
supply chain integration and coordinated demand and supply planning
are key steps. In this chapter, we focus on the management strategies and
decision support tools that have been widely applied in the integrated
demand and supply planning process.

Demand–supply planning techniques


Whenever there is variability of demand for either products or services, a
pressing issue that each company along the supply chain must face is
demand–supply planning — an important cost management component of
the S&OP process.
Figure 3.3 shows variations in demand over time. For any given
capacity level, we may have:
• loss in efficiency when demand is low,
• loss in sales/unsatisfied demand when demand exceeds the capacity
level.
To catch up on the portion of demand that exceeds the capacity level
and to minimize the waste of resources when the demand is low, we often
face multiple business options:
• build inventories during periods when the utilization is low,
• outsource/subcontract part of the demand when internal capacity is not
sufficient,
• lease capacity from a supply chain partner,
• consider a backlog policy (which may result in either lost sales or
higher transaction costs),
• use overtime if labor is the limiting factor (cost implications),
• decline the portion of customer orders that we cannot handle (lost
sales).

Figure 3.3: Variations in Demand over Time.
Regardless of which option or combination of options is chosen to
handle the capacity planning issues, we need guidelines, planning
strategies, and planning techniques.
A commonly used objective for demand–supply planning is to fulfill
the sales forecast and maximize the profit. Other objectives could include,
for example, minimizing the unsatisfied demand or meeting a specified fill
rate (percentage of orders filled to completion) subject to some practical
resource limits. Sometimes, we may have multiple objectives.
There are three major issues associated with demand–supply planning.
Planning horizon. A decision on the planning horizon must be made. The
length of a planning horizon is typically 3–18 months, for which we make
demand–supply plans based upon the corresponding demand forecasts. The
specification of the planning horizon length is important in balancing high
overhead costs (associated with short planning horizons) with inaccuracies
in demand forecast (the further into the future we try to predict, the less
accuracy we have).
Information gathering. For most demand–supply planning cases, the
following information is needed:
• Demand or sales forecast for the planning horizon.
• Cost data (labor, overtime, and regular time pay rates, inventory, stock-
out, subcontracting, hiring and firing, etc.).
• Constraints that impose limitations on our decisions (overtime, union
contracts, maximum capacity, cycle time to manufacture and ship,
contracts with third parties, contracts with suppliers, service levels
promised to customers, etc.).
Planning techniques. Poor demand–supply planning may result in either
excessive inventory build-up or severe shortages. In business practices,
there are many planning tools used. The four most common ones are
summarized below.
Chase strategy: This approach adjusts the capacity to chase the
demand. In general, this is an expensive strategy, and is usually suitable for
service industries, where capacity cannot be inventoried, or for those
manufacturers whose products either face a high level of risk of technology
obsolescence or have a short shelf-life.
Time flexibility strategy: This approach is used for managing the
demand in heavily labor-dependent processes, such as manufacturing
or services industries, where employees may be assigned to work overtime
during a busy season, or only 4 days per week when the demand is
relatively slow.
Level strategy: This approach builds a level supply plan that meets the
demand in the aggregate as opposed to following peaks and valleys, by
using inventory as a lever.
Mathematical programming-based optimization techniques: These
techniques are powerful optimization tools to support the solution process
of many business decision-making problems, such as transportation and
shipping network capacity planning, airline and/or call center staffing, nurse
scheduling, and facility locations. These techniques can also be applied to
mathematically optimize supply allocations to support S&OP, which we
will discuss in more detail in Sections 3.2 and 3.3.
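To see how these strategies trade off, the toy comparison below contrasts a chase plan (capacity tracks demand, paying for every unit of capacity change) with a level plan (constant capacity at average demand, with inventory absorbing the swings). All demand figures and cost rates here are invented for illustration.

```python
# Toy chase-vs-level comparison on a hypothetical 6-month demand series.
demand = [80, 120, 150, 90, 60, 100]
cap_cost, hold_cost, adjust_cost = 8, 2, 5    # assumed cost rates

# Chase: capacity tracks demand; pay for every unit of capacity change
# (a proxy for hiring/firing or ramp-up costs).
chase = sum(d * cap_cost for d in demand)
chase += sum(abs(demand[i] - demand[i - 1]) * adjust_cost
             for i in range(1, len(demand)))

# Level: constant capacity at average demand; inventory absorbs peaks.
level_cap = sum(demand) / len(demand)         # 100 units/month
inv, level = 0, 0
for d in demand:
    inv = max(0, inv + level_cap - d)         # unmet demand is lost (toy model)
    level += level_cap * cap_cost + inv * hold_cost
print(f"chase: {chase}, level: {level}")
```

With these assumed rates the level plan comes out cheaper, but note it loses 50 units of demand in the peak month; the "right" strategy depends entirely on the relative cost of capacity changes, holding inventory, and lost sales.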
For many years, these planning techniques have been discussed in
operations management textbooks under the topic heading aggregate
planning, with a focus on improving the operations efficiency within a
single organization. In this chapter, the discussion will be much broader and
focus more on demand–supply planning in a supply chain environment
where multiple trading partners involved in the business process interact.
Effective decision support tools for demand and supply planning are
important for ensuring a continuous supply of the product and/or services to
the marketplace, and for improving the profit margin of supply chains,
especially for those supply chains involving global operations (due to the
length of shipping time from suppliers to the US markets). As an example,
tens of thousands of employees at more than 30 companies on three
continents work together to make an iPhone possible.3 Any demand–supply
mismatch at any of these companies could lead to supply chain disruptions
and unhappy customers.

3.2 Fundamentals of Linear Programming Modeling


The most commonly applied mathematical optimization technique in many
business applications, and in demand–supply planning in particular, is
linear programming (LP). LP is a technique for solving constrained
optimization problems. Each LP model, which formally defines a business
decision-making problem in mathematical terms, consists of a linear
objective function and a set of linear inequality or equality constraints
which specify the relationships among the decision variables and
parameters. We begin with several easy-to-understand examples of LP
modeling in this section; we will extend our discussion on the applications
of LP modeling for solving the demand–supply planning problems
encountered in several case studies in Section 3.6.

The production planning problem


Medical device example. Consider the weekly production planning
problem encountered by the Homecare Division of a California medical
equipment company. The division currently produces five different medical
devices: P1, P2, P3, P4, and P5, to meet market demand and contracts from
hospitals. Assume that all the devices produced during the fourth quarter
can be sold; the revenue is thus mainly limited by the manufacturing
capacity (i.e., the total number of hours available for assembly, testing, and
adjustment by skilled technicians). In addition, because of customer orders
already received, a minimum of 25 units of P1 and 33 units of P5 must be
produced per week during the planning horizon (i.e., the fourth quarter of
the year under study). The division needs to develop a production plan to
guide the resource allocation and the contract preparation with its major
suppliers in order to maximize the quarter’s profit.

Table 3.1: Data and Parameters used in the Medical Device Example.

Table 3.1 shows the parameters and data used in this example. There
are three processing/testing centers involved in the operations. The
production of each unit of the five different products requires resources (in
terms of hours available) from each of the three centers. For example,
producing one unit of product P1 requires 28 hours of work from Center A,
16 hours of work from Center B, and no time at Center C.
Let’s first define our decision variables:
Xi = quantity of product Pi to be produced per week, i = 1, 2, … , 5.
Then our business objective becomes

Because the solution cannot violate the limits of the system, expressed in
terms of the number of available hours per week at each center and the
minimum requirements for P1 and P5, we face a constrained optimization
problem. The optimal values of the decision variables must satisfy
constraints on Center A’s capacity, Center B’s capacity, Center C’s capacity,
and the demand for products P1 and P5. In addition, all quantities of the five
products must be non-negative.
The LP model for this medical device problem is summarized in Figure
3.4.

Figure 3.4: LP Model for the Medical Device Problem.

The formulation above is an example of the LP modeling process. As
we have seen, the first step is to define the decision variables. The second
step is to use the decision variables to formulate an objective function that
represents cost, profit, or some other quantity to be maximized or
minimized. The value of the objective function measures the quality of the
decisions. The third step is to formulate the constraints of the problem,
which together restrict the values of decision variables. A solution to an LP
problem is feasible if and only if it satisfies all the constraints in the model.
The feasible solution which yields the optimal value of the objective
function is called the optimal solution to the LP problem.
In a linear program, the variables are assumed to be continuous and the
objective function and constraints must be linear expressions. An
expression is linear if it can be expressed in the form a1x1 + a2x2 + · · · +
akxk, where a1, a2, … , ak are constants. For example, 4x1 + 11x2 + · · · + 9xk
is a linear expression, but any expression containing nonlinear terms like
x^2 or e^x would not be. An LP model always has three sections: the objective
function, the constraints, and the non-negativity restrictions on the decision
variables.
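As a minimal illustration of these three sections, the sketch below solves a small two-variable LP with SciPy's linprog (assuming SciPy is available). All coefficients are made up, and since linprog minimizes by convention, the profit coefficients are negated to maximize.

```python
# A tiny two-variable LP: maximize 3x1 + 5x2 subject to
#   x1 + 2x2 <= 14,  3x1 - x2 >= 0,  x1 - x2 <= 2,  x1, x2 >= 0.
from scipy.optimize import linprog

c = [-3, -5]                       # linprog minimizes, so negate to maximize
A_ub = [[1, 2], [-3, 1], [1, -1]]  # the ">=" row is negated into a "<=" row
b_ub = [14, 0, 2]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
              method="highs")
print(res.x, -res.fun)             # optimal plan and maximized objective
```

The three sections of the model map directly onto the call: the objective is `c`, the constraints are `A_ub`/`b_ub`, and the non-negativity restrictions are the `bounds` argument.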
Hospital staffing example. Staffing can easily consume up to 60% of the
total operating budget of a hospital, and effective staffing is therefore
critical to cost reduction and service improvement of healthcare systems.
Consider a local hospital in Trenton, NJ that hires both certified nursing
assistants (CNAs) and registered nurses (RNs) for its inpatient services. The
anticipated demand for CNAs and RNs for next year is given in Table 3.2.
One major operational issue in such day-to-day hospital staffing
processes is how many CNAs and RNs per shift the hospital should hire so
that total wages paid are minimized while a desired patient service level is
satisfied.

Table 3.2: Data for Hospital Staffing Problem.

To formulate an LP model for this decision problem, we first define the
decision variables:

We can then formulate an LP model for this staffing problem, as shown
in Figure 3.5.
For this problem, and for our earlier medical device example, we may
want all the decision variables to take on only integer values at the optimal
solution. That is, in this problem, we are only able to schedule whole
number values for the number of CNAs and RNs; in the earlier problem, we
only realize a profit on whole numbers of the five medical devices
produced. In the next section, we consider how to model such integer
requirements.
3.3 Modeling with Integer and Binary Variables
In many applications, it makes more sense if the decision variables, or at
least some of the decision variables, take on an integer value in the optimal
solution; for example, the total number of trucks to be leased, the number of
new sales offices to open in Texas, or the number of Asian suppliers to be
contracted, would all logically be integer-valued. When this is the case, we
need to introduce this restriction in the formulation of the model, usually by
indicating the integer requirements with the non-negativity constraints. For
example, in the medical device problem, we would state X1, X2, X3, X4, X5
≥ 0 and integer.

Figure 3.5: LP Model for Hospital Staffing Problem.

Such integer variables are decision variables that must take on only an
integer value (0, 1, 2, …) in the final solution. If all variables are integers,
then the resulting model is called a pure integer programming model or IP
model. On the other hand, if the divisibility assumption holds for a subset of
variables, in which case some of the variables may take on continuous
values and others must take on integer values, then it is called a mixed
integer programming model or an MIP model. The power and usefulness of
these models to represent real-world situations are enormous. However,
while LP problems with thousands of variables can be easily solved,
mathematical programming models involving integer variables are much
more difficult to solve computationally, unless they are specially structured.
In general, only relatively small integer programming problems can be
solved to optimality.
A special class of integer variables is binary variables, which may only
take the values 0 or 1; these are often used to formulate yes-or-no
business decisions. In such applications, a binary variable is assigned the
value 1 for choosing yes, and the value 0 for choosing no. Examples of such
yes/no decisions are “should we relocate the facility from New Jersey to
Indiana?” and “should we cancel the 3rd quarter contract with this foreign
contracted manufacturer?” where the decision variable would be defined, in
each case, as xi = 1 if the decision i is yes, 0 otherwise.

Using binary variables to define interrelationships among the variables and constraints
Choosing k out of n options. In many applications, we are often faced with
the need to choose k out of n options. For example, as more and more
retired people in the US move from the northeast to southern and
southwestern states4 such as Florida and Texas, a major retailer is planning
to open three new stores to meet the needs of the growing market in Florida.
Suppose that a marketing research firm has recommended seven candidate
locations for the new stores in Florida; then our business decision becomes
choosing three out of seven. In this case, let’s define xi as our binary integer
variable:

xi = 1 if candidate location i is selected for a new store, and 0 otherwise, i = 1, … , 7.

Then we can include the following constraint in the model:

x1 + x2 + x3 + x4 + x5 + x6 + x7 = 3.
Alternatively, if at least three new stores must be considered for the
Florida retail chain, this constraint would become:

x1 + x2 + x3 + x4 + x5 + x6 + x7 ≥ 3.
Furthermore, assume that the marketing research firm has identified
four candidate locations in Texas, and suppose we wish to open no more
than five new stores in total in both Florida and Texas. Then we can define
another binary integer variable, yi:

yi = 1 if candidate location i in Texas is selected for a new store, and 0 otherwise, i = 1, … , 4.

We can then introduce the following constraint:

x1 + x2 + · · · + x7 + y1 + y2 + y3 + y4 ≤ 5.
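A choose-k-of-n constraint of this kind can be sketched with SciPy's milp (assuming SciPy 1.9 or later); the seven location "scores" below are hypothetical, standing in for whatever attractiveness measure the marketing research firm might supply.

```python
# Choose exactly 3 of 7 candidate locations to maximize total score.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

scores = np.array([4.0, 7.0, 5.5, 9.0, 3.0, 8.0, 6.0])     # hypothetical
pick_three = LinearConstraint(np.ones((1, 7)), lb=3, ub=3)  # x1+...+x7 = 3
res = milp(c=-scores,                     # milp minimizes, so negate scores
           constraints=pick_three,
           integrality=np.ones(7),        # integer variables, restricted
           bounds=Bounds(0, 1))           # to 0..1, i.e., binary
chosen = np.flatnonzero(res.x > 0.5)
print(sorted(chosen.tolist()), -res.fun)  # selected sites and total score
```

Changing `lb=3, ub=3` to `lb=3, ub=np.inf` gives the "at least three" variant of the constraint.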
If-then decisions. Binary variables can also be used to define if-then
decisions. This refers to the situation in which we have contingent
decisions, such as: if decision A holds, then decision B must hold. For
example, if we relocate the Animal Health Division of a pharmaceutical
company to Allentown, PA, then its facility in Somerset, NJ, must be
closed. To formulate this, let the binary variable x1 refer to the decision of
relocation to Allentown (x1 = 1 if we relocate to Allentown; 0 if we do not),
and x2 refers to the decision of closing the Somerset facility (x2 = 1 if we
close Somerset; 0 if we do not). Then we can add the following constraint:

x1 ≤ x2

to ensure this if-then relation between the two decisions.

Using binary variables to define conditional constraints


Another important application of binary variables in modeling is to define
conditional constraints. Sometimes, constraints must become effective only
under certain conditions, and must become ineffective if those conditions
do not hold. For example, suppose that if a 3-year software development
project for the defense industry is contracted (e.g., the binary variable
associated with that project x1 = 1), then we need to retain at least 100 Java
programmers in the Software Architecture Department: that is, 5x2 + 7x3 ≥
100, where x2 and x3 refer to the number of Java modules to be designed for
the respective project, and the constant parameters 5 and 7 estimate the
number of programmers needed for each respective module. However, if
the contract does not come through, then the constraint 5x2 + 7x3 ≥ 100 is
not required as a restriction on the values of the decision variables. That is,
we need to model the following requirement into our model: If x1 = 1, then
5x2+7x3 ≥ 100 must hold; otherwise (if x1 = 0), there is no need for this
additional constraint. To ensure this requirement, we can use the following
constraint:

5x2 + 7x3 ≥ 100x1.
To see how this would work, note that x1 is defined as a binary variable (x1
= 1 if the project is contracted and x1 = 0 otherwise). During the solution
process (keep in mind that the computations will be done on your computer,
not manually), the computer will evaluate all possible values, either 0 or 1,
to be assigned to variable x1. Whenever x1 is assigned the value 1 (so that
the project is contracted), the effective constraint becomes 5x2 + 7x3 ≥ 100.
On the other hand, whenever the value 0 is assigned to variable x1, the
constraint becomes 5x2 + 7x3 ≥ 0 which is always satisfied because of the
non-negativity assumption of LP variables.
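The switching behavior of this conditional constraint can be checked without a solver; the three test points below are illustrative.

```python
# The single inequality 5*x2 + 7*x3 >= 100*x1 encodes the conditional.
def feasible(x1, x2, x3):
    return 5 * x2 + 7 * x3 >= 100 * x1

print(feasible(1, 20, 0))   # contracted; 100 programmers covered -> True
print(feasible(1, 5, 5))    # contracted; only 60 programmers     -> False
print(feasible(0, 0, 0))    # not contracted; constraint vacuous  -> True
```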
Similarly, we can handle situations involving multiple conditional
constraints, such as

by rewriting them as

Big M method. In the example above, we have a “≥” type of inequality. If
the constraint inequality is of a “≤” type, then the modeling techniques
discussed above are no longer sufficient and we must use the Big M method,
where M represents a very large positive number. Let’s consider the
following example. We are given two conditional constraints and a binary
variable x1 so that

Since M is a very large positive number, the following approach will allow
us to achieve the modeling purpose.

Note that when x1 = 1 (so that the project is contracted), these two
constraints become

That is, when x1 = 1, the second constraint becomes redundant (since M →
∞, it is always satisfied for any non-negative values assigned to x2 and x3)
and the first constraint becomes effective. On the other hand, when x1 = 0,
these two constraints become

which means that the first constraint now becomes redundant (since M →
∞), while the second constraint becomes effective. This is exactly what we
wanted to model.
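The Big M switch can likewise be checked numerically. The two toy constraints below (x2 ≤ 40 as constraint A, x2 ≤ 90 as constraint B) are assumptions for the demonstration, not taken from the model above.

```python
# With binary x1: constraint A is active when x1 = 1, B when x1 = 0.
M = 10 ** 6   # a number far larger than any feasible x2

def A_holds(x1, x2):
    return x2 <= 40 + M * (1 - x1)   # toy constraint A: binds when x1 = 1

def B_holds(x1, x2):
    return x2 <= 90 + M * x1         # toy constraint B: binds when x1 = 0

for x1 in (0, 1):
    print(x1, A_holds(x1, 60), B_holds(x1, 60))
```

At the test point x2 = 60, constraint A rejects the point only when x1 = 1, and constraint B never binds there; exactly one of the two constraints is effective for each value of x1.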
Returning to the medical device production planning problem at the
beginning of this section, assume that the management team has decided
not to produce product P2 at all if either P3 or P4, or both, are produced for
the given planning period. To include this policy into the model, we can
define two binary variables Z3 and Z4, where Zi = 1 if product Pi is
produced and Zi = 0 otherwise, i = 3, 4.

Figure 3.6: LP Model for Medical Device Problem.

Note that only if both Z3 and Z4 = 0 will we produce product P2; in all other
cases, X2 must be 0. We can ensure this by adding the following constraints
to the model:

X3 ≤ MZ3,   X4 ≤ MZ4,   X2 ≤ M(1 − Z3),   X2 ≤ M(1 − Z4).
The complete model, including integer restrictions on the decision
variables, is shown in Figure 3.6.
Additional examples of the use of integer variables for modeling will
be discussed in the Case Studies in Section 3.6.

3.4 Using Microsoft Excel Solver for Demand–Supply Planning
Microsoft Excel Solver is a computer-based optimization tool for solving
LP and integer programming problems. Appendix 3.1 in Section 3.8
illustrates the steps for installing and/or accessing Solver on your computer.
In this section, we demonstrate the use of Microsoft Excel Solver for the
medical device production planning problem.

The medical device production planning problem


Recall our original LP problem in Figure 3.4, which we wish to solve using
Solver.
To do so, let’s follow the steps below:
Step 1. Enter the model parameters and formulas into a Microsoft
Excel spreadsheet, as shown in Figure 3.7.
Step 2. Activate Solver, as shown in Figure 3.8.

Figure 3.7: Microsoft Excel Spreadsheet for LP Problem Input.

Figure 3.8: Microsoft Excel Spreadsheet Showing Solver.


Figure 3.9: Solver Parameters Dialog Box Input.

Step 3. In the Solver Parameters dialog box, as shown in Figure 3.9,
specify the location of the objective function value in “Set Objective”
(called the “Target Cell” in older versions of Microsoft Excel), whether the
problem is a maximization or minimization in “Optimization To,” the
location of the decision variable values in “By Changing Variable Cells,”
and then enter the “Constraints.” Note that it is not necessary to explicitly
include the non-negativity constraints; instead, we can click on “Make
Unconstrained Variables Non-Negative” in the Solver Parameters dialog
box. To ensure that the problem is solved as a linear optimization problem,
for “Select a Solving Method” click on Simplex LP. (In older versions of
Microsoft Excel, this can be done by clicking the “Options” button in the
Solver dialog box and then clicking on “Assume Non-Negative” and
“Assume Linear Model.”) Then click on “Solve.”
Step 4. Review the optimal solution, shown in Figure 3.10.
Note that we can add integer requirements on the values of the decision
variables as additional constraints in the Solver dialog box, as shown in
Figure 3.11. However, because adding the integer requirement produces a
more constrained problem, the optimal objective function value is reduced
from $876,609 to $875,510. That is, a more constrained optimization
problem will always yield a result that is no better, and often worse, than
the less constrained problem.
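This LP-versus-integer effect can be reproduced on a small scale outside of Solver. The sketch below uses SciPy's milp with hypothetical two-product coefficients (the actual Table 3.1 data and the $876,609/$875,510 figures are not reproduced here): setting `integrality` to 0 solves the LP relaxation, setting it to 1 forces whole-unit production quantities.

```python
# Hypothetical two-product instance: LP relaxation vs. integer version.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

profit = np.array([120.0, 95.0])               # assumed unit profits
capacity = LinearConstraint([[7, 4], [3, 6]],  # assumed hours per unit
                            ub=[100, 85])      # assumed hours available

def solve(integer):
    res = milp(c=-profit, constraints=capacity,   # milp minimizes; negate
               integrality=np.full(2, 1 if integer else 0),
               bounds=Bounds(0, np.inf))
    return -res.fun

lp_opt, ip_opt = solve(False), solve(True)
print(lp_opt, ip_opt)
# the integer optimum can never exceed the LP relaxation's optimum
assert ip_opt <= lp_opt + 1e-9
```

With these assumed numbers the LP optimum is about 1974.2 while the integer optimum is 1935, mirroring the small drop observed when integer requirements are added in Solver.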
Figure 3.10: Optimal Solution to Medical Device Production Planning Problem.

An important concept in LP is sensitivity analysis, or the determination


of the effect of changes in parameter values on the optimal solution. In
Appendix 3.2, we review the use of the Microsoft Excel Solver Sensitivity
Report. The shadow price shows how much the objective function value at
the optimal solution may be further improved with each unit of additional
resources (that is, if the right-hand side constant of a binding constraint is
relaxed by one unit). In other words, the shadow price (in most business
applications) represents the maximum price that a business executive is
willing to pay to obtain an extra unit of a scarce, or bottleneck, resource.
For example, if Center A is already operating at its limit of 8,060 hours per
week, then the corresponding shadow price would be the maximum
expenditure that the plant manager would be willing to pay to obtain one
additional hour, based on the benefits he or she would get from utilizing this
one unit of additional resource.
Figure 3.11: Integer Decision Variables for Medical Device Problem.

3.5 Demand and Supply Planning Strategies


In today’s business world, product life cycles have become shorter and
shorter and the list of customization options in product design has become
longer and longer, while the supply chain has little room for inventory. This
challenge requires us to design a highly effective planning process, together
with supply chain strategies, in order to be responsive and agile
with respect to the changing conditions of a highly dynamic market.
Demand and supply planning (i.e., S&OP) is a business process that
serves this need. Utilizing the forecasted market demand for a company’s
products and/or services, this process incorporates the factors that impact
demand, the options that acquire supplies and resources, and the strategies
that make the supply chain trading partners motivated to work together to
improve the accuracy of forecasting and to create opportunities to smooth
out the variability. The major business objectives of this process often
include:
• improving customer service,
• increasing transparency and accountability,
• lowering required working capital and fixed costs.
This planning process is particularly important for service industries,
since service capacity cannot be inventoried. The focus here is to align
available resources, such as labor, service facilities, and subcontractors,
together with industry-specific business options and strategies. We discuss a
few such strategies below.

Making sales match supply: yield management


Yield management refers to adjusting the selling price to maximize the sales
profit and has been a common practice for many industries where products
are extremely perishable or where inventory is not an option, such as
passenger seats on an airline flight, hotel rooms, cruise capacity, highly
perishable products in the food industry, and highly seasonal products like
Christmas trees carried by Home Depot. The objective of yield management
is to make a profitable use of every unit of service capacity or sellable
product.
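A minimal yield-management sketch follows: choose a single price that maximizes revenue for a fixed, perishable capacity. The linear demand curve and every number in it are assumptions for illustration only.

```python
# Toy yield management: price a fixed block of perishable capacity.
capacity = 180                     # seats/rooms available (hypothetical)

def demand(price):                 # assumed linear demand curve
    return max(0, 500 - 2 * price)

best = max(range(50, 251),         # candidate prices, $50-$250
           key=lambda p: p * min(capacity, demand(p)))
sold = min(capacity, demand(best))
print(best, sold, best * sold)     # chosen price, units sold, revenue
```

In this toy model the revenue-maximizing price is exactly the one at which demand equals capacity, so every unit of the perishable capacity is sold profitably; real yield-management systems refine this idea with multiple fare classes and updating over time.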

Making supply match sales: tiered workforce and offloading


Many management strategies in this regard are extended from the idea of
the chase strategy, discussed earlier in Section 3.1. One is called the tiered
workforce, by which temporary full-time or part-time workers are
hired to meet peak season demand; this approach is commonly used at call
centers, department stores, hospitals, and airlines. Another is called
offloading, in which part of the work is shifted to customers or supply chain
partners, such as assembly and storage handling; this is the practice of firms
like IKEA, Costco, BJ’s Wholesale Club, and Sam’s Club.
Figure 3.12 shows a typical S&OP process and the common tools used
to support the decisions in this process.
Figure 3.12: The S&OP Process and Decision Support Tools.

3.6 Case Studies


3.6.1 EnergyBoat, Inc.
During the past two decades, there has been increasing demand among
entry-level millennial boaters for lightweight and high-speed personal
powerboats. This demand has helped EnergyBoat, Inc., a North Florida-
based manufacturer which designs, assembles, and distributes various
personal powerboats, to achieve steadily increasing sales since 2009, and
allowed the company to increase its employment level to 200 workers in
2016. Since the demand for personal powerboats is highly seasonal, the
company utilizes several sources for its supply planning:
• Hiring temporary workers. The boat manufacturing process is a
highly labor-intensive process. Extensive labor hours are spent on raw
materials quality assurances, assembling, welding, painting, packaging,
etc. Given North Florida’s strong labor market and the company’s
willingness to assist the state’s economic/workforce development,
hiring additional temporary workers during peak seasons
has always been an option considered by top management. While the
minimum wage in Florida is only $8.05/hour, the temporary workers
hired by EnergyBoat are usually skilled laborers, and require $30/hour
or $4,800/month. Both regular and temporary workers work 160 hours
per month at EnergyBoat. In addition, the union contract requires the
company to provide to each newly-hired temporary worker a $5,000
health benefit package for the year, regardless of the number of
consecutive months on his/her contract.
• Using contract manufacturers. There are two family-based contractors
in the region, Jenson Brothers and Speedy Sailor. Outsourcing to these
third-party contractors offers EnergyBoat supplemental capacity when
needed. However, the unit prices charged by these local contractors are
relatively high, as both of them usually utilize employee overtime to
fulfill orders from EnergyBoat. For this reason, there has been a dispute
among the management team members at EnergyBoat. Some feel
outsourcing is expensive and should be considered only as a last resort,
while some believe this option is viable given the high fixed cost (health
benefits) for hiring temporary workers.
• Building up inventory during slow months. For years, EnergyBoat
has been using its inventory as a lever to handle peak season demand.
However, its warehouse has a limited capacity of 1,000 boats (e.g., by
the end of each month, at most 1,000 boats can be stored in the
warehouse and carried over to the next month), and it costs EnergyBoat
on average $25/boat for each unit of ending inventory in a given month.
Additional warehousing capacity can be leased from a local car dealer at
$35/boat charged against the ending inventory of each month.
Given this labor-intensive manufacturing process, the production
capacity of EnergyBoat is mainly controlled by its workforce level. While
the productivity of individual employees varies from worker to worker,
management feels comfortable assuming, for high-level planning purposes,
that the average productivity of each worker, whether regular or temporary
employees, is about 10 boats per worker per month. For example, for a
particular month with a workforce of 250 workers (i.e., the 200 regular
employees plus 50 temporary workers for that month), a total of 10×250 =
2,500 boats can be produced. The salary of a regular employee is, on
average, $7,000/month, with benefits included. In addition to labor costs,
EnergyBoat pays about $130 for the raw materials and utilities cost to
produce each boat.

Table 3.3: Monthly Order Fulfillment Requirements between May and December 2016.

To evaluate these options through aggregate planning, EnergyBoat’s
Vice President for Operations convened an S&OP session with the Sales and
Finance departments. The first objective was to validate the accuracy of the
customer order quantities and the month of delivery for the total of 23,900
boats required between May and December, which are summarized in Table
3.3.
Additionally, the two local contractors used to supplement internal
manufacturing were contacted to confirm their available capacity for the
given planning horizon. Jenson Brothers has limited spare capacity. It can
supplement at most 100 boats per month, and charges $1,000/boat. On the
other hand, Speedy Sailor has a much larger supplemental capacity for
EnergyBoat, but charges $1,050/boat. EnergyBoat’s unit selling price to its
business customers is $1,200/boat. Table 3.4 summarizes the necessary data
for evaluating the three options.
EnergyBoat has a starting inventory in May 2016 of 1,200 units, and
prefers to end December 2016 with at least 1,800 units to fulfill an
anticipated order from South America in January 2017. Because of the
agreement of EnergyBoat with its major business customers, all the orders
summarized in Table 3.3 must be fulfilled on time (i.e., no shortage or
backlog is permitted for any month in the planning horizon), which yields a
total revenue of $28,680,000 (= $1,200 × 23,900) between May and
December 2016. Since the revenue for this given planning horizon is fixed,
the optimal aggregate plan is the one that results in the minimum cost, or
the highest profit, between May and December.
Table 3.4: EnergyBoat Data.
Options Costs Constraints
Inventory/Warehousing
Internal inventory $25/unit ≤ 1,000 units/month
External inventory $35/unit Unlimited
Outsourcing/Subcontracting
Jenson Brothers $1,000/unit ≤ 100 units/month
Speedy Sailor $1,050/unit Unlimited
Internal Manufacturing
Own workforce $7,000/month 200 regular employees
Hiring temporary workers $4,800/month (plus $5,000 health benefit package) Unlimited
Raw material/utility cost $130/unit

Table 3.5 shows that the current supply plan, which relies on
EnergyBoat’s regular workforce together with the two local contractors’
supplemental capacities and inventories, leads to a total operating cost of
$22,292,000 between May and December, or a profit of $6,388,000 (=
$28,680,000 – $22,292,000). The management team is interested in the
potential of utilizing temporary workers to further reduce this cost and help
with the local employment picture.
The LP model for EnergyBoat is shown below.

Decision variables

TWt = Number of temporary workers available for month t, t = 1, … , 8;
NTWt = Number of newly-hired temporary workers at the beginning of
month t, t = 1, … , 8;
TTWt = Number of temporary workers terminated at the beginning of
month t, t = 1, … , 8;
IIt = Internal inventory level at the end of month t, t = 1, … , 8;
EIt = External inventory level at the end of month t, t = 1, … , 8;
JBt = Quantity produced by Jenson Brothers for month t, t = 1, … , 8;
SSt = Quantity produced by Speedy Sailor for month t, t = 1, … , 8.

Table 3.5: Current Supply Plan without Hiring Temporary Workers.


Objective function
Minimize:

Constraints
subject to:
Table 3.6: Optimal Integer Programming Solution to EnergyBoat, Inc. Case.

where Dt represents the order quantity to be fulfilled for month t, t = 1, 2,
… , 8, as specified in Table 3.3. Table 3.6 shows the optimal integer
programming solution, where RWt represents the number of regular
workers employed each month during the planning horizon.5
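The cost structure behind this model can be sanity-checked with a short script. The sketch below is not the book’s spreadsheet model; it simply prices one month of a candidate plan from the Table 3.4 figures, and it treats the $5,000 health benefit package as a one-time charge per newly-hired temporary worker, which is an assumption.

```python
# A minimal sketch (not the book's spreadsheet model): prices one month of a
# candidate EnergyBoat plan from the Table 3.4 figures. Treating the $5,000
# health benefit package as a one-time charge per new hire is an assumption.

REGULAR_WORKERS = 200
BOATS_PER_WORKER = 10  # average productivity, boats per worker per month

def monthly_cost(tw, ntw, ii, ei, jb, ss):
    """tw = temporary workers on hand, ntw = newly-hired temps,
    ii/ei = internal/external ending inventory (ii <= 1,000),
    jb/ss = units from Jenson Brothers (jb <= 100) / Speedy Sailor."""
    internal_production = BOATS_PER_WORKER * (REGULAR_WORKERS + tw)
    cost = 7_000 * REGULAR_WORKERS        # regular salaries
    cost += 4_800 * tw + 5_000 * ntw      # temp wages plus benefit package
    cost += 130 * internal_production     # raw materials and utilities
    cost += 1_000 * jb + 1_050 * ss       # subcontracted units
    cost += 25 * ii + 35 * ei             # inventory carrying
    return cost

# With no temps, no outsourcing, and no inventory: 7,000*200 + 130*2,000
print(monthly_cost(tw=0, ntw=0, ii=0, ei=0, jb=0, ss=0))  # 1660000
```

Summing this monthly cost over the eight months, subject to the flow-balance and capacity constraints, is what the LP minimizes.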
Questions for EnergyBoat, Inc. Case Discussion:
• How would you modify the model to take the following situations into
consideration?
The total number of temporary workers to be terminated between
May and October cannot exceed 30% of the total number of
temporary workers hired during the planning horizon.
All newly-hired employees may produce only five units during
their first month of employment.
For any single month, if the total quantity outsourced to Jenson
Brothers and Speedy Sailor exceeds 500, then no temporary
workers can be terminated during that month.
If there is any new hiring of temporary workers in month t, then
regardless of how many are hired, the quantity to be outsourced
should not exceed 1,000 units, and there is an additional
administrative overhead cost of $2,000 to be included into the
monthly cost.

Table 3.7: Air Champion Outsourcing Case Data.

3.6.2 Air Champion outsourcing


As the VP for global operations of a US-based sports shoe manufacturer,
you are currently working on the budget to meet demand for “Air
Champion” basketball shoes (men’s sizes) for its major retailers. Table 3.7
shows the demand, available capacity, and the estimated cost per unit for
each month in the planning horizon (January–June).
The shoes are currently produced in your South Korea facility with the
given monthly production capacities; cost estimates vary due to seasonal
labor, material, and shipping rates (including shipping cost to your main
distribution center (DC) in Hong Kong). Since Air Champion is a new
design to be introduced into the market next year, your company does not
expect to have any initial inventory for the planning horizon. To meet the
demand, several suggestions have been made by the management team:

Laura, the Logistics Manager: We can make use of the inventory at the
South Korea facility at a carrying cost of $5 per unit per month;
Steve, the Senior Director for Production: We can consider a planned
shortage (backlog) policy where any backlog of customer orders will be
delivered in the following month (that is, with a maximum 1-month delay
limit), at a penalty cost of $15 per unit;
Rob, the VP for Procurement and Global Sourcing: We can subcontract part
of this production to our new partner — an Indonesian manufacturing
facility, with seasonal costs shown in Table 3.8, to make use of their
capacity.

Table 3.8: Air Champion Costs in Indonesia.

Figure 3.13: Operating Plan for Air Champion With No Subcontracting.

Since another new design will be in production during the second
quarter of next year, your company prefers not to carry any ending
inventory or backlog for Air Champion by the end of June of next year, and
is interested in an effective operational strategy that will guide the capacity
planning (investment) and purchasing/contracting (spending) for the
product.
Suppose that you have recently allocated an $8.5 million operational
budget for Air Champion (production/shipping, inventory, subcontracting,
and backlog spending). However, Steve, your Senior Director for Production,
who has been involved in production planning at your company for over 30
years, believes this budget is insufficient. His analysis is shown in Figure
3.13.

Plan #1: Production manager’s operational plan


The production manager says: “Personally, I don’t like the idea of
outsourcing or subcontracting, which is too expensive. We should be able to
handle this demand with our own capacity and inventory, together with a
planned shortage policy. That is, the total demand over months 2, 3, 4, 5,
and 6 equals 226,000 units, and our total capacity over these months is
220,000 units. If we carry 6,000 units from January (month 1) to February
(month 2), we should be able to get rid of the backlog by the end of June
(month 6). This plan costs $8,569,000, and is almost $70,000 over the
budget allocation of $8.5 million.”
Steve’s analysis, which does not include subcontracting/outsourcing, is
displayed in Figure 3.13, and follows the guidelines described above:
• Do not outsource/subcontract (i.e., Ct = 0, t = 1, 2, … , 6);
• Since we have ample capacity in January, produce an additional 6,000
units and carry the inventory into February, so that by utilizing the
maximum internal production capacity from February to June we will
exactly satisfy the total demand over the remaining months in the
planning horizon. While doing so results in a backlog in some months,
the planning horizon will end without any shortage by next June 30.
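The manager’s feasibility argument rests on two aggregate figures quoted above, and can be verified with a back-of-envelope check. This is a sketch using only the totals in the text; the monthly detail is in Table 3.7.

```python
# Back-of-envelope check of Plan #1, using only the aggregates quoted in
# the text (monthly demand and capacity detail is in Table 3.7).
demand_m2_to_m6 = 226_000     # total demand over months 2-6
capacity_m2_to_m6 = 220_000   # total internal capacity over months 2-6

shortfall = demand_m2_to_m6 - capacity_m2_to_m6
print(shortfall)              # 6000 units must come from January production

# Carrying those 6,000 units out of January costs $5/unit for that month;
# whether this beats the backlog penalty ($15/unit) or the Indonesian
# subcontracting rates is exactly what the LP in Plan #2 settles.
print(shortfall * 5)          # 30000
```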
Is the budget of $8.5 million really not enough? Or can we do better?
The optimal LP solution may lead to a new operational plan that utilizes the
capacity of the Indonesian partner and reduces the total spending.

Plan #2: Optimal operation plan using LP


To develop the LP model for this operation planning problem, we first
introduce the following decision variables.

Decision Variables
Pt = quantity produced at South Korea facility in month t, t = 1, 2, … , 6;
Ct = quantity subcontracted in month t, t = 1, 2, … , 6;
It = ending inventory of month t, t = 1, 2, … , 6;
St = quantity backlogged in month t, t = 1, 2, … , 6.
The objective function and constraints for our LP model are as follows:

Objective Function

Constraints
subject to:
Demand–Supply (flow) Balance Constraints

Production Capacity Constraints

Non-Negativity Constraints
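The demand–supply balance constraints can be read as simple bookkeeping: this month’s net position (inventory minus backlog) equals last month’s net position plus production and subcontracting minus demand. The sketch below illustrates that bookkeeping; the monthly figures in it are hypothetical placeholders, not the Table 3.7 data.

```python
# A sketch of the flow-balance bookkeeping behind the constraints
# I_t - S_t = I_{t-1} - S_{t-1} + P_t + C_t - D_t.
# The monthly figures below are hypothetical; real data are in Tables 3.7-3.8.

def net_position(prev_inv, prev_backlog, produced, subcontracted, demand):
    """Return (ending inventory I_t, ending backlog S_t) for one month."""
    net = prev_inv - prev_backlog + produced + subcontracted - demand
    return (net, 0) if net >= 0 else (0, -net)

inv, back = 0, 0  # no initial inventory for Air Champion
inv, back = net_position(inv, back, produced=46_000, subcontracted=0, demand=40_000)
print(inv, back)  # 6000 0 -> a surplus carried forward
inv, back = net_position(inv, back, produced=44_000, subcontracted=0, demand=52_000)
print(inv, back)  # 0 2000 -> a backlog to be filled the next month
```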

Questions for Air Champion Outsourcing Case Discussion:


• What are the constraints associated with the LP model to obtain the
solution in Figure 3.13?
• What is the optimal solution to the LP model developed under Plan #2?
• What would you recommend as an operational plan for Air Champion?

3.6.3 PowerZoom Energy Bar


In our fast-paced, high intensity world, with information just an iPhone
stroke away, demand for energy and nutritional supplements has grown
exponentially. The opportunity to meet this insatiable demand has seen
many players enter and exit the market quickly in recent years. One such
successful player has been the PowerZoom Energy Bar Company. Based
upon a unique set of differentiated product features, such as providing a
broad vitamin spectrum and protein with a low carbohydrate content, the
PowerZoom Energy Bar has been quick to capture market share in a very
crowded field of products. This early success has led to significant
investment support for capitalization and advertising that has seen double
digit sales growth over the past 3 years. As the Company has matured
through this growth cycle, the early stage investors have been more active
in challenging the leadership of PowerZoom to not only continue its
growth, but also to begin to optimize its efficiencies and improve
profitability.
The two main areas of investment for PowerZoom have been its
advertising and promotional spend and its supply chain costs. In its early
years on the market, PowerZoom’s growth was driven by expanding its
distribution channels and using a network of third-party suppliers to meet
its distribution needs. As its customer base has stabilized, the recent growth
strategy has been to increase traffic and purchases within its existing
customer base through awareness driven by jointly sponsored promotional events
and cooperative advertising. In this more mature customer environment,
advanced demand planning techniques are more easily deployed and supply
planning efficiencies are more readily achievable. Many of these supply
efficiencies can be achieved through consolidating and leveraging
PowerZoom’s supplier base, along with improving service levels and
inventory levels with better customer integration. Through this growth
period, the majority of PowerZoom’s supply chain decisions have been
managed through its procurement function working directly with suppliers
on meeting market demands. A recommendation has been made to the
management of the PowerZoom Energy Bar Company by one of its larger
investors that it consider creating the position of Chief Supply Officer
(CSO). This position would be responsible for leading the efforts of the
senior management team and defining a process for aligning demand and
supply to fuel growth through capturing greater efficiencies in supply to
meet the business plans.
In considering this recommendation, the Senior Team at PowerZoom
investigates the value and business case associated with not only the new
role, but also the importance of using this change to introduce the practice
of S&OP. They are able to interview and discuss the critical success factors
with executives from noncompeting companies to support the decision-
making process for moving forward with the new role and associated
practices. They learn that one important factor is the need to ensure that
there is sponsorship from the highest levels of the organization, in the form
of a cross-functional steering committee led by the new CSO that would
include leaders from marketing, sales, supply chain and finance. The
PowerZoom executives also learn from the Senior Team’s investigation that
the S&OP process requires a common view of data and clarity on decision-
making relative to balancing opportunities and risks with constraints in
resources and investments. In some cases, sales opportunities must be
balanced against profit optimization; that is, where capacity or resources
are limited, sales growth opportunities will be weighed against their profit
contribution. Most importantly, the
PowerZoom Team learns that S&OP is a practice that requires a culture of
collaboration both internally between functions and externally with
customers and suppliers. In the case of PowerZoom, customer collaboration
should contribute to better targeted promotional investment for growth and
demand management. Supplier collaboration should contribute to better
utilization of capacity and sourcing decisions to improve service and costs
aligned with more predictable demand.
Once the PowerZoom Executive Team pulls all this benchmark
information together, they schedule a meeting with their Board to discuss
the merits of introducing the new CSO role to the organization and the
value contribution of introducing the practice of S&OP to the Company. In
preparation for that meeting, many internal discussions are held to better
understand the positives and negatives of moving forward with this plan.
Concerns are raised about the possible impact on growth from introducing a
process that could possibly “slow down” its ability to innovate, compete in
a fast-paced market, and take risks. Additionally, there are concerns that the
data infrastructure to support these advanced practices would require
significant investments and change that would be a distraction to the
PowerZoom organization. These possible costs are estimated and factored
into the Board presentation along with the benefits related to the efficiency
gains targeted.
Questions for PowerZoom Energy Bar Case Discussion:
• Should the PowerZoom Energy Bar Company pursue the creation of the
CSO role and introduce the S&OP practice into their Company? If yes,
highlight the possible benefits in both demand and supply planning. If
no, highlight other strategies for improving cost efficiencies.
• How would you approach the deployment of the S&OP process to the
Company if you were hired as the CSO?
• Understanding the concerns with competing in a fast-paced
environment, how would you engage customers into the S&OP process?
Can you leverage principles such as the four “P’s” or four “C’s”?
• Beyond supplier consolidation and leverage, what other opportunities
can be utilized to increase efficiency and responsiveness from suppliers
through an S&OP process?

3.7 Exercises
1. Consider the following LP problem:

(a) Find the optimal solution using Microsoft Excel.
(b) If the objective function is changed to 3x1 + 10x2, what will the
optimal solution be?
(c) Solve part (a) as an integer programming problem.
2. A contract manufacturer for infusion monitors in India has 8,000 man-
hours available for the assembly operations in October 2016 for its
three contracts. Management would like to allocate these hours to meet
the following contract conditions:
• contract A requires a minimum of 4,000 man-hours in October,
• the total number of hours allocated to contracts A and B should
be no more than 7,000,
• the total number of hours allocated to contract C should be no less
than that allocated to B.
The unit profit per hour varies among the contracts, as shown in Table
3E2.1.
Formulate and solve an LP model to maximize the profit for
October 2016.

Table 3E2.1: Unit Profit Per Hour for Infusion Monitors.
Contract Unit Profit
A $2/hour
B $2.8/hour
C $1.95/hour
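The exercise asks for an LP model solved in Excel. As a cross-check, the feasible region is small enough to scan on a 100-hour grid (the LP optimum happens to land on it); this brute-force sketch is a sanity check, not the intended solution method.

```python
# Brute-force sanity check for the contract-hours LP, on a 100-hour grid.
# Profit per hour: A $2.00, B $2.80, C $1.95 (work in cents to stay exact).
best = None
for a in range(4000, 8001, 100):               # contract A >= 4,000 hours
    for b in range(0, 7001 - a, 100):          # A + B <= 7,000
        for c in range(b, 8001 - a - b, 100):  # C >= B, A + B + C <= 8,000
            profit = 200 * a + 280 * b + 195 * c
            if best is None or profit > best[0]:
                best = (profit, a, b, c)

print(best)  # (1750000, 4000, 2000, 2000) -> $17,500 with A=4,000, B=2,000, C=2,000
```

The same vertex should come out of Solver when the LP is set up as the exercise requests.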

3. The Product Mix Problem. A food manufacturer must purchase
materials for the fourth quarter production of its new snack bar, called
Celebrity’s Favorite (CF). There are four primary ingredients in this
product: organic sunflower seeds, Brazilian walnuts, dark chocolate,
and organic wheat, which will be combined in the production under
following guidelines:
• at least 15% but no more than 30% per unit by weight (one unit =
1 kg) must be sunflower seeds,
• the ratio of Brazilian walnuts to dark chocolate must be 3:2,
• no more than 35% per unit can be organic wheat,
• at least 10% per unit should be chocolate.
Table 3E3.1: Unit Purchasing Cost for CF Ingredients.
Ingredients Unit Purchasing Cost
Dark chocolate (imported from Kenya) $20/kg
Brazilian walnuts $250/kg
Organic wheat $10/kg
Organic sunflower seeds $30/kg

Table 3E3.2: Purchasing Plan for CF Ingredients.


Ingredients Total Quantity to Purchase (kg)
Dark chocolate 4,000,000
Brazilian walnuts 6,000,000
Organic wheat 6,000,000
Sunflower seeds 4,000,000

The purchasing costs for these ingredients are shown in Table 3E3.1.
The snack sells at $35/kg (each box weighs 1 kg and contains 10
bars) at gourmet health food supply chains. The
company has received orders for a total of 20,000,000 boxes from its
global customers. The current purchasing plan is shown in Table
3E3.2.
Can you determine a better purchasing plan which would lower
the company’s procurement expenditures while adhering to the
manufacturing guidelines?
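Before setting up the LP, it helps to encode the blending guidelines as a feasibility checker and price candidate plans with it. The “cheaper” mix below is one feasible alternative found by pushing the inexpensive ingredients toward their limits; it is not claimed to be the optimum.

```python
# Checks a CF purchasing plan (in kg) against the blending guidelines and
# prices it, using the costs in Table 3E3.1.
PRICE = {"chocolate": 20, "walnuts": 250, "wheat": 10, "sunflower": 30}

def feasible(plan):
    total = sum(plan.values())
    return (0.15 * total <= plan["sunflower"] <= 0.30 * total  # 15-30% seeds
            and plan["walnuts"] * 2 == plan["chocolate"] * 3   # 3:2 ratio
            and plan["wheat"] <= 0.35 * total                  # <= 35% wheat
            and plan["chocolate"] >= 0.10 * total)             # >= 10% choc

def cost(plan):
    return sum(PRICE[k] * qty for k, qty in plan.items())

current = {"chocolate": 4_000_000, "walnuts": 6_000_000,
           "wheat": 6_000_000, "sunflower": 4_000_000}
print(feasible(current), cost(current))   # True 1760000000

# A cheaper feasible mix (not claimed optimal): load up on wheat and
# sunflower seeds and shrink the expensive walnut/chocolate share.
cheaper = {"chocolate": 2_880_000, "walnuts": 4_320_000,
           "wheat": 6_900_000, "sunflower": 5_900_000}
print(feasible(cheaper), cost(cheaper))   # True 1383600000
```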
4. The Make-or-Buy Problem. The Adam Auto Parts company (AAP)
manufactures two switches used in the audio systems of two models of
automobiles. For the next month, AAP has orders for 300 units of
switch A and 450 units of switch B. Although AAP purchases all the
components used in both switches, the plastic cases for both are
manufactured at an AAP plant in Brooklyn, New York. Each switch A
case requires 12 minutes of assembly time and 18 minutes of finishing
time. Each switch B case requires 9 minutes of assembly time and 24
minutes of finishing time. For next month, the Brooklyn plant has
1,800 minutes of assembly time available and 3,240 minutes of
finishing time available. The manufacturing cost is 30 cents per case
for switch A and 18 cents per case for switch B. When demand
exceeds AAP’s available time resources in Brooklyn, AAP purchases
cases for one or both switches from an outside vendor. The purchase
cost is 42 cents for each switch A case and 27 cents for each switch B
case. Management wants to develop a minimum cost plan that will
determine how many cases for each switch type should be produced at
the Brooklyn plant and how many cases of each model should be
purchased from the outside supplier.
(a) Formulate and solve a linear program for this problem.
(b) Suppose that the manufacturing cost increases to 34 cents per
case for switch A. What is the new optimal solution?
(c) Suppose that the manufacturing cost increases to 34 cents per
case for switch A and the manufacturing cost for switch B
decreases to 15 cents per unit. Would the optimal solution
change? If so, what is the new optimal solution?
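The make-or-buy problem is small enough to enumerate in whole units, which gives a useful check on the LP answer for part (a); this is a brute-force sketch, not the requested LP formulation. Note that the problem has alternative optima, and the enumeration reports only the first one it encounters.

```python
# Whole-unit enumeration of the AAP make-or-buy problem (costs in cents).
best = None
for make_a in range(0, 301):                    # cases of A made in-house
    for make_b in range(0, 451):                # cases of B made in-house
        if 12 * make_a + 9 * make_b > 1800:     # assembly minutes
            continue
        if 18 * make_a + 24 * make_b > 3240:    # finishing minutes
            continue
        buy_a, buy_b = 300 - make_a, 450 - make_b
        cost = 30 * make_a + 18 * make_b + 42 * buy_a + 27 * buy_b
        if best is None or cost < best[0]:
            best = (cost, make_a, make_b)

print(best)  # (22950, 114, 48) -> minimum total cost is $229.50
```

Several make/buy splits tie at $229.50 (for instance, making 150 A cases and buying the rest costs the same), so the LP in part (a) may report a different optimal vertex with the same objective value.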
5. The Investment Problem. An investment firm has $22 million to
invest for November 2016 with the options shown in Table 3E5.1.
The management team requires that the following conditions and
policies be followed in the investment decision:
• the total amount allocated to the low risk investments should be at
least 65% of the total invested,

Table 3E5.1: Expected Return for Investment Options.


Investment Categories Expected Return %
Construction and government housing (low risk) 4
Healthcare services (low risk) 6
International markets (high risk) 8
Utility industries (low risk) 6
Pharmaceutical new ventures (high risk) 10

• the amount allocated to international markets should be no more
than 20% of the total investment in low risk categories,
• the amount allocated to new pharmaceutical ventures should be at
least twice that invested in international markets,
• the total investment in utility industries should not exceed $5
million.
Let xi represent the amount allocated to investment category i, i = 1, 2,
… , 5. Formulate and solve an LP model that maximizes the expected
annual return.
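Before optimizing, the policy constraints can be encoded as a feasibility checker; the sample allocation below is feasible but not claimed to be optimal (amounts in $ millions, with the assumption that the full $22 million is invested).

```python
# Feasibility/return checker for the investment problem ($ millions).
RETURN = [0.04, 0.06, 0.08, 0.06, 0.10]  # construction, healthcare,
LOW_RISK = [0, 1, 3]                     # international, utility, pharma
BUDGET = 22.0

def check(x):
    low = sum(x[i] for i in LOW_RISK)
    return (abs(sum(x) - BUDGET) < 1e-9   # invest the full $22M (assumption)
            and low >= 0.65 * BUDGET      # >= 65% in low-risk categories
            and x[2] <= 0.20 * low        # intl <= 20% of low-risk total
            and x[4] >= 2 * x[2]          # pharma >= 2x international
            and x[3] <= 5.0)              # utilities <= $5M

def expected_return(x):
    return sum(r * xi for r, xi in zip(RETURN, x))

# One feasible allocation (not claimed optimal): low-risk money in
# healthcare, high-risk money in pharmaceutical new ventures.
x = [0.0, 14.5, 0.0, 0.0, 7.5]
print(check(x))                          # True
print(round(expected_return(x), 2))      # 1.62 -> $1.62M expected return
```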
6. A local bank in North Jersey offers a variety of loans to meet
customers’ needs and to attract new customers. As the VP for
Investment of the bank, you allocate the available funds on a quarterly
basis to the loan options shown in Table 3E6.1.
For the second quarter of 2016, your budget is $75 million and the
investment decisions must conform to the following bank policies:
• The total amount allocated to the second home loans should be no
more than 45% of the allocation to the first home loans;
• At least 80% of the college loans should be approved;
• At least $4 million should be allocated to veteran start-up loans;

Table 3E6.1: Expected Return for Loan Options and Dollar Value of
Loan Applications Received.

Type of Loans                  Expected Annual Return (%)   Dollar Value of Loan Applications (million $)
Home loans (1st house)         3.5                          20.7
Home loans (2nd house)         9                            19.23
College loans                  2.8                          15
Medical care loans             4                            8.2
Small business loans           6                            33
Veteran-owned start-up loans   2                            5
Bad credit personal loans      11                           2.8
Bad credit business loans      15                           31
Lawsuit loans                  5                            4.5
• The total amount allocated to small business loans should be no
more than the total amount allocated to medical care loans.
Develop and solve an LP model to maximize the expected annual
return subject to the given bank policies.
7. The Aggregate Planning Problem. Emily’s Cookie Company makes a
variety of chocolate chip cookies in their plant in Chicago. Based on
orders received and forecasts of buying habits, it is estimated that the
demand for the next 4 months is 850, 1,260, 510, and 980, expressed
in thousands of cookies. Each worker can produce 308 cookies per
day. Assume that the number of workdays in each of the next 4 months
is 20 days. There are currently 100 workers employed, and there is no
starting inventory of cookies. Workers are paid $5,500 per month. The
cost of hiring one worker is $150; the cost of firing one worker is
$200; the cost of holding one cookie in inventory for 1 month is 8
cents. Backlogs (stock-outs) are permitted only in months 1–3, at a
cost of 20 cents per cookie per month, and backorders must be filled
the following month; no backlogs are permitted at the end of month 4.
Emily can subcontract up to 20,000 cookies per month at a cost of
$300 per thousand cookies. At the end of month 4, there must be at
least 10,000 cookies in inventory. Using Microsoft Excel, solve:

(a) as an LP problem,
(b) as a mixed integer programming problem, where all variables
representing the workforce size per period (Wt), the number of
employees hired at the beginning of each period (Ht), and the
number of employees laid off at the beginning of each period (Lt)
must be integers.
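A quick capacity calculation frames the problem before the LP/MIP is built. This is a back-of-envelope sketch; the month-by-month backlog rules may force a larger or time-varying workforce than this lower bound suggests.

```python
# Back-of-envelope capacity math for Emily's Cookie Company (a sketch,
# not the LP/MIP the exercise asks for). Demands are in cookies.
import math

demand = [850_000, 1_260_000, 510_000, 980_000]
per_worker = 308 * 20            # 6,160 cookies per worker per month
subcontract_cap = 20_000         # cookies per month
ending_inventory = 10_000        # required at the end of month 4

# Total cookies the workforce must supply over the 4-month horizon,
# if the subcontracting option is used to its maximum every month:
required = sum(demand) + ending_inventory - 4 * subcontract_cap
print(required)                  # 3530000

# Smallest constant workforce that could cover this total (a lower bound):
level_workforce = math.ceil(required / (4 * per_worker))
print(level_workforce)           # 144
```

Since only 100 workers are currently employed, any feasible plan involves hiring, and the hiring/firing/holding/backlog trade-off is what the LP resolves.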

Appendix
A3.1: How to install and access Microsoft Excel Solver
Using Microsoft Excel Solver to solve an LP problem requires that you first
have Solver installed on your computer, and then activate it.
How to find or activate “Solver” (Microsoft Excel 2002/2003) on your
computer:
Step 1. Start Microsoft Excel.
Step 2. Click Tools ⇒ Solver.
How to find or activate “Solver” (Microsoft Excel 2007 or 2010) on
your computer:
Step 1. Start Microsoft Excel.
Step 2. Click the Data tab (at the top) ⇒ Analysis group ⇒ Solver.
If, after following the above steps, you still cannot find Solver, then
you will have to install it. The Solver installation process depends on which
version of Microsoft Excel is on your computer.
Install Solver in Microsoft Excel 2007 or before:
Step 1. Start Microsoft Excel.
Step 2. Click the “Office Button” at top left.
Step 3. Click the Microsoft Excel Options button.
Step 4. Click the Add-ins button.
Step 5. At Manage Excel Add-ins, click Go.
Step 6. Check the boxes for Analysis ToolPak and Solver Add-in if they
are not already checked, and then click OK.
Install Solver in Microsoft Excel 2010 or later:
Step 1. Click the File Menu and choose Options.
Step 2. Click “Add-Ins”.
Step 3. Click “Solver Add-in”, and then click “OK”.
Figure A3.1.1 shows the Add-Ins dialog box in Microsoft Excel 2010 or
later.
Now Solver is located under the Data Tab and ready for you to use.
The Solver Parameters dialog box is shown in Figure A3.1.2. It requires
the specification of several quantities associated with the linear program.
Target Cell/Set Objective
The target cell (called “set objective” in Microsoft Excel 2010 and later) is
the cell which will contain the value of the objective function, or goal, to be
maximized or minimized in your LP problem. For example, in the
production planning problem, where we would like to maximize the weekly
profit (see Table 3.1), the value of this weekly profit:

Figure A3.1.1: Installing and Enabling Solver in Microsoft Excel 2010 or Later.

Figure A3.1.2: Solver Dialog Box.


is contained in the target cell G5 (see Figure A3.1.2).

Changing Cells/Changing Variables Cells


Changing cells (called “changing variable cells” in Microsoft Excel 2010
and later) contain the values of decision variables that can be changed or
adjusted to optimize the value of the objective function, such as the weekly
profit of our production planning problem stored in Cell G5 in Figure
A3.1.2.

Constraints
Constraints are restrictions or limitations applied to the system, and must be
satisfied by any feasible solution to a given LP problem. For example, in
the production planning problem, Center A has a total of 8,060 labor hours
available per week. Therefore, any values assigned to decision variables X1,
X2, X3, X4, and X5 must satisfy this condition (also see Table 3.1):

28X1 + 12X2 + 20X3 + 34X4 + 11X5 ≤ 8,060.
The value of 28X1 + 12X2 + 20X3 + 34X4 + 11X5 is in cell address G12,
under the column header (total) Usage, and must be less than or equal to
Center A’s maximum capacity value stored in cell address I12 (see Figure
A3.1.2).
Constraints can be added by clicking on the Add button.

Non-negativity and Linearity


It is not necessary to explicitly include non-negativity constraints when
using Solver. In Microsoft Excel 2003 and 2007, we can click on Options in
the Solver Parameters dialog box and then check the boxes: Assume Linear
Model and Assume Non-negative. In Microsoft Excel 2010 and later, as
shown in Figure A3.1.2, we simply click, in the Solver Parameters dialog
box, on “Make Unconstrained Variables Non-Negative” and on “Select a
Solving Method: Simplex LP.”
Once the Solver dialog box has been configured, click on Solve to run
the optimization program.
A3.2: Fundamentals of LP sensitivity analysis
Often we are interested in what the effect on our optimal solution would be
if one or more parameter values change; we call this sensitivity analysis, or
post-optimality analysis. Microsoft Excel generates a Sensitivity Report, as
shown in Figure A3.2.1, with two sections; the top section, labeled
“Adjustable Cells” refers to the decision variables in the problem, and the
bottom section refers to the Constraints in the problem. The Sensitivity
Report shown in Figure A3.2.1 refers to the medical device problem
formulated earlier in Section 3.2 (without integer constraints on the decision
variables).

Reduced cost
Looking first at the section for the decision variables and the column
labeled Reduced Cost, the absolute value of reduced cost tells us how much
the objective function coefficient of each decision variable would have to
improve before that decision variable assumes a positive optimal value. For
example, since the decision variables X1, X2, X3, and X5 are already
positive, we do not have to improve their profitability to obtain positive
optimal values, and so their reduced costs are 0. Variable X4, however, is 0
in the optimal solution, so we might want to know how much we would
have to improve its profitability before that product would be manufactured
(or how much we would have to increase the objective function coefficient
of X4, currently $2,300 per unit, so that X4 would have a positive value in
the optimal solution).
Figure A3.2.1: Microsoft Excel Sensitivity Report.

From Figure A3.2.1, we see that the reduced cost for X4 is −1,114.66.
Its absolute value is 1,114.66, so if the profit/unit for P4 increases by
$1,114.66 per unit (by either increasing the price by $1,114.66 per unit or
decreasing the cost by $1,114.66 per unit, or some combination of both
resulting in a net increase in profitability of $1,114.66 per unit), the optimal
solution will have a positive value for X4.
Another way to look at the reduced cost for a decision variable which
has a value of 0 in the optimal solution is that it represents the change in the
objective function value when the value of that decision variable is
increased from 0 to 1. Thus, if the value of X4 is increased from 0 to 1, the
objective function will change by −1,114.66. That is, the value of the
objective function will be reduced by $1,114.66; since this is a
maximization problem, it will no longer be the optimal, or profit-
maximizing, solution.

Shadow price
Looking at the bottom section of the Sensitivity Report in Figure A3.2.1,
the column labeled Shadow Price represents the change in the optimal value
of the objective function when the right-hand side of the constraint is
increased by one unit. For example, for the first constraint on Center A, the
shadow price is 29.38. This tells us that if the number of hours per week
available in Center A were increased by 1 (from 8,060 to 8,061), the profit,
or the objective function value, would increase by 29.38. By increasing the
right-hand side of a ≤ constraint, we are relaxing the constraint (making it
less restrictive), and hence the new solution would have a better objective
function value, or in the case of a maximization, a higher objective function
value.
Similarly, if we look at the fourth constraint, demand for P1, we see
that the shadow price is −245.60. Thus, if the demand for P1 were increased
by 1 unit (from 25 to 26), the objective function would decrease by
$245.60. Here, we are increasing the right-hand side of a ≥ constraint; in the
current optimal solution, the constraint holds as an equality. That is, in the
current optimal solution, we make only 25 units of P1; it is not profitable to
make more even though the ≥ constraint would have allowed us to do so. So
if we increase the demand (or the right-hand side) for P1, which competes
for resources against other (potentially more profitable) products, we are
actually tightening the fourth constraint and would have to use additional
resources to make the additional quantity of P1 and hence our profit will be
lower.
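The two report readings above reduce to simple arithmetic on the optimal profit of $876,608.55 for this problem; the following sketch restates them.

```python
# Restating the Sensitivity Report readings as arithmetic on the optimal
# profit (figures from Figure A3.2.1).
current_profit = 876_608.55

# +1 hour of Center A capacity (shadow price 29.38):
print(round(current_profit + 29.38, 2))     # 876637.93
# +1 unit of required P1 demand (shadow price -245.60):
print(round(current_profit - 245.60, 2))    # 876362.95
# Forcing one unit of P4 into the plan (reduced cost -1,114.66):
print(round(current_profit - 1_114.66, 2))  # 875493.89
```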

Post-optimality analysis
As part of the sensitivity analysis of the solution to an LP problem, we are
interested in answering two questions:
• Over what range of parameter values will the current optimal solution
remain optimal (or, over what range of parameter values will the
variables which are positive remain positive)? and
• How will specific changes in parameter values (in particular, the
objective function coefficients and right-hand side values) affect the
optimal solution?
We have been assuming that all objective function coefficients and
right-hand side values (the input parameters of a given optimization
problem) are known with certainty. In reality, these may change as a result
of changes in the cost of production, market demand, selling price,
customer requirements, and the availability of resources. We would like to
know, first, over what range of values the current solution will remain
optimal, and second, the effect of specific changes in one or more parameter
values.
Objective function coefficient ranges. To find the range of objective
function coefficients for each decision variable for which the current
solution remains optimal, we see from the top section of Figure A3.2.1
(“Adjustable Cells” or Decision Variables) and the first line, corresponding
to decision variable X1, that the (current) objective function coefficient is
2,000 and, from the next two columns, that the allowable increase is 245.60
and the allowable decrease is 1E + 30 (or ∞). Hence, the upper limit of the
desired range is the current coefficient value plus the allowable
increase, or 2,000 + 245.60 = 2,245.60; the lower limit is 2,000 − ∞ (or
−∞). That is, the range for c1, the objective function coefficient of X1, over
which the optimal solution is unchanged, is

−∞ < c1 ≤ 2,245.60.
So if the profit/unit for P1 is any value within this range, the optimal
values of all the variables will remain the same, although the value of the
objective function will change. However, if the profit/unit for P1 is above
$2,245.60, X1 = 25, X2 = 105.6, X3 = 281, X4 = 0, X5 = 42.87, max profit =
$876,608.55 will no longer be optimal. In particular, with P1 so profitable,
it will probably be optimal to make more of that product at the expense of
one or more of the other products. We would have to resolve the problem
with the new objective function coefficient. For example, if we were to increase the objective function coefficient c1 to 2,500, X1 increases from 25 to 118 and X3 decreases from 281 units to 0 (that is, P3 will no longer be produced). The values of the remaining decision variables will change as well.
Similarly, we see from Figure A3.2.1 that the range for the objective
function coefficient of X2, c2, is given by:

    1,569.55 ≤ c2 ≤ 2,124.11.

So if the profit per unit for P2 is either below $1,569.55 or above $2,124.11, the current optimal solution will no longer be optimal and the values of the variables and profit will change.
Right-hand side ranges. The right-hand side ranges give us the range of
feasibility, or the range of right-hand side values for each constraint for
which current shadow price is valid. This is also the range over which the
current optimal solution remains feasible (that is, the optimal solution will
have the same set of positive variables as the current optimal solution, but
not necessarily with the same numerical values).
Looking at the bottom section (“Constraints”) of Figure A3.2.1, we see
in the first line, corresponding to the constraint for Center A, the (current)
right-hand side is 8,060, the allowable increase is 173.12, and the allowable
decrease is 3,099.52. Hence, the upper limit of the desired range is the
current right-hand side value plus the allowable increase, or 8,060 + 173.12 = 8,233.12; the lower limit is 8,060 − 3,099.52, or 4,960.48. That is, the range of feasibility for parameter b1, the right-hand side value of the constraint for Center A, is

    4,960.48 ≤ b1 ≤ 8,233.12.
This is the range over which the respective shadow price remains
unchanged. Within this range, the variables that are positive in the current
solution will remain positive; however, the optimal numerical values of
these variables will change and must be found by resolving the problem.
If the availability of hours in Center A is reduced to 4,500 hours, the
change in the optimal value of the objective function when the right-hand
side of the constraint increases by one unit will no longer be equal to the
current shadow price of 29.38. In addition, the values given by the current
optimal solution will no longer be feasible for the new problem; a different
set of positive variables will be optimal, which can be found by resolving
the problem with the new availability of hours in Center A.
Similarly, we see from Figure A3.2.1 that the range for b4, the right-
hand side of constraint 4, which corresponds to demand for P1, is given by:

It is important to note that all of these ranges assume that all other parameters of the model remain unchanged. If more than one parameter changes, the ranges do not give us any information about the effect on the problem.
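These one-at-a-time ranges can be computed directly from the values in a Solver sensitivity report. The sketch below uses the numbers the text reads off Figure A3.2.1 (current value, allowable increase, allowable decrease); the helper function is illustrative, not part of Solver, and each range is valid only while every other parameter stays fixed.

```python
INF = float("inf")  # Solver reports an infinite allowance as 1E+30

def sensitivity_range(current, allowable_increase, allowable_decrease):
    """Range over which the current optimal basis (and shadow prices) stay valid."""
    return (current - allowable_decrease, current + allowable_increase)

# Objective coefficient of X1: current 2,000, increase 245.60, decrease infinite.
c1_lo, c1_hi = sensitivity_range(2000, 245.60, INF)
# Right-hand side of the Center A constraint: 8,060 (+173.12, -3,099.52).
b1_lo, b1_hi = sensitivity_range(8060, 173.12, 3099.52)

print(c1_lo, round(c1_hi, 2))              # -inf 2245.6
print(round(b1_lo, 2), round(b1_hi, 2))    # 4960.48 8233.12
```

Outside either range, the current solution is no longer optimal (or no longer feasible, for a right-hand side) and the model must be re-solved.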
Specific changes in problem parameters. To answer our second question,
how will a specific change in one or more parameter values affect the
optimal solution?, we make the desired changes directly on the Microsoft Excel spreadsheet; we may make these changes on the optimal spreadsheet, with no need to go back to the original problem formulation.
As an example, suppose the profit per unit of P1, c1, is increased to
$2,500 (from the current value of 2,000) and the number of hours available
in Center B, b2, is increased to 6,900 hours (from the current value of
6,000). What is the optimal solution? We cannot look at the ranges for
either the objective function coefficient of x1 or the right-hand side of b2
since, as we mentioned above, these ranges are only valid when all other
parameters remain constant. Since we are changing two parameters
simultaneously, we must make these changes directly on the spreadsheet
and use Solver to find the new solution. Figure A3.2.2(b) shows the
changes made (circled) to the original spreadsheet in Figure A3.2.2(a).
Figure A3.2.2: (a) Medical Device Example (Original Problem). (b) Both Model Parameters c1 and
b2 are Changed.

Figure A3.2.3: New Optimal Solution to Medical Device Example.

Starting from this spreadsheet, we click on Solver as before, and find the new optimal solution shown in Figure A3.2.3, where the optimal objective function value is now $1,009,501.
In the new optimal solution, the objective function has increased to a
profit of $1,009,501 (from the previous optimal value of $876,609), and we
are no longer making either P3 or P4. Note also that all variables are non-
negative, so Solver has converged on a feasible optimal solution.

Endnotes
1. “S&OP gives Caterpillar a Competitive Edge,” Oliver Wight Case Study Series. Available at: www.oliverwight.com/client/features/caterpillarna.pdf.
2. Joan Magretta, “The Power of Virtual Integration: An Interview with Dell Computer’s Michael
Dell,” Harvard Business Review, March–April 1998. Available at: https://hbr.org/1998/03/the-
power-of-virtual-integration-an-interview-with-dell-computers-michael-dell.
3. “iPhone: Who’s the real manufacturer? (It isn’t Apple),” texyt.com, June 29, 2007. Available at: http://texyt.com/iphone+manufacturer+supplier+assembler+not+apple+00113.
4. Michelle Jamrisko, “The Best and Worst of the U.S. Economy in 2015,” bloombergbusiness.com,
December 30, 2015. Available at: http://www.bloomberg.com/news/articles/2015-12-30/the-best-
and-worst-of-the-u-s-economy-in-2015.
5. The optimal solution in Table 3.6 was provided by Chingxin Fan.
Chapter 4

Inventory Management

INVENTORY IS MONEY SITTING AROUND IN ANOTHER FORM.

Rhonda Adams

4.1 Introduction to Inventory Management


In this section, we introduce some of the important concepts, definitions,
and terminology associated with the development of an optimal inventory
policy, or the determination of how much, and how often, to order goods.
A supply chain consists of complicated and intertwined networks with
links and nodes. The links refer to the physical movements of goods or
resources needed for downstream customers and the nodes refer to the
stocking points where inventories are built as buffers. While in a perfect
world shipment arrives at the right time, to the right place, with the right
quantity and quality, in the real world inventory is the critical tool that buffers true demand against the time required to manufacture and deliver. Supply chain
excellence consists of an optimum set of links and nodes that minimize the
cost of inventory in balancing supply with demand.
Therefore, inventory is a necessity for the proper operation of
manufacturing and service industries throughout the world and as a result,
an enormous amount of inventory is carried by US companies. For instance,
as of November 15, 2015, the total dollar value of inventories carried by
manufacturing and trade industries in the US (adjusted for seasonal
variations but not for price changes), was estimated to be $1.81 trillion,
accounting for about 138% of the monthly sales for those industries.1
Figure 4.1 shows the breakdown of inventory by industry.

Figure 4.1: Breakdown of Inventory by Industry.


Source: US Census Bureau News, Manufacturing and Trade Inventories and Sales November 2015,
http://www2.census.gov/mtis/historical/mtis1509.pdf

Companies typically tie up a significant portion of their assets (often as high as 40%) in inventory. While insufficient levels of inventory and the
resulting inability to meet demand are associated with significant costs in
terms of lost sales, loss of goodwill, and special handling of late deliveries,
too much inventory diminishes the company’s financial performance in
terms of high cost and high risk, and indicates inefficiencies in operations.
There are three types of inventory for a manufacturing company: inventory of raw materials and supplies; inventory of semi-finished products, also called work-in-process (WIP); and inventory of finished goods. Given the breakdown of inventory by type,
it is equally important to manage each of these three types of inventory.
In this chapter, we will examine the trade-off between too little and too
much inventory, and discuss optimal inventory planning techniques,
including the Economic Order Quantity (EOQ) Model, quantity discount
analysis, and the Newsvendor Model. We will also consider safety stock
management techniques and other supply chain strategies for inventory
control.
Figure 4.2: US Logistics Cost Breakdown in 2012.
Source: http://www.shipxpress.com/blog-article?r=8ZCEG00N27

Inventory vs. other logistics costs


The cost of carrying inventory is the second most significant component of
logistics costs after transportation cost, as shown in Figure 4.2. As
companies continuously expand their global supply chains to reach
potential markets, inventory costs are becoming a more and more
significant portion of total logistics costs.

Efficient vs. inefficient inventory management


By efficiently managing inventory, companies like Walmart, Dell
Computer, and Procter & Gamble stand out from their competition and
provide significant returns to shareholders. If inventory is not managed
efficiently, companies can get into serious trouble through lost sales and a
compounded impact on lost profits by committing cash resources to non-
working inventory and generating a high risk exposure to future obsolete
inventory. An example of lost sales occurred in 1994 to IBM, due to under-
planning production capacity.2 An example of impact to profits occurred
when Liz Claiborne was adversely impacted by higher than anticipated
excess inventory.3 Another best practice leader, Dell, was impacted by a
mismatch between supply and demand, resulting in inventory writedowns.4
Table 4.1: Types of Inventory Stock.
Type of Inventory Business Needs
Cycle Stock To take advantage of the scale economy
Safety Stock To protect against uncertainty
Pipeline Stock To ship over a long distance
Prebuilt Stock To meet anticipated demand
Forward-buy Stock To hedge against inflation

Types of inventory
In analyzing the functions of inventory control systems, we consider five
types of inventory stock and the business need that each type addresses, as
shown in Table 4.1.
Cycle Stock. The objective of carrying cycle stock is to take advantage of the economies of scale that result from producing and distributing
larger quantities. Economies of scale exist widely in operations in the form of costs that are reduced and/or independent of the number of units: setup costs and/or changeover times (in manufacturing), fixed shipping costs (in transportation), fixed materials handling costs (in warehousing), and fixed ordering costs (in procurement). Cycle stock allows firms to order,
produce or distribute a large batch of products at one time and thus incur the
fixed cost once per batch rather than once per unit of product. In such cases,
we balance the unit cost elements identified against the total cost of
inventory. The total cost includes elements such as monetary assets tied up
in the inventory, the cost to handle and store the inventory, and the risk
exposure of having the “wrong” inventory. The same cost-balancing principle applies to the other types of inventory stock discussed below, whenever we assess the value of an optimized inventory strategy.
Safety Stock. Recall from Chapter 2, Section 2.1, that the first bullet under
the laws of forecasting was “Forecasting is always wrong”; that is,
uncertainty exists across the entire supply chain network, whose
components are illustrated in Figure 4.3.
Figure 4.3: A Hypothetical Supply Chain.

Figure 4.4: Area of Demand Surge from Hurricane Irene.


Source: www.noaa.gov

The fact that a forecast cannot always be 100% accurate reflects the
reality that uncertainty exists everywhere in a supply chain network. In
general, both demand and supply cannot be fully predicted; for example,
demand surge, price variation, and random lead times will impact the
demand to supply balance. As an example, Figure 4.4 shows the path of
2011’s Hurricane Irene, causing an unexpected demand surge at the retail
level as people prepared by stocking up on necessities. The objective of
safety stock is to buffer against uncertainties so that one can almost always
satisfy demand. The accuracy of the forecast, the quality of the supply
execution and the level of risk tolerance are important factors in
determining the level of safety stock required.
Pipeline Stock. Production can be subject to lengthy and unexpected
delays, particularly in global and complex supply chain networks that also
include shipping over a large distance, where operations are subject to, for
example, strikes, mechanical breakdowns, adverse weather conditions,
security inspections. For instance, the shipping time from a factory in Southeast Asia to a distribution center in the US is typically 4–6 weeks. The
shipping cycle time includes inland transportation on both ends, ocean
transportation (around 10 days), and port operations. The shipping time
from Europe to the US is slightly shorter because ocean transportation takes
about 5 days, but the other elements, including customs clearance and any
duty requirements, are similar. As the shipping time increases, shipments
spend a longer time in transit; pipeline stock provides a buffer against shortages due to these anticipated transit and production times, and factors the transportation cycle time into the inventory model. The longer the transportation cycle time for replenishment, the higher the pipeline inventory.
Prebuilt Stock. Seasonal demand is common and most companies do not
have the production capacity to meet demand during the peak season. For
this reason, most retailers build up inventory well before, say, the Christmas season (whose demand can represent over 60% of annual sales for many companies), balancing capacity efficiency against inventory costs throughout the calendar year. Other examples of seasonal inventory build-up, or prebuilt stock, include barbecue grills for the summer season, snow
blowers for the winter season, candy for Halloween, flu vaccines for early
autumn, and power generators for hurricane seasons. Products are produced
and delivered to the stock-keeping locations based upon the forecasts and
business plans. Success is measured against meeting the true demand at
each location with the correct inventory to support that demand.
Forward-buy Stock. The procurement cost of inventory can vary
significantly over time due to temporary discounts, capacity utilization, and
price promotions offered by suppliers. To hedge against future cost
inflation, some companies may buy more supplies than what is needed to
satisfy short-term demand and carry them in inventory. Stock carried for
such reasons is called forward-buy stock or investment-buy stock. It is also
called speculative-buy stock if the cost inflation is not completely
predictable.
As an illustration, virtually everyone has managed an inventory of
groceries. One can think of the refrigerator as a warehouse, with all
groceries bought as stock-keeping units (SKUs), and family members as
consumers who generate demand for this inventory. It is easy to understand why we prefer to go to the supermarket only once a week (unless it is next door, or we don’t have enough refrigerator capacity): the time and fixed/overhead cost (e.g., fuel) spent on the road is constant regardless of how much we buy. The quantity that we
buy to last until the next shopping trip is the cycle stock. We may want to
buy a little more than our average consumption because occasionally we
may consume more (e.g., with visitors); the extra buffer is the safety stock.
Finally, if some items are on sale this week, we may want to buy more than
what we usually do and carry them over, which is the forward-buy stock.
Many of us belong to warehouse clubs, where forward buying at the
consumer level has become quite a science.
Excessive inventory can be viewed both as wasted working capital and as a risk. Carrying inventory requires additional space and material handling. Money invested in inventory cannot be used elsewhere, and thus there is an opportunity cost of capital. Carrying inventory can be risky if the
inventory has a short shelf-life and/or short life cycle. Fashion driven items
are a good example of short life cycle inventory, where it is difficult to
predict what will be the new hot item or this season’s bust. Finally,
excessive inventory in the form of semi-finished or work-in-process stages
on a manufacturing floor can sometimes hide problems in manufacturing
processes.

Inventory performance measures


Companies often find inventory turnover rate (or number of turns) and
average flow time to be useful measures for assessing inventory
performance. These quantities are calculated in Equations (4.1) and (4.2).

    Inventory turnover rate = (Annual cost of goods sold)/(Average inventory value)    (4.1)

    Average flow time = (Average inventory)/(Throughput rate) = 1/(Inventory turnover rate)    (4.2)

Table 4.2 illustrates savings that can be achieved by increasing the turnover rate, or number of turns per year, assuming that the annual holding
cost, or the cost of carrying inventory for a year, is 30% of the cash value of
the stock.
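The savings in Table 4.2 follow directly from Equations (4.1) and (4.2): doubling the number of turns halves the average inventory, and hence halves the holding cost. A sketch (the $12 million annual cost of goods sold is a hypothetical figure; the 30% holding-cost rate is the assumption stated above):

```python
annual_cogs = 12_000_000   # hypothetical annual cost of goods sold ($)
holding_rate = 0.30        # annual holding cost, as a fraction of stock value

def inventory_metrics(turns_per_year):
    avg_inventory = annual_cogs / turns_per_year   # Equation (4.1), rearranged
    flow_time_weeks = 52 / turns_per_year          # Equation (4.2), in weeks
    annual_holding_cost = holding_rate * avg_inventory
    return avg_inventory, flow_time_weeks, annual_holding_cost

_, _, cost_at_4_turns = inventory_metrics(4)
_, _, cost_at_8_turns = inventory_metrics(8)
# Doubling the turnover rate halves average inventory and its holding cost:
print(int(cost_at_4_turns - cost_at_8_turns))   # 450000 saved per year
```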
Inventory turnover rates vary significantly across companies in the
same industry, as well as between industries, as shown in Figure 4.5.
To manage any inventory system, or specify an inventory policy, two
fundamental issues must be addressed: (1) when to order and (2) how much
to order. The first issue is about the timing of ordering or production; the
second refers to the ordering or production quantity.

4.2 Characteristics of an Inventory System


The approaches used to determine when to order and how much to order
depend upon the characteristics of the inventory system, which can be
categorized as: demand, lead time and review cycle, cost structure, and
service requirements.

Table 4.2: Inventory Turnover Rate.


Figure 4.5: Inventory Turnover Rates by Industry.
Source: http://www.waspbarcode.com/buzz/what-is-inventory-turnover/

Demand characteristics
Demand is one of the most important factors that determines how the
inventory system should be controlled. We can classify demand by two
dimensions.
Stable vs. seasonal. For stable demand, the statistics do not vary
significantly over time; for seasonal demand, the reverse is true. Small
variation is always unavoidable in practice, and thus stable demand is often
only an approximation of real-world behavior.
Predictable vs. random. Recall from Chapter 2 Equation (2.1) that
forecasting error is defined as the difference between the actual demand and
the predicted demand. The predictability of demand can be measured by
forecast errors. For example, many grocery items tend to have very low
forecast errors (e.g., less than 5%) because the behavior of consumer
demand for such items is well understood by retailers. On the other hand,
the forecast error for a new fashion item can be very high (as much as
200%) since no historical data is available. While we can approximate
demand for the former by a constant, doing so will yield an unacceptable
error for the latter.

Figure 4.6: Phases of the Product Life Cycle.

Demand statistics change not only because of seasonality, but also due
to different phases of the product life cycle. For example, when a product is
in its introduction and growth phases, as shown in Figure 4.6, sales are
accelerating and demand is climbing. However, when a product reaches the
end of its life cycle, demand declines. Thus, stable demand can best
describe non-seasonal products when they reach steady-state in their life
cycle. The predictability of demand also depends on product life cycle.
During the introduction phase, the product is new and thus demand can be
highly unpredictable. During steady-state, sufficient data has been
collected, and thus the demand is much more predictable than in the early
phases.

Lead time and review cycle


Another important factor in the analysis of an inventory system is its supply
characteristics, such as the lead time, which is defined to be the total time
from order placement to order receipt. The lead time typically includes (but
is not limited to) order processing time at the supplier, stock-out delays if
on-hand inventory is insufficient to fill the order, and shipping time.
Review cycle refers to the time period between two consecutive inventory reviews. It measures how often the inventory is
reviewed and an ordering decision made. In the extreme, inventory can be
reviewed continuously over time, using modern information systems and
scanning technology. Continuous review places a high requirement on
information systems and automation, and is usually applied to expensive
items with slow demand, such as jewelry and spare parts. Periodic review,
such as reviewing inventory once every day or week, is still widely used in
practice.

Cost structure
For a typical inventory system, the following costs must be considered in
decision-making: inventory holding cost, ordering cost, and penalty cost for
stock-out. This is also referred to as the “total cost of inventory.”
Inventory holding cost. Inventory holding cost, or carrying cost, includes
inflation, cost of capital, cost of storage, taxes and insurance, as well as
breakage, spoilage, and obsolescence. The inventory holding cost is often
calculated as a percentage of the product value (the holding cost rate), and
can vary significantly over product categories. Product shelf-life and life
cycle have an important impact on the holding cost, as shown in Table 4.3,
which compares typical holding cost rates for long vs. short life cycle
products. Note that short life cycle products, such as computers, lose 1% of
their value for each week held in inventory (or 52% for each year) due to
obsolescence. Another significant component of inventory holding cost is
the storage requirement; for example, drugs that require strict temperature
control incur higher inventory holding costs.

Table 4.3: Comparison of Inventory Holding Cost Rates.


Product Type                  Long Life-Cycle Products (%)   Short Life-Cycle Products (%)
Inflation                     3                              3
Cost of capital               8                              8
Cost of storage               6                              6
Taxes and insurance           2                              2
Spoilage and obsolescence     1                              52
Total annual holding cost     20                             71
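To see how the rates in Table 4.3 translate into dollar holding costs, the sketch below sums the rate components and applies the totals to two unit values (the $400 and $500 figures are hypothetical, chosen only for illustration):

```python
# Holding cost rate components from Table 4.3, in percent per year.
long_cycle_rates = {"inflation": 3, "capital": 8, "storage": 6,
                    "taxes_insurance": 2, "spoilage_obsolescence": 1}
short_cycle_rates = {**long_cycle_rates, "spoilage_obsolescence": 52}

long_total = sum(long_cycle_rates.values())     # 20% per year
short_total = sum(short_cycle_rates.values())   # 71% per year

# Annual holding cost h = (holding cost rate) x (unit value):
h_long = long_total / 100 * 400     # hypothetical $400 long life-cycle item
h_short = short_total / 100 * 500   # hypothetical $500 computer
print(long_total, short_total, round(h_long), round(h_short))
```

The obsolescence component dominates for short life-cycle products: the $500 computer costs about $355 a year to hold, versus $80 for a long life-cycle item of comparable value.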
Figure 4.7: Fixed and Variable Cost of Ordering or Production.

Ordering (or production) cost. Ordering (or production) costs typically consist of a fixed and a variable cost, as shown in Figure 4.7, where C(x)
represents the total cost to order (or produce) x units, k is the fixed cost, and
c (the slope) is the variable cost. The fixed cost, including setup cost,
changeover cost, or fixed ordering and shipping costs, is independent of the
order (or production) quantity. The variable cost is the marginal cost per
unit of item ordered (or produced). In the earlier example of grocery
shopping, the cost of time and fuel for traveling to the store is a fixed cost
while the money spent on groceries in the store is a variable cost.
Penalty cost. A penalty cost is incurred when a stock-out, or shortage, makes it impossible to meet demand when it occurs. If the
unsatisfied demand for a product is backordered (which often happens
between a manufacturer and its suppliers), then the supplier generally has to
pay a penalty for delaying the fulfillment of the order. Many supplier
agreements include penalty clauses; for instance, Boeing has to pay a fine
for each week of late delivery of aircraft to airlines because late delivery
may result in changes to the airlines’ business plans. Chrysler also requires its suppliers to pay a penalty equivalent to the cost of stopping or rescheduling a production line. If the unsatisfied demand is a lost sale
(which often happens in a retail setting), then the penalty cost is the loss of
profit that could have been made if inventory had been available, as well as
a loss of customers’ goodwill, because stock-outs discourage customers’
future visits and product loyalty is lost.
Service requirements. Because it is difficult to quantify the loss of
customer good will, we sometimes specify a target service requirement or
standard. Such a service requirement consists of two components: required
maximum service time and required minimum service level. The service
time requirement specifies how long it takes to fulfill the demand. The
service level requirement measures the percentage of time that an order is
fulfilled within the target service time.
As an example, if you order from Amazon.com, you will notice that the
service time is different under different shipping options. Nevertheless, the
choice of a quicker shipping option does not guarantee an earlier shipment
because of potential delays in order processing and transportation; no
company has 100% service levels since this would require an unrealistic
and inefficient amount of inventory.

Trade-offs
Managing inventory involves balancing conflicting trade-offs and seeking
to minimize total cost. We explain some of these trade-offs and the
questions we wish to address below, and will explore these in depth later in
the chapter.
Economies of scale vs. inventory holding cost. Let’s say we live about
half an hour driving distance from our grocery store. If we shop every day,
we eat fresh products and we probably do not need a refrigerator, but we
have to spend 1 hour on the road each day traveling between home and the
grocery store. If we shop every month, then we only spend 1 hour each
month travelling for groceries, but we have to buy a month’s worth of
supply and need a huge refrigerator to store it. Given the need to balance
the costs of traveling and storage needs, how often should we go shopping?
Similar issues exist in industry, when companies locate their production
bases in low-cost countries and therefore have to ship their products over
long distances to high-income markets. In such cases, the fixed cost per
shipment (via ocean, rail, or truck) can be thousands of dollars, and the
company must determine the shipping frequency which will balance the
high fixed shipping costs vs. the high inventory cost.
We will address the determination of an inventory policy (how much to
order and how often) for balancing economies of scale vs. inventory
holding costs under EOQ models in Section 4.3.1. These models apply to
long life cycle products with predictable demand.
Lost sales vs. markdown cost. Many fashion items such as apparel,
handbags, and toys have a short product life cycle of months or even weeks.
To reduce the cost-of-goods sold, companies often outsource production to
low-cost countries in Asia. The long production and transportation lead
times mandate early production, well in advance of the selling season. Sport
Obermeyer, a high-end skiwear designer and distributor, has to initiate
production 10 months ahead of the retail season, and 5 months prior to
receiving any retailers’ orders. Each year, there are about 700 SKUs, and
95% of them are new, which makes it hard to forecast demand. In fact, the
forecast error can be as high as 200%. The question is: how many units should be produced without knowing exactly what demand will be? To answer this
question, one must balance the risk of ordering too little and thus lose sales
vs. the risk of ordering too much and incurring the resulting markdown
costs.
We will address the determination of an inventory policy (how much to
order and how often) for balancing lost sales vs. markdown costs under the
Newsvendor Model in Section 4.4.1. This model applies to short life cycle
products with unpredictable demand.
Safety stock vs. service requirements. Suppose in our grocery shopping
example we have decided to shop once a week. We now have to determine
how much to buy. Ideally, if we know exactly how much we will consume
in the next week, we can buy the exact amount needed. However, this is
often impossible because random events occur, such as unexpected visitors.
Thus, our consumption per week is probabilistic, and not completely
predictable. As a safety measure, we may buy a little extra just in case. How
much extra should we buy so as to balance the risk of running out of stock
vs. the cost of carrying high safety stock levels?
We will address the determination of an inventory policy (how much to
order and how often) for balancing safety stock vs. service requirements
under the Safety Stock Models in Section 4.5. These models apply to long
life cycle products with unpredictable demand.

4.3 Economies of Scale — Cycle Stock


In this section, we consider long life cycle products with predictable
demand and address the trade-off between economies of scale vs. inventory
holding cost in managing inventory and determining the optimal inventory
policy (how much to order and how often to order).

4.3.1 Classical EOQ model


Suppose a store sells an antioxidant vitamin at a constant rate of 60 bottles
per week. The store spends $4 to purchase a bottle from the supplier and
sells it for $10. It costs the store $12 to initiate an order, and the inventory
holding cost is based on an annual interest rate of 25%. Demand must be
met when it occurs (i.e., no backorders are permitted). Assuming that
delivery is virtually instantaneous from the time the order is placed, how
many bottles should the store order each time it places an order (the optimal
order quantity) and how often should an order be placed (the optimal cycle
time)?
The relationship between ordering frequency and inventory level is
illustrated in the inventory buildup diagram in Figure 4.8; the more
frequently we order, the smaller the order quantity and the lower the
inventory level, and vice versa.
Let
λ = the annual demand rate,
h = the unit holding cost/unit/year,
k = the fixed ordering cost, in dollars/order, independent of the order
quantity.

Figure 4.8: Inventory Buildup Diagram.

Figure 4.9: The EOQ Model.

Figure 4.9 shows, for the EOQ model, the demand rate, λ, as the (constant) slope of the inventory depletion line; the average inventory, Q/2; and the time between orders, or cycle time, T = Q/λ (so that the number of orders per year is λ/Q).
From this, we can derive the cost function, G(Q), the optimal order
quantity, Q∗, and the optimal cycle time, T∗.
Let G(Q) be the total annual inventory cost, or the sum of the annual fixed ordering cost and the inventory holding, or carrying, cost. Since we place λ/Q orders per year and carry an average inventory of Q/2 units, we have Equation (4.1):

    G(Q) = kλ/Q + hQ/2.    (4.1)

Thus, the total cost function, G(Q), is a function of one variable, Q, the
order quantity each time an order is placed.
To minimize this function, we set the derivative of G(Q) equal to 0 and solve for Q, in Equation (4.2):

    G′(Q) = −kλ/Q² + h/2 = 0, so Q∗ = √(2kλ/h).    (4.2)

Q∗ is the order quantity which minimizes the sum of the annual fixed
ordering and inventory holding costs. This order quantity is called the EOQ.
If we replace Q in Equation (4.1) by Q∗ from Equation (4.2), we obtain the minimum cost of the inventory policy, given by Equation (4.3):

    G(Q∗) = √(2kλh).    (4.3)

At Q∗, the annual fixed ordering cost becomes kλ/Q∗ = √(kλh/2) and the annual holding cost becomes hQ∗/2 = √(kλh/2). Note that the two cost components have identical values at the optimal EOQ Q∗.
Finally, the optimal order cycle time, or the time between orders, is given in Equation (4.4):

    T∗ = Q∗/λ = √(2k/(λh)).    (4.4)
The Office Supplies, Inc. and Mountain Tent Case Studies in Section
4.6 illustrate the analysis of various options under the EOQ model.
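As a quick check of Equations (4.2)–(4.4), the sketch below evaluates them for the vitamin example above (parameter values are taken from the problem statement; a 52-week year is assumed):

```python
from math import sqrt

weeks_per_year = 52
lam = 60 * weeks_per_year   # annual demand: 60 bottles/week
k = 12.0                    # fixed ordering cost ($/order)
h = 0.25 * 4.0              # holding cost: 25% per year of the $4 unit cost

Q_star = sqrt(2 * k * lam / h)   # Equation (4.2): the EOQ
G_star = sqrt(2 * k * lam * h)   # Equation (4.3): minimum annual ordering + holding cost
T_star = Q_star / lam            # Equation (4.4): optimal cycle time, in years

print(round(Q_star, 1))                    # 273.6 bottles per order
print(round(T_star * weeks_per_year, 1))   # 4.6 weeks between orders
print(round(G_star, 2))                    # 273.64 dollars per year
```

So the store should order roughly 274 bottles about every four and a half weeks, at a total annual ordering-plus-holding cost of about $274.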

Sensitivity of the EOQ model


To understand intuitively how the EOQ model works and how the optimal
order quantity changes with changes in the parameter values, let’s examine
the order cycle time T∗ in more detail. As we see from Equation (4.4), a
higher fixed ordering cost k, or a lower inventory holding cost h, or a lower
demand λ, all result in a longer order cycle, or longer time between orders.
We see, however, that in each case T∗ increases not linearly but as a square
root. Returning to our earlier example, where λ = 60 × 52 = 3,120 bottles/year, k = $12, and h = 0.25 × $4 = $1/unit/year, we have

    T∗ = √(2k/(λh)) = √(2 × 12/(3,120 × 1)) ≈ 0.088 years, or about 4.6 weeks.
If, for example, warehouse insurance costs increase so that the holding cost h is doubled, so that hnew = $2/unit/year, then the optimal cycle time will change to

    T∗new = √(2 × 12/(3,120 × 2)) ≈ 0.062 years, or about 3.2 weeks = T∗/√2.
Thus, although our holding cost has doubled (or multiplied by 200%),
which would lead us to carry less, on average, in inventory and hence order
more frequently, the time between orders has been reduced by less than
30%, and the total cost of the new policy has increased by just over 40%.
We see that the EOQ model is relatively insensitive to changes in the
system parameters, so that even a large change or error in any one of the
parameters will not have as large an effect on the overall policy.
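This square-root insensitivity can be confirmed with a short sketch (again with made-up parameters): doubling h shrinks the cycle time by 1/\sqrt{2} and raises the cost by \sqrt{2}, and even ordering twice the optimal quantity raises the total cost by only 25%.

```python
import math

def eoq_cost(q, k, lam, h):
    """Total annual cost G(Q) = k*lam/Q + h*Q/2, Equation (4.1)."""
    return k * lam / q + h * q / 2

# Illustrative parameters, not from the text
k, lam, h = 100.0, 10_000.0, 2.0
q_star = math.sqrt(2 * k * lam / h)

# Doubling h: cycle time ratio 1/sqrt(2) (~ -29%), cost ratio sqrt(2) (~ +41%)
t_ratio = math.sqrt(2 * k / (lam * 2 * h)) / math.sqrt(2 * k / (lam * h))
c_ratio = math.sqrt(2 * k * lam * 2 * h) / math.sqrt(2 * k * lam * h)

# Ordering 2*Q* instead of Q* raises total cost by only 25%
penalty = eoq_cost(2 * q_star, k, lam, h) / eoq_cost(q_star, k, lam, h)
print(round(t_ratio, 3), round(c_ratio, 3), round(penalty, 3))  # 0.707 1.414 1.25
```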
Reorder point for non-zero lead time
Let L be the order lead time. If L = 0, we will order Q∗ units only when the
inventory drops to zero. If L > 0, we will order when the inventory level
drops to the level given in Equation (4.5).

R = \lambda L    (4.5)

where R is called the reorder point, or the inventory level at the time we
place an order so that it arrives when our inventory level reaches 0, as
shown in Figure 4.10.

Limited shipping capacity


If, because of limitations on shipping capacity, there is a limit of M on the
quantity we may order, then the order quantity is the minimum of M and
Q∗.

Figure 4.10: Non-Zero Lead Time L and Reorder Point R.
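The reorder-point rule of Equation (4.5) and the shipping-capacity cap can be combined in a small sketch; all parameter values below are hypothetical.

```python
import math

def order_policy(k, lam, h, lead_time, cap=None):
    """EOQ with reorder point R = lam*L (Eq. 4.5) and an optional shipping cap M."""
    q = math.sqrt(2 * k * lam / h)   # Equation (4.2)
    if cap is not None:
        q = min(q, cap)              # limited shipping capacity: order min(M, Q*)
    r = lam * lead_time              # expected demand during the lead time
    return q, r

# Hypothetical numbers: k=$100, lam=10,000/yr, h=$2/unit/yr, L=1/50 yr, M=800 units
q, r = order_policy(100, 10_000, 2, lead_time=1 / 50, cap=800)
print(round(q), round(r))  # 800 200
```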

The power of collaboration


As an illustration of the power of collaboration in identifying an optimal
inventory policy, consider the following example. David manages a large
retail network for a consumer electronics company. One of the products he
carries is a palm-size, lightweight, high-resolution digital camera. The
camera is purchased under a Free on Board (FOB) contract from a well-
known Japanese camera manufacturer who has a flexible production line
and does not carry inventory for any of its camera models in order to
minimize the risk of obsolescence. Whenever a customer order arrives, the
manufacturer sets up the production line, produces the batch, and packs the
order into an air freight container for the designated destination, Memphis,
Tennessee in this case. The manufacturer’s fixed setup cost to produce the
cameras plus the shipping cost is about $8,500/order. The production
variable cost is already covered in the purchasing cost from this
manufacturer. The contracted logistics partner in Memphis handles the
importation of the shipment and stores the cameras in its local warehouse,
with a holding cost of about $20/camera/year and a fixed cost of $3,000 per
shipment, which are all covered by the annual service fee paid. The main
distribution center for the east coast is located in Allentown, Pennsylvania,
which incurs a holding cost for each camera of $50/unit/year. In this FOB
contract with the manufacturer, David’s fixed cost per order is essentially
the roundtrip trucking cost between Memphis and Allentown, or
$1,800/trip. Assuming the anticipated annual demand for the coming year is
280,000 units, David wishes to determine the optimal EOQ per shipment.
The total supply chain cost under Qr (which will be paid by David's
customers) is the cost for the EOQ model based on an order quantity Qr
from Equation (4.3), plus the sum of the fixed ordering and holding costs of
his logistics partner, plus the sum of the fixed ordering and holding costs of
his manufacturing partner, as given by Equation (4.6).

G(Q_r) = \left(\frac{k_r\lambda}{Q_r} + \frac{h_rQ_r}{2}\right) + \left(\frac{k_l\lambda}{Q_r} + \frac{h_lQ_r}{2}\right) + \left(\frac{k_m\lambda}{Q_r} + \frac{h_mQ_r}{2}\right)    (4.6)

where kl and km represent the fixed ordering cost of the logistics partner
and the manufacturer, respectively. Similarly, hl and hm represent the
respective holding costs. So kl = $3,000/order, hl = $20/unit/year, km =
$8,500/order and, since the manufacturer does not carry inventory, hm = 0.
From Equation (4.2), David's own EOQ is Q_r = \sqrt{2(1{,}800)(280{,}000)/50} \approx 4{,}490
units, so that Equation (4.6) becomes

G(Q_r) = \frac{13{,}300(280{,}000)}{4{,}490} + \frac{70(4{,}490)}{2} \approx \$986{,}549/\text{year}.
Now, let’s consider the fact that every time an order is placed, not only
David, as the retailer, but also the manufacturer and the logistics partner,
will have to pay the fixed ordering cost. As long as there is a supply
contract between the retailer and the manufacturer for the product, the
logistics partner will have to hold the inventory for this supply network.
Therefore, from a supply chain point of view, the global, or joint, EOQ
should factor in the total spending of all the trading partners in this supply
chain, where the total fixed cost is kr + kl + km and the total holding cost is
hr + hl + hm (in this example, just hr + hl since the manufacturer holds no
inventory), yielding a joint EOQ, from Equation (4.2), of:

Q^* = \sqrt{\frac{2(k_r + k_l + k_m)\lambda}{h_r + h_l}} = \sqrt{\frac{2(13{,}300)(280{,}000)}{70}} \approx 10{,}315 \text{ units}.
From Equation (4.3), this joint EOQ reduces the total supply chain
annual costs associated with delivering this product from the supplier to the
market to:

G(Q^*) = \sqrt{2(13{,}300)(280{,}000)(70)} \approx \$722{,}053/\text{year}.
The savings that may be achieved by implementing this policy is
$986,549 − $722,053 = $264,496/year, which reduces the annual cost of the
camera supply chain by more than 25% per year.
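The camera example can be reproduced directly from Equations (4.2) and (4.6); small differences from the figures in the text come from rounding of intermediate values.

```python
import math

lam = 280_000                      # annual demand (units)
k_r, k_l, k_m = 1800, 3000, 8500   # fixed costs: retailer, logistics partner, manufacturer
h_r, h_l, h_m = 50, 20, 0          # holding costs ($/unit/year)

def chain_cost(q):
    """Total annual supply chain cost of Equation (4.6) at order quantity q."""
    return (k_r + k_l + k_m) * lam / q + (h_r + h_l + h_m) * q / 2

q_retail = math.sqrt(2 * k_r * lam / h_r)                             # David's own EOQ, ~4,490
q_joint = math.sqrt(2 * (k_r + k_l + k_m) * lam / (h_r + h_l + h_m))  # joint EOQ, ~10,315
savings = chain_cost(q_retail) - chain_cost(q_joint)                  # ~ $264,500/year
print(round(q_retail), round(q_joint), round(savings))
```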

Table 4.4: Net Savings Analysis of the Three-Partner Channel.


However, a successful implementation of this global optimal operation
policy requires some collaboration among the supply chain partners. The
reality of this policy is that some partners lose profitability while others
gain significantly. Table 4.4 shows the details of these changes in
profitability.
So we need to determine the strategies that will ensure a successful
implementation of this global optimal operating policy. Supply chains
should not be managed in silos. Effective supply chain leadership requires
innovative approaches to share the benefits of a globally optimized supply
chain among trading partners. In this case, agreements should be
renegotiated to achieve the optimal ordering quantity for the supply chain as
a whole and ensure positive gains for each participating company.

4.3.2 The mixed SKU strategy — joint ordering strategy


Many companies carry multiple SKUs of their products, including different
packaging for the same product, such as one gallon vs. half gallon bottles of
organic milk. For example, McMaster-Carr is a supply chain leader
specializing in next day delivery of maintenance, repair, and operations
(MRO) materials, carrying and delivering over 480,000 parts to industrial
and commercial facilities worldwide. The company also handles a heavy
daily volume of order transactions with its many suppliers. Rather than placing
independent orders for each SKU, the joint ordering strategy requests
multiple SKUs each time an order is placed, and thus synchronizes the
timing of orders for multiple products. The joint ordering strategy can be a
cost-effective operating policy because it saves the fixed ordering cost; that
is, jointly ordering multiple SKUs can be less expensive than ordering each
SKU separately. However, we must monitor the total inventory cost,
because joint ordering forces multiple SKUs to share the same ordering
cycle, which may be less effective than individualizing the ordering cycle
for each SKU.
If k0 is the fixed cost of ordering and shipping under a joint ordering
strategy, then

k_0 = k + \sum_i k_i,

where k = the common fixed ordering cost and ki = the product-dependent
shipping cost for SKU i. The total annual cost is

G(n) = nk_0 + \sum_i \frac{h_i\lambda_i}{2n},

where n = the frequency of synchronized orders per year.
To find the optimal ordering frequency n∗ and the optimal order
quantity for each product, let

\frac{dG(n)}{dn} = k_0 - \sum_i \frac{h_i\lambda_i}{2n^2} = 0,

so that the optimal ordering frequency is given by Equation (4.7)

n^* = \sqrt{\frac{\sum_i h_i\lambda_i}{2k_0}}    (4.7)

and the optimal order quantities are given by Equation (4.8).

Q_i^* = \frac{\lambda_i}{n^*}    (4.8)

As another example of the impact of joint ordering, suppose a chain


store has a contracted supplier in Hong Kong for its shoes (S), women’s
clothing (W), and office supplies (O) departments. The three departments
have, in the past, placed their orders independently, at a cost of $4,000 per
order, from the company’s main distribution center (DC) in San Francisco.
Department-dependent shipping costs from the supplier to the main DC are:
kS = $900/order, kW = $1,329/order, and kO = $1,041/order. Assume that
the respective holding costs are hS = hW = hO = $400/unit/year and the
anticipated annual demand for the three product lines is λS = 12,100, λW =
2,500, and λO = 1,600. We wish to determine the cost advantage if the
chain store switches from its current independent departmental EOQ policy
to the mixed SKU, or joint ordering, policy.
For the independent departmental EOQ policy, we can calculate the
optimal order quantity and cost for each of the departments from Equations
(4.2) and (4.3), using a fixed cost of $4,000 + ki per order:

Q_S^* = \sqrt{2(4{,}900)(12{,}100)/400} \approx 545, \quad G_S \approx \$217{,}789/\text{year},
Q_W^* = \sqrt{2(5{,}329)(2{,}500)/400} \approx 258, \quad G_W \approx \$103{,}237/\text{year},
Q_O^* = \sqrt{2(5{,}041)(1{,}600)/400} \approx 201, \quad G_O \approx \$80{,}327/\text{year},

which results in a total annual cost of $401,353/year.


For the joint ordering policy (mixed SKU policy), from Equations (4.7)
and (4.8),

k_0 = 4{,}000 + 900 + 1{,}329 + 1{,}041 = \$7{,}270/\text{order}, \quad n^* = \sqrt{\frac{400(12{,}100 + 2{,}500 + 1{,}600)}{2(7{,}270)}} \approx 21.11 \text{ orders/year},

so Q_S^* ≈ 573, Q_W^* ≈ 118, and Q_O^* ≈ 76 units per order.
The annual fixed ordering cost is n∗ · k0 = n∗(k + kS + kW + kO) =
$153,467.70/year, and the annual holding cost is

\sum_i \frac{h_i\lambda_i}{2n^*} = \frac{6{,}480{,}000}{2(21.11)} \approx \$153{,}482/\text{year}.
So the total supply chain cost of a mixed SKU policy is $306,950/year,


yielding a net savings of $94,404/year when compared with the
departmental independent EOQ policy.
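The joint ordering calculation above can be sketched as follows with the chain store's parameters; small rounding differences from the text's figures are expected.

```python
import math

k = 4000                                     # common fixed ordering cost per joint order
ship = {"S": 900, "W": 1329, "O": 1041}      # department-dependent shipping costs
demand = {"S": 12_100, "W": 2_500, "O": 1_600}
h = 400                                      # holding cost, same for all three lines

k0 = k + sum(ship.values())                                         # joint fixed cost
n_star = math.sqrt(sum(h * d for d in demand.values()) / (2 * k0))  # Equation (4.7)
q_star = {s: d / n_star for s, d in demand.items()}                 # Equation (4.8)
total = n_star * k0 + sum(h * d for d in demand.values()) / (2 * n_star)
print(round(n_star, 2), {s: round(q) for s, q in q_star.items()}, round(total))
```

At the optimal n∗ the annual fixed ordering and holding costs are equal, just as in the basic EOQ model.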

4.3.3 Quantity discount model


A common practice in corporate purchasing is to take advantage of quantity,
or volume, discounts offered by suppliers. One type of discount is the all-
unit discount policy, in which a buyer’s unit purchase price, c, for every unit
purchased, is reduced if the order quantity, Q, exceeds a target volume q.
Very often a supplier may offer multiple levels of discount, such as:
If Q < q1, then the unit purchase price is $c1.
If q1 ≤ Q < q2, then the unit purchase price is reduced to $c2 < c1.
If Q ≥ q2, then the unit purchase price is reduced to $c3 < c2.
In this section, we demonstrate how to take advantage of a quantity
discount by determining the order size, Q, which minimizes the total annual
cost of purchasing, inventory holding, and ordering, or

G_i(Q) = c_i\lambda + \frac{k\lambda}{Q} + \frac{h_i Q}{2}    (4.9)
Gi(Q) is the total cost when c = ci. Clearly, G1(Q) > G2(Q) > G3(Q), which
is shown in Figure 4.11.
Let q0 = 0. For each discount category ci, we compute hi and Q_i = \sqrt{2k\lambda/h_i}
from Equation (4.2). One of the following three cases will hold.
Figure 4.11: Quantity Discount Model.

Case 1: qi−1 ≤ Qi < qi.
In this case, the discount category ci is feasible, so let Q̂i = Qi and
Gi(Q̂i) = Gi(Qi).

Case 2: Qi < qi−1.
In this case, the discount category ci is infeasible, so let Q̂i = qi−1
and Gi(Q̂i) = Gi(qi−1).

Case 3: Qi ≥ qi.
In this case, the next level of discount category, with unit price
ci+1, applies, so let Q̂i = qi and Gi(Q̂i) = Gi+1(qi).

Now let (Q̂i, Gi(Q̂i)) be the corresponding order quantity and annual spending
for discount category ci. The optimal order quantity Q∗ is the candidate with
the smallest annual spending, i.e., Q^* = \hat{Q}_{i^*} where i^* = \arg\min_i G_i(\hat{Q}_i).

As an example, suppose the discount scheme is c1 = $1/unit, c2 =
$0.95/unit, c3 = $0.90/unit, for q1 = 500 and q2 = 1,000. The fixed ordering
cost is $50/order and annual demand is 1,500 units. The inventory holding
cost is 20% of the product cost, so h1 = $0.20/unit/year, h2 =
$0.19/unit/year, and h3 = $0.18/unit/year.
For each discount category i, with unit price ci, we calculate the order
quantity Qi from Equation (4.2). If it is feasible (i.e., Case 1 above), then
that value is Q̂i and we calculate the corresponding Gi(Q̂i). If it is not
feasible, then Q̂i is either qi−1 for Case 2, or qi for Case 3, and we can
similarly calculate the corresponding Gi(Q̂i).
So for our example, we first calculate Qi for each discount category:

Q_1 = \sqrt{2(50)(1{,}500)/0.20} \approx 866, \quad Q_2 = \sqrt{2(50)(1{,}500)/0.19} \approx 888, \quad Q_3 = \sqrt{2(50)(1{,}500)/0.18} \approx 913.

For discount category 1, Q1 > q1 = 500, so from Case 3, Q̂1 = 500 and
G1(Q̂1) = G2(500) ≈ $1,622.50.
For discount category 2, q1 ≤ Q2 < q2, so from Case 1, Q̂2 = 888 and
G2(Q̂2) ≈ $1,593.82.
For discount category 3, Q3 < q2, so from Case 2, Q̂3 = 1,000 and
G3(Q̂3) = $1,515.00.

So our three options are (500, $1,622.50), (888, $1,593.82), and
(1,000, $1,515.00).
The minimum cost is achieved when Q̂ = 1,000, so the optimal inventory
policy given these discount opportunities is to order 1,000 units each time
an order is placed, resulting in a total annual cost (the sum of procurement,
ordering, and inventory holding costs) of $1,515.
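A sketch of the all-unit discount search for this example follows. Note one simplification relative to Case 3 above: instead of evaluating the boundary quantity under the next price, the loop simply skips a price level whose EOQ reaches the next break point, since a cheaper level covers that quantity; it yields the same optimum here.

```python
import math

k, lam = 50, 1500                    # fixed order cost, annual demand
prices = [1.00, 0.95, 0.90]          # c1, c2, c3
breaks = [0, 500, 1000, math.inf]    # q0, q1, q2 (all-unit discount break points)
rate = 0.20                          # holding cost = 20% of unit cost

def total_cost(q, c):
    """Annual purchasing + ordering + holding cost, Equation (4.9)."""
    return c * lam + k * lam / q + rate * c * q / 2

best_q, best_cost = None, math.inf
for i, c in enumerate(prices):
    q = math.sqrt(2 * k * lam / (rate * c))   # unconstrained EOQ at this price
    if q >= breaks[i + 1]:
        continue                              # dominated: a cheaper level covers q
    q = max(q, breaks[i])                     # infeasible EOQ: move up to the break point
    if total_cost(q, c) < best_cost:
        best_q, best_cost = q, total_cost(q, c)
print(round(best_q), round(best_cost))  # 1000 1515
```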

The Impact of Quantity Discounts


Why would suppliers want to offer a quantity discount? The quantity
discount is a special case of the general discounting strategies used widely
in practice. The objective of a quantity discount is to encourage large
orders, which take advantage of the economies of scale in production,
shipping and material handling. Other discounting strategies have been used
to promote sales, to reduce variability in demand and smooth out
fluctuations, to shift inventories to downstream partners and eliminate
excess inventories, or to protect a brand from a competitor’s effort.
How do retailers react to quantity discounts? Retailers may order a large
quantity to take advantage of a supplier’s discount offer, and either pass
some or all of the discount to their customers (which may also create
forward-buy stock, as discussed in Section 4.1 of this chapter, where extra
stock is purchased now as a hedge against higher prices in the future), or
pass little or none at all to their customers. The latter case will aggravate the
bullwhip effect, or increase the variability in the demand pattern, as we
discuss later in this chapter in Section 4.5.
How should suppliers manage the discount process for a supply chain?
Options include
• Offer a discount only when the competition does. However, note that
when all competitors offer a discount to customers, there is no real
increase in market share to any of them.
• Build a better discount management policy, such as limiting the amount
of forward-buy to the retailer. This will limit the amount of overstock in
the network.
• Make sure that the discount is based on actual sales to ultimate
customers, and that the retailer actually passes through a discount to
these customers by means of, for example, manufacturer’s coupons.

4.3.4 EOQ model with planned shortages


When inventory holding cost is relatively high as compared to the shortage
penalty cost, as in, for example, short shelf-life products or products with a
high risk of technology obsolescence, and when customers are willing to
wait for delayed delivery of a stock-out, an inventory planning technique
that builds in planned stock-outs, or backorders, or shortages, can lead to
cost savings.
Let
Q = order quantity,
θ = maximum amount of shortage allowed per order cycle,
π = penalty cost per unit of shortage per year,
Ī = average inventory level.

Figure 4.12 shows, for the EOQ model with planned shortages, the
demand rate, λ, as the slope of the constant demand; the average inventory
during the period when inventory is positive, (Q − θ)/2; and the time
between orders, or cycle time, T = Q/λ (so that number of orders per year =
λ/Q). Note that “negative” on-hand inventory represents backorders; the
actual inventory on hand while backorders are accruing is 0.
Figure 4.12: EOQ with Planned Shortages.

From this, we can derive the cost function, G(Q, θ), the optimal order
quantity, Q∗, and the optimal cycle time, T∗. G(Q, θ) is the total annual
inventory cost, or the sum of the fixed ordering cost, the holding, or
carrying, cost, and the shortage penalty cost.

Examining each term separately, we know that there are λ/Q orders per
year, so the annual fixed ordering cost is kλ/Q.
We also know from Figure 4.12 that the average inventory level is (Q −
θ)/2 for the portion of each cycle when inventory is positive (T′), and 0 for
the portion of each cycle when backorders are being accrued (T − T′). The
fraction of time that inventory is positive, T′/T, can be expressed as (Q − θ)/Q,
so that the annual holding cost is h(Q − θ)²/(2Q).
Similarly, the average shortage level is θ/2 for the portion of each cycle
when shortages are accruing (T − T′), and 0 for the portion of each cycle
when there is inventory on hand. The fraction of time that backorders are
accruing, (T − T′)/T, can be expressed as θ/Q, so that the annual shortage
cost is πθ²/(2Q).

So the total annual cost for the EOQ model with planned shortages is
the sum of the annual fixed ordering, holding, and shortage costs, as given
by Equation (4.10).

G(Q, \theta) = \frac{k\lambda}{Q} + \frac{h(Q-\theta)^2}{2Q} + \frac{\pi\theta^2}{2Q}    (4.10)

Note that G(Q, θ) is a function of two variables, Q, the order quantity


each time an order is placed, and θ, the number of backorders to be filled
each time an order is delivered.
To find the values of Q and θ which minimize this function, we set the
partial derivatives of G(Q, θ) with respect to each of the variables equal to
0,

\frac{\partial G}{\partial Q} = 0 \quad \text{and} \quad \frac{\partial G}{\partial \theta} = 0,

and solving for Q and θ, we find the optimal values in Equation (4.11).

Q^* = \sqrt{\frac{2k\lambda}{h}\cdot\frac{h+\pi}{\pi}}, \qquad \theta^* = Q^*\,\frac{h}{h+\pi}    (4.11)

Given θ∗ and Q∗, the maximum inventory level in a cycle is

H = Q^* - \theta^* = Q^*\,\frac{\pi}{h+\pi}    (4.12)

The cycle time remains the same as in Equation (4.4), Q∗/λ, and the
maximum amount of time that a customer waits for a backordered item is
given in Equation (4.13).

T - T' = \frac{\theta^*}{\lambda}    (4.13)

where T is the cycle time, and T′ is the portion of time in a cycle that we
have a positive on-hand inventory.

Special cases in the planned shortage model


In general, the fraction of demand satisfied by on-hand inventory (or the fill
rate) is π/(h + π).
From the analysis, we can see that:
• When the shortage penalty cost π is much larger than the holding cost h,
θ∗ approaches zero, which implies no shortage, and Q∗ approaches the
optimal EOQ result in Equation (4.2).
• When the holding cost h is much larger than the shortage penalty cost π,
θ∗ approaches Q∗, meaning no inventory is carried: at the end of each
cycle, there are Q∗ units on backorder, all filled when the next order
arrives, bringing the inventory level to 0, a special case of make-to-
order production-inventory management, where production is initiated
or an order placed only after demand occurs.

Comparison of the classical EOQ model with the planned shortage model
Consider, for example, a demand rate of λ = 100,000/year, holding cost h =
$24/unit/year, fixed order setup cost k = $1,800/order, and shortage penalty
cost π = $30/unit/year, a maximum tolerable customer waiting time of 7
days, and 300 working business days in a year; should we use a classical
EOQ model or an EOQ model with planned shortages?
Classical EOQ approach. In this case, assuming no shortages are
permitted and demand must be satisfied when it occurs, Equations (4.2) and
(4.3) give us

Q^* = \sqrt{2(1{,}800)(100{,}000)/24} \approx 3{,}873 \text{ units}, \qquad G(Q^*) \approx \$92{,}952/\text{year}.

Note that the inventory holding cost alone is $46,476/year.
EOQ with planned shortage approach. In this case, we will allow
backorders to accrue if such a policy results in a lower annual cost.
Equation (4.11) yields

Q^* = \sqrt{\frac{2(1{,}800)(100{,}000)}{24}\cdot\frac{24+30}{30}} \approx 5{,}196 \text{ units}, \qquad \theta^* = 5{,}196\cdot\frac{24}{24+30} \approx 2{,}309 \text{ units}.
The resulting maximum inventory level is, from Equation (4.12): Q∗ − θ∗
= 2,887, which implies that the amount directly satisfied from stock is about
56% of the cycle demand, and the average cycle inventory is
(Q∗ − θ∗)²/(2Q∗) ≈ 802 units. The total annual cost, the sum of the annual
fixed ordering, holding, and shortage costs, is, from Equation (4.10),

G(Q^*, \theta^*) \approx 34{,}641 + 19{,}245 + 15{,}396 = \$69{,}282/\text{year},
which results in a net saving of $23,670/year over the classical EOQ model
results; using the EOQ model with planned shortages saves about 25% of the
annual operating costs. Furthermore, with this new policy, the resulting
inventory cycle time becomes

T^* = Q^*/\lambda = 5{,}196/100{,}000 \approx 0.052 \text{ year}, or about 15.6 working days,

and the maximum customer waiting time (in the worst case) is, from
Equation (4.13),

\theta^*/\lambda = 2{,}309/100{,}000 \approx 0.023 \text{ year}, or about 6.9 working days, which is
within the 7-day tolerance.

Note that the EOQ model with planned shortages increases the
flexibility of the classical EOQ model; that is, it is a less constrained system
than the classical EOQ model and allows planned shortages if that will
result in a lower cost; the latter prohibits backorders and hence is less
flexible, or more constrained. Thus, at worst, the minimum total cost for the
planned shortage model will be the same as the classical EOQ model (i.e.,
the solution will have θ∗ = 0, or no backorders); however, if we are able to
reduce the total cost by incurring shortages, the optimal solution to the
planned shortage model will reflect that by resulting in a positive θ∗ and a
higher Q∗ than in the classical model.
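The comparison above can be reproduced with a short sketch of Equations (4.2), (4.3), (4.10), and (4.11), using the example's parameters.

```python
import math

lam, h, k, pi = 100_000, 24.0, 1800.0, 30.0

# Classical EOQ (no shortages), Equations (4.2)-(4.3)
q_eoq = math.sqrt(2 * k * lam / h)
g_eoq = math.sqrt(2 * k * lam * h)

# EOQ with planned shortages, Equation (4.11)
q_ps = math.sqrt(2 * k * lam / h * (h + pi) / pi)
theta = q_ps * h / (h + pi)
g_ps = (k * lam / q_ps                         # annual fixed ordering cost
        + h * (q_ps - theta) ** 2 / (2 * q_ps) # annual holding cost
        + pi * theta ** 2 / (2 * q_ps))        # annual shortage cost, Eq. (4.10)

print(round(q_eoq), round(g_eoq))        # 3873 92952
print(round(q_ps), round(theta), round(g_ps))  # 5196 2309 69282
```

The difference g_eoq − g_ps reproduces the $23,670/year saving quoted in the text.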

4.3.5 EOQ model with finite delivery rate


This mathematical model is suitable when units are available as soon as
they are produced by a production line with a finite capacity (where there is
a limit on throughput, or the production volume per unit of time), or,
equivalently, where the logistics partner for the inbound shipment has a
limited transportation capacity per shipment.
Let
PR = production or shipping rate, where PR > λ,
Q = order size or production batch size,
T1 = up-time during the production cycle, or production batch time,
T2 = down-time during the production cycle,
T = production cycle time, where T = T1 + T2,
H = maximum on-hand inventory level.
Figure 4.13 shows, for the EOQ model with finite delivery rate, the
demand rate, λ, as the slope of the constant demand; the inventory buildup
rate, PR − λ; the average inventory during each cycle, H/2; and the time
between orders, or cycle time, T = Q/λ (so that number of orders per year =
λ/Q).
Figure 4.13: EOQ Model with Finite Delivery Rate.

From a knowledge of geometry, we can show that the following
mathematical relationship holds:

H = (PR - \lambda)T_1 = (PR - \lambda)\frac{Q}{PR} = Q\left(1 - \frac{\lambda}{PR}\right),

and from this, the average inventory level is

\frac{H}{2} = \frac{Q}{2}\left(1 - \frac{\lambda}{PR}\right).

This allows us to represent our total annual cost as the sum of the fixed
ordering and holding costs:

G(Q) = \frac{k\lambda}{Q} + \frac{h'Q}{2}.

The total cost for this model is a function of one variable, Q, so to minimize
G(Q), let dG(Q)/dQ = 0, and solving for the optimal order quantity Q∗,

Q^* = \sqrt{\frac{2k\lambda}{h'}}    (4.14)

where h' = h(1 - \lambda/PR). Then the minimum cost, G(Q∗), can be written as:

G(Q^*) = \sqrt{2k\lambda h'}    (4.15)
Retailer vs. manufacturer considerations
From Equation (4.14), we can see that, everything else being equal, a
smaller production rate PR leads to a larger production batch size Q∗. This is
true because a slower production rate results in a lower rate of inventory
buildup and thus a lower inventory holding cost. While Equation (4.2) refers
to a retail setting where the entire order is delivered in one shipment,
Equation (4.14) represents a manufacturing setting where inventory is built
up gradually over time. Comparing the two equations, we can see that the
batch sizes used by manufacturers are typically greater than the order
quantities placed by retailers.
The De-Icier Case in Section 4.6 considers various options for the EOQ
Model with Finite Delivery Rate.
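A sketch of Equations (4.14) and (4.15) follows, with hypothetical parameters. It also confirms that as PR grows very large, h′ approaches h and the model collapses to the classical EOQ, so the finite-rate batch size is always at least as large as the retail EOQ.

```python
import math

def eoq_finite(k, lam, h, pr):
    """EOQ with finite production/delivery rate PR > lam, Equations (4.14)-(4.15)."""
    h_eff = h * (1 - lam / pr)       # effective holding cost h' = h(1 - lam/PR)
    q = math.sqrt(2 * k * lam / h_eff)
    g = math.sqrt(2 * k * lam * h_eff)
    return q, g

# Hypothetical numbers: k=$100, lam=10,000/yr, h=$2/unit/yr
q_slow, _ = eoq_finite(100, 10_000, 2, pr=20_000)      # PR only 2x demand
q_fast, _ = eoq_finite(100, 10_000, 2, pr=10 ** 12)    # PR effectively infinite
print(round(q_slow), round(q_fast))  # 1414 1000
```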

4.4 Managing Uncertainty for Short Life Cycle Items


So far, all the inventory models we have introduced assume that demand is
known and occurs at a constant rate. In this section, we take demand
uncertainty into account in managing inventory for products with a short
life cycle. We assume that the life cycles of such products are so short and
the supply chain set up in such a way that no replenishment is possible
during the selling season. Examples include Christmas trees, turkeys for
Thanksgiving, roses for Valentine’s Day, and fashion apparel. We address
the tradeoff between lost sales vs. markdown costs, introduced earlier in
Section 4.2, in managing inventory and determining the optimal inventory
policy (how much to order and how often to order).
The fashion apparel industry is highly seasonal and demand is
unpredictable for several reasons: short life cycle products designed for
only one selling season, a large number of SKUs, and long replenishment
lead times. To reduce the cost of goods sold (often referred to as COGS),
many US distributors and retailers in this industry buy products from Asia
and other low-cost manufacturing countries, and only maintain domestically
the functions of design, distribution and sales. Because of lengthy
transportation, the replenishment lead time can be very long, and typically a
majority of goods are purchased before the season starts, with minimum or
zero replenishment during the season.
The combination of high demand uncertainty and long lead time leads
to significant risk in the inventory planning process. Victor Fung, CEO of
Li & Fung, Hong Kong’s largest export trading company, commented: “As
far as I am concerned, inventory is the root of all evil,”5 for excessive
inventory at the end of the season must be marked down and usually sold at
a loss. On the other hand, if demand exceeds available inventory, the sales
are lost, and gone with them is profit.

Fashion apparel vs. the IBM Thinkpad


To compensate for lost sales, fashion apparel companies tend to overstock,
which often results in 50–70% price markdowns at the end of the season.
However, in 1992, IBM used the opposite strategy when it introduced its
Thinkpad notebook. IBM could not accurately predict the demand for a new
product with the innovative design of the mouse button, so it under-planned
the production capacity, and the outcome was that they ended up with
backorders for more than one year.6 Why were different strategies used for
fashion apparel and IBM Thinkpads? Was it related to value and cost of
inventory? Our analysis of the Newsvendor Model will help us to answer
this question.

4.4.1 The Newsvendor Model


The Newsvendor, or Newsboy, Model refers to an inventory system in which
the product has a short life cycle (like perishable items) and demand is
uncertain, analogous to the problem faced by a newspaper vendor who must
determine how many copies of a daily newspaper to stock each day, given
that unsold copies (that is, the amount of inventory remaining at the end of
the day) are worthless (or have a very small salvage value).

Example: Christmas trees


Suppose a chain store sells Christmas trees each holiday season. The cost is
$2.50 per unit and each tree sells for $7.50. Leftover trees are recycled by
the supplier, who offers $1.00 each. The store maintains a record of
historical sales data for similar Christmas trees sold in previous years, and
an analysis of the data shows that the demand follows a normal distribution
with mean 22,390 units and standard deviation 4,620 units. How many trees
should the store order for this coming season?
Let’s consider three simple strategies: (1) order size = maximum
possible demand; (2) order size = minimum possible demand; and (3) order
size = average demand.
Strategy (1) is aggressive, and like the fashion apparel strategy, aimed
at meeting demand under all demand scenarios. The likely outcome is
excess inventory at the end of the season, which incurs overage, or
markdown costs. In this example, the overage cost is $2.50 −$1.00 =
$1.50/unit.
Strategy (2) is conservative, and like IBM’s strategy for the Thinkpad,
aimed at eliminating excess inventory under all demand scenarios. The
likely outcome is lost sales during the selling season, which incurs an
underage cost (or loss of profit). In this example, the underage cost is $7.50
− $2.50 = $5.00/unit.
Strategy (3) is plausible, but it only takes demand into consideration,
ignoring the cost structure, and thus is suboptimal.
A profit-maximizing strategy can be found from the Newsvendor
Model, which applies to short life cycle products which cannot be
replenished during the season, and whose demand is uncertain, or
probabilistic (like daily newspapers); at the end of the season, whatever is
left in inventory cannot be carried over to the next season. This model is
also known as the one-period model. We wish to determine the optimal
order quantity Q∗. To specify the cost function for the Newsvendor Model,
let
D = probabilistic demand with cumulative distribution function F(x)
and probability density function f(x),
p = selling price,
c = cost of goods sold,
s = salvage value after the season,
Co = overage cost or markdown cost (loss per unit of surplus), Co = c − s,
Cu = underage cost or loss of profit (loss per unit of shortage), Cu = p − c,
Q = order quantity,
G(Q) = expected cost of a mismatch between supply and demand,
TP = optimal expected total profit.
Suppose we order Q units, and demand turns out to be D units; then

G(Q, D) = C_o \max(0, Q - D) + C_u \max(0, D - Q),

with an expected cost of a mismatch G(Q) = E[G(Q, D)]. The optimal order
quantity Q∗ which minimizes the expected cost must satisfy the following
relationship:

F(Q^*) = \frac{C_u}{C_u + C_o}    (4.16)

where Cu/(Cu + Co) is called the critical ratio. If the critical ratio is 70%,
Equation (4.16) implies that one must order enough inventory to satisfy
demand 70% of the time. When the uncertain demand is known to be distributed
normally
with mean µ and standard deviation σ [D ∼ Normal(µ, σ)], then

Q^* = \mu + z\sigma    (4.17)

where z is the standard normal value corresponding to a critical ratio of
Cu/(Cu + Co), i.e., Φ(z) = Cu/(Cu + Co).
The optimal expected mismatch cost is

G(Q^*) = (C_o + C_u)\,\sigma\,\phi(z)    (4.18)

where \phi(z) = \frac{1}{\sqrt{2\pi}}e^{-z^2/2} is the standard normal density function.
Then the optimal total profit is

TP = C_u\mu - G(Q^*)    (4.19)

Returning to the Christmas tree example, Co = $1.50 and Cu = $5.00.
From Equation (4.16), the critical ratio is 5.00/(5.00 + 1.50) = 0.77. A table
of the standard normal distribution gives a value of z = 0.74 for a critical
ratio of 0.77. Since demand is distributed normally, the optimal order
quantity is, from Equation (4.17),

Q^* = 22{,}390 + 0.74(4{,}620) \approx 25{,}809 \text{ trees}.

Alternatively, one can obtain Q∗ in Excel using the function
=NORMINV(0.77, 22390, 4620). So from Equation (4.18), the expected
mismatch cost is 4,620 · 6.50 · φ(0.74) ≈ $9,140. From Equation (4.19),
the expected profit is 5 · 22,390 − 9,140 = $102,810.
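The Christmas tree calculation can be sketched with Python's standard library. Note that `NormalDist.inv_cdf` returns the exact z (about 0.736) rather than the table value z = 0.74, so the results differ slightly from the rounded figures above.

```python
from statistics import NormalDist

mu, sigma = 22_390, 4_620          # normal demand for trees
p, c, s = 7.50, 2.50, 1.00         # price, cost, salvage
c_under, c_over = p - c, c - s     # Cu = $5.00, Co = $1.50

ratio = c_under / (c_under + c_over)          # critical ratio, Equation (4.16)
z = NormalDist().inv_cdf(ratio)               # exact z for the critical ratio
q_star = mu + z * sigma                       # Equation (4.17), ~25,800 trees
mismatch = (c_over + c_under) * sigma * NormalDist().pdf(z)   # Equation (4.18)
profit = c_under * mu - mismatch                              # Equation (4.19)
print(round(q_star), round(mismatch), round(profit))
```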

Fashion apparel vs. the IBM Thinkpad


From Equation (4.16), we see that the optimal order quantity depends not
only on the demand distribution but also on the cost structure. In the case of
extremely high Cu from loss of profit, an aggressive order quantity aimed at
meeting virtually all demand is optimal. In the fashion apparel industry,
where the retail profit margin, (p − c)/c, is often more than 100%, a
markdown of 70% yields an overage cost of 40% of the product cost. So the
critical ratio in this case is at least 0.70. If the profit margin is 150%, then
the same markdown yields an overage cost of 25% of the product cost, which
results in a critical ratio of about 0.86. This explains why fashion apparel retailers
are aggressive in ordering and why they often have significant markdowns
at the end of the season.
The IBM Thinkpad notebook represents the opposite situation. In 1992,
IBM enjoyed strong brand loyalty for its personal computers; the underage
cost was not significant because most customers would choose to wait for
stock-outs to be delivered. On the other hand, building a new production
line incurs significant capital costs, leading to high overage cost if the
predicted demand is not realized. Hence, it was indeed better for IBM to
adopt a different strategy from the fashion apparel industry by being
conservative in capacity planning for its Thinkpad notebook.

4.5 Managing Uncertainty for Durable Items — Safety Stock Model

In this section, we consider durable items, or items with a long life cycle
and long shelf-life, with probabilistic demand, and we address the trade-off
between safety stock vs. service requirements introduced earlier in Section
4.2, in managing inventory and determining the optimal inventory policy
(how much to order and how often to order).
For durable items, unsold inventory at the end of one ordering cycle
can be carried over to the next cycle. Durable items account for a majority
of commodities sold in retail and world trade markets, such as dry
groceries, electronics, mechanical equipment, and their components/raw
materials. With globalization, supply chains continue to expand
internationally and product movement becomes more complex, in order to
reach the best possible suppliers and diverse markets over the world; as a
result, lead times and risk continue to increase. It is a challenging task to
manage inventory for durable items because of the long replenishment lead
times, probabilistic demand, and high service level requirements.
The bullwhip effect refers to the magnification in demand variability as
orders move upstream through the supply chain due to forecasting errors,
batch ordering, panic orders and price fluctuations. The bullwhip effect is a
major contributor to demand uncertainty/variability in supply chains, and is
amplified by lengthy lead times.

Safety Stock
Safety stock provides protection in the event of a demand surge exceeding
the amount forecasted and planned for; it is essentially a buffer against
forecasting errors. Another reason for holding safety stock is uncertainty in
supply, such as availability of suppliers’ inventory, shipping time, quality
control issues, and delays at ports that may impact the suppliers’ ability to
deliver an agreed-upon quantity at the agreed-upon time. In this section, we
will focus on optimization techniques that balance the safety stock cost vs.
the service requirements under demand uncertainty.

4.5.1 The continuous-review batch size — reorder point (Q–R) model


This system reviews the inventory status continuously in time, typically by
means of a computer system. Whenever the inventory position (on-hand
inventory + on-order inventory − backorders) drops to or below a reorder
point R, the system orders multiple batches, each of size Q, to raise the
inventory position to the smallest possible level above R.
The continuous-review batch size model is often applied in managing
durable items with probabilistic demand and a non-zero replenishment lead
time. The system pays a fixed order cost plus a variable cost for each item
ordered, and an inventory holding cost for each item carried per unit of
time. If demand occurs and there is no inventory available, the demand is
backordered.
The (Q−R) system is an extension of the EOQ model in Section 4.3
from predictable demand to unpredictable demand. When demand is
predictable, we know exactly when on-hand inventory drops to zero. Thus,
we can place an order in advance so as to match the delivery time of the
order to the time when we run out of inventory. When demand is
probabilistic, we cannot tell exactly when inventory will reach zero. The
question, then, is when to order; or, more specifically, what is the reorder
point, R, that will trigger the placement of an order?
Figure 4.14: The Safety Stock Model.

To answer this question, we must determine the demand during lead


time, as shown in Figure 4.14. The reorder point, R, is the inventory level
necessary to protect the company from a stock-out during lead time. With
constant demand and constant lead time, R is exactly the amount that will
be sold during the lead time. If demand is probabilistic, R is the expected
demand during lead time plus the safety stock.

The Service Level Model


The objective of the (Q−R) model here is to determine the batch size Q and
the reorder point R so as to minimize the system-wide cost, or the sum of
the inventory holding cost and ordering cost, subject to meeting a specified
customer service requirement.
We typically measure service levels (stock availability) in two ways:
the Type 1 and Type 2 service levels.
Type 1 service level α. The Type 1 service level measures the probability of
having no stock-out during one ordering cycle. For a target service level α,
0 < α < 1.00, the inventory control policy (Q, R) can be determined by the
following equations:
Order quantity: Q = EOQ value given by Equation (4.2).
Reorder point: Probability (D ≤ R) = α.
Suppose D ∼ Normal(µ, σD); then

R = µ + zα · σD,
(4.20)

where zα is the value of the standard normal random variable with
cumulative probability α (i.e., an upper tail area of 1 − α) under the
standard normal curve.
Type 2 service level β. The Type 2 service level measures the fill rate —
the percentage of demand fulfilled upon arrival by on-hand inventory. For a
target service level β, 0 < β < 1.00, the inventory control policy (Q, R) can
be determined by the following equations:
Order quantity: Q = EOQ value given by Equation (4.2).
Reorder point: Suppose D ∼ Normal(µ, σD); then, just as in Equation (4.20),

R = µ + z · σD,

where z can be found by σD · LS(z) = n(R) = Q(1 − β), and LS(z) is the
standard normal loss function.
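Both service-level rules can be sketched numerically. The parameter values, helper names, and bisection root-finders below are our own illustrations, not from the text:

```python
import math

def phi(z):
    """Standard normal density."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal cumulative distribution."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def loss(z):
    """Standard normal loss function LS(z) = phi(z) - z*(1 - Phi(z))."""
    return phi(z) - z * (1 - Phi(z))

def z_for_alpha(alpha):
    """Solve P(Z <= z) = alpha by bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if Phi(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Lead-time demand D ~ Normal(mu, sigma_D); illustrative values
mu, sigma_D = 1000.0, 200.0

# Type 1: P(D <= R) = alpha  =>  R = mu + z_alpha * sigma_D
alpha = 0.95
R1 = mu + z_for_alpha(alpha) * sigma_D          # ~1,329 units

# Type 2: choose z so that sigma_D * LS(z) = Q * (1 - beta)
Q, beta = 500.0, 0.98
target = Q * (1 - beta) / sigma_D               # required LS(z) value
lo, hi = -10.0, 10.0                            # LS(z) decreases in z
for _ in range(100):
    mid = (lo + hi) / 2
    if loss(mid) > target:
        lo = mid
    else:
        hi = mid
z2 = (lo + hi) / 2
R2 = mu + z2 * sigma_D                          # ~1,251 units
print(round(R1), round(R2))
```

Note that the Type 2 rule typically yields a lower reorder point than the Type 1 rule for the same nominal percentage, because each order of size Q also absorbs part of the shortfall.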


As an example, the daily demand for a one-time use medical product
held by a distributor follows approximately a normal distribution with a
mean of λ = 2,500 units and a standard deviation of σd = 500 units. The
holding cost for safety stock is h = $200 per unit per year. The distributor’s
target service level for this product is α = 0.95. It is known that the current
carrier under contract with the distributor has a guaranteed lead time for
delivery (from the manufacturer to the distributor’s main warehouse) of L =
7 days. Suppose that the distributor is currently evaluating a proposal to
switch to a new carrier who has been aggressively looking for business in
the region. If the distributor switches from its current carrier to this new
carrier, a potential saving of $250,000/year can be achieved. However, the
new carrier’s shipping lead time, L′, is random. L′ has an average E(L′) = 7
days with a standard deviation of σL = 1 day. Should the distributor switch
its carrier?
With the current carrier and its 7-day guaranteed lead time for shipping,
we can estimate the annual holding cost for the safety stock. The standard
deviation of demand during the lead time is

σD = σd · √L = 500 · √7 ≈ 1,322.88 units.

Because α = 0.95, zα = 1.65, SS = σD · zα = 1,322.88 · 1.65 ≈ 2,183 units
(less than one day’s supply). The annual holding cost for the safety stock is
h × SS = $436,600/year.
If we switch to the new carrier, we would have to face uncertainty in
the lead time. That is, the standard deviation of demand during the
(random) lead time becomes

σD′ = √(E(L′) · σd² + λ² · σL²) = √(7 · 500² + 2,500² · 1²) ≈ 2,828.43 units,

so SS = zα · σD′ = 1.65 · 2,828.43 ≈ 4,667 units, and the annual holding
cost for the safety stock = h × SS = $933,400/year,


which leads to an additional cost, due to uncertain lead time, of
$496,800/year. A switch to the new carrier would result in an increased cost
that is nearly double the savings of $250,000/year, and hence the distributor
should not switch carriers.
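The carrier comparison can be verified in a few lines, rounding safety stock to whole units before costing, as the example does:

```python
import math

lam, sigma_d = 2500.0, 500.0    # daily demand: mean and std dev (units)
h, z_alpha = 200.0, 1.65        # holding $/unit/yr; z for alpha = 0.95

# Current carrier: fixed lead time L = 7 days
L = 7
ss_now = round(z_alpha * sigma_d * math.sqrt(L))          # 2,183 units
cost_now = h * ss_now                                     # $436,600/yr

# New carrier: random lead time with E(L') = 7 days, sigma_L = 1 day
EL, sigma_L = 7, 1
sigma_Dp = math.sqrt(EL * sigma_d**2 + lam**2 * sigma_L**2)
ss_new = round(z_alpha * sigma_Dp)                        # 4,667 units
cost_new = h * ss_new                                     # $933,400/yr

extra = cost_new - cost_now    # $496,800/yr, far above the $250,000 saving
print(ss_now, ss_new, extra)
```

Notice how heavily the λ²·σL² term dominates: even one day of lead-time standard deviation adds far more variability than the day-to-day demand noise itself.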

4.5.2 The periodic-review base-stock model


In many real life situations, such as vending machines or shelves in
department stores, inventory levels are not reviewed continuously over time
either for the suppliers’ convenience or due to the need for consolidated
shipments. In such cases, inventory levels are reviewed periodically at a
fixed interval T, such as a day, a week, or a month; this interval is called the
period. An order is placed at the beginning of each period to raise the
inventory position up to a fixed level, called the base-stock level, or the
target inventory level.
The objective of a periodic-review base-stock model is to determine the
base-stock level which minimizes the total operational cost while meeting
the service level requirement.
Let
λ = the expected demand per unit of time,
σd = the standard deviation of demand per unit time,
L = the lead time,
T = the review period,
zα = the standard normal variable for a target Type 1 service level α.
The base-stock level S and safety stock SS are given by

S = λ · (T + L) + zα · σd · √(T + L),  SS = zα · σd · √(T + L),
(4.21)

where the quantity λ · (T + L) represents the expected demand during the
review period and lead time, and the quantity zα · σd · √(T + L) represents
the safety stock level that guarantees a target Type 1 service level α.
Given the base-stock level, S, the quantity to order at the beginning of
each review period is

Q = S − inventory position.

When α approaches 1, the average on-hand inventory approaches
λ · T/2 + SS.
Let n(S) be the expected number of backorders per period. Then

n(S) = σd · √(T + L) · LS(z),
(4.22)

where z = (S − λ · (T + L)) / (σd · √(T + L)) and LS(z) is the standard loss
function.


The total operating cost is:

(4.23)

The Type 2 fill rate is:

β = 1 − n(S) / (λ · T).
(4.24)
As an example, consider a periodic-review inventory system with λ =
30 units/day, α = 0.99, σd = 3 units, T = 7 days, lead time L = 2 days, and an
inventory position of 85 units at the beginning of the review period. For this
inventory process, the base-stock level should be, from Equation (4.21),

S = 30 · (7 + 2) + 2.33 · 3 · √(7 + 2) = 270 + 20.97 ≈ 291 units.

The order quantity for the review period is then

Q = S − inventory position = 291 − 85 = 206 units.
4.5.3 Risk pooling effect


Risk pooling is an important concept for managing inventory under
uncertainty in supply chain operations. The idea is to pool inventory among
multiple locations to meet the aggregated demand; this results in less
variability than a separate inventory policy for each individual demand
stream. This is because an unexpected high demand from one location may
be offset by a very low demand from another location. With smaller
demand variability, less safety stock is required to achieve the same target
service requirement. The effectiveness of risk pooling, of course, depends
on many factors, such as the correlation between demands at different
locations and the cost of implementing a risk pooling policy.
When we consolidate multiple inventory-holding locations, such as
distribution centers, into one, we can apply the square root law to quantify
the impact of risk pooling on inventory savings, assuming the demand
streams at different locations are independent and follow identical
probability distributions, and all the lead times are the same.
Let
X1 = the total safety stock in a distribution network,
X2 = the target level of total safety stock in the network,
n1 = the number of distribution centers in the network,
n2 = the number of distribution centers required after network
consolidation.
From the square root law:

X2 = X1 · √(n2 / n1).
(4.25)

For example, a company currently carries 400,000 units of safety stock


in its northeast regional distribution network through 8 sales points. If the
company wants to reduce this safety stock inventory by 25%, how many
sales points should be eliminated?
In this case, X1 = 400,000 units, X2 = 300,000 units, n1 = 8, and we
need to find the value of n2 for the new network which allows us to reduce
the current safety stock by 25%.
From Equation (4.25),

n2 = n1 · (X2 / X1)² = 8 · (300,000 / 400,000)² = 8 · 0.5625 = 4.5 ≈ 5.

So the system should be consolidated from 8 to 5 sales points, or 3
sales points should be eliminated.
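The consolidation arithmetic can be checked directly (we round up to a whole number of sales points, which keeps the safety stock reduction at or below the 25% target):

```python
import math

# Square root law: X2 = X1 * sqrt(n2 / n1), so n2 = n1 * (X2 / X1)**2
X1, X2, n1 = 400_000, 300_000, 8
n2 = math.ceil(n1 * (X2 / X1) ** 2)  # 8 * 0.75**2 = 4.5, round up to 5
print(n2, n1 - n2)                   # keep 5 sales points, eliminate 3
```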
The ImportHome LLC Case Study in Section 4.7 illustrates the Safety
Stock Model for Managing Uncertainty for Durable Items.

4.6 Case Studies — Economies of Scale — Cycle Stock


4.6.1 Office Supplies, Inc.
Steve, the Chief Supply Officer of Office Supplies, Inc., a major retailer of
high-quality office supplies in the tri-state area, must decide on the order
quantity per shipment on a contract with the EastPen Company,
headquartered in South Korea, which produces a long-lasting and easy-grip
ball pen exclusively for Office Supplies. The product was introduced
recently and the sampled product has been well received by the contracted
business customers. Based on solicited early orders, the anticipated demand
for the coming year is 312,000 cases. Assume that the purchase cost is
$180/case, or c = $180/unit, and the unit holding cost is h = 35% · c, or h =
$63/case/year. The decision on the order quantity per shipment, however, is
restricted by the capacity of the logistics partner who is responsible for the
international shipping and handling for Office Supplies. Two options are
being considered.
Option 1: Stay with the current logistics partner. The transportation of the
goods can be done through the company’s current contracted carrier (for all
the existing office supplies), who charges a fixed cost of $2,200/order for the
ball pen regardless of the order size, but has a limited shipping capacity of
2,000 cases per shipment due to its ocean container size.
Option 2: Switch to a new logistics partner. Steve may consider signing
a contract for shipping the ball pens with another carrier who charges a
fixed cost of $2,900 per order, but allows a maximum of 6,000 cases per
shipment.
Steve wishes to weigh all interdependent factors of the supply chain in
optimizing the total cost associated with the process of delivering product to
customers in order to determine which logistics partner he should use and
the optimal EOQ and annual cost.
Questions for Office Supplies, Inc. Case Discussion:
• What is the optimal order quantity and cost, ignoring the shipping
capacity limit, based on the parameters for the current logistics partner
(Option 1)?
• If this order quantity exceeds the shipping capacity of the current
carrier, what is the cost of ordering the maximum amount possible, Q =
2,000 from this partner?
• If we utilize two containers per shipment under Option 1, the fixed
ordering cost would double to k′ = $4,400, but we will be able to order
up to Q = 4,000 cases/order. Will this modification improve the
operational performance?
• What is the optimal order quantity and cost using the new logistics
partner (Option 2)?
• What would you recommend to Steve?
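One way to explore these questions numerically is with the EOQ cost function from Section 4.3; the sketch below is our own, with capacity handled by simple truncation of the EOQ (valid here because the annual cost is convex in Q):

```python
import math

lam, h = 312_000, 63.0   # annual demand (cases) and holding cost ($/case/yr)

def annual_cost(k, q):
    """Fixed ordering cost plus cycle-stock holding cost at order size q."""
    return k * lam / q + h * q / 2

def best_q(k, cap):
    """EOQ, truncated at the carrier's per-shipment capacity."""
    return min(math.sqrt(2 * k * lam / h), cap)

q1 = best_q(2200, 2000)    # Option 1: EOQ ~4,668 exceeds the 2,000-case cap
q1b = best_q(4400, 4000)   # Option 1 with two containers per shipment
q2 = best_q(2900, 6000)    # Option 2: EOQ ~5,359 fits within 6,000 cases

for k, q in [(2200, q1), (4400, q1b), (2900, q2)]:
    print(round(q), round(annual_cost(k, q)))
```

Under these assumptions, Option 2 achieves its unconstrained EOQ while both Option 1 variants are capacity-bound, which is the crux of the recommendation Steve must weigh.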

4.6.2 Mountain Tent Company


The Mountain Tent Company orders one of its main components from three
suppliers. The total quantity contracted to supplier A is 144,000/year, to
supplier B is 86,400/year, and to supplier C is 12,000/year. Last year,
Mountain Tent used an independent EOQ policy for ordering from each of
the three suppliers with a fixed ordering cost of $800/order plus a supplier-
dependent shipping cost of $200/order with supplier A, $60/order with
supplier B and $120/order with supplier C. The holding cost h =
$30/unit/year. We wish to compare the total annual operating cost (that is,
the ordering plus inventory holding costs) for the independent EOQ policy
with the cost of consolidating the supply base by sourcing the total amount
only from the least costly supplier, supplier B, to determine how much the
company may save annually in operating costs if the consolidation is
implemented.
Questions for Mountain Tent Company Case Discussion:
• What is the annual cost under the independent EOQ policy for each of
the three suppliers?
• What is the total annual operating cost under the independent EOQ
policy?
• If Mountain Tent consolidates the supply base and sources the entire
yearly demand from supplier B, what is the total annual operating cost?
• Based on a comparison of the two costs, what is your recommendation
to Mountain Tent?
• What is the risk associated with consolidating the supply base to
achieve savings in the total ordering plus carrying cost in the event of an
unexpected breakdown in the supply chain?
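A sketch of the comparison, using the standard result that the minimum annual ordering-plus-holding cost under EOQ is √(2kλh); the helper names are ours:

```python
import math

h = 30.0  # holding cost, $/unit/yr

def eoq_cost(k, lam):
    """Minimum annual ordering + holding cost under EOQ: sqrt(2*k*lam*h)."""
    return math.sqrt(2 * k * lam * h)

# Fixed cost per order = $800 ordering + supplier-specific shipping
suppliers = {"A": (800 + 200, 144_000),
             "B": (800 + 60, 86_400),
             "C": (800 + 120, 12_000)}

independent = sum(eoq_cost(k, lam) for k, lam in suppliers.values())

# Consolidate the full 242,400/yr at supplier B (lowest per-order cost)
total_lam = sum(lam for _, lam in suppliers.values())
consolidated = eoq_cost(800 + 60, total_lam)

print(round(independent), round(consolidated), round(independent - consolidated))
```

The saving comes from two sources at once: the lower per-order cost of supplier B and the square-root scaling of EOQ cost in volume, which makes one large demand stream cheaper to serve than three small ones.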
4.6.3 De-Icier
A New Jersey based chemical company producing De-Icier at a rate of
2,500,000 lbs/year is anticipating an annual demand of 600,000 lbs/year.
Assume the production line setup cost is $1,500, the variable cost is
$3.50/lb, the selling price is $4.40/lb, the unit holding cost is $1.19/lb/year,
and the company operates 250 working days per year.
Questions for De-Icier Case Discussion:
• What is the optimal batch size for De-Icier, and what is the associated
inventory cycle time and annual profit?
• Suppose 2% of De-Icier production is sold to Garden Depot. During
each production cycle, Garden Depot sends a truck (at a cost to Garden
Depot of $800/truck) to pick up the supplies. The company’s selling
price to Garden Depot for these supplies is $3.90/lb, and Garden Depot
has complained and requested that the company lower the selling price
to $3.80/lb (i.e., 10 cents lower than originally agreed). As a sales
manager, how would you handle this request?
Consider Garden Depot’s trucking cost/year to be part of the cost of
each pound sold by Garden Depot. How much does this add to
Garden Depot’s cost?
Instead, suppose the company incorporates the $800/cycle truck
shipping cost into its calculation of the optimal order quantity. What
is the new optimal order quantity and the new total annual operating
cost? What is Garden Depot’s annual shipping cost under this
policy?
• Consider a collaborative strategy, whereby Garden Depot pays
the increase in the company’s total annual operating cost; what
is Garden Depot’s net savings in this case in dollars per pound?
• What would your recommendation be?
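The first question can be explored with the finite-production-rate EOQ (EPQ) formulas from Section 4.3; the sketch below is our own reading of the case, and the profit line treats annual profit as unit margin times demand minus the setup-plus-holding cost:

```python
import math

# EPQ (finite production rate): Q* = sqrt(2*k*lam / (h*(1 - lam/p)))
k, lam, p, h = 1500.0, 600_000.0, 2_500_000.0, 1.19

q_star = math.sqrt(2 * k * lam / (h * (1 - lam / p)))  # optimal batch, lbs
cycle_days = q_star / lam * 250        # working days between production runs

# Minimum annual setup + holding cost at Q*
op_cost = math.sqrt(2 * k * lam * h * (1 - lam / p))
profit = (4.40 - 3.50) * lam - op_cost  # margin $0.90/lb on 600,000 lbs
print(round(q_star), round(cycle_days, 1), round(op_cost), round(profit))
```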
4.7 Case Study — Managing Uncertainty for Durable
Items — Safety Stock Model
4.7.1 ImportHome LLC7
Jack is the owner of ImportHome LLC, a small family B2C e-Business
company, which imports kitchenware from China and sells to worldwide
markets. Jack and his partner use their home in northern New Jersey as an
office and their garage as a warehouse for this business. The annual revenue
averages around $500,000, and varies from year-to-year.
Based on product features that are particularly relevant to online
selling, Jack has carefully selected three main product lines: soymilk maker,
automatic pressure cooker, and frying pan. The regular price for each of
these products is $100–$150. The shipping cost is $10 in North America.
The purchase costs of these products range from $30 to $40 per unit. The
target markets are the Chinese communities in the US and Canada.

Logistics
The products are shipped from manufacturers in China to Jack by ocean.
The lead time for an order of a full container is 90 days. The shipments are
routed first from factories in mainland China to Hong Kong, and then to the
New York/Newark port by ocean, and finally from the port to the owners’
home in northern New Jersey.
Jack has two options for ordering: either order a full container (1,400
units for the soymilk maker) or half a container. The first option costs
$3,000 for ocean transportation and $300 for truck shipping from the
NY/NWK port to Jack’s garage. If Jack orders half a container, he pays
$2,000 for ocean freight. After clearing customs, the goods are shipped to a
temporary warehouse near the port, where they are unloaded, stored, and
then reloaded, for the purpose of freight consolidation, before they are
transported to Jack’s garage. The inland transportation fee is the same as for
a full container, and Jack has to pay for the temporary storage and materials
handling, which is about $300. Also because of freight consolidation, the
lead time of ordering will be at least 10–20 days longer. So Jack always
orders a full container for each product.

Inventory
Inventory is carried in Jack’s garage, with no other costs except the cost
of capital. Jack places orders and manages inventory by experience. He
sometimes ends up with excessive inventory, which sells very slowly; for
instance, he currently has $10,000 tied up in items that probably will not
sell. For other items, he may be out of stock — each month, about 10% of
the orders cannot be fulfilled immediately from on-hand inventory.
Customers expect next day shipping; in case of a stock-out, Jack typically
informs the customer immediately and offers a discount of at least 10% to
encourage the customer to wait for the delayed delivery.

Demand Statistics
Demand fluctuates randomly over time. Table 4.5 shows the average
demand per week and weekly demand standard deviation for calendar year
2016.
Questions for ImportHome LLC Case Discussion:
• Inventory represents the primary risk in B2C businesses, because the
worst thing that Jack can anticipate is sitting on huge amounts of unsold
inventory. Jack wonders how he can optimize the order quantity and
reorder point based on the available sales data. In particular, should he
use a full container or a half container?
• When should he reorder? It is not trivial to make the correct decisions
because of the intricate trade-offs; for example, a half container reduces
inventory levels and therefore reduces inventory holding costs, but
increases shipping frequency and the shipping time, and therefore
increases fixed ordering and transportation costs.
• What would you recommend to Jack?
Table 4.5: Average Weekly Demand and Standard Deviation for
2016.

Item                     Weekly Average Demand    Weekly Demand Standard Deviation
Joyoung soymilk maker    47 units                 24 units
GM soymilk maker         25                       8
GM pressure cooker       35                       10
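As a starting point for the discussion, a (Q, R)-style reorder-point sketch for the Joyoung soymilk maker; the 95% service target, the z value, and the use of weeks as the time unit are assumptions on our part:

```python
import math

# Joyoung soymilk maker: mean 47 units/wk, std dev 24 units/wk; 90-day lead time
mu_w, sigma_w = 47.0, 24.0
L_weeks = 90 / 7                 # lead time, ~12.86 weeks
z_alpha = 1.65                   # assumed ~95% Type 1 service target

mu_L = mu_w * L_weeks            # expected lead-time demand
sigma_L = sigma_w * math.sqrt(L_weeks)
R = mu_L + z_alpha * sigma_L     # reorder point, ~746 units

Q_full = 1400                    # full container
weeks_of_supply = Q_full / mu_w  # ~29.8 weeks of stock per order
print(round(R), round(weeks_of_supply, 1))
```

Even before optimizing, the sketch shows the tension in the case: the reorder point alone is roughly half a container, and a full container represents more than half a year of demand.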

4.8 Exercises
1. A manufacturer charges its distributor $18 for each unit of product,
and spends $18,000 to set up each batch of production, regardless of
the batch size. Due to the cost of recent Research and Development
(R&D) spending on product quality improvement and the use of a new
and more expensive coating technology in the manufacturing process,
the manufacturer intends to increase the selling price to the distributor
from $18/unit to $18.08/unit starting in January of the coming year.
The distributor has a contracted demand of 4,290,000 units on the
product for the coming year and incurs a holding cost of $3.6 per unit
per year. The fixed cost of placing an order plus the shipment cost is
$3,200/order. The manufacturer’s price increase is of significant
concern to the distributor, who must justify the future distribution of
this low profit margin product. As the Chief Supply Officer for the
distributor, what is your recommendation to address this issue, based
on an analysis using the EOQ model?
2. Consider a low profit margin item in a highly competitive market. The
retailer’s purchasing cost cr = $40/unit, demand λ = 5,000,000
units/year, fixed setup cost kr = $900/order, and the holding cost rate is
25% of the unit cost of the item, so that the holding cost hr = 0.25 · Cr
= $10/unit/year. The manufacturer’s purchasing cost cm = $20/unit,
fixed setup cost km = $65,000/order, and the holding cost rate is 20%
of the unit cost, so that the holding cost hm = 0.20 · Cm = $4/unit/year.
Compare the total cost of independent EOQ policies by the retailer and
manufacturer with the total cost using a collaborative supply chain
approach.
3. In the planned shortage model in Section 4.3.4, suppose the maximum
time that a customer is willing to wait for delivery is three business
days. How must the policy be modified to handle this change? What is
the resulting annual cost of operation?
4. The anticipated annual demand for a chemical product distributed by
the Seanna Chemical Group is 25,000 tons/year for the coming year.
The company is currently producing the product with a capacity of
80,000 tons/year and a production line setup cost of $4,000/production
run. Assume that the variable cost for the production is $400/ton and
the unit holding cost is estimated as 25% of the variable cost. The
company operates 250 working days per year.
(a) The current production run size is 5,000 tons/run. How much can
Seanna save by optimizing the ordering policy (that is, by using
the EOQ model with a finite delivery rate)?
(b) Suppose that Seanna recently implemented the optimal policy in
part (a), and the Engineering Department is now proposing an
investment of $2 million to renovate its facility for this product,
which would reduce the variable cost to $380/ton and the line
setup cost to $3,000/production run. What will be the annual
savings that Seanna may achieve from this (in terms of the sum of
production and fixed ordering and holding cost)? As the Chief
Operating Officer of the company, would you consider this
proposal?
5. Consider again the Christmas tree example in Section 4.4.1. Suppose
that this year, everything remains the same except that the supplier will
not take back the unsold trees at the end of season and so the store has
to dump any leftover at a cost of $1 each. What is the optimal ordering
quantity in this case? What is the optimal mismatch cost and the
optimal expected profit?
6. Suppose we are selling skiwear at a price of $250. The demand is
random but has a mean of 350 units and a standard deviation of 100
units. The production is outsourced to southeast Asia and the COGS, c,
is $100. At the end of the season, we can sell leftover stock to a
national liquidator for a flat rate of $80.
(a) What are Co and Cu?
(b) What is Q∗?
(c) What is the optimal mismatch cost and the optimal expected
profit?
7. Meditech Surgical produces endoscopic surgical instruments for the
US market. The company uses the (Q, R) model to control its finished
goods inventory at its central warehouse in the US. The average
weekly demand for an item is 1,360 units and the weekly demand
standard deviation is 284 units. The replenishment lead time from its
plant to the central warehouse is 1 week. The production lot size (the
batch, Q) is 300 units.
(a) Determine the reorder point, R, to meet a target Type 1 service
level of 99%.
(b) What is the corresponding safety stock and average on-hand
inventory?
(c) What is the corresponding Type 2 service level?
8. JAM is a Korea-based company producing electronic components. All
products are manufactured in Asia. The company has a central
warehouse in Chicago to serve the US market. The central warehouse
places an order each month and the replenishment lead time is 2
months. Demand in the US market fluctuates significantly from month
to month. For instance, the monthly demand for one of their main
products has average sales of $99,800 and a standard deviation of
$55,500.
(a) How should we set the base-stock level for this item at the
Chicago warehouse to meet a 98% Type 1 service level?
(b) What is the corresponding safety stock, average on-hand
inventory, and Type 2 service level?

Endnotes
1. “Manufacturing and Trade Inventories and Sales,” US Census Bureau News, November 2015.
Available at: https://www.census.gov/mtis/www/data/pdf/mtis_current.pdf.
2. “IBM continues to struggle with shortages in their ThinkPad line,” Wall Street Journal, October
7, 1994.
3. “Liz Claiborne said its unexpected earning decline is the consequence of higher than anticipated
excess inventory,” Wall Street Journal, July 15, 1993.
4. “Dell Computers predicts a loss; stock plunges, Dell acknowledged that the company was sharply
off in its forecast of demand, resulting in inventory write downs,” Wall Street Journal, August
1993.
5. Joan Magretta, “Fast, Global, and Entrepreneurial: Supply Chain Management, Hong Kong
Style,” Harvard Business Review, September–October 1998. Available at:
https://hbr.org/1998/09/fast-global-and-entrepreneurial-supply-chain-management-hong-kong-
style.
6. “IBM continues to struggle with shortages in their ThinkPad line,” Wall Street Journal, October
7, 1994.
7. L. Wang and Y. Zhao, “ImportHomes LLC — A B2C Small Business Model” Rutgers Business
School Case Series, 2009.
Chapter 5

Project Scheduling and Management

TRYING TO MANAGE A PROJECT WITHOUT PROJECT MANAGEMENT IS
LIKE TRYING TO PLAY A FOOTBALL GAME WITHOUT A GAME PLAN.

K. Tate

5.1 Introduction to Project Management


In this chapter, we will introduce techniques and strategies for planning,
scheduling, and managing projects, including critical path method (CPM),
which identifies critical activities (those which must be completed for the
entire project to be completed on time); time–cost analysis (TCA), which
balances the trade-off between time and cost; program evaluation and
review technique (PERT), which incorporates contingencies and
uncertainties; critical chain project management (CCPM), which includes
the impact of human factors; and an illustration of project management
software (Microsoft Project). We will begin with a case study to illustrate
the practical issues and challenges.

5.1.1 Project management — basic concepts


To introduce the basic concepts of project management, we will consider
the example of American Royal Financial Inc.,1 a financial services firm
offering an array of financial products and services: life insurance, mutual
fund, annuities, pension and retirement services, asset management,
banking and trust services, and real estate services. American Royal is best
known for its life insurance products and is one of the largest insurers in the
world. The life insurance industry is highly competitive, with a majority of
similar and substitutable products. Development of diverse and innovative
life insurance products is the key to beat out competition.
Mia is a project manager at the Life Insurance Division of American
Royal Financial. She is facing a challenging assignment: introducing a new
life insurance product — Amrolife Return Of Premium Term (Amrolife
ROPT) to the US market within the upcoming year to outperform possible
new entrants.
In planning for this project, Mia must balance the speed to market with
the resources available at her company. She must also take all contingencies
into account, because any unpredictable event may delay the project and
cost her the job.

Work breakdown structure


To plan for this project, Mia has worked with her team to develop a list of
key tasks or activities that must be completed (in this chapter, we use the
terms task and activity interchangeably) in the product launch. In
collaboration with the relevant departments, she has obtained detailed
information on the duration and cost of completing many of these activities.
Where such information is not available, she uses the best estimates from
industry benchmarks. Table 5.1 shows the activities, their durations and
their immediate predecessors (activities which must be completed
immediately before the current activity can start).
In Section 5.2, we will use the CPM to determine the minimum project
duration and the starting and ending schedule of each activity.

Cost and expediting (crashing)


The durations in Table 5.1 are estimated under normal conditions, i.e., given
the company’s current available resources, such as manpower, for this
project. Based on this information, Mia can determine the normal cost
associated with each activity, as shown in Table 5.2.
Table 5.1: American Royal Financial Project Activities, Immediate Predecessors, and Duration.

Table 5.2: Normal and Expedited Durations and Cost.

There may be a reward for an earlier project completion. However,


expediting the project requires that some or all tasks be completed more
quickly than the normal times, or crashed, and to do so requires additional
resources to expand the current workforce, or to offer overtime pay, or to
hire temporary workers. In this case, Mia can expedite the project by hiring
temporary consultants at a higher pay than the normal workforce. However,
even with additional manpower, there is often a limit on the amount of time
each task can be expedited, or crashed. Table 5.2 shows the expedited
duration, or minimum amount of time to complete the task. The cost to
complete a task within the normal time and the cost to complete a task in
the minimum possible time are also shown in Table 5.2. From this, we can
compute the expediting cost per week, shown in Equation (5.1). Let

ti = normal activity time for activity i,
ti′ = expedited activity time (at maximum crashing) for activity i,
Mi = maximum possible reduction in time for activity i due to crashing
= ti − ti′,
Ci = normal activity cost for activity i,
Ci′ = expedited activity cost (at maximum crashing) for activity i,
Ki = crashing cost per unit time for activity i,

where

Ki = (Ci′ − Ci) / Mi.
(5.1)

In Section 5.3, we will discuss methodologies to balance time vs. cost


in order to expedite the project duration to optimize the tradeoff between
additional revenue for reduced project duration and the additional cost of
expediting.
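The per-week crashing cost can be expressed as a small helper; the numbers below are illustrative, not taken from Table 5.2:

```python
# Crashing cost per unit time: K_i = (C'_i - C_i) / M_i, with M_i = t_i - t'_i
def crash_cost_per_week(t_normal, t_crash, c_normal, c_crash):
    """Incremental cost of each week saved by expediting an activity."""
    m = t_normal - t_crash          # maximum possible reduction M_i
    if m <= 0:
        raise ValueError("activity cannot be crashed")
    return (c_crash - c_normal) / m

# e.g. an 8-week activity crashable to 6 weeks, $10,000 normal vs $13,000 crashed
k = crash_cost_per_week(8, 6, 10_000, 13_000)
print(k)    # 1500.0 dollars per week saved
```

Ranking activities by this per-week cost is the basis of the time-cost analysis in Section 5.3: the cheapest critical activity is crashed first.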

Contingencies and the three-estimate approach


The durations of the tasks of the project are based on estimates, but their
actual values can vary due to unpredictable and uncontrollable events, such
as illness and bad weather preventing staff from coming to work. Mia wants
to incorporate these random factors into project planning in advance in
order to ensure, with a high degree of confidence, that the project will meet
the target deadline.
With the help of her team, Mia identifies the minimum duration (under
the best case scenario), the most likely duration (based on experience), and
the maximum duration (under the worst case scenario) for each task given
the current resources (i.e., under normal conditions). These estimates are
shown in Table 5.3.
Table 5.3: Activities and the Three-Estimate Approach.

When an activity is expedited, it is difficult to reassess these minimum
and maximum durations, but as a rule of thumb, they are reduced in the
same proportion as the most likely duration. Taking activity A as an
example, the most likely duration is 8 weeks and the minimum duration is 4
weeks under normal conditions. If expediting reduces the most likely
duration to 6 weeks, then its corresponding minimum duration can be
estimated via the nearest integer to 4 × 6/8 = 3 weeks.
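The rescaling rule of thumb can be written as a one-line helper; the numbers mirror the activity-A example:

```python
# Rule of thumb: scale the optimistic/pessimistic estimates in the same
# proportion as the most likely duration, then round to the nearest integer.
def rescale(estimate, new_mode, old_mode):
    return round(estimate * new_mode / old_mode)

a, m = 4, 8          # activity A: minimum and most likely durations (weeks)
new_m = 6            # most likely duration after expediting
new_a = rescale(a, new_m, m)
print(new_a)         # nearest integer to 4 * 6/8 = 3 weeks
```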
In Section 5.4, we will discuss the statistical assumptions under which
we are able to use estimates like those in Table 5.3 to quantify the expected
task durations and the likelihood of completing a project within a given
period of time.
In this chapter, we will introduce techniques and tools to answer the
following questions raised in this case:
• If the project starts on September 1, can Mia complete the Amrolife
ROPT project by September of next year using the normal resources of
her company? If yes, what is the action plan?
• Task F, “filing contracts,” is likely to be delayed. If this task is delayed
by 3 weeks, what is the impact on the project duration?
• If it is not feasible to complete the Amrolife ROPT project within 1 year
under normal conditions, how should Mia expedite the project to make
it feasible? What is the additional cost?
• If additional revenue of $16,000 can be obtained for each week earlier
Amrolife ROPT is launched in the US market, how should Mia expedite
the project? What is the cost and benefit?
• Taking uncertainty in activity times into account, what is the probability
that the Amrolife ROPT project will be finished by the following
September?
We begin with an overview of project management. A project is a
unique process, consisting of a set of coordinated and controlled activities
with start and end dates, undertaken to achieve an objective subject to
specific requirements, including constraints on time, cost and resources.2
Often the projects under consideration are large-scale, consisting of
numerous tasks to be scheduled and coordinated. Such projects have existed
throughout history, including, for example, the great pyramids of Giza and
the Manhattan Project. Today, many companies are project-driven and are
organized in a matrix organizational structure, in order to facilitate project
execution.
In practice, large and complex projects are often difficult to plan,
schedule and execute. The American Production and Inventory Control
Society (APICS) has made the following observations on the reality of
project management:
• No major project is ever finished on time and within budget. Yours
won’t be the first to be delayed.
• Projects progress quickly up through 90% completion; then they remain
that way forever.
• Murphy’s Law: if there is anything that can go wrong, it will.
• A sloppily-planned project will take three times longer than planned. A
well-planned one will take two times as long.
Figure 5.1: An Overview of Project Management Processes.

Despite this gloomy track record, project management can save projects from the worst failures. The objective of project management is to complete
a project in the least amount of time at a minimum cost and achieve the
highest quality. An overview of project management processes is provided
in Figure 5.1, called the project management temple.
Project management has two phases: planning and execution. The planning
phase begins with defining the goals and scope of work, followed by careful
planning. Once the project starts, unexpected events may disrupt the
progress and result in scope changes to ensure that we continue to meet
time, cost, and quality constraints. Constant monitoring and control will be
needed to respond to disruptions and keep the project on track. To plan and
manage a project, we follow the primary steps below:
(1) Project definition: define the objective of the project and the scope of
work.
(2) Activity breakdown (work breakdown structure): define the tasks
necessary to complete the work.
(3) Activity network diagram: define the dependencies among tasks in
terms of the precedence relationships.
(4) Project planning and scheduling: plan the duration, cost and resource
requirements, and schedule the starting and ending times of all tasks.
(5) Project monitoring: keep track of the project and its progress, placing controls on its milestones.

Steps 1–4 are done in the planning phase, while Step 5 is conducted in the execution phase. A project can be viewed as a set of tasks or activities to be
executed over time. These activities are ordered by precedence (their order
of execution in time). Activity execution may require resources, such as
labor, machine, and materials, representing additional constraints. In what
follows, we shall discuss methods and techniques to optimize the project
schedule, balance time and cost, incorporate uncertainty, and include human
factors.

5.1.2 Network representation


Given the tasks, their durations, and their immediate predecessors, we can
create a graphic representation to visualize the logical relationship among
the tasks.

Activity-on-node representation
A project can be represented by a network, which consists of a set of nodes, drawn as squares, joined by directed lines (arrows) called arcs or branches. As shown in Figure 5.2, there are a START node and a FINISH node; each node between them represents an activity (task) required by the project, and each directed arc shows a precedence relationship among the activities. An activity cannot be started until all of its immediate predecessor activities are complete.
Let us now return to the Amrolife ROPT project, whose tasks, durations, and precedence relationships are shown in Table 5.1. The network representation of the project is shown in Figure 5.3.
Figure 5.2: Network Representation of a Three-Task Project.

Figure 5.3: Network Representation for Amrolife ROPT Project.

5.2 Critical Path Method


For any project, the following questions are of interest to management:
• What is the minimum time needed to complete the project?
• Which activities, if delayed, will delay the entire project?
• Which activities can be delayed without delaying the entire project?
• What is the schedule for the starting and ending time for each activity in
order to complete the project in the minimum time?
For ease of understanding, imagine the three-task project in Figure 5.2
is a “Saturday Night” project, with two people going to dinner first and then
watching a movie together afterwards. Let A be one person having dinner, B be another person having dinner, and C be watching the movie together.
Clearly, the minimum duration of this project is 3 hours, which is the
duration of the longest path that runs from START to FINISH.
In the Amrolife ROPT project, if we add up all the activity durations
shown in Table 5.1, they total 82 weeks. However, we do not need 82
weeks to complete this project, because many tasks can be done in parallel,
or simultaneously. To speed up project completion, we maximize the
concurrency of activities where possible, subject to precedence
requirements and resource constraints.
To determine the minimum duration of a project, we may define a path
as a sequence of connected activities from the START node to the FINISH
node; the path whose activities take the longest total time is called the
critical path, and its total duration is the minimum time to complete the
entire project. Thus, if we can enumerate all paths that run from the START
node to the FINISH node and identify the path(s) with the longest duration,
we have determined the minimum project duration and the critical activities
(those activities on the critical path). A delay in completing any of the
critical activities will result in a delay in completing the project. Activities
not on the critical path are called non-critical activities, and are
characterized by positive slack time between the earliest possible and latest
possible completion date; hence delaying their completion may not delay
the project.
We will see that identifying all the paths through a network is a daunting exercise for any large-scale project, so we will develop a different approach to identifying the critical path, called the critical path method (CPM). For the relatively small Amrolife ROPT project, however, we can easily identify all the paths and their durations, as shown in Table 5.4.
The longest path is A→C→D→E→G→I→J with a duration of 58
weeks. Thus, this path is the critical path and the minimum duration for the
entire project is 58 weeks.
For projects with a small number of activities, the critical path can be
found, as above, by enumerating all paths through the network and
identifying the longest path. However, for large-scale projects, it is very
difficult and time-consuming to identify each path, and there is no
assurance that all have been found. Thus, we employ a network approach
that guarantees that we will find the critical path without enumerating each
and every path.

Table 5.4: Amrolife ROPT Project Paths and Durations.

Path Duration Critical?


A–J 12 weeks No
A–C–D–E–F 50 weeks No
B–C–D–E–F 46 weeks No
A–C–D–E–J 42 weeks No
B–C–D–E–J 38 weeks No
A–C–D–E–G–I–J 58 weeks Yes
B–C–D–E–G–I–J 54 weeks No
A–C–D–E–H–I–J 56 weeks No
B–C–D–E–H–I–J 52 weeks No
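The enumeration in Table 5.4 can be sketched in Python. The precedence structure below is read off Figure 5.3. Since Table 5.1 is not reproduced in this excerpt, the durations of D and E (which sum to 18 weeks by the path totals above) are split as 9 and 9 purely for illustration; every path contains both activities, so no path total is affected by the split.

```python
# Immediate successors for the Amrolife ROPT network, read off Figure 5.3.
successors = {
    "START": ["A", "B"], "A": ["C", "J"], "B": ["C"], "C": ["D"],
    "D": ["E"], "E": ["F", "G", "H", "J"], "F": ["FINISH"],
    "G": ["I"], "H": ["I"], "I": ["J"], "J": ["FINISH"], "FINISH": [],
}
# Durations (weeks). D = 9 and E = 9 are an assumed split of their 18-week
# total; every path contains both, so no path duration is affected.
dur = {"START": 0, "A": 8, "B": 4, "C": 12, "D": 9, "E": 9,
       "F": 12, "G": 10, "H": 8, "I": 6, "J": 4, "FINISH": 0}

def all_paths(node="START", prefix=()):
    """Depth-first enumeration of every START-to-FINISH path."""
    prefix = prefix + (node,)
    if node == "FINISH":
        yield prefix
    for nxt in successors[node]:
        yield from all_paths(nxt, prefix)

lengths = {p: sum(dur[a] for a in p) for p in all_paths()}
critical = max(lengths, key=lengths.get)
print(len(lengths), critical, lengths[critical])  # 9 paths; longest is 58 weeks
```

The search finds the same nine paths as Table 5.4, with A→C→D→E→G→I→J the longest at 58 weeks.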

For any project, the CPM achieves three goals: (1) identification of the
critical activities, which cannot be delayed without delaying the entire
project; (2) identification of the non-critical activities and their slacks,
representing the maximum delay in completing that activity without
delaying the completion of the entire project (assuming all other activity
times remain constant); and (3) an optimal project schedule of earliest start
time, earliest finish time, latest start time, and latest finish time for each
activity subject to the precedence constraints.
If the duration of activity i is ti and we denote the earliest starting time for activity i by ESi, then the earliest finish time for activity i is

EFi = ESi + ti.    (5.2)

If we denote the latest finish time for activity i by LFi, then the latest start time for activity i is

LSi = LFi − ti.    (5.3)
To find the critical path and schedule for all activities, we make a
forward pass and a backward pass through the network:
• Forward pass: we compute the earliest start and earliest finish times for
each activity (where the earliest start for the first activity is 0) working
forwards from the start node to the finish node, using the following rule:
the earliest starting time for an activity is the maximum of the earliest
finish times for all its immediate predecessors.
• Backward pass: we compute the latest start and latest finish times for
each activity (where the latest finish time for the last activity is equal to
the earliest finish for that activity), working backwards from the finish
node to the start node, using the following rule: the latest finish time for
an activity is the minimum of the latest start times for all activities
immediately following (its immediate successors).
For each activity i, the slack is

Si = LSi − ESi = LFi − EFi.    (5.4)

The critical activities are those that cannot be delayed, or those whose
slack Si = 0. The critical activities always form one or more connected paths
from the start node to the finish node, and constitute the critical path(s).
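The two passes can be sketched in Python. The durations below follow the text, except that the split of D and E's combined 18 weeks is assumed; the project duration and all slacks are insensitive to that split because D and E are in series.

```python
# Activity: (duration in weeks, immediate predecessors), per Figure 5.3.
# D = 9 and E = 9 are an assumed split of their combined 18 weeks.
tasks = {
    "A": (8, []), "B": (4, []), "C": (12, ["A", "B"]), "D": (9, ["C"]),
    "E": (9, ["D"]), "F": (12, ["E"]), "G": (10, ["E"]), "H": (8, ["E"]),
    "I": (6, ["G", "H"]), "J": (4, ["A", "E", "I"]),
}

# Forward pass: ESi = max EF over immediate predecessors; EFi = ESi + ti.
ES, EF = {}, {}
for i in tasks:  # insertion order above is already topological
    t, pred = tasks[i]
    ES[i] = max((EF[p] for p in pred), default=0)
    EF[i] = ES[i] + t

T = max(EF.values())  # minimum project duration

# Backward pass: LFi = min LS over immediate successors; LSi = LFi - ti.
succ = {i: [j for j in tasks if i in tasks[j][1]] for i in tasks}
LS, LF = {}, {}
for i in reversed(list(tasks)):
    t, _ = tasks[i]
    LF[i] = min((LS[s] for s in succ[i]), default=T)
    LS[i] = LF[i] - t

slack = {i: LS[i] - ES[i] for i in tasks}
critical = [i for i in tasks if slack[i] == 0]
print(T, critical)  # 58 ['A', 'C', 'D', 'E', 'G', 'I', 'J']
```

The computed slacks (B: 4, F: 8, H: 2 weeks) and the 58-week critical path match Table 5.5.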
Figure 5.4 illustrates the forward pass through the Amrolife ROPT
Project, where for each activity, the task duration is shown below the
activity name and the earliest and latest start times are shown in the upper
half of each activity box.

Figure 5.4: Amrolife ROPT Project Earliest and Latest Start Times.
Figure 5.5: Amrolife ROPT Project Earliest and Latest Finish Times.

Table 5.5: Schedule and Critical Path for Amrolife ROPT Project.

Figure 5.5 illustrates the backward pass through the Amrolife ROPT Project, where the latest finish times for activities F and J are set equal to the earliest time we can reach the finish node, 58, and the earliest and latest
finish times are shown in the lower half of each activity box.
Table 5.5 summarizes the schedule of earliest and latest start and finish
times and slack for each activity. The critical activities are those with 0
slack: A: Market Research, C: Product Design, D: Development Analysis,
E: Developing Product Model, G: Product Testing, I: Sales/Marketing
Planning, and J: Sales Force Training. Non-critical activities and their
associated slack are B: Cost Analysis (4 weeks), F: Filing Contracts (8
weeks), and H: Pricing (2 weeks). We can identify the critical path as the
sequence of activities with 0 slack: A→C→D→E→G→I→J with a
duration of 58 weeks.
Gantt chart
The Gantt chart is a graphic tool that illustrates the sequential relationships and the starting and ending times of each activity on a timeline. The Gantt chart for Amrolife ROPT is shown in Figure 5.6, where the bars represent tasks and the arrows represent the precedence relationships.
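A minimal text-mode Gantt chart can be produced from (start, duration) pairs, one character per week. The start times and the durations of A, B, and C below follow from the text; D's 9-week duration is an assumed value, since Table 5.1 is not reproduced here.

```python
def ascii_gantt(schedule):
    """Render each task as a row of '#' bars offset by its start time
    (1 character = 1 week)."""
    rows = []
    for task, (start, length) in schedule.items():
        rows.append(f"{task:>4} |" + " " * start + "#" * length)
    return "\n".join(rows)

# Early-start schedule (start week, duration in weeks); D's duration is assumed.
schedule = {"A": (0, 8), "B": (0, 4), "C": (8, 12), "D": (20, 9)}
print(ascii_gantt(schedule))
```

Real tools such as Figure 5.6's chart add precedence arrows and calendar dates, but the bar-offset idea is the same.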
Recall that when we introduced the Amrolife ROPT Project in Section
5.1.1, we listed several questions we wished to answer:
• If the project starts on September 1, can Mia complete the Amrolife
ROPT project by September of next year using the normal resources of
her company? If yes, what is the action plan?
We now know that under normal conditions, the minimum time to
complete the project is the length of the critical path, or 58 weeks, so it is
not possible to complete the project in the required time. We will see in the
next section that we will have to determine which activities to crash in order
to complete the project within 52 weeks.
• Task F, “filing contracts,” is likely to be delayed. If this task is delayed by 3 weeks, what is the impact on the project duration?

Figure 5.6: Gantt Chart for Amrolife Project.

From Table 5.5, we know that the slack for task F is 8 weeks, so a 3-week delay will have no impact on the completion of the project, assuming all other activities are completed on time.
• If it is not feasible to complete the Amrolife ROPT project within 1 year
under normal conditions, how should Mia expedite the project to make
it feasible? What is the additional cost?
In Section 5.1.1, we introduced the concept of crashing activity times
in order to reduce the project duration to a required duration. In Table 5.2,
we showed the normal activity times and costs as well as the expedited, or
minimum duration of each activity and the associated cost to complete the
activity in that minimum time. We also calculated, using Equation (5.1), the
cost per week to reduce the activity time for each task. In the next section,
we will use this information to construct a linear programming (LP) model
whose solution will tell us how much to reduce each activity time in order
to complete the project by a required deadline at a minimum additional
“crashing” cost.

5.3 Time–Cost Analysis


We discuss below two approaches to time–cost analysis (TCA): (1) an LP for crashing activity
times when the deadline for the entire project is shorter than the length of
the critical path; and (2) an algorithm for optimizing the trade-off between
the cost of expediting activity times and the benefit of completing the
project prior to its due date.

5.3.1 Crashing activity times — a linear programming model


We determined in the prior section that the minimum duration for the project is 58 weeks, longer than the desired 52 weeks, given currently available resources. One way to expedite a project whose duration must be reduced below the length of the critical path, T, is to add resources such as labor or overtime to some of the activities to reduce, or crash, their completion times. Given the cost of crashing, or expediting, activity times, we want to determine which activities to crash, and by how much, in order to meet the required deadline at minimum cost.
In our introduction to this chapter, we showed, in Table 5.2, the normal
and crashed times and costs for each activity, and calculated, using
Equation (5.1), the cost per week to crash each activity. From this
information, we can formulate an LP model with the following decision variables:

xi = the earliest finish time of activity i,
yi = the number of weeks by which activity i is crashed.

Our objective is to minimize the total crashing cost, which can be expressed as:

Minimize KAyA + KByB + ⋯ + KJyJ.

We have a set of linear constraints that must be satisfied:


(1) for each activity i, the earliest finish time, xi, must be ≥ the earliest start time for activity i + the time to complete activity i,
(2) the project must be completed by the due date,
(3) for each activity i, the amount of crash time used cannot exceed the
maximum,
(4) all variables must be non-negative.
To illustrate, for the first set of constraints, for activity A, the earliest finish time, xA, must be ≥ the earliest start time for activity A, which is 0, + the time to complete activity A, which is the normal time, 8 weeks, less the number of weeks crashed, yA, or:

xA ≥ 0 + (8 − yA).

Similarly, for activity B, whose normal time is 4 weeks,

xB ≥ 0 + (4 − yB).
For activity C, since there are two immediate predecessors, we will require two constraints; for the first, looking at the transition from activity A to activity C, the earliest finish time, xC, must be ≥ the earliest start time for activity C (which is the earliest finish time for activity A) + the time to complete activity C, whose normal time is 12 weeks, or:

xC ≥ xA + (12 − yC).

Looking at the transition from activity B to activity C, we have

xC ≥ xB + (12 − yC).
Similarly, we can generate constraints of this form for each activity, as shown in the complete formulation in Figure 5.7.
For the second constraint set, the project must be complete by week 52; since the FINISH node is reached through activities F and J, we require:

xF ≤ 52 and xJ ≤ 52.
For the third constraint set, each variable yi is constrained by the difference between the normal and expedited activity times from Table 5.2:

yi ≤ (normal time of activity i) − (expedited time of activity i),

for example, yG ≤ 2 and yI ≤ 2, as shown in Figure 5.7.


The fourth constraint set expresses non-negativity for all variables:

xi ≥ 0 and yi ≥ 0 for all activities i.
Figure 5.7 shows the entire formulation.


Figure 5.7: Linear Program for Crashing Amrolife ROPT Project.

Using Microsoft Excel and Solver, as covered in Chapter 3, to solve this LP problem, we obtain the solution shown in Figure 5.8.
So in order to complete the project within 52 weeks, we must crash
activity D by 2 weeks, activity G by 2 weeks, and activity I by 2 weeks, at
an additional cost of $49,600 over the normal cost of completing the project
in 58 weeks.
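Because the crash amounts here are small integers, the LP's logic can also be illustrated without a solver by brute-forcing every integer crash plan and checking the deadline with a forward pass. Only KG = KI = $8,000, KH = $7,000, and the 2-week crash limits of D, G, and I are given in this excerpt; KD = $8,800 is inferred from the $49,600 total, and the remaining durations, limits, and costs below are illustrative assumptions standing in for Tables 5.1 and 5.2.

```python
from itertools import product

# activity: (normal weeks, max crash weeks, crash cost $/week).
# Several of these values are illustrative assumptions (see text above).
crash_data = {
    "A": (8, 3, 12_000), "B": (4, 0, 0), "C": (12, 1, 20_000),
    "D": (9, 2, 8_800), "E": (9, 3, 9_600), "F": (12, 0, 0),
    "G": (10, 2, 8_000), "H": (8, 1, 7_000), "I": (6, 2, 8_000),
    "J": (4, 0, 0),
}
preds = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"], "E": ["D"],
         "F": ["E"], "G": ["E"], "H": ["E"], "I": ["G", "H"],
         "J": ["A", "E", "I"]}

def duration(crash):
    """Project length after crashing, computed by a forward pass."""
    EF = {}
    for i in crash_data:  # insertion order is topological
        t = crash_data[i][0] - crash[i]
        EF[i] = max((EF[p] for p in preds[i]), default=0) + t
    return max(EF.values())

def min_cost_crash(deadline):
    """Cheapest integer crash plan meeting the deadline (exhaustive search)."""
    acts = list(crash_data)
    best = None
    for combo in product(*(range(crash_data[a][1] + 1) for a in acts)):
        crash = dict(zip(acts, combo))
        if duration(crash) <= deadline:
            cost = sum(crash[a] * crash_data[a][2] for a in acts)
            if best is None or cost < best[0]:
                best = (cost, crash)
    return best

cost, plan = min_cost_crash(52)
print(cost, {a: w for a, w in plan.items() if w})
```

With these assumptions, the search reproduces the LP answer: crash D, G, and I by 2 weeks each, for $49,600. For larger projects the LP remains the practical approach, since exhaustive search grows exponentially in the number of crashable activities.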

5.3.2 Cost vs. benefit of expediting activity time(s)


Another common situation arises when there is a “reward” or “bonus” for
each week that the project is completed prior to the expected deadline.
Here, we wish to determine whether the reward justifies the expenditure of
additional resources to reduce the activity time(s) to accomplish an early
completion. In Section 5.1, our fourth question in the Amrolife ROPT
Project was

Figure 5.8: Optimal Solution to Crashing Amrolife ROPT Project.

• If additional revenue of $16,000 can be obtained for each week earlier Amrolife ROPT is launched in the US market, how should Mia expedite the project? What is the cost and benefit?
In this section, we will introduce an algorithm which generates an optimal
solution by considering additional costs and revenues.
We wish to determine whether the additional revenue of $16,000 per
week reduction in the project duration from the normal time of 58 weeks
outweighs the cost of additional resources (workers, equipment, material,
etc.) to accomplish this reduction. In Table 5.2, we presented the normal
and expedited times for each activity.
The cost of an activity typically includes the direct cost of labor,
equipment and material, as well as indirect costs, such as rent and utilities.
In general, as the activity duration decreases, the direct costs increase while
the indirect costs decrease. Reducing the duration of an activity below its
normal duration is assumed to always increase its total cost. Assuming the
project revenue increases as project duration decreases, we wish to answer
the following questions:
• How do we balance the gain from expediting the completion time with
the cost of additional resources necessary?
• What is the optimal expediting plan that maximizes overall profit?
• Which activities should be expedited and by how much?
Intuitively, it makes no economic sense to expedite activities that cost
more than the additional revenue per week. While we have computed the
cost per week to expedite each activity, Ki, in Table 5.2, it is clear that there
is no economic justification for expediting all activities of the project,
because expediting non-critical activities (which already have slack time)
does not reduce the project duration and hence only adds to the cost. We
must note, however, that the critical path is the longest path from START to
FINISH. As we expedite activities on the critical path, it will become
shorter and other paths may become critical in the process. Should this take
place, we must consider all paths that are critical.
Based on this intuition, we provide a simple iterative search algorithm
to identify the optimal expediting plan:
• reduce the duration for those tasks on the critical path(s) whose
expediting cost per week, Ki < $16,000, beginning with the lowest such
Ki,
• update the critical activities and critical path(s) at each iteration,
• continue reducing the critical activity time(s) until expediting costs per
week on all segments of the critical path(s) with remaining expediting
time are larger than $16,000.
Note that the critical path may change while implementing this process,
and that sometimes there are multiple feasible critical paths.
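This search can be sketched in Python. The network comes from Figure 5.3, but most per-week costs Ki and crash limits below are assumptions (only KG = KI = $8,000 and KH = $7,000 appear in this excerpt), so the dollar totals illustrate the algorithm rather than reproduce Table 5.2's result. For this instance, a one-week-at-a-time greedy that expedites the cheapest activity lying on every current critical path suffices; in general, a set of activities jointly covering all critical paths may need to be considered.

```python
REWARD = 16_000  # additional revenue per week saved (from the text)

# activity: (normal weeks, max crash weeks, crash cost $/week); several
# values are illustrative assumptions standing in for Tables 5.1 and 5.2.
crash_data = {
    "A": (8, 3, 12_000), "B": (4, 0, 0), "C": (12, 1, 20_000),
    "D": (9, 2, 8_800), "E": (9, 3, 9_600), "F": (12, 0, 0),
    "G": (10, 2, 8_000), "H": (8, 1, 7_000), "I": (6, 2, 8_000),
    "J": (4, 0, 0),
}
preds = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"], "E": ["D"],
         "F": ["E"], "G": ["E"], "H": ["E"], "I": ["G", "H"],
         "J": ["A", "E", "I"]}
succ = {i: [j for j in crash_data if i in preds[j]] for i in crash_data}

def all_paths(node, path=()):
    """Depth-first enumeration of paths from `node` to the finish."""
    path = path + (node,)
    if not succ[node]:
        yield path
    for n in succ[node]:
        yield from all_paths(n, path)

PATHS = [p for a in crash_data if not preds[a] for p in all_paths(a)]

def greedy_expedite(reward=REWARD):
    dur = {a: crash_data[a][0] for a in crash_data}
    spent = dict.fromkeys(crash_data, 0)
    cost = 0
    while True:
        T = max(sum(dur[a] for a in p) for p in PATHS)
        crit = [set(p) for p in PATHS if sum(dur[a] for a in p) == T]
        shared = set.intersection(*crit)  # activities on every critical path
        cands = [a for a in shared
                 if spent[a] < crash_data[a][1] and crash_data[a][2] < reward]
        if not cands:
            return T, spent, cost
        a = min(cands, key=lambda x: crash_data[x][2])
        dur[a] -= 1
        spent[a] += 1
        cost += crash_data[a][2]

T, spent, cost = greedy_expedite()
print(T, cost, {a: w for a, w in spent.items() if w})
```

With these assumed costs, the greedy stops at 46 weeks, expediting the same activities as the text's iterations (A, D, E, G, and I), for a net gain of 16,000 × 12 − 114,400 = $77,600 under the assumed figures; the text's $61,400 result uses the actual Table 5.2 costs.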
Table 5.6 illustrates our initial iteration, identifying the lowest cost
critical activity to expedite. Recall that the critical path was found, in
Section 5.2, to be A→C→D→E→G→I→J with a project duration of 58
weeks. From Table 5.2, we have the cost per week to crash each of these
activities, Ki. The lowest such cost per week is $8,000, to reduce the
duration of activities G and I.

Table 5.6: Cost-Benefit Analysis for Amrolife ROPT Project — Iteration 1.

Hence, our first step is to expedite activity G by the maximum possible time, 2 weeks, and activity I by the maximum possible time, 2 weeks, so that the duration of activity G is 8 weeks and the duration of activity I is 4 weeks, and the total duration of path A→C→D→E→G→I→J is reduced by 4 weeks, to 54 weeks. Referring back to Table 5.4, note there is now a second critical path, A→C→D→E→H→I→J, also with a duration of 54 weeks. In Table 5.7, we update the durations of activities G and I and
identify the next task(s) on the critical paths whose cost per week, Ki, is less than $16,000. The lowest such Ki, $7,000, is the cost to reduce the duration of activity H; however, expediting H would shorten only the second critical path, not the first. To expedite the project, all critical paths must be shortened, so the choice of expediting only activity H is ruled out.
Identifying other activities with Ki < $16,000, we note, from Table 5.7,
that activity G has already been expedited by the maximum amount
possible. We need to identify critical activities on both critical paths which
are cheaper to expedite than $16,000/week, and there are two such
activities, D and E.
Expediting D and E to their minimum possible durations of 7 weeks for
activity D and 6 weeks for activity E and a total project duration of 49
weeks does not create new critical paths since no other paths in Table 5.4
have longer durations given the expedited times. Table 5.8 shows the
updated durations. There is now only one critical path activity which has a
Ki < $16,000 and remaining time to expedite, activity A, which can be
reduced by 3 weeks.

Table 5.7: Cost-Benefit Analysis for Amrolife ROPT Project — Iteration 2.

Table 5.8: Cost-Benefit Analysis for Amrolife ROPT Project — Iteration 3.

Table 5.9 shows the updated activity times, with a project duration of
46 weeks and introducing no new critical paths.
Since expediting any remaining critical task will not be cost effective, we have reached the optimal solution — a reduction of the project duration from 58 to 46 weeks, with an added cost of $130,600.

Table 5.9: Cost-Benefit Analysis for Amrolife ROPT Project — Iteration 4.

The added profit resulting from the reduction in the project duration of 12 weeks (from 58 to 46 weeks) is 12 × $16,000 = $192,000. The net gain accrued from the optimal expedited project is $192,000 − $130,600 = $61,400.
In summary, it is typical that the earlier a project can be completed, the higher the reward (savings on indirect costs, less delay penalty, additional revenue, etc.). However, the sooner an activity needs to be completed, the higher its cost (of labor, equipment, materials, etc.). The time–cost analysis method can help project managers determine the optimal balance between reward and cost.

5.4 Program Evaluation and Review Techniques


In Section 5.1, we posed five questions to be answered, the last of which
was
• Taking uncertainty in activity times into account, what is the probability
that the Amrolife ROPT project will be finished by the following
September?
Our computations thus far have assumed that the activity times are
known with certainty. As we discussed in this chapter’s introduction,
contingencies, or unpredictable and/or uncontrollable events like illness and
bad weather can lead to uncertainties in task times, and thus must be
planned for in advance. In such cases, project managers often wish to answer the following business questions:
• What is the probability that the project can be finished in x weeks?
• What is the probability that the project will take more than y weeks?
• By when can the project be finished with 95% confidence?
PERT can answer these questions by generalizing the CPM to handle
uncertain activity durations.
PERT assumes that activity durations, ti, are independent random
variables. For each activity i, we are able to make, usually by consulting
experts in the particular project, the following three estimates for
probabilistic activity times:
• The minimum activity duration (the best case, or optimistic, scenario,
sunshine estimate), ai.
• The maximum activity duration (the worst case, or pessimistic,
scenario), bi.
• The most likely activity duration (the most frequent scenario), mi.
Based upon empirical research and practical studies, task times can
often be reasonably approximated by a beta probability distribution, with
parameters ai, bi, and mi (the minimum, maximum, and mode of the
distribution). Given these three estimates, the mean, µi, or the expected duration of activity i, can be calculated as the mean of a beta distribution with these parameters:

µi = (ai + 4mi + bi)/6.    (5.5)

We know that for many probability distributions, most of the area is contained within three standard deviations of the mean (or a width of six standard deviations). This is true for the beta distribution, so the standard deviation, σi, of each activity time can be calculated as:

σi = (bi − ai)/6.    (5.6)

The variance of each activity time can then be calculated as:

σi2 = ((bi − ai)/6)2.    (5.7)

Table 5.10 shows the computation of mean activity times and variances
for the Amrolife ROPT Project before expediting.
PERT assumes the path with the longest average duration to be the
critical path. Under this assumption, using the methods developed in
Section 5.2, we can determine the critical path based upon the mean activity
times calculated from Equation (5.5). Suppose activities 1, 2, … , k constitute a critical path. These activities have random durations. The expected project duration, µ, is the sum of the expected durations of the activities along the critical path:

µ = µ1 + µ2 + ⋯ + µk.    (5.8)

Table 5.10: Amrolife ROPT Project Mean Activity Times and Variances.

The variance of the project duration, σ2, is the sum of the critical path activity variances:

σ2 = σ12 + σ22 + ⋯ + σk2.    (5.9)

Since the activity times are assumed to be independent, from the Central Limit Theorem, the distribution of the project duration, or the sum of the activity times on the critical path, is approximately normal if k is sufficiently large, with mean µ given by Equation (5.8) and variance σ2 given by Equation (5.9).
To identify the critical path for the Amrolife ROPT Project, we utilize
the mean duration of each activity, as shown in Figure 5.9, and apply the
methods of Section 5.2.
Figure 5.9: Amrolife ROPT Project Network with Mean Activity Durations.

Applying the CPM and using the mean durations in Figure 5.9, the critical path is A→C→D→E→G→I→J. The mean project duration is, from Equation (5.8), the sum of the mean durations of the tasks on the critical path, µ = 59.33 weeks. The variance of the project duration is, from Equation (5.9), the sum of the variances of the tasks on the critical path, σ2 ≈ 9.80. The standard deviation of the project duration is σ = √9.80 ≈ 3.13 weeks.
The assumption of independent activity durations implies that the distribution of the project duration is approximately normal with mean µ = 59.33 weeks and standard deviation σ = 3.13 weeks. Thus, the probability of the project being completed in under 52 weeks is:

P(T < 52) = P(z < (52 − 59.33)/3.13) = P(z < −2.34) ≈ 0.0096.

That is, the chance of completing the project within 1 year (i.e., 52 weeks) is extremely small, or less than 1%. Correspondingly, the probability that the project duration exceeds 58 weeks is:

P(T > 58) = P(z > (58 − 59.33)/3.13) = P(z > −0.42) ≈ 0.66.

The minimum number of weeks required to complete the project with 95% probability is:

x = µ + z0.95σ = 59.33 + 1.65 × 3.13 ≈ 64.5 weeks,

where z0.95 = 1.65 is the standard normal score such that the probability that z is less than this value is 0.95.
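These PERT calculations can be reproduced with a few lines of Python, using the project mean and standard deviation stated in the text and the standard normal CDF built from math.erf:

```python
from math import erf, sqrt

def pert_mean(a, m, b):
    """Beta-PERT expected duration, Equation (5.5)."""
    return (a + 4 * m + b) / 6

def pert_var(a, b):
    """Beta-PERT variance of a duration, Equation (5.7)."""
    return ((b - a) / 6) ** 2

def normal_cdf(x, mu, sigma):
    """P(T <= x) under the normal approximation of the project duration."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# Project-level numbers from the text: mu = 59.33 and sigma = 3.13 weeks.
mu, sigma = 59.33, 3.13
p52 = normal_cdf(52, mu, sigma)           # chance of finishing within a year
p_over58 = 1 - normal_cdf(58, mu, sigma)  # chance of exceeding 58 weeks
w95 = mu + 1.65 * sigma                   # 95%-confidence completion time
print(round(p52, 4), round(p_over58, 2), round(w95, 1))
```

The same helpers applied per activity to the (ai, mi, bi) estimates of Table 5.3 would reproduce the entries of Table 5.10.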
In conclusion, contingencies (uncontrollable risks, including, for
example, technical challenges and natural disasters) are inevitable in project
execution. To actively manage the contingencies, we must estimate their
impact on project duration. PERT provides a simple statistical tool to
determine the project duration under such circumstances.
To summarize the Amrolife ROPT project, we have answered all five
questions raised in Section 5.1:
• The launch of Amrolife cannot be completed on time, i.e., within 1 year,
using current available resources.
• Delaying the “filing contracts” will have no impact on the total project
duration.
• Mia can expedite the project so it is completed in 1 year (52 weeks) by
allocating an additional $49,600 to crash activities D, G, and I.
• If there is a bonus reward for reducing the project duration, we can
expedite the project to meet the deadline, with an additional profit of
$61,400.
• Considering the contingencies, without expediting, we only have about a 1% chance that the project will be completed within 52 weeks, i.e., ahead of the competition; hence, with a 52-week deadline and uncertain activity times, we will have to expedite.

5.5 Human Factors


CPM and PERT are widely used in practice due to their many clear
advantages:
• Conceptually straightforward and not mathematically complex.
• Graphical networks help to perceive relationships among project
activities.
• Critical path and slack time analyses help pinpoint activities that need to
be closely monitored as potential bottlenecks.
• Project documentation and graphics point out who is responsible for
various activities.
• Especially useful for scheduling and controlling large-scale projects.
• Applicable to a wide variety of projects.
However, in practice, not all projects planned and scheduled by CPM
and PERT are successful. Indeed, only about 44% of projects finish on time. On
average, project completion requires 222% of the duration originally
planned, and at 189% of the original budgeted cost. 70% of projects fall
short of their planned scope (technical content delivered), and 30% are
cancelled before completion.3
One important limitation of both CPM and PERT is their inattention to
human factors such as:
• Student syndrome — people only begin to apply themselves when a
deadline is near.
• Parkinson’s law — work expands to fill the time available for its
completion.
• Multi-tasking and lack of prioritization — poor multi-tasking can delay
the start of successor tasks.
Under CPM and PERT, it is generally estimated that about 30% of lost
time and resources are due to such human factors.3 In fact, managing people
is also critical to project success. Although the student syndrome is
straightforward, Parkinson’s law is subtle. A more intuitive interpretation is:

A well-known example of Parkinson’s law in business applications is the so-called “curse of mandatory overtime.” Overtime may increase
productivity, but usually not if overtime is mandatory and expected! This is
true because when someone works overtime regularly, s/he will build the
expected overtime into the week. Therefore, his/her productivity per hour
drops and the work expands to fill the new time available. Thus, employees
will accomplish the same amount of work even with overtime. Parkinson’s
law provides an important lesson to many employers: mandatory overtime
does not necessarily increase work output but only increases cost and
weakens morale.
To address issues associated with human factors, critical chain project management (CCPM) was introduced in 1997 by Eliyahu M. Goldratt.4 With the application of CCPM,
project completion times are often 10–50% faster and/or cheaper than when
using CPM and PERT. Mabin and Balderstone,5 in their meta-analysis of 78
published case studies, found that implementing CCPM resulted in a mean
improvement in due date performance of 60%, and mean increases in
revenue/throughput of 68%.

5.6 Project Management Software — Microsoft Project
There is abundant off-the-shelf software available to automate project
planning and scheduling techniques like CPM and help to monitor and track
project progress. Many of these software packages share similar features
and interfaces. Below, we shall use Microsoft Project to demonstrate these features.
Figure 5.10 shows that project information, such as tasks, durations, and predecessors, is entered on the left-hand side, much as in Microsoft Excel.
On the right-hand side, the Gantt chart is generated automatically with
starting and finishing dates for all tasks. Holidays and weekends are left out
as non-working days.
Microsoft Project can also generate the activity-on-node project
network, as shown in Figure 5.11, where the critical activities are marked in
pink.
Each node can be expanded to show more details about the task, as
shown in Figure 5.12.
Finally, Microsoft Project can provide progress information to facilitate
project tracking and monitoring. In Figure 5.13, the vertical line (in July
between columns E and M) shows today’s date, and the darkened portion
inside each task bar indicates the progress towards completing that task. By
comparing the vertical line and the position of the completed portion of the
task bar, we can see the progress for each activity and whether it meets the
schedule. For instance, task F, Filing Contracts, is behind schedule, but may
not cause a project delay because of its slack task time. However, tasks H
and I are behind schedule, and because they are on the critical path, this
may have a direct impact on timely project completion.

Figure 5.10: User Interface of Microsoft Project.

Figure 5.11: Project Network Generated by Microsoft Project.


Figure 5.12: Task Information by Microsoft Project.

Figure 5.13: Progress Tracking by Microsoft Project.

5.7 Case Study — Product Launch Process


5.7.1 PDS Company
The PDS Company was started in the late 1940s as a family-run business
by a dentist who had an idea for a toothpaste made from all-natural
ingredients. The idea
was far ahead of its time, and the business grew rapidly as the world
became more environmentally friendly in the 1960’s and 1970’s. Today, the
PDS Company is a public company with none of the founding family
members involved in running the day-to-day operations. Having expanded
the product line to include multiple lines of toothpastes and mouth rinses,
PDS is now operating in both North and Latin America. Operating with
multiple products in multiple markets has become a significant challenge at
PDS. Unfortunately, not all of its product launches have been very
successful; many have been late to the market or have not been able to
achieve the desired business objectives. Critical to maintaining its growth,
the ability to launch new and innovative products has been identified by the
Management Committee as an important area in need of improvement.
An external consultant with expertise in the area of product launch
practices has been hired to review the current situation at PDS and to
identify areas of opportunities for improvement. The consultant has
reviewed all the current launch management practices in place and
benchmarked these practices with those competitors who were identified as
best in class. An area of investigation was the project management practices
in place at PDS for managing the product launch process. The consultant
has found that during PDS’s rapid growth period into new product lines and
markets, various practices were deployed in managing these projects. Some
brands and markets managed their projects better than others: some practices
were very effective in identifying and monitoring tasks, while others were
less effective in understanding the impact of missed deadlines and in
mitigating risk. When
comparing PDS to its competitors, this lack of a common approach to
project management created a disadvantage in understanding the true
resource investments required to execute its projects, which directly
impacted the business case. In effect, many of the projects were not only
late to market, but had a very poor business case right from the start. The
consultant is preparing a presentation of her findings for the PDS
Management Committee along with a discussion guide on possible go-
forward options. It is clear that a fundamental need at PDS is a more
harmonized and disciplined approach to managing launch projects. The
challenge is how to present the current state of PDS’s practices and present
the value of a new approach to project management practices at the
company.
Questions for PDS Company Case Discussion:
• How would you position PDS’s current project management practices to
the Management Committee?
• Do you believe an improvement in project management practices will
help PDS better achieve their growth and business targets? Why?
• What are the practices that should be recommended and how will they
benefit PDS?
• Should PDS consider establishing a project management Center of
Excellence to ensure that practices are followed and that support is
available for training?

5.8 Exercises
1. The West Side Appliance Company is designing a management
training program for employees at its corporate headquarters. The
company wants to design the program so that trainees can complete it
as quickly as possible, but important precedence relationships must be
maintained between tasks in the program. For example, a trainee
cannot serve as an assistant to a store manager until the trainee has
obtained experience in the credit department and at least one sales
department. The following activities must be completed by each
trainee in the program:

Construct a project network for this problem.


2. Consider the following project network and activity times (in weeks):

(a) Draw the project network.
(b) Identify the critical path and the time needed to complete the
project.
(c) Can activity D be delayed without delaying the entire project? By
how many weeks?
(d) Can activity C be delayed without delaying the entire project? By
how many weeks?
(e) What is the schedule for activity E?
3. Drug Development and Commercialization — Scheduling. A
proposed project encompasses activities needed to develop and market
a new drug for a pharmaceutical company. The activities, their
duration and the precedence structure are shown in Table 5E3.1.

Table 5E3.1: New Drug Development.

(a) Generate a graphical representation for this project.


(b) What are the critical path, critical activities, non-critical activities
and their slacks?
(c) What is the duration of this project?
(d) Specify the schedule (earliest and latest times) for all activities in
the project.
4. Consider the following project network and activity times (in days):
The crashing data for this project are shown in Table 5E4.1.
(a) Draw the project network.
(b) Find the critical path.
(c) What is the expected project completion time?
(d) What is the total project cost using the normal times?

Table 5E4.1: Crashing Time and Costs for Exercise 4.

5. Assume, for Exercise 4, that the project must be completed within 12
days.
(a) Formulate an LP model that can be used to make the crashing
decisions.
(b) Which activities should be crashed?
(c) What is the total project cost for the 12-day completion time?
6. Drug Development and Commercialization — Expediting. Some of
the activities in Exercise 3 can be expedited. For instance, clinical
trials can be accomplished more quickly by opening more sites around
the world, but this will incur additional costs for both clinical research
and drug supply. The drug has expected sales of $10 million per year.
After subtracting the cost of goods sold, one week’s average profit is
about $140,000. Normal and expedited times and costs are shown in
Table 5E6.1.
(a) What is the optimal expediting plan for this project?
(b) How much more profit can be obtained with the plan in (a)?
7. Consider the project shown in Table 5E7.1, where if the project takes
more than 32 days to complete, the penalty is $375/day.
(a) If we do not crash the project, what is the total penalty cost?

Table 5E6.1: New Drug Development — Expediting Activity Times.

Table 5E7.1: Project Activity Times for Crashing.


(b) If the project must be completed within 32 days, and we assume
the cost to expedite by a second day is the same as the cost to
expedite by 1 day for each activity, use an LP model to determine
how many days each activity should be crashed to minimize the
total crashing cost.
8. Consider the project shown in Table 5E8.1, where a, m, and b are the
best case, most likely, and worst case completion times for each
activity, respectively.
(a) Find the expected completion times and variances for each activity.
(b) Find the critical path using the expected completion time for each
activity.

Table 5E8.1: Uncertain Activity Times.

(c) What is the total expected completion time for the path found in
(b)?
(d) What is the variance of the completion time for the path found in
(b)?
(e) What is the probability that the project can be finished within 15
weeks?
9. Building a backyard deck and swimming pool consists of nine major
activities. The activities, their immediate predecessors, and the activity
time estimates (in days) for the construction project are shown in Table
5E9.1.
(a) Draw the project network.
(b) Calculate the expected time for each activity.
(c) What are the critical activities?
(d) What is the expected time to complete the project?
(e) What is the probability that the project can be completed in 25 or
fewer days?

Table 5E9.1: Backyard Deck and Swimming Pool Project.

10. Drug Development and Commercialization — Uncertainty
Estimation. Table 5E10.1 below shows the best estimates of the
activity durations for the new drug development project in Exercise 3.
(a) What is the likelihood that the project can be completed within 4.5
years (234 weeks)?
Table 5E10.1: New Drug Development — Uncertain Activity Times.

(b) What is the likelihood that the project will take longer than 5 years
(260 weeks)?
(c) If the project manager wishes to promise a completion date to top
management with 95% confidence, how many weeks should she
specify?

Endnotes
1. Yao Zhao, “American Royal Financial — The Launch of Amrolife Return of Premium Term”
Rutgers Business School Case Series, 2010.
2. ISO Standard 8402, 1994.
3. Siddesh K. Pai, “Multi-Project Management using Critical Chain Project Management (CCPM)
— The Power of Creative Engineering,” International Journal & Magazine of Engineering,
Technology, Management and Research (IJMETMR), Vol. 1, No. 1, January 2014.
4. Eliyahu M. Goldratt, Critical Chain: A Business Novel, North River Press Publishing Corp.,
Great Barrington, Massachusetts, 1997.
5. Vicky Mabin and Steven Balderstone, “A Review of Goldratt’s Theory of Constraints — Lessons
from the International Literature,” Operational Research Society of New Zealand 33rd Annual
Conference, Auckland, pp. 205–214, 1998.
Chapter 6

Service Management

THE MAIN DIFFERENCE BETWEEN SERVICE AND MANUFACTURING IS
THE SERVICE DEPARTMENT DOESN’T KNOW THAT THEY HAVE A
PRODUCT.

W. Edwards Deming

6.1 Introduction to Service Management


In this chapter, we introduce strategies and techniques for planning,
scheduling, and managing service operations. In Section 6.1.1, we provide
an overview of the operations economics in service industries, and discuss
the key issues and challenges. In Section 6.2, we study waiting line
management by identifying causes for congestion, providing tools to assess
waiting times and queue lengths, and discussing demand management
techniques for services organizations. In Section 6.3, we elaborate on
capacity management strategies, and provide quantitative tools for staff
planning and scheduling. In Section 6.4, we present a brief case study in
which the linear and integer programming techniques of Chapter 3 are
applied to a staffing and scheduling problem, and a more general case study
on the advantages and disadvantages of a centralized customer contact
center.

6.1.1 Service management economics


In this section, we present the unique economics in service operations, and
provide some statistics on the importance of the service industry; we follow
with an example of a Community Hospital to illustrate the key issues and
challenges.
Unlike manufacturing industries, service industries cannot stock their
output in inventory for future use; they must match demand with capacity.
More specifically, in manufacturing, wholesale, and retail industries,
managers can hold inventories of tangible, physical goods in
response to demand fluctuations. However, goods offered in service
industries are often intangible, or non-physical, such as diagnostics and
treatment of a disease, flights, and hotel nights. Thus service capacity (e.g.,
physician/staff time, bed-days) and goods cannot be stored in inventory.
Capacity must, therefore, be matched to demand on a real time basis. This
unique feature leads to new issues and challenges in service operations, and
requires new strategies and techniques for effective management.
The service sector represents the lion’s share of the gross domestic
product (GDP) in many developed countries around the world. In the US,
for example, statistics show that in 2014, services (including government
activities, communications, transportation, finance, and all other private
economic activities that do not produce material goods) accounted for
77.7% of the GDP; industry (including mining, manufacturing, energy
production, and construction) accounted for 20.7%; and agriculture
(including farming, fishing, and forestry) accounted for 1.6%.1 Figure 6.1
shows employment growth in the service sector and manufacturing sector
between 1939 and 2014.
The issues and challenges in service operations may be best illustrated
by an example of a community hospital. A typical facility in the US has
multiple service lines, or departmental groupings, such as emergency (ED),
oncology, inpatients (beds), and ambulatory care (outpatients). These
interdependent service lines have distinct features, as shown in Table 6.1.
They all share common ancillary services with limited capacity, such as
imaging, laboratories, operating rooms, and all supporting services. Most
service lines typically serve both scheduled and unscheduled “walk-in”
patients.

Figure 6.1: Employment Growth in Service Industries vs. Manufacturing Between 1939 and 2014.
Source: Doug Short, “A Labor Day Perspective: The Growth of our Services Economy,” Advisor
Perspectives, September 1, 2014, http://www.advisorperspectives.com/dshort/commentaries/From-
Manufacturing-to-Services.php

Table 6.1: Community Hospital Service Lines and Their Features.


Service Line      Features
ED                Large service variety; mostly walk-ins result in
                  unpredictable demand; feeder to other services
Oncology          Chronically ill patients; limited market size but high
                  revenue per patient
Inpatients        High revenue but also heavy investment
Ambulatory care   Appointment system leads to more predictable demand;
                  can be highly profitable but competitive
Figure 6.2: Key Issues and Challenges for a Community Hospital.

The multiple goals of the hospital are safety, cost efficiency, patient
satisfaction, and quality of services (e.g., acceptable waiting times). Clearly,
to increase the quality of service and patient satisfaction, the hospital may
need more quality equipment and experienced staff, which leads to higher
investment and cost, as shown in the efficiency frontier (before) in Figure
6.2. Thus the objectives of quality of service and patient satisfaction often
work against the objective of cost efficiency.
How can we achieve both? How can we do more with less? The
objective of this chapter is to introduce techniques and strategies to design
and manage service operations in order to push the efficiency frontier to the
right (after) so as to simultaneously achieve both cost efficiency and better
quality of service. These techniques and strategies are centered around
waiting lines, demand, and capacity management.

6.2 Waiting Line Management


Almost every service organization must cope with waiting lines, or queues,
such as customers waiting to be served in a bank, patients waiting to be
seen in a hospital, drivers waiting to renew their licenses, customer calls waiting
to be answered at a call center, and cars and trucks waiting at highway tolls.
Figure 6.3: Anatomy of a Queueing System in a Community Hospital.

A queueing system typically consists of three components, as shown in
Figure 6.3:
• The input, or customers: people or objects that enter the system,
requiring service.
• The servers, or channels: people or machines that perform the required
service.
• The queue: an accumulation of entities that have entered the system but
have not yet begun to receive the required service.
Hardly anyone enjoys waiting in a queue: “Waiting is a form of
imprisonment. … Aside from boredom and physical discomfort, the subtler
misery of waiting is the knowledge that one’s most precious resource, time,
a fraction of one’s life, is being stolen away, irrecoverably lost.”2
Waiting times can be long for customers, particularly in a health care
setting. In 2014, the average waiting time before an emergency patient saw
a doctor was 24 minutes, with a minimum of 16 minutes in Utah and a
maximum of 54 minutes in Washington D.C. Average total time, or the
average waiting time plus the average service time, before an emergency
patient was sent home, was 135 minutes (2 hours and 15 minutes), with a
minimum of 105 minutes in Kansas and a maximum of 191 minutes in
Maryland.3
Waiting can lead to much more harm than just psychological
discomfort: it can result in delays in receiving treatment when patients
“board,” or wait in hallways or other emergency room areas for an inpatient
bed to become available, or leave without being seen.

6.2.1 Causes of congestion


Intuitively, long waiting times and heavy congestion (long queues) result
from surging demand and limited service capacity. Queueing theory
provides a tool to explore the relationships between customer demand,
system capacity, and customer waiting. Specifically, queueing theory takes
input parameters, such as arrival pattern, number of servers, service rate,
and the order of service, and generates system performance measures, such
as average waiting time, average queue length and the probability of delay.
Queueing theory can help answer questions like the following:
• Capacity planning: How many beds does a unit need to ensure that at
most only 5% of patients will encounter a delay in bed placement?
• Staffing: How many ED physicians are needed during the peak/non-
peak hour to ensure that patient average wait time is less than 30
minutes?
• System design: Should we assign a queue to each server or have all
servers serving one queue?
In many situations, the time between consecutive customer arrivals (the
interarrival time) is often random, as shown in Figure 6.4.
A sample histogram of the interarrival times of customers at a bank
with random arrivals and an average arrival rate of λ = 1 customer per
minute is shown in Figure 6.5. The average interarrival time, or average
time between arrivals, is 1/λ = 1.00 minutes.
Service times may also vary from customer to customer. A sample
histogram analysis of the service times at a bank with an average service
rate of µ = 1.05 customers per minute is depicted in Figure 6.6. The
average service time is 1/µ = 0.95 minutes.
Figure 6.4: Random Interarrival Times.

Figure 6.5: Histogram of Interarrival Times at a Bank.

Assume that rather than a bank (where arrivals are likely to be random,
and service times vary from customer to customer), this is an automated
process where there is a single server, and both the arrivals and the service
times are constant. Figure 6.7 shows the timeline of customer arrivals and
service times. Clearly, no customer has to wait because the interarrival time
is longer than the service time.
In the case of non-constant interarrival and service times, Figure 6.8
shows that one reason for customers having to wait is the surging demand
(i.e., shorter interarrival time than the average). Specifically, the second
customer arrives before the first one completes the service and so she/he
must wait. Figure 6.8 confirms our intuition: what leads to queues (or
waiting) is a demand surge and limited capacity. When demand plummets
(i.e., longer interarrival time than the average), the server may be idle,
resulting in unused capacity.

Figure 6.6: Histogram of Service Times at a Bank.

Figure 6.7: Constant Interarrival and Service Times.

In summary, variability in arrival and service times causes congestion:
it creates demand surges (and lulls) that cause systems to oscillate
between congestion (long waiting times and long queues) and server
idleness.

Figure 6.8: Non-constant Interarrival and Service Times.
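This intuition can be checked with a small single-server simulation. The sketch below is our own illustration (the function and parameter names are not from the text): with constant times matching Figure 6.7 no customer ever waits, while exponential interarrival and service times with the same averages as the bank example (λ = 1/min, µ ≈ 1.05/min) produce substantial waiting.

```python
import random

def avg_wait(interarrival, service, n=200_000, seed=42):
    """Simulate a single-server FCFS queue; return the average time
    customers spend waiting in line before their service starts."""
    rng = random.Random(seed)
    arrival = 0.0     # arrival time of the current customer
    free_at = 0.0     # time at which the server next becomes free
    total_wait = 0.0
    for _ in range(n):
        arrival += interarrival(rng)
        total_wait += max(0.0, free_at - arrival)   # wait only if server busy
        free_at = max(free_at, arrival) + service(rng)
    return total_wait / n

# Constant interarrival (1.0 min) and service (0.95 min): no waiting.
const = avg_wait(lambda r: 1.0, lambda r: 0.95)
print(const)   # → 0.0

# Same averages but exponential (random) times: substantial waiting.
rand = avg_wait(lambda r: r.expovariate(1.0),        # lambda = 1 per min
                lambda r: r.expovariate(1 / 0.95))   # mu ≈ 1.05 per min
print(rand)    # on the order of 18 min (M/M/1 theory: Wq = rho/(mu - lambda))
```

The only difference between the two runs is variability, not average load, which is exactly the point of Figures 6.7 and 6.8.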


6.2.2 Characteristics of waiting lines
Figure 6.9 illustrates various types of queueing systems commonly
encountered in practice. The single server queue is the simplest form,
typical of a system like a small grocery store. Parallel single server queues,
where the customer waits on a line for one of many servers, all performing
the same task, are typical of supermarkets or department store cashiers. A
multiple server queueing system, where customers wait in one line for the
next available server, is typical of banks, post offices, and fast food
restaurants. Multi-stage queues, also called multiple servers in series, where
the customer waits on line for one service, then joins other lines for
different subsequent services, are typical of assembly lines, hospital
emergency rooms, and motor vehicle offices.
In a queueing system, customers can follow various sequences to be
served (order of service):
• First-come-first-served (FCFS), as in most retail operations.
• Last-in-first-out (LIFO), as in an elevator, or homework or exam
grading.
• Priority service (tagged service), as in a hospital emergency department,
or multi-processing on a computer.
In response to waiting, customers may exhibit various behaviors:
• Balking — customers refuse to join the queue.
• Reneging — customer initially joins the queue but leaves without
receiving service.
Figure 6.9: Types of Queueing Systems.

• Jockeying — customer leaves one queue and joins another in the
expectation that he/she will be served more quickly.
A queueing model needs to take all system parameters and all
behaviors into account. Specifically, the inputs for queueing models are:
• The arrival process: the (average) arrival rate to the system; the pattern
of variation in arrivals over time, or the probability distribution of
arrivals.
• The service process: the (average) time to perform a service; the pattern
of variation in service time, or the probability distribution of service
times; whether service depends on other factors, like system congestion
(i.e., the server will work more rapidly when more people are waiting).
• The number of servers that are providing service.
• The order of service: FCFS, LIFO, etc.
• Customer behavior: balking, reneging, jockeying.

6.2.3 M/M/s queueing models


In this section, we will focus on an important class of queueing models, the
M/M/s queues, which apply to many consumer systems where arrivals and
departures occur randomly. In Kendall Notation, the first symbol (here M)
refers to the distribution of arrivals; the second symbol (here M) refers to
the distribution of departures; the third symbol (s) refers to the number of
servers.
M/M/s systems operate under the following assumptions:
• Poisson, or random arrivals (equivalent to exponential interarrival
times) → first M (for Markovian).
• Exponential Service Time (equivalent to Poisson, or random,
departures) → second M (for Markovian).
• s = Number of servers, or channels.
• Arrival process is independent of the service process.
• FCFS.
• Infinite Waiting Room.
• No balking, reneging, or jockeying permitted.
M/M/s queueing models are particularly useful because they apply to a
wide range of practical applications, such as an emergency department,
supermarket, or fast food restaurant, and because of their simplicity: they
require only three input parameters, namely the arrival rate, the service
rate, and the number of servers. In addition, system performance measures can be
calculated in closed form. This class of queueing models clearly illustrates
the relationship between demand, service capacity, and waiting line
congestion.
Figure 6.10: Model Input and Output.

Table 6.2: System Performance Measures.


Notation   Interpretation
W          The average time an “entity” spends in the system, e.g., the average time from
           admission to discharge
Wq         The average time an “entity” spends in the queue, e.g., the average waiting time
           before service
L          The average number of “entities” in the system
Lq         The average number of “entities” in the queue
ρ          Utilization, i.e., the fraction of time a server is busy
PD         The chance that a customer waits before receiving service

A general queueing model is graphically illustrated in Figure 6.10, with
the notation explained in Table 6.2.
To calculate the system performance measures:
• Identify λ, s, and µ.
• Calculate the utilization ρ = λ/(sµ). Note that ρ must be less than 1 for the
performance measures to be finite. When the utilization ρ ≥ 1, the
system does not reach steady state (equilibrium) and the average queue
length (Lq), average number of customers in the system (L), average
time in the queue (Wq), and average time in the system (W) will all grow
without limit over time; i.e., all measures → ∞.
• Evaluate system performance (W, Wq, L, Lq, PD) by comparing to
established acceptable service standards.
Equation (6.1) allows us to calculate the probability that there are no
customers in an M/M/s queueing system, or the likelihood that all servers
are idle. This information, together with Equations (6.2)–(6.6), enables us to
calculate the system performance measures.

P_0 = \left[ \sum_{n=0}^{s-1} \frac{(\lambda/\mu)^n}{n!} + \frac{(\lambda/\mu)^s}{s!\,(1-\rho)} \right]^{-1}    (6.1)

L_q = \frac{P_0\,(\lambda/\mu)^s\,\rho}{s!\,(1-\rho)^2}    (6.2)

W_q = \frac{L_q}{\lambda}    (6.3)

W = W_q + \frac{1}{\mu}    (6.4)

L = \lambda W    (6.5)

P_D = \frac{(\lambda/\mu)^s}{s!\,(1-\rho)}\,P_0    (6.6)
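For reference, Equations (6.1)–(6.6) can be evaluated in a few lines of code. The Python sketch below is our own helper (it is not part of QTP); it reproduces the nurse-clinic result computed later in this section (Lq = 0.17 for λ = 40/day, µ = 20/day, s = 4).

```python
from math import factorial

def mms_metrics(lam, mu, s):
    """Standard M/M/s performance measures; requires rho = lam/(s*mu) < 1."""
    a = lam / mu                        # offered load
    rho = a / s                         # server utilization
    if rho >= 1:
        raise ValueError("rho >= 1: the queue grows without limit")
    # Probability of an empty system, Eq. (6.1)
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(s))
                + a**s / (factorial(s) * (1 - rho)))
    lq = p0 * a**s * rho / (factorial(s) * (1 - rho)**2)   # average queue length
    wq = lq / lam                                          # average wait in queue
    w = wq + 1.0 / mu                                      # wait plus service time
    l = lam * w                                            # average number in system
    pd = p0 * a**s / (factorial(s) * (1 - rho))            # chance a customer waits
    return {"rho": rho, "P0": p0, "Lq": lq, "Wq": wq, "W": w, "L": l, "PD": pd}

# Nurse-led clinic: 4 nurses, 20 patients/day each, 40 arrivals/day
print(round(mms_metrics(40, 20, 4)["Lq"], 2))   # → 0.17
```

The dictionary keys mirror the notation of Table 6.2, so the output can be compared directly with QTP results.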

While the manual computations are somewhat complex, there are many
queueing software packages available which facilitate both the calculations
and the examination of alternative scenarios. For example, the Queueing
ToolPak (QTP) 4.0 — http://queueingtoolpak.org — is a free software
package which can be embedded within Microsoft Excel as an add-in. It
provides functions to calculate system performance metrics including the
following:
• Average number of customers in the system, L.
• Average number of customers in the queue, Lq.
• Average time a customer spends in the system, W.
• Average time a customer spends in the queue, Wq.
• Probability that the system is empty of customers, or probability that the
server is idle.
• Probability that the system is full (if the capacity is limited), or
overflow.
• Probability that a customer has to wait before receiving service, PD.
Little’s Laws express several important relationships between the
average time in the queue, Wq, and the average number in the queue, Lq,
shown in Equation (6.7), and between the average total time in the system,
W, and the average number in the system, L, shown in Equation (6.8),
which apply to all queueing systems:

L_q = \lambda W_q    (6.7)

L = \lambda W    (6.8)

We illustrate below a number of insights for M/M/s systems.


Traffic and Congestion. Consider a nurse-led clinic with four nurse
practitioners. Each of them can see 20 patients per day. We are interested in
determining the following, where arrivals are assumed to be random and
service times are assumed to be exponential:
(1) How many patients are, on average, waiting to see nurses if average
daily arrival is 40 patients?
(2) What is the average number of patients waiting to see nurses when the
average daily arrival rate is 8, 16, 24, 32, 40, 48, 56, 64, and 72
patients? Plot the average length of the waiting line Lq against ρ. How
does Lq change as ρ increases?
(3) What happens to the average queue length if the system experiences a
run, that is, ρ goes from 0.4 to 0.8?
Since arrivals are random and service times are exponential, and there are 4
nurse practitioners providing service, this is an M/M/4 system. For
Question (1), if the average daily arrival rate is 40 patients and the average
daily service rate per busy server is 20 patients, then λ = 40/day, µ = 20/day,
s = 4, and ρ = λ/sµ = 0.5. Using the QTP software and the function
QTPMMS_Lq, we can find the average queue length, Lq = 0.17 patients, as
shown in Figure 6.11.
For Question (2), for daily arrival rates of 8, 16, 24, …, 72 patients,
Figure 6.12 shows the average number of patients waiting to see a nurse,
Lq, as a function of ρ.

Figure 6.11: QTP Software Calculation of Lq.

As ρ increases, Lq increases sharply. The change in Lq is highly
nonlinear when ρ is large, approaching 1.0, which implies that the system is
very sensitive under high utilization.
For Question (3), if ρ increases from 0.4 to 0.8, Figure 6.12 shows that
the average queue length increases from 0.06 to 2.39, or about 40 times.
This example demonstrates an important insight: it is not a good idea to
design systems where the service capacity, sµ, is close to the average
demand, λ, when interarrival times and service times are random, because
doing so results in a utilization ρ close to 1 and very heavy congestion.
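The numbers behind Figure 6.12 can be reproduced via the Erlang B/C recursion, a numerically stable route to the same M/M/s quantities (the helper below is our own sketch, not a QTP function):

```python
def prob_delay(s, a):
    """Chance a customer waits in an M/M/s queue, computed by the Erlang B
    recursion followed by the standard B-to-C conversion; a = lam/mu."""
    b = 1.0
    for n in range(1, s + 1):
        b = a * b / (n + a * b)        # Erlang B recursion
    rho = a / s
    return b / (1 - rho * (1 - b))     # Erlang C: probability of delay

# M/M/4 nurse clinic, mu = 20 patients/day per nurse
for lam in range(8, 73, 8):
    a, rho = lam / 20, lam / 80
    lq = prob_delay(4, a) * rho / (1 - rho)   # Lq = P(wait) * rho/(1 - rho)
    print(f"rho = {rho:.1f}  Lq = {lq:.2f}")
# rho = 0.4 gives Lq = 0.06 and rho = 0.8 gives Lq = 2.39, as in Figure 6.12
```

The recursion avoids the large factorials of the direct formula, which matters when s is large.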
Effect of Customer Routing. A physician and a nurse practitioner (NP)
run a student care clinic where roughly 40 student patients arrive per 8-hour
day. Both the physician and the NP work equally fast, at an average rate of
four patients per hour.

Figure 6.12: Average Queue Length, Lq vs. Utilization, ρ.

(1) Since the physician, as the senior partner, has a larger clientele, only
30% of the patients come to see the NP. How long, on average, will a
patient wait in the clinic?
(2) A year later, the students, now more familiar with the NP, are equally
likely to see the NP and the physician. How long does the average
patient wait now?
(3) What would be the average patient wait time if patients didn’t care
who provided care to them?
For Question (1), where the arrival rate is 40 patients per 8-hour day, or five
patients per hour, and the average service rate is four patients per hour for
each of the two service providers (the physician and the NP), we have two
M/M/1 systems, one for the physician and one for the NP.
For the physician, λp = 0.7 × 5 patients/hour = 3.5 patients/hour; µp = 4
patients/hour; ρp = λp/µp = 3.5/4 = 0.875; the average number of patients
waiting is Lqp = 6.125; and the average waiting time is Wqp = 1.75 hours.
For the NP, λN = 0.3 × 5 patients/hour = 1.5 patients/hour; µN = 4
patients/hour; ρN = λN/µN = 1.5/4 = 0.375; the average number of patients
waiting is LqN = 0.225; and the average waiting time is WqN = 0.15 hours.
Thus, the average waiting time is 0.7 × 1.75 hours + 0.3 × 0.15 hours =
1.27 hours ≈ 1 hour and 16 minutes.
For Question (2), the split is now 50–50, but students who prefer the
physician will not see the NP, and vice versa, so again we have two M/M/1
systems, one for the physician and one for the NP. For each of them, λ = 0.5
× 5 patients/hour = 2.5 patients/hour; µ = 4 patients/hour; ρ = λ/µ = 0.625;
the average number of patients waiting is Lq = 1.04; and the average
waiting time is Wq = 0.42 hours ≈ 25.2 minutes (down from 1 hour and 16
minutes!).
For Question (3), the split is 50–50, but patients are willing to be seen
by either the physician or the NP, so we now have an M/M/2 system instead
of two M/M/1 systems, where λ = 5 patients/hour; µ = 4 patients/hour; ρ =
λ/µ = 0.625; the average number of patients waiting is Lq = 0.8; and the
average waiting time is Wq = 0.16 hours ≈ 9.6 minutes (down from 25.2
minutes!).
This example demonstrates another important insight: Given identical
demand and service capacity, the pattern of customer routing may
significantly affect the customer waiting time, as shown in Figure 6.13.
The right-most system performs the best because it pools the capacity
to serve demand.
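The three routing patterns can be compared directly from the standard M/M/1 and M/M/2 closed forms (the helper names below are our own, not from the text):

```python
def wq_mm1(lam, mu):
    """Average wait in queue for an M/M/1 queue: Wq = rho/(mu - lam)."""
    return (lam / mu) / (mu - lam)

def wq_mm2(lam, mu):
    """Average wait in queue for an M/M/2 queue:
    Lq = 2*rho^3/(1 - rho^2) with rho = lam/(2*mu), and Wq = Lq/lam."""
    rho = lam / (2 * mu)
    return (2 * rho**3 / (1 - rho**2)) / lam

mu = 4.0  # patients/hour for both the physician and the NP
# (1) 70/30 split: two separate M/M/1 queues, wait averaged over patients
w_split_70_30 = 0.7 * wq_mm1(3.5, mu) + 0.3 * wq_mm1(1.5, mu)
# (2) 50/50 split: still two M/M/1 queues, same wait in either
w_split_50_50 = wq_mm1(2.5, mu)
# (3) pooled: one shared queue with two servers (M/M/2)
w_pooled = wq_mm2(5.0, mu)
print(f"{w_split_70_30:.2f} h, {w_split_50_50:.2f} h, {w_pooled:.2f} h")
# → 1.27 h, 0.42 h, 0.16 h
```

The pooled configuration wins because an idle provider can always take the next patient, never sitting idle while the other provider's queue is long.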
Efficiency. As a hospital manager, you are responsible for hiring clinicians
for your ED. The average daily arrival rate is 40 patients every 8 hours. You
have two options:
(1) Hiring two junior clinicians who can each treat an average of four
patients per hour, and patients have no preference between the two
clinicians.
Figure 6.13: Waiting Time for Different Customer Routing Patterns.

(2) Hiring one senior clinician who is twice as efficient, able to treat an
average of eight patients per hour.
Suppose the salary of one junior clinician is less than that of a senior
clinician, and the salary of the senior clinician is less than the sum of two
junior clinicians’ salary. Which option would you prefer from the
perspective of both the total time in the system (waiting plus service time)
and the total cost?
For option (1) with two junior clinicians, we have an M/M/2 system,
with λ = 5/hour; µ = 4/hour; ρ = 0.625; the average number of patients in
the system is L = 2.05; and the average total time in the system is W = 0.41
hours.
For option (2) with one senior clinician, we have an M/M/1 system,
with λ = 5/hour; µ = 8/hour; ρ = 0.625; but here the average number of
patients in the system is L = 1.67; and the average total time in the system is
W = 0.33 hours.
Thus, from both the total system time and cost perspectives, one senior
clinician is a better choice.
Again we have an important insight: One fast server is better than two
slow servers from the perspective of average total time in the system, W.
However, it is important to note that option (2) has a longer average waiting
time in queue, Wq, than option (1). The shorter total time in the system
results from the shorter service time.
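Both effects, the smaller W and the larger Wq for the single fast server, can be checked numerically. The sketch below (our own helper, based on the Erlang C formula and Little's Law) computes L, W, and Wq for the two options:

```python
from math import factorial

def mms_stats(lam, mu, s):
    """Return (L, W, Wq) for an M/M/s queue, using the Erlang C formula."""
    a = lam / mu
    rho = a / s
    num = a**s / (factorial(s) * (1 - rho))
    p_wait = num / (sum(a**k / factorial(k) for k in range(s)) + num)
    Wq = p_wait / (s * mu - lam)      # average time in queue
    W = Wq + 1 / mu                   # plus the average service time
    L = lam * W                       # Little's Law
    return L, W, Wq

L2, W2, Wq2 = mms_stats(lam=5, mu=4, s=2)   # two junior clinicians
L1, W1, Wq1 = mms_stats(lam=5, mu=8, s=1)   # one senior clinician
print(round(W2, 2), round(W1, 2))    # 0.41 vs 0.33 hours in the system
print(round(Wq2, 2), round(Wq1, 2))  # 0.16 vs 0.21 hours in the queue
```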
No Waiting Room Capacity. Patient parking is often a problem at hospitals.
Suppose a hospital has two parking lots:
(1) a small lot with 10 spots and an arrival rate of 40 cars per 8-hour day,
(2) a massive lot with 250 spots and a daily arrival rate of 1,000 cars.
Suppose that on average, each car parks for 2 hours in an 8-hour day.
Assuming that cars arrive and depart randomly, in which lot is an arriving
patient more likely to find an empty space?
This example illustrates queueing systems with random arrivals and
departures and finite space with no waiting room. For option (1), the
smaller lot, we have an M/M/10 system, with average arrival rate λ = 40
cars/day; average service time 1/µ = 2 hours, so average service (departure)
rate is µ = 4 cars/day; and ρ = 1. Since the lot has limited space and no
waiting room (queue capacity = 0), arrivals when the lot is full are
overflowed, or rejected. Using the function QTPMMS_PrFull, we see from
Figure 6.14(a) that the probability of a full lot is just over 21%.
For option (2), the massive lot, s = 250; λ = 1,000/day; µ = 4/day; and ρ
= 1, as in option (1). In this case, as shown in Figure 6.14(b), the likelihood
of a full lot is under 5%.
Another important insight: The odds favor you at the bigger lot! Even
with identical utilization ρ, a system with zero waiting room but more
servers (spaces) is less likely to be full than a similar system with fewer
servers (spaces).
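These overflow probabilities come from the Erlang B (blocking) formula for M/M/s systems with no waiting room. The sketch below (our own helper, using the standard numerically stable recursion rather than a QTP function) reproduces them:

```python
def erlang_b(a, s):
    """Blocking probability of an M/M/s system with no waiting room (Erlang B),
    computed with the standard recursion B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, s + 1):
        b = a * b / (k + a * b)
    return b

# Small lot: 40 arrivals/day, 2-hour stays in an 8-hour day -> mu = 4/day, a = 10
print(round(erlang_b(a=10, s=10), 3))     # just over 21%
# Massive lot: 1,000 arrivals/day, same stays -> a = 250
print(round(erlang_b(a=250, s=250), 3))   # under 5%
```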
It is important to note that all these insights apply to M/M/s systems,
which assume independent exponential interarrival times (or random
arrivals) and exponentially distributed service times (or random departures).
The assumption of random arrivals makes sense when customers arrive one
at a time; that is, each customer’s arrival is independent of all others, and
the average arrival rate is constant (e.g., no lunch and rush hours during
which arrival patterns may be different). Examples of such arrivals include
unscheduled patient arrivals, or processes in which most arrivals are
unscheduled, such as emergency departments and clinics which do not
require appointments; a primary care physician whose appointments
(arrivals) are scheduled every 15 minutes does not fit this assumption. The assumption of
random departures makes sense when there is a relatively large diversity of
services, and the variation in service times may be large, such as an
emergency department, but not a primary care physician who sees every
patient for exactly 15 minutes. When these assumptions do not hold, we
refer to a GI/G/s queueing system, where the letter “G” refers to generally
distributed arrival and service times, and I refers to independent arrivals.
For such systems, we can either use QTP functions like QTPGGS_Lq, or
Monte Carlo simulation, discussed in the next section.
Figure 6.14: QTP Software Calculation of Probability of Overflow. (a) Option (1): 10 Parking
Spaces, 40 Arrivals/Day. (b) Option (2): 250 Parking Spaces, 1,000 Arrivals/Day.

6.2.4 Monte Carlo simulation


The results of the preceding section are based on closed form analytic
solutions for the queueing systems considered; that is, a set of equations for
the performance measures we are interested in can be obtained from a
mathematical model of the system. An alternative, for queueing systems
which do not conform to the assumptions of the M/M/s queues, or are
otherwise too complex in structure to derive closed form solutions, is
Monte Carlo simulation, a sampling experiment performed on a computer.
In effect, we are generating data by computer simulation which behaves like
the corresponding stochastic system, and from an analysis of that sample
data, we can measure not only the expected system performance but also
the associated risk (tail probability, or volatility).
Monte Carlo simulation evaluates and/or optimizes a system under
uncertainty, and thus can be applied to a variety of service operations
problems, such as overbooking in airline yield management, as well as
queueing systems. It has important applications beyond services operations,
such as options pricing and portfolio selection, project risk assessment and
evaluation, and inventory management. The objective of this section is to
illustrate how to build simulation models and how to develop the associated
problem-solving skills using Microsoft Excel.
As Figure 6.15 shows, a stochastic model requires three types of input
in order to generate output. These inputs are system parameters, decisions
(controllable inputs), and random variables (probabilistic inputs). The
output provides data on system performance and related statistics, such as
sample mean, sample variance, sample standard deviation, and a frequency
distribution, such as a histogram or frequency polygon.
To perform a Monte Carlo simulation, we follow these steps:
• Modeling — to create and implement a model of the real system.
• Random variable generation — based on statistical concepts and
sampling theory.
• Simulation — sample the behavior of the system and perform
replications.
• Output data analysis — estimate performance measures from the output
data.
• Decision-making — use the estimates to make decisions for the system.

Figure 6.15: A Stochastic Model.

Table 6.3 shows the Microsoft Excel functions which generate various
random variables commonly used in practice.

Table 6.3: Microsoft Excel Functions for Random Variable Generation.

Random Variable                                  Microsoft Excel Function
Uniform random variable over [0, 1]              RAND( )
Uniform random variable over [a, b], a < b       a + (b − a) × RAND( )
Normal random variable with mean µ               NORMINV(RAND( ), µ, σ)
and standard deviation σ
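Outside Excel, the same draws can be sketched with Python's standard library (the seed and parameter values below are arbitrary illustrations):

```python
import random

rng = random.Random(2017)              # fixed seed for reproducibility
u01 = rng.random()                     # RAND(): uniform over [0, 1]
a, b = 2.0, 5.0
uab = a + (b - a) * rng.random()       # a + (b - a) * RAND(): uniform over [a, b]
z = rng.gauss(10.0, 2.0)               # NORMINV(RAND(), 10, 2) analog: normal draw
print(u01, uab, z)
```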
Overbooking: A flight has 100 seats. Due to no-shows and last-minute
cancellations, the airline routinely sells 115 tickets to minimize the
likelihood of empty seats. However, if more than 100 passengers show up,
someone has to be rejected. Suppose the probability of showing up for a
flight is 90% (so the probability of a no-show or cancellation is 10%) for
each passenger; passengers make their decisions independently of other
passengers; the cost of an empty seat is $100; and the cost of rejecting a
passenger because of overbooking is $200.
(1) As a passenger, what is the chance of being rejected?
(2) As the airline, what is the cost of overbooking?
In this example, the parameters are the costs of an empty seat and a
rejection; the decisions, or controllable inputs, are the number of seats and
number of tickets sold; and the random variables, or probabilistic inputs, are
the passenger behaviors — characterized by the no-show and cancellation
probability. The output is the system performance in terms of the likelihood
of passenger rejections and the total cost (the cost of empty seats and
rejected passengers) of overbooking.
The model simulates the behavior for each of the 115 passengers who
purchase a ticket. Because the likelihood of showing up is 90%, and each
passenger behaves independently, for each passenger, we can use an IF
function to determine whether that passenger will show up: IF(RAND( ) <
0.9, 1,0). That is, we generate a uniformly distributed random number
between 0.0 and 1.0, called RAND( ), for each of the 115 passengers who
have purchased a ticket. If a particular RAND( ) is less than 0.9, the
associated passenger will show up, and in Microsoft Excel, the value 1 will
be placed in the associated cell; otherwise, the value in the cell will be 0.
Then we add up the values for every passenger to obtain the total number of
passengers who will show up. Comparing this number with the available
seats, 100, we can calculate the number of rejected passengers (where more
than 100 passengers show up) and empty seats (where fewer than 100
passengers show up) for one run (sample). By repeating this sampling for a
planned (large) number of iterations, we are simulating the results for a
large number of flights. Based on the sample simulation results, we can
estimate the probability of rejection as the total number of rejections
divided by the total number of passengers who show up over all simulated
runs. The average cost can be estimated by the cumulative cost for all
simulated runs divided by the number of iterations, or runs. The screenshots
in Figure 6.16 illustrate the simulation for one run, and the estimated
probability of rejection and total cost for 1,005 simulated flights, where (1)
the probability of a passenger being rejected upon arrival at the airport
because of overbooking is 0.038, and (2) the average total cost of
overbooking per flight is $806.67.
To optimize the system, we can simply run the simulation under
different decision parameters (i.e., varying the number of tickets sold) and
choose the one that optimizes the system performance, as measured by
average total cost.
Figure 6.16: Microsoft Excel Spreadsheet for Overbooking Simulation.
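The spreadsheet logic described above can also be sketched in plain Python (the function name, seed, and run count are our own choices, not from the text):

```python
import random

def simulate_overbooking(seats=100, tickets=115, p_show=0.9,
                         cost_empty=100, cost_reject=200,
                         runs=10_000, seed=2017):
    """Estimate the rejection probability and average overbooking cost per flight."""
    rng = random.Random(seed)
    total_cost = total_rejected = total_shows = 0
    for _ in range(runs):
        # Each ticket holder shows up independently with probability p_show.
        shows = sum(rng.random() < p_show for _ in range(tickets))
        rejected = max(shows - seats, 0)   # passengers turned away
        empty = max(seats - shows, 0)      # seats left unfilled
        total_cost += rejected * cost_reject + empty * cost_empty
        total_rejected += rejected
        total_shows += shows
    return total_rejected / total_shows, total_cost / runs

p_reject, avg_cost = simulate_overbooking()
print(f"P(rejection) = {p_reject:.3f}, average cost = ${avg_cost:.2f}")
```

Re-running this with different values of `tickets` is exactly the optimization step described above.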

Queueing: As an example of how Monte Carlo simulation can be used in
queueing analysis, consider a single server queue with normally distributed
interarrival times and service times (since these times are not exponentially
distributed, we cannot use the M/M/1 relationships discussed in Section
6.2.3). The mean interarrival time is 1 hour with a standard deviation of 0.3
hour, and the mean service time is 0.8 hour with a standard deviation of
0.25 hour.
(1) What is the mean waiting time in the queue?
(2) What is the waiting time distribution?
In this example, the parameters are the arrival rate, λ, and the mean service
time, 1/µ; the decisions, or controllable inputs, are the values for those
parameters; and the probabilistic inputs are the distributions of interarrival
times and service times. The output is the customer waiting time in the
queue.
To build a model to simulate the waiting times, we let Wn, Yn, and Sn be
the waiting time, interarrival time and service time of the nth customer (n ≥
1) respectively. Assuming that the system begins empty, we have the
following simplified model:

W1 = 0 and Wn+1 = max{0, Wn + Sn − Yn+1}, n ≥ 1.    (6.9)

To visualize this, we define two additional variables, An and Fn, to be the
arrival time and finishing time for the nth customer, and let x+ = max{0,
x}. Table 6.4 shows the computations for the waiting time and finishing
time for each customer.
Substituting the equation for F1 into the equation for W2, and noting
that the interarrival time Y2 = A2−A1, we have the simplified equation for
Wn shown in Equation (6.9).

Table 6.4: Computations for Normal Interarrival and Service Times.


Figure 6.17: Microsoft Excel Spreadsheet for Queueing Simulation.

This model can be easily implemented in Microsoft Excel, where Yn
and Sn are generated using the function shown in Table 6.3 for normally
distributed interarrival times and service times:
NORMINV(RAND( ), 1, 0.30) and NORMINV(RAND( ), 0.8, 0.25),
respectively. A screenshot of the Microsoft Excel spreadsheet is shown in
Figure 6.17.
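The recursion in Equation (6.9) can also be sketched in Python (a minimal illustration; negative normal draws are truncated at zero here, a small modeling assumption of ours):

```python
import random

def simulate_waits(n=1000, mean_ia=1.0, sd_ia=0.3,
                   mean_s=0.8, sd_s=0.25, seed=2017):
    """Simulate queue waits via Equation (6.9): Wn+1 = max{0, Wn + Sn - Yn+1}."""
    rng = random.Random(seed)
    draw = lambda mu, sigma: max(0.0, rng.gauss(mu, sigma))  # truncate at zero
    waits = [0.0]                       # the first customer never waits
    for _ in range(n - 1):
        s = draw(mean_s, sd_s)          # service time of the current customer
        y = draw(mean_ia, sd_ia)        # interarrival time of the next customer
        waits.append(max(0.0, waits[-1] + s - y))
    return waits

waits = simulate_waits()
print(f"mean waiting time = {sum(waits) / len(waits):.2f} hours")
```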
The waiting time distribution (Question (2)) is shown in Figure 6.18, a
histogram of the waiting times over 1,000 simulated customers, and the
mean waiting time (Question (1)) is estimated to be 0.19 hour.

6.2.5 Strategies for managing waiting lines


As discussed in Section 6.2.2, excessive waiting times often result in some
reneging by customers, and therefore reduction in demand. Similarly, if
excessive waiting is known or observed by potential customers, they may
reconsider their need for service and balk. Under the pressure of long wait
times, service providers may speed up, spending less time per customer in
order to reduce overall service time, but also reducing quality in the
process. Sustained pressure to rush may induce the service providers to
eliminate time-consuming tasks and perform the bare minimum by cutting
corners. In summary, excessive waiting has a negative impact on customer
satisfaction, demand volume, and service content and quality. Waiting lines
need to be managed so that excessive waiting times can be avoided.

Figure 6.18: Distribution of Waiting Time from Queueing Simulation.

Waiting lines can be managed in several ways:


• Manage arrival, or demand variability by using appointment and
reservation systems.
• Manage customer arrival rates, e.g., by informing customers of current
congestion levels and congestion pricing.
• Manage capacity, e.g., by adjusting capacity (service rate and/or
staffing level) to meet demand.
• Manage the psychology of waiting by making waiting more pleasant.
Appointment scheduling and reservation systems let customers know
when, where, and from whom they will receive services, and are widely used
in practice in, for example, doctors’ offices, hotels, flights, professional
services, restaurants, and auto services. The purpose is to manage the
arrival variability, so as to reduce waiting time and improve customer
satisfaction. It also allows service providers to plan daily work, as they can
reschedule appointments from busy hours to less busy hours. Thus
appointment scheduling can help providers “smooth” the workload (within
days and across days) and better utilize service capacity. In practice,
however, appointment scheduling is not just filling one’s timetable with
people; the providers also need to manage no-shows (much like the no-
shows in the airline overbooking example).
To deal with no-shows, one can either utilize operational strategies or
behavioral intervention. The most popular operational strategies include:
• The Bailey–Welch schedule: scheduling more patients to arrive at the
beginning of the day.
• Overbooking: scheduling more appointments than available time slots.
The popular behavioral interventions include:
• Reminder systems, such as text messages, phone calls or emails.
• Financial incentives, such as transportation vouchers.
• Monetary penalties, such as no-show fees.
• Dismissal of “frequent offenders” from service.
These interventions typically have a positive effect, but cannot eliminate
no-shows altogether.
In addition to arrival variability, customer arrival rates can also be
managed by informing customers of current congestion levels (e.g.,
electronic traffic information), and instituting congestion pricing at peak
hours (e.g., higher tolls during rush hours).
Capacity management is a deep and broad topic which will be
discussed more thoroughly in Section 6.3.
If we cannot reduce waiting time, we may be able to make customers
feel better while they wait by managing their expectations, by being fair
(social justice),4 by making the waiting environment fun and interesting,
and by providing feedback about delays to the customers.
In fact, being fair to customers may often be more important than
cutting the waiting time. Customers resent seeing someone who arrived
later, but received service before them; this is the reason FCFS is the most
popular order of service. For instance, a report on the fast food industry
showed that customer satisfaction in certain single-queue Wendy’s
restaurants is higher than many multi-queue Burger King and McDonald’s
restaurants despite the latter’s average waiting time being only half of the
former. It is believed that the Wendy’s customers prefer the longer queue
with the guaranteed FCFS service order to the multi-line situation with a
high chance of social injustice.5
An example of social injustice in the multi-line situation takes place
when additional cash registers are opened in a retail store when the
checkout lines become too long. This may be unfair to customers waiting in
the existing lines because customers arriving later have a higher chance of
being checked out more quickly by the newly-opened lines, and thus the
system operates almost in a last-come first-served manner.
Waiting times can be made more pleasant by providing imaginative
lobby design options, such as televisions, computers, mirrors, Wi-Fi
connections, vending machines, etc. Some restaurants hand out menus to
customers while they wait in line; coffee shops may take orders while
customers are waiting during busy hours.
Waiting can also be made more tolerable if information is provided to
customers on how much longer they will have to wait. In fact, customers usually
“feel better” about waiting in queues when they can estimate their waiting
time, and “feel worse” with uncertain and unexplained waits. Examples can
be found in elevator floor signals, credit card customer service telephone
lines, Department of Motor Vehicle waiting information displays, and
doctors’ office receptionists informing waiting patients of emergency cases.
In summary, this section has provided the following insights into
waiting lines and their management:
• Waiting is the result of limited capacity and randomness, and can be
modeled by queueing theory.
• Simply setting the capacity to the average demand in designing a
service system may not work because variability in both demand and
service time can lead to waiting times and queue lengths that grow
without bound.
• System design, such as adjusting the number of servers and redesigning
customer routes, can have a big impact on reducing waiting times.
• The psychology of waiting can be a powerful tool to “manage”
customers’ waiting experience.

6.3 Capacity Management


6.3.1 Strategies for capacity management
To manage capacity for service operations, we must first quantify the costs
associated with mismatched supply and demand. If capacity is greater than
demand, we have the cost of underutilized capacity, such as idle staff,
equipment, or facilities, all of which require investment, resulting in costs,
but do not bring in revenue when underutilized. On the other hand, if
capacity is smaller than demand, we have the cost of overutilized capacity,
such as overtime salaries, long waiting times for customers who may opt to
leave for other providers, and loss of morale among overworked employees.
Capacity planning must balance the economic trade-off between the
cost of service (or underutilization cost) and the cost of waiting (or
overutilization cost), as shown in Figure 6.19. Adding both costs together
results in a convex total cost curve, TC, for the organization, enabling us to
identify a service capacity which yields a minimum total combined cost, in
dollars per unit time, as shown in Equation (6.10).

TC = cs × s + cw × L,    (6.10)

where
cw = cost of waiting in dollars per customer per unit time,
cs = cost of service in dollars per server per unit time,
s = number of servers, and
L = average number of customers in the system.
Figure 6.19: Economic Trade-Off in Capacity Planning.

To illustrate, suppose a repair shop has Poisson, or random, input with
a mean arrival rate of two machines per hour. Service time is exponential
with a mean of 0.4 hours. The cost of providing each repair person is $24
per hour, and it is estimated that the cost of having a machine idle is $200
per hour. To find the number of repair people to assign in order to minimize
cost, we have λ = 2/hour; 1/µ = 0.4 hours, so µ = 2.5/hour; cw =
$200/hour/machine; cs = $24/hour/repair person.
Using Equation (6.10), starting with s = 1, we find L = 4 machines, so
that TC = $824/hour. From the shape of the total cost curve in Figure 6.19,
we know that once the total cost reaches a minimum, it will continue to
increase. We can thus proceed, by trial and error, until we identify the
optimal number of servers, s, which minimizes the total cost. For s = 2, L =
0.951 machines, so that TC = $238.20/hour. Since the total cost curve is
decreasing, we continue calculating the total cost, as shown in Table 6.5.
We see that total cost is minimized when we have three repair people in
the system.

Table 6.5: Optimal Number of Servers to Minimize Total Cost/Hour.


Notice another important insight: while we do reduce cost significantly
when we increase the number of servers from 1 to 2 because of the decrease
in the average number of machines in the system, the subsequent changes in
L depend very little on the number of servers — in fact, the value of L is
approaching the ratio λ/µ = 0.8, and adding servers only increases the
service cost without significantly improving the average number of
machines in the repair shop. Thus, while we may have control over arrival
rate, service rate, and number of servers, we cannot necessarily improve the
performance measures for a queueing system by improving any one of these
three inputs. In this example, if we wish to reduce the average number of
machines in the system further, increasing the number of servers will not
achieve this; we would have to either decrease the arrival rate (for example,
in a repair shop, by replacing aging equipment, or instituting a program of
preventive maintenance) or increase the service rate (for example, by
retraining the current operators, or hiring more skilled servers).
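The trial-and-error search is easy to automate. The sketch below (our own helpers, with L computed from the Erlang C formula) uses the repair-shop parameters, taking cs = $24/hour per repair person and cw = $200/hour per machine, and stops once the convex cost curve turns upward:

```python
from math import factorial

def mms_L(lam, mu, s):
    """Average number in an M/M/s system: L = Lq + lam/mu (via Erlang C)."""
    a = lam / mu
    rho = a / s
    num = a**s / (factorial(s) * (1 - rho))
    p_wait = num / (sum(a**k / factorial(k) for k in range(s)) + num)
    return p_wait * rho / (1 - rho) + a      # Lq + offered load

def best_servers(lam, mu, cw, cs, max_s=20):
    """Trial-and-error search for the s minimizing TC = cs*s + cw*L; since
    the total cost curve is convex, stop as soon as the cost starts rising."""
    best = None
    for s in range(1, max_s + 1):
        if lam >= s * mu:                    # unstable: queue grows without bound
            continue
        tc = cs * s + cw * mms_L(lam, mu, s)
        if best is not None and tc > best[1]:
            break
        best = (s, tc)
    return best

s_opt, tc_opt = best_servers(lam=2, mu=2.5, cw=200, cs=24)
print(s_opt, round(tc_opt, 2))   # three repair people minimize total cost
```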
In addition to balancing the cost of service and the cost of waiting,
there are other effective strategies for increasing efficiency, such as:
• Building safety capacity in response to peak demand (emergency
personnel, rush hour express).
• Better systems designs, like pooling of servers, in which we cross-train
staff for multiple roles (cashiers and stockers in supermarkets; nurses
working in multiple service lines; and airlines sharing gates, ramps,
baggage-handling equipment and ground personnel).
• Increasing customer self-service (customers disposing of trash in fast
food restaurants; buffets).
• Smart planning and scheduling of capacity by peak and non-peak hours,
as discussed in Section 6.3.2.

Table 6.6: Minimum Number of Nurses Required per Nightshift at a Hospital.


6.3.2 Quantitative tools for staff planning and scheduling
In this section, we illustrate quantitative tools for staff planning and
scheduling with an example of nurse staffing and an example of transport
services planning at the Mayo Clinic.
Nurse Staffing. A hospital wishes to schedule weekly nightshift for its
nurses. Based on last year’s patient history, it requires a minimum number
of nurses each day on a weekly basis, as shown in Table 6.6.
Every nurse works 5 days in a row. How many nurses are needed at a
minimum to meet the demand?
Clearly, the hospital needs at least 28 nurses to cover the peak demand
over the weekend. But since each nurse works at most 5 days, the hospital
needs more than 28 nurses to cover all 7 days. On the other hand, because
each nurse works more than 1 day, the number of nurses needed should be
smaller than the sum of the daily requirements for the entire week. To
determine the minimum number of nurses required at the hospital, we can
use a linear programming (LP) formulation. We define the decision
variables to be:
x1 = number of nurses starting their workweek on Monday, so they work
Monday–Friday.
x2 = number of nurses starting their week on Tuesday, so they work
Tuesday–Saturday.
…
x7 = number of nurses starting their week on Sunday, so they work
Sunday–Thursday.
To determine how many nurses we need to schedule on Monday, we
note that the nurses starting their week on Monday work on Monday.
However, nurses starting their week on Tuesday or Wednesday do not work
on Monday because they work Tuesday–Saturday or Wednesday–Sunday,
respectively. Following the same logic, we can formulate the LP model
shown in Figure 6.20.

Figure 6.20: LP Model for Nurse Nightshift Staffing.
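The coverage logic behind these constraints — a nurse starting on day j is on duty on days j through j + 4 (mod 7) — can be sketched as follows. The start-day counts shown are hypothetical, since Table 6.6's demand figures are not reproduced here:

```python
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def covers(start, day):
    """True if a nurse starting on day `start` works on `day` (5 days in a row)."""
    return (day - start) % 7 < 5

def staff_on(day, x):
    """Nurses on duty on `day`, given x[j] = number starting their week on day j."""
    return sum(x[j] for j in range(7) if covers(j, day))

# Hypothetical start-day counts (NOT the book's optimal solution):
x = [4, 5, 5, 0, 8, 8, 0]
for d, name in enumerate(DAYS):
    print(name, staff_on(d, x))
```

The LP simply requires staff_on(d, x) to be at least the day-d demand for every d, while minimizing the sum of the x values.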

Solving this problem using the methods discussed in Chapter 3, the


optimal solution shows that we need 30 nurses to meet all the requirements.
Transport Services Planning at the Mayo Clinic.6 Because the Mayo
Clinic has isolated buildings in Rochester, Minnesota, it needs transport
services to ensure patients’ on-time arrivals to the next location and reduced
waiting times and delays for expensive resources. Between 7,000 and 9,000
requests are received per month for transport services. The scheduling of
transporters is based on “experience,” which sometimes leads to long
waiting times for patients and negative financial consequences.
To determine the staffing requirements in transport services, we first
forecast the demand, based on historical data, by day and by hour, and then
calculate the required staff during each period based on queueing theory
(using an M/M/s model).
Demand forecasting. The number of requests for transportation
services varies by month, day of the week, and hour of the day. Figure 6.21
shows, based on historical data, that the largest number of requests occur on
Wednesdays, and, within each day, the number of requests is low in the
early morning but quickly reaches a peak in the late morning, remains quite
flat between noon and early afternoon, and then drops sharply towards the
late afternoon.

Figure 6.21: Requests for Transport Services at the Mayo Clinic by Hour and Week.

Table 6.7: Number of Transport Requests per Hour at the Mayo Clinic.

Specifically, the hourly rate of requests for a typical day is shown in


Table 6.7.
Staff requirement estimation. To estimate the required number of staff (the
transporters), we use an M/M/s queueing model, where the input is the
expected hourly demand and the service rate. Historical data indicates that a
transporter can handle an average of 3.69 requests per hour. Thus, the
service rate is µ = 3.69/hour. There are essentially no waiting room limits.
The M/M/s queueing model allows us to find the minimum number of
transporters needed in order to satisfy a multi-layer waiting time
requirement given the service requirement that at least 75% of all patients
should wait less than 2 minutes, 85% of all patients should wait less than 5
minutes, and 95% of all patients should wait less than 10 minutes.
In order to determine the minimum number of servers that will yield
the required minimum probabilities that the average waiting times, Wq, will
be less than the threshold levels specified, we use the QTP function
QTPMMS_MinServers(Threshold Time, Service Level, Arrival Rate,
Service Rate, Queue Capacity), where the threshold times are 0.033 hours
(2 minutes), 0.083 hours (5 minutes), and 0.167 hours (10 minutes); the
associated service levels are 0.75, 0.85, and 0.95; the arrival rates for each
hourly period are given in Table 6.7, and the service rate is 3.69/hour. For
example, during the 8–9 am period, where the arrival rate is 31.7
requests/hour, Figure 6.22 shows that the minimum number of transporters
needed to meet the waiting time requirement for a threshold of 2 minutes is
12, for a threshold of 5 minutes is 12, and for a threshold of 10 minutes is
also 12, so the minimum number of transporters to meet the waiting time
requirements for all three thresholds is 12.
In the same way, we can find the minimum required staff in every
hourly time slot, as shown in Table 6.8.
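The same minimum-staffing search can be sketched without QTP, using the standard M/M/s waiting-time tail P(Wq > t) = C × e^(−(sµ−λ)t), where C is the Erlang C probability of waiting (helper names below are our own):

```python
from math import exp, factorial

def erlang_c(lam, mu, s):
    """Probability that an arrival must wait in an M/M/s queue (Erlang C)."""
    a = lam / mu
    rho = a / s
    num = a**s / (factorial(s) * (1 - rho))
    return num / (sum(a**k / factorial(k) for k in range(s)) + num)

def min_servers(lam, mu, targets):
    """Smallest s meeting every (threshold_hours, service_level) pair, using
    the M/M/s waiting-time tail P(Wq <= t) = 1 - ErlangC * exp(-(s*mu - lam)*t)."""
    s = int(lam // mu) + 1            # smallest stable staffing level
    while True:
        c = erlang_c(lam, mu, s)
        if all(1 - c * exp(-(s * mu - lam) * t) >= level
               for t, level in targets):
            return s
        s += 1

# 8-9 am period: 31.7 requests/hour, 3.69 requests/hour per transporter
targets = [(2 / 60, 0.75), (5 / 60, 0.85), (10 / 60, 0.95)]
print(min_servers(31.7, 3.69, targets))   # 12 transporters
```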

6.4 Case Studies


6.4.1 Hillcrest Bank — staffing and scheduling
Staffing and scheduling are not trivial tasks, because employees must be
hired to work for at least some minimum amount of time (e.g., a shift). The
staff at Hillcrest Bank can work 8-hour, 6-hour, and 4-hour shifts. Only a
limited number of employees can work 6-hour and 4-hour shifts. The bank
operates 8 hours per day, between 9 am and 5 pm, and the bank manager,
Alan, must determine how many employees of each type are needed to start
work at each hour with the objective of minimizing staff costs. Alan has
collected data on arrivals and service times during each hourly period and
found that both arrivals and departures occur randomly. He has conducted
an M/M/s analysis and has found that in order to meet predetermined
acceptable service standards, a minimum of six employees will be required
during the hours of 9–11 am and 2–5 pm, and a minimum of eight
employees will be required between 11 am and 2 pm to handle the
somewhat heavier demand during lunch hours at nearby businesses. In
addition, the union contract limits the number of employees working 6-hour
shifts to five per day, and the number working 4-hour shifts to four per day.
Figure 6.22: QTP Software Calculation of Minimum Number of Transporters to Meet Waiting
Time Requirements between 8 and 9 am.

Table 6.8: Minimum Number of Transporters to Meet Waiting Time Requirements.


We define the decision variables as:
xt = number of staff starting their 8-hour shifts at t = 1, 2, … , 8,
yt = number of staff starting their 6-hour shifts at t = 1, 2, … , 8,
zt = number of staff starting their 4-hour shifts at t = 1, 2, … , 8.
Since all employees are paid the same hourly wage, the objective is to
minimize total number of person-hours for the day, subject to the following
constraints:
• For each hour, the total number of employees working must be at least
the minimum required.
• The total number of employees working 6-hour shifts cannot exceed 5.
• The total number of employees working 4-hour shifts cannot exceed 4.
• All decision variables must be non-negative integers.
Questions for Hillcrest Bank Case Discussion:
• Using an integer programming model, how many employees should be
scheduled to begin 8-hour, 6-hour, and 4-hour shifts during each of the 8
hours of operation?
• How many employees will be working during each of the 8 hours?
• If the hourly cost of an 8-hour shift employee is 40% more than the
hourly cost of a 6-hour shift and 4-hour shift employee (because of
benefits which must be paid to “full-time” employees), how would the
problem formulation change and what would the optimal work schedule
be? Comment on the advantages and disadvantages of this solution as
compared to the previous solution.

6.4.2 Brier Health Systems — Centralized Customer Contact Center


Today’s customer experience requires a full service “end-to-end” approach,
from initial contact to post-service follow-up and support. Brier Health
Systems is a regional health care provider consisting of primary care
physicians, hospitals, specialists and therapy care centers. Located in the
Southeastern US, Brier is a growing entity which has acquired and
consolidated large hospital systems and recruited independent physician
groups and care centers. Competing in a complex and growing market
environment subsequent to the deployment of the Affordable Health Care
Act, Brier Health Systems recognizes that customer service at all points of
contact is critical to developing and retaining a loyal and trusting customer
base. Their customer base reflects the diversity of the region in which they
operate, from the young professional millennial to the aging baby boomer.
In order to better engage this broad community, Brier is focusing on a
strategy to differentiate themselves from competitors through an advanced
and convenient customer contact center. Services provided would include
support on questions related to available physicians and scheduling
appointments, diagnosis, and treatment options, as well as insurance claims
support. Currently, Brier utilizes multiple websites, a general phone contact
center and the phone contact centers for their respective network of
providers.
A critical component of the contact center strategy is to optimize the
resources and their effectiveness to support their customers. The multiple
centers are using various technologies and experience varied peak workload
periods. Additionally, patient contact information is difficult to consolidate
from the various centers, limiting Brier’s ability to provide end-to-end
service to their customers.
Questions for Brier Health Systems Case Discussion:
• Would consolidating and harmonizing into one central contact center
help Brier Health Systems achieve its strategic objectives? What are
the benefits in resource management? What are the benefits in data
consolidation?
• Should Brier Health Systems consider advancing its technology base
to include online chat capabilities? Patient registration and email
contact? Backup call centers for overload periods?
• Should Brier Health Systems consider low-cost labor sites for its call
center? What would be the benefits? What would be the risks?

6.5 Exercises
1. Consider an M/M/1 queueing system, where the mean arrival rate is
eight customers per hour, and the single server can serve one customer
in 12 minutes, on average. Calculate
(a) the utilization, ρ,
(b) the average total time a customer spends in the system,
(c) the average time a customer spends in the queue,
(d) the average number of customers in the system,
(e) the average number of customers in the queue,
(f) the probability that an arriving customer will have to wait.
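Each of these quantities has a closed-form M/M/1 steady-state expression. The illustrative Python sketch below (the exercise itself does not require code) makes the stability check explicit; note that with λ = 8 per hour and μ = 60/12 = 5 per hour, ρ = λ/μ = 1.6 > 1, so the steady-state measures in (b)–(f) are undefined until more capacity is added:

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 measures; defined only when rho = lam/mu < 1."""
    rho = lam / mu
    if rho >= 1:
        raise ValueError("unstable: rho >= 1, the queue grows without bound")
    w = 1 / (mu - lam)            # (b) average time in the system
    wq = rho / (mu - lam)         # (c) average time in the queue
    return {
        "rho": rho,               # (a) utilization
        "W": w,
        "Wq": wq,
        "L": lam * w,             # (d) average number in the system (Little's Law)
        "Lq": lam * wq,           # (e) average number in the queue
        "P(wait)": rho,           # (f) arriving customer finds the server busy
    }
```

With a stable pair of rates, say λ = 8 and μ = 10, the helper returns ρ = 0.8, W = 0.5 hour, Wq = 0.4 hour, L = 4, and Lq = 3.2.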
2. The manager of the system in problem (1) decides to add a second
server, who works at the same rate as the current single server.
Calculate the system performance measures for this system under the
two scenarios below:
(a) two separate lines form in front of each server, and customers do
not change lines or leave the system before being served,
(b) one line is formed, where the first person on line proceeds to the
next available server.
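One way to check your answers is to compare the average queue wait under both configurations. The illustrative Python sketch below (using μ = 5 per hour from problem (1)) contrasts two independent M/M/1 lines, each seeing half the arrivals, with a single pooled M/M/2 line computed via the Erlang C formula:

```python
from math import factorial

def mm1_wq(lam, mu):
    """Average time in queue for M/M/1 (requires lam < mu)."""
    return lam / (mu * (mu - lam))

def mms_wq(s, lam, mu):
    """Average time in queue for M/M/s via Erlang C (requires lam < s*mu)."""
    a = lam / mu                                   # offered load
    rho = a / s                                    # server utilization
    p0 = 1 / (sum(a**k / factorial(k) for k in range(s))
              + a**s / (factorial(s) * (1 - rho)))
    c = (a**s / (factorial(s) * (1 - rho))) * p0   # Erlang C: P(wait > 0)
    return c / (s * mu - lam)

lam, mu = 8.0, 5.0
wq_separate = mm1_wq(lam / 2, mu)   # scenario (a): each line sees half the arrivals
wq_pooled = mms_wq(2, lam, mu)      # scenario (b): one shared line, two servers
```

Pooling wins: the shared line prevents one server from idling while customers wait in the other line, so the average queue wait drops from 0.8 hour to roughly 0.36 hour.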
3. A bank with one ATM in the lobby is trying to determine whether to
add additional ATMs. Arrivals occur randomly at an average rate of 22
per hour, and service times are exponential with an average of 2.5
minutes. The bank manager is considering three configurations: one
additional ATM, two additional ATMs, and three additional ATMs. For
each configuration, calculate:
(a) the probability that an arriving customer will have to wait in the
single waiting line,
(b) the average waiting time,
(c) the average number of people waiting in line,
(d) if the lobby has room for only four customers to wait, calculate
parts (a), (b), and (c) for each of the three configurations.
4. In problem (3), the cost of waiting is estimated to be $50 per customer
per hour, and the cost of service is $24 per hour per server. Calculate
the average total cost per hour (= the average cost of waiting per hour
plus the average cost of service per hour) for each of the
configurations (a single ATM, two ATMs, three ATMs, and four
ATMs) and specify the number of additional ATMs the bank should
install to minimize total cost.
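The trade-off in problem (4) can be sketched as follows: staffing cost grows linearly in the number of ATMs while waiting cost falls. One modeling choice in this illustrative Python sketch is an assumption on our part — the $50 per hour is charged against time spent in the system (queue plus service); charging it against queue time only changes the numbers but, for this data, not the optimal configuration:

```python
from math import factorial

def erlang_c(s, lam, mu):
    """P(wait > 0) in an M/M/s queue (requires lam < s*mu)."""
    a = lam / mu
    rho = a / s
    p0 = 1 / (sum(a**k / factorial(k) for k in range(s))
              + a**s / (factorial(s) * (1 - rho)))
    return (a**s / (factorial(s) * (1 - rho))) * p0

def hourly_cost(s, lam, mu, wait_cost, server_cost):
    """Average cost per hour: waiting cost on time in system plus staffing."""
    wq = erlang_c(s, lam, mu) / (s * mu - lam)   # average time in queue (hours)
    l = lam * wq + lam / mu                      # average number in system
    return l * wait_cost + s * server_cost

lam, mu = 22.0, 60 / 2.5                         # 22 arrivals/hr; 2.5-minute service
costs = {s: hourly_cost(s, lam, mu, 50, 24) for s in range(1, 5)}
best = min(costs, key=costs.get)                 # cheapest number of ATMs
```

With this data the single ATM is swamped (ρ ≈ 0.92) and its waiting cost dominates, so the minimum-cost configuration is two ATMs, i.e., one additional machine.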
5. In the YouTube video “Why the other line is likely to move faster,”
https://www.youtube.com/watch?v=F5Ri_HhziI0, the speaker mentions
that in order to have less than 1% of people encounter blocked calls,
we need seven trunk lines. Show how you can get this result by
applying an M/M/s queueing model here. What would be the blocking
probability (i.e., the likelihood that a customer cannot receive service
immediately after making the call) if the system uses only six trunk
lines?
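Blocked calls with no waiting room are the classic Erlang B (M/M/s/s) loss model rather than the delay model used elsewhere in this chapter. The video's arrival rate and call duration are not reproduced in the exercise, so the illustrative Python sketch below supplies only the machinery; plug in the video's offered load a = λ/μ (in Erlangs) to reproduce the seven-trunk answer:

```python
def erlang_b(s, a):
    """Blocking probability for an M/M/s/s loss system (Erlang B),
    computed with the standard numerically stable recursion
    B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, s + 1):
        b = a * b / (k + a * b)
    return b

def min_trunks(a, target=0.01):
    """Smallest number of trunk lines with blocking probability <= target."""
    s = 1
    while erlang_b(s, a) > target:
        s += 1
    return s
```

For example, an offered load of one Erlang requires five trunks to keep blocking below 1%; heavier loads push the requirement up, and dropping one trunk always raises the blocking probability.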
6. A technical services help desk receives, on average, 100 calls an hour,
and each specialist (server) can answer and resolve customer questions
in 15 minutes, on average. Questions arrive randomly, and service
times are exponential. Management wishes to determine the number of
specialists to assign to ensure the following service levels:
(a) 50% of all customers wait less than 5 minutes;
(b) 80% of all customers wait less than 5 minutes;
(c) 90% of all customers wait less than 5 minutes;
(d) Comment on the quality of this operation. What would your
recommendation be to the management?
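For M/M/s, the waiting-time tail has a closed form, P(Wq > t) = C(s, a)·e^-(sμ-λ)t, where C(s, a) is the Erlang C probability of delay. The illustrative Python sketch below searches staffing levels for parts (a)–(c) with λ = 100 per hour and μ = 4 per hour (15-minute calls):

```python
from math import exp, factorial

def erlang_c(s, lam, mu):
    """P(wait > 0) in an M/M/s queue (requires lam < s*mu)."""
    a = lam / mu
    rho = a / s
    p0 = 1 / (sum(a**k / factorial(k) for k in range(s))
              + a**s / (factorial(s) * (1 - rho)))
    return (a**s / (factorial(s) * (1 - rho))) * p0

def frac_served_within(s, lam, mu, t):
    """P(queue wait <= t) for M/M/s: 1 - C * exp(-(s*mu - lam) * t)."""
    return 1 - erlang_c(s, lam, mu) * exp(-(s * mu - lam) * t)

def min_agents(lam, mu, t, target):
    """Fewest specialists meeting the service-level target."""
    s = int(lam / mu) + 1                 # smallest stable staffing level
    while frac_served_within(s, lam, mu, t) < target:
        s += 1
    return s

lam, mu, t = 100.0, 4.0, 5 / 60           # 100 calls/hr, 15-min calls, 5-min target
levels = {p: min_agents(lam, mu, t, p) for p in (0.50, 0.80, 0.90)}
```

Since the offered load is a = 25 Erlangs, at least 26 specialists are needed just for stability; the tighter percentile targets then add servers on top of that floor.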
7. ZF Health is a federally qualified community health center which
served 25,800+ patients with 86,000+ visits in 2014. Its call center
handles appointments, medication, and medical assistance. Currently, a
significant portion of calls are not answered, and the annual patient
survey indicates many complaints about being unable to reach
someone on the phone. The objective of the health center is thus to
adjust staff requirements for the call center so that 99% of all calls are
answered within 2 minutes.
The call center is open from 8 am to 6 pm, with an average of 2
staff members. Based on historical reports, an average call lasted 3:03
minutes, and an average of 358.8 calls come in per day. Use an M/M/s
model to determine the likelihood of a call being answered within 2
minutes. How many staff members need to be on duty to achieve the
goal of answering 99% calls within 2 minutes?
8. A carwash on the side of a highway has room for only four cars in the
queue (queue capacity) in addition to the car being washed (additional
cars would be waiting in the right lane of the highway, and so must
leave rather than patronize the carwash). Cars arrive randomly at an
average rate of 10 per hour, and two teams are employed to wash each
car, with exponential service time averaging 4 minutes.
(a) What is the probability that an arriving car cannot be
accommodated by this carwash?
(b) What is the average number of cars waiting in line to be washed?
(c) What is the average total time spent by each car?
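Because the two teams work together on each car, one reading of this system — an assumption on our part — is a single service channel (μ = 15 per hour) with at most K = 5 cars on site, i.e., an M/M/1/K queue. An illustrative Python sketch:

```python
def mm1k(lam, mu, K):
    """Performance measures for an M/M/1/K queue (K = max cars in the system)."""
    rho = lam / mu
    if rho == 1:
        probs = [1 / (K + 1)] * (K + 1)
    else:
        norm = (1 - rho) / (1 - rho**(K + 1))
        probs = [norm * rho**n for n in range(K + 1)]
    p_block = probs[K]                      # (a) arriving car finds the lot full
    L = sum(n * p for n, p in enumerate(probs))
    lam_eff = lam * (1 - p_block)           # arrival rate of cars actually admitted
    Lq = L - lam_eff / mu                   # (b) cars waiting, excluding the one in service
    W = L / lam_eff                         # (c) Little's Law on admitted cars
    return p_block, Lq, W

p_block, lq, w = mm1k(lam=10.0, mu=15.0, K=5)
```

With ρ = 2/3 the blocking probability comes out just under 5%, and an admitted car spends roughly nine minutes at the carwash in total.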
9. The Washington Place Hotel has 200 rooms. Due to no-shows and last-
minute cancellations, the hotel often accepts reservations for 210
rooms in the hopes of minimizing the probability of empty rooms. If
more than 200 guests arrive, those in excess of 200 are directed to a
nearby competitor and given a discount coupon for a future stay at
Washington Place. Suppose the probability of a no-show or
cancellation is 8%; the cost of an empty room is $85; and the cost of
turning away a guest (in terms of loss of good will and the cost of a
discount coupon for a later visit) is estimated to be $95.
Use Monte Carlo simulation to determine:
(a) the probability that an arriving guest will have to be directed to a
competitor,
(b) the probability that fewer than 200 rooms will be filled on a given
night,
(c) the average cost to the hotel owner of this overbooking policy.
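A minimal Monte Carlo sketch in Python (the trial count and seed are arbitrary choices on our part). It reports the fraction of nights on which at least one guest is turned away; part (a) can also be read per guest, in which case you would divide total turn-aways by total arrivals instead:

```python
import random

def simulate(nights=20_000, booked=210, capacity=200,
             p_noshow=0.08, empty_cost=85, turnaway_cost=95, seed=42):
    """Simulate many nights of the 210-reservation overbooking policy."""
    random.seed(seed)
    overflow_nights = underfilled_nights = total_cost = 0
    for _ in range(nights):
        # Each reservation shows up independently with probability 0.92.
        shows = sum(random.random() >= p_noshow for _ in range(booked))
        if shows > capacity:
            overflow_nights += 1
            total_cost += (shows - capacity) * turnaway_cost
        elif shows < capacity:
            underfilled_nights += 1
            total_cost += (capacity - shows) * empty_cost
    return (overflow_nights / nights,        # (a) fraction of nights with turn-aways
            underfilled_nights / nights,     # (b) fraction of nights under capacity
            total_cost / nights)             # (c) average nightly cost

p_turn_away, p_underfilled, avg_cost = simulate()
```

Since expected show-ups are 210 × 0.92 ≈ 193, most nights leave a few rooms empty; overflow nights are rare but costly, which is the tension the policy is meant to balance.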
10. A small private car service operator with one limousine has found that
during late-night hours, the time between customer requests is
normally distributed, with a mean of 30 minutes and a standard
deviation of 4 minutes. The average ride is 15 minutes in duration,
normally distributed with a standard deviation of 3 minutes.
(a) What is the average time spent by a customer who calls to request
a limo?
(b) What is the average number of customers waiting for a limo?
(c) What is the waiting time distribution?
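With normally distributed (not exponential) interarrival and service times this is a G/G/1 queue, for which exact formulas are not available; a standard approach — our choice here, not something the exercise prescribes — is the Kingman heavy-traffic approximation. Part (c), the waiting-time distribution, typically requires simulation rather than a formula. An illustrative Python sketch:

```python
def gg1_wq_approx(mean_ia, sd_ia, mean_s, sd_s):
    """Kingman approximation for the average queue wait in a G/G/1 queue:
    Wq ~= (rho/(1-rho)) * ((ca^2 + cs^2)/2) * mean service time."""
    rho = mean_s / mean_ia                  # utilization
    ca2 = (sd_ia / mean_ia) ** 2            # squared CV of interarrival times
    cs2 = (sd_s / mean_s) ** 2              # squared CV of service times
    return (rho / (1 - rho)) * ((ca2 + cs2) / 2) * mean_s

wq = gg1_wq_approx(30, 4, 15, 3)            # (a) average wait for a limo, in minutes
lq = wq / 30                                # (b) Little's Law: Lq = lam * Wq
```

The very low coefficients of variation (4/30 for arrivals, 3/15 for rides) make the predicted wait — under half a minute — far shorter than it would be for an M/M/1 queue with the same rates, a reminder that variability, not just utilization, drives waiting.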
11. Consider the Mayo Clinic example in Section 6.3.2; suppose on the
busiest day, Wednesday, we have the data on number of requests
shown in Table 6E11.1.
(a) Use the M/M/s queueing model to determine the minimum number
of transporters required to meet the multi-layer service levels.

Table 6E11.1: Number of Transport Requests per Hour on a Wednesday.


Time Patient Requests per Hour
6–7am 5.8
7–8am 22.6
8–9am 35.7
9–10am 48.4
10–11am 50.1
11–12pm 41.1
12–1pm 39.5
1–2pm 38.7
2–3pm 37.9
3–4pm 33.2
4–5pm 22.0
5–6pm 10.6
6–7pm 2.3

(b) Formulate and solve an integer programming model to determine an
optimal staff schedule if employees can work 4-hour, 6-hour, or
8-hour shifts, and at most two 6-hour shifts and one 4-hour shift are
permitted.
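For part (a), the staffing search can be run hour by hour against Table 6E11.1. The Mayo Clinic service parameters appear in Section 6.3.2 and are not reproduced here, so in the illustrative Python sketch below the 20-minute mean transport time and the single 90%-served-within-5-minutes target are placeholders to be replaced with the actual multi-layer service levels:

```python
from math import exp, factorial

def erlang_c(s, lam, mu):
    """P(delay) in an M/M/s queue; requires lam < s * mu."""
    a = lam / mu
    rho = a / s
    p0 = 1 / (sum(a**k / factorial(k) for k in range(s))
              + a**s / (factorial(s) * (1 - rho)))
    return (a**s / (factorial(s) * (1 - rho))) * p0

def min_staff(lam, mu, t, target):
    """Fewest transporters so that P(request served within t) >= target."""
    s = int(lam / mu) + 1                   # smallest stable staffing level
    while 1 - erlang_c(s, lam, mu) * exp(-(s * mu - lam) * t) < target:
        s += 1
    return s

rates = [5.8, 22.6, 35.7, 48.4, 50.1, 41.1,
         39.5, 38.7, 37.9, 33.2, 22.0, 10.6, 2.3]   # Table 6E11.1, 6am-7pm
MU = 3.0                 # ASSUMED: 20-minute average transport time
staff = [min_staff(lam, MU, 5 / 60, 0.90) for lam in rates]
```

The resulting hourly requirements are the right-hand sides of the shift-covering constraints in the integer program of part (b): each shift type contributes to the hours it covers, and the objective minimizes total labor.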

Endnotes
1. “List of Countries by GDP Sector Composition,” StatisticsTimes.com, 2015. Available at:
http://statisticstimes.com/economy/countries-by-gdp-sector-composition.php.
2. L. Morrow, “Waiting as a Way of Life,” Time, July 23, 1984, p. 65.
3. L. Groeger, M. Tigas, and S. Wei, “ER Wait Watcher: Which Emergency Room Will See You the
Fastest,” ProPublica, May 27, 2015. Available at: https://projects.propublica.org/emergency/.
4. R. Larson, “Perspectives on Queues: Social Justice and the Psychology of Queueing,” Operations
Research, Vol. 35, No. 6, 1987, pp. 895–905.
5. A. Lewin, private communication with R. Larson, 1986, Ibid.
6. D. Kuchera and T. Rohleder, “Optimizing the Patient Transport Function at Mayo Clinic,”
Quality Management in Health Care, Vol. 20, No. 4, 2011, pp. 334–342.
Index

A
aggregate planning, 8, 103, 105
Air Champion, 111
Air Champion outsourcing, 110
Air Champion Outsourcing Case, 113
EnergyBoat, Inc., 103
PowerZoom, 115
PowerZoom Energy Bar Case, 114, 116
Amazon, 1, 145
American Production and Inventory Control Society (APICS), 196
American Royal Financial Inc., 191, 193
analysis of variance (ANOVA), 48
Apple, 18–19

B
batch size, 176
beta probability distribution
mean, 215
project duration, 216
variance, 215–216
binary variables, 93
Big M method, 95
conditional constraints, 94
if-then decisions, 94
k out of n options, 93
BJ, 102
Boeing, 145

C
capacity management, 12, 231, 234, 258, 260
capacity planning, 15, 86, 111, 236
staffing, 236
causality, 48–49
cause and effect, 48
central limit theorem, 216
Challenger space shuttle, 23
channels, 241
Chrysler, 145
collaboration, 155
collaborative planning, forecasting and replenishment (see also CPFR), 60, 65
consolidation, 183
constraints, 89–90
continuous review batch size model
service level model, 176
Type 1 service level, 176
Type 2 service level, 177
continuous-review batch size, 175
correlation, 48–49
Costco, 102
crashing
linear program, 208
linear programming model, 205
critical path method
backward pass, 202–203
critical activities, 200–203, 210–211, 220
forward pass, 202
immediate predecessors, 192–193, 198–199, 202, 207
immediate successors, 202
non-critical activities, 200–201, 204, 210
slack task time, 221
slack time, 200, 210
customer service, 84

D
decision variables, 90–91
Dell, 136
Dell computer, 135
Delphi method, 103
demand and supply planning, 101, 116
demand categorization, 59
demand forecasting, 7, 18, 21, 24, 40–41, 43–44, 60, 64, 103
arithmetic mean, 7, 23, 25–26, 28–29, 31, 35, 41, 53, 55, 63
bullwhip effect, 15, 161, 174–175
causal models, 16
cycles, 20
cyclical, 21
data smoothing factor, 50
Delphi method, 16
dependent, 43
dependent variable, 44, 50
econometric models, 16, 21
exponential smoothing, 7, 19, 29, 36, 38–42, 50, 52, 63
forecasting errors, 175
Holt’s method, 7, 13, 19
Holt’s trend model, 42, 50–51
independent, 43
independent variables, 50
last period demand, 26, 61
last period value, 7, 23, 25, 28–29, 35, 41–42, 63
least squares estimate(s), 44, 46
linear regression, 7, 46–48, 54, 78
mean square error, 57
moving averages, 7, 19, 29–31, 33, 35, 38, 40–42, 63
multiple regression, 50
multiple-step-ahead, 32, 39
naïve, 55
naïve methods, 41
naïve models, 19, 23–25
one-step-ahead forecasts, 30–33, 39
qualitative, 7, 15–16
quantitative, 7, 16
regression, 13, 19, 42, 45–47, 55, 69
regression analysis, 23
regression coefficients, 46, 78
regression model, 55
k-step-ahead forecast, 50
seasonal, 7, 18–20, 39, 52, 138, 141, 170
seasonal adjustment, 53–55, 63
seasonal factors, 14, 19, 53–54, 56
seasonal indices, 7
seasonality, 20–21
seasonally-adjusted forecasts, 56–57
simple regression, 63
smoothing constants, 36, 40, 52
stationary, 7, 13, 18, 23–24, 28, 34, 39, 41, 53, 55
Stay Warm Call Center, 61
Stay Warm Call Center Case, 16, 22
time series, 18–19, 46, 50, 52, 55
time series models, 16
trend smoothing factor, 50
trends, 7, 18–21, 29, 41–42, 45, 47–48, 53, 55
variables, 43
Winter’s method, 7, 57
Winter’s model, 53, 57
demand management, 58–59
demand planning, 82, 103, 114
demand surge, 175
demand–supply mismatch, 66, 70, 72, 82–83, 88
demand–supply planning, 7–8, 85–86, 97, 103
chase strategy, 87, 102
level strategy, 87
offloading, 102
tiered workforce, 102
time flexibility strategy, 87
yield management, 102
distribution, 7, 64, 114
distribution centers, 180

E
economic order quantity (EOQ) model, 9, 147, 149–150, 165, 167, 175
joint, economic order quantity, 154
sensitivity, 150
efficiency, 4, 65, 85, 116, 247
EnergyBoat, 104–106
enterprise resource planning (ERP) system, 64
enterprise resource planning systems, 8
Oracle, 8
SAP, 8
EOQ model with finite delivery rate, 167
production batch time, 167
production cycle time, 167
retailer vs. manufacturer, 169
error, 21, 36–37, 47
exponential service time M/M/s, 241

F
facility location, 7, 81
first-come-first-served (see also FCFS), 239, 241
forecast accuracy, 21, 39, 47, 72
mean absolute deviation (see also MAD), 21–22, 26, 28, 31, 33–35, 39–40
mean squared error (see also MSE), 22, 26, 28, 34–35, 40–41, 44, 47–48, 57
forecast error, 21, 69, 141, 146, 174
forecasting accuracy, 59, 82

G
Gantt chart, 204
GI/G/s
generally distributed arrival and service times, 249
independent arrivals, 249
Google, 1
gross domestic product (GDP), 232

H
Home Depot, 65

I
IBM, 135, 170, 174
IKEA, 102
integer programming, 92, 109
inventory, 59, 84, 104–106, 111–112, 114
inventory management, 4, 7, 9, 82, 133
backlog, 86, 105, 110–112
backordered, 144
backorders, 9, 147, 162–163, 167, 170, 179
base-stock level, 179
carrying, 163
carrying cost, 110, 143
COGS, 170
collaboration, 153
collaborative strategy, 184
consignment with modified VMI, 67
continuous review, 143
cost, 163
cost of goods sold, 170
cycle stock, 136, 139, 147, 181
cycle time, 4, 82, 138, 148–150, 162–163, 165, 168
De-Icier Case, 169, 184
demand surge, 137
discount management policy, 162
discounting strategies, 161
distributor-based control, 68
durable items, 10, 174–175
economic order quantity (EOQ) model, 134, 150
economies of scale, 145–147, 161, 181
EOQ model with planned shortages, 162, 164, 167
finished goods, 134
fill rate, 165, 177, 179
forward-buy, 162
forward-buy stock, 139, 161
goodwill, 145
holding, 163
holding cost rate, 143
holding cost(s), 140, 143, 145–147, 149–150, 165, 175–176
ImportHome LLC, 185
ImportHome LLC Case, 181, 186
inventory performance measures, 140
inventory policy, 146–147
investment-buy stock, 139
joint ordering, 157
joint ordering strategy, 9, 155–156
just-in-time, 4
lead time(s), 66, 72, 141–142, 146, 152, 170, 174–176, 180, 185
life cycle, 146–147, 171
loss, 145
loss of goodwill, 134
lost sales, 134–135, 145–146, 169–171
markdown cost(s), 146, 169, 171
mixed SKU strategy, 9, 155
mountain tent case, 150
mountain tent company, 183
Office Supplies, Inc., 150, 181
Office Supplies, Inc. Case, 182
order cost, 175
order quantity, 148–150, 156, 163, 177
order size, 158
ordering cost, 9, 149–150, 163, 176
ordering frequency, 156
outsource, 146
outsourcing, 112
penalty cost, 110, 144, 162
perishable items, 9
pipeline stock, 138
planned shortage model, 165
planned shortages, 9, 82, 110, 112, 167
point of sale (POS) replenishment, 67
prebuilt stock, 138
quantity discount, 9, 161
quantity discount model, 158
raw materials and supplies, 134
reorder point, 152, 175–177
review cycle, 141, 143
review period, 179
risk pooling, 180
safety stock, 10, 67, 81, 135–136, 138–139, 147, 174–175, 179–180
safety stock cost, 175
safety stock model, 174
semi-finished, 140
semi-finished products, 134
service level, 145
service level requirement, 178
service requirement(s), 145, 147, 174–176, 180
service time, 145
shelf-life, 162, 174
shipping capacity, 152
shortages, 105, 144, 162
shortage penalty, 165
shortage penalty cost, 163
speculative-buy stock, 139
stock-outs, 144–145, 162, 174, 176
subcontracting, 112
technology obsolescence, 162
transportation capacity, 167
vendor managed inventory (VMI), 67
work-in-process (WIP), 134, 140
inventory performance measures
average flow time, 140
inventory turnover rate(s), 140–141
number of turns, 140
inventory policy, 133, 140, 170, 174

L
last period demand, 62
law of parsimony, 29
law of succinctness, 29
life cycle, 139, 142–143, 169–170, 174
linear program, 90
linear programming, 8, 88, 100, 113
constraints, 108
decision variables, 89, 106, 112
Microsoft Excel Solver, 9, 97, 121
objective function, 108
objective function coefficient ranges, 127
post-optimality analysis, 127
reduced cost, 125
right-hand side ranges, 128
sensitivity analysis, 100, 125
shadow price, 100, 126
Liz Claiborne, 135
Loews, 65
logistics, 14, 64, 135, 182, 185
loss function, 179
LP model, 89–91, 106, 112, 264

M
M/M/s
exponential interarrival times, 241, 249
exponentially distributed service times, 249
random arrivals, 241
random departures, 249
make-to-order, 165
marketing strategies, 7
mathematical programming, 87
mixed integer programming, 92
Monte Carlo simulation, 12, 249, 251
overbooking, 252
queuing, 254
random variables generation, 251

N
negotiation, 7
network
activity-on-node, 220
activity-on-node representation, 198
arcs, 198
branches, 198
nodes, 198, 220
path(s), 200–201, 212
Newsboy Model, 170
Newsvendor Model, 9, 170–171
critical ratio, 172–174
overage cost, 171–174
underage cost, 171–172, 174
normal density function
mean, 216
standard normal score, 217
variance, 216
Nortel, 15

O
Occam’s Razor, 29
objective function, 90
one-step-ahead forecast, 29
one-period model, 172
operations management, 2–3
operations management vs. supply chain management, 2
ordering (or production) cost, 144
outsourcing, 7, 104
outsourcing/subcontracting, 81, 106

P
Parkinson’s law, 219
periodic-review base-stock model, 178
base-stock level, 178, 180
target inventory level, 178
Type 1 service level, 178
pharmaceutical supply chain, 4
biologics, 6
product expiry, 6
reverse logistics, 6
pipeline stock, 136
planning horizon, 8, 86, 105, 112
prebuilt stock, 136
Procter & Gamble, 135
procurement, 64, 84, 111, 114
production planning, 88
project duration
mean, 216
variance, 217
project management, 7, 10, 191–192, 196–197, 222
critical path(s), 200–201, 203–205, 210–213, 216, 221
cost-benefit analysis, 211
crashing, 193, 205, 208
critical chain project management (see also CCPM), 191, 219–220
critical path method (see also CPM), 10, 191–192, 199, 201, 218–220
direct costs, 210
execution, 197
expected deadline, 209
expedited duration, 193–194
expediting activity time, 208
expediting plan, 210
Gantt chart, 220
human factors, 218–219
indirect costs, 210, 213
matrix organizational structure, 196
Microsoft Project, 221–222
Microsoft software, 220
network, 199–201
PDS Company, 222
planning, 197
product launch process, 222–223
program evaluation and review technique (see also PERT), 11, 191, 214, 216–219
beta probability distribution, 215
probablistic activity times, 214
uncertain activity durations, 214
project duration, 192, 194, 205, 209–213, 217
project planning, 197
task durations, 195, 202
three-estimate approach, 194–195
time–cost analysis (TCA), 10, 191
project planning
execution, 198
network, 198
planning, 198

Q
quantity discount model, 134
all-unit discount, 158
discount category, 159
queueing, 11
cost of service, 12, 260, 262
cost of waiting, 12, 260

R
random, 21
random arrivals, 249
reorder point (Q–R) model, 175
residual error, 21
resilience, 3
resource allocation models, 103
responsiveness, 3–4, 82, 84, 116
risk pooling
square root law, 180–181

S
safety stock model, 185
sales and operations planning, 81
salvage value, 171–172
Sam’s Club, 102
service management, 7, 11, 231
Bailey–Welch schedule, 258
Brier Health Systems, 269
congestion pricing, 258
cost efficiency, 234
Hillcrest Bank, 266
no-shows, 258
overbooking, 258
quality of service, 234
service capacity, 232, 236, 241, 258
service management economics, 232
service operations, 232
system capacity, 236
shelf-life, 143
simulation models, 251
Sport Obermeyer, 146
spurious correlation, 48
staffing, 263–264, 266
standard normal density function, 173
stochastic model, 251–252
subcontract, 111
supply chain profitability, 3, 7, 12
supply planning, 82, 84, 114
S&OP, 101–103, 115–116

T
Target, 16
time–cost analysis
project management, 205
Toyota, 4
transformation process, 2
transport services planning, 264

U
US trade, 1

W
waiting line management, 11, 231, 234
arrival process, 240
arrival rate, 240–241, 262
average arrival rate, 236
average number of customers in the system, 242–243
average queue length, 236, 242
average service time, 235, 237
average time, 243
average time in the queue, 242
average time in the system, 242
average total time, 235
average total time in the system, 248
average waiting time, 235–236
balk, 256
balking, 239, 241
channels, 235
congestion, 238, 241, 244–245
demand forecasting, 264
demand surge, 238
equilibrium, 242
finite space with no waiting room, 248
first-come-first-served (see also FCFS), 239, 241
first-come-first-served service order, 259
GI/G/s, 249
in the queue, 243
in the system, 243
interarrival time, 236–237
jockeying, 240–241
Kendall notation, 241
last-in-first-out (LIFO), 239
Little’s Laws, 244
M/M/s, 241, 244, 249, 264–265, 268
multi-stage queues, 239
multiple server queueing, 239
multiple servers in series, 239
number of servers, 241
order of service, 239, 259
overflow, 243, 250
parallel single server queues, 239
performance measures, 241, 262
priority service (tagged service), 239
probability of delay, 236
queue capacity, 249
queue lengths, 231
queueing model, 240
queueing systems, 235, 239, 251
queueing theory, 236, 264
Queueing ToolPak (QTP), 243
queues, 234, 238
random arrivals, 236
reneging, 239, 241, 256
self-service, 262
service process, 241
service rate, 236, 241, 262
service times, 237
single server queue, 239
social injustice, 259
social justice, 258
steady state, 242
system performance, 243, 251
system performance measures, 236, 242
traffic, 244
utilization, 242, 245
waiting lines, 234
waiting room capacity, 248
waiting times, 231, 235, 258, 264
Wakefern, 65
Walmart, 16–17, 135
Wegmans, 65
Winter’s method, 14
workforce level, 104
workforce planning, 7

X
Xenon Products Company, 63–64
