LSS GB Complete Slides
Lean Six Sigma Green Belt
Mohammad Ali Toosy
Lean Six Sigma Master Black Belt
Trap of Success (Hays, 2018)
What are your reasons?
In Lean Six Sigma, avoiding the tendency to jump to conclusions and make assumptions about things is crucial. However, your reasons for getting involved may include one of the following:
• You’re contemplating applying Lean Six Sigma in your business or organisation, and you need to understand what you’re getting yourself into.
• Your business is implementing Lean Six Sigma and you need to get up to speed. Perhaps you’ve been lined up to participate in the programme in some way.
• Your business has already implemented either Lean or Six Sigma and you’re intrigued by what you might be missing.
• You’re considering a career or job change and feel that your CV or resume will look much better if you can somehow incorporate Lean or Six Sigma into it.
• You’re a student in business, operations or industrial engineering, for example, and you realise that Lean Six Sigma could help shape your future.
(Morgan & Brenig-Jones, 2016)
Learning Objectives
The learning objectives for this training are to enable you to:
• Understand why a company should utilize the Six Sigma and Lean
methodologies
• Learn the roles of measurement and statistics in Six Sigma
• Gain exposure to a range of tools, from simple to advanced
• Understand the value of combining Six Sigma with Lean methodology
• Understand when and why to apply Six Sigma and Lean tools
• Engage in a step-by-step application of the methodology and tools
(Fliedner, 2015)
1. Make quality a full and equal partner, with innovation starting from the inception
of product development.
2. Emphasize getting high-quality product design and process matches upstream,
before manufacturing planning has frozen the alternatives.
3. Make full-service suppliers a quality partner at the beginning of design rather
than implementing a quality surveillance program later.
4. Make the acceleration of new product introduction a primary measure of the
effectiveness of a company’s quality program.
Philip Crosby (1926-2001)
He was one of ITT’s first vice presidents of corporate
quality, and gained prominence in the quality field after
publishing Quality Is Free in 1979. Subsequently, he
founded Philip Crosby Associates, a quality management
consulting firm, and the Quality College, an institute that
provides quality training for top management.
Lean Transformation
(Kelly, 2018)
• So, what is a “transformation”? Generically, a transformation is a thorough or dramatic change in the form or appearance of an item.
• raising delivery and service levels – 100% on-time-in-full (OTIF), 100% supply reliability, transparent collaboration;
• reducing costs – win–win cost reduction sharing, managing supply chain inventory and cost;
• reducing response time – shortening process lead time, shortening overall order fulfillment times, shortening replenishment times.
(Shaffie & Shahbazi, 2012)
Since the 1980s, organizations have attempted to introduce quality methodologies with varying degrees of success. The reasons for their success or failure are numerous, but one reason stands out more than any other: the lack of a sustainable quality culture. With Lean Six Sigma (LSS), an organization is setting out on a distinct journey using common operating mechanisms, training, an organizational structure, objectives, and a common language. The fact that LSS is increasingly the approach to quality chosen by both large and small organizations is in large part because it has staying power; it is a proven methodology that has produced measurable financial results over two decades. If an organization and all its various divisions have accepted the introduction of LSS, are all working toward a common strategic goal, and all understand the path to attaining this goal, then the effectiveness of LSS is greatly enhanced.
The Lean Mindset: Ask the Right Question
(Poppendieck & Poppendieck, 2013)
• Start with an inspiring purpose, and overcome the curse of short-term thinking
• Energize teams by providing well-framed challenges, larger purposes, and a direct line of sight between their work and the achievement of those purposes
• Delight customers by gaining unprecedented insight into their real needs, and building products and services that fully anticipate those needs
• Achieve authentic, sustainable efficiency without layoffs, rock-bottom cost focus, or totalitarian work systems
The Purpose of Business emphasizes the principle Optimize the Whole, taking the Shareholder Value Theory to
task for the short-term thinking it produces. The alternative is to Focus on Customers, whose loyalty
determines the long-term success of any business. It is one thing for business leaders to have a vision of who
their customers are, but quite another to put the work systems in place to serve those customers well. In the
end, the front-line workers in a company are the ones who make or break the customer experience.
It turns out that the “rational” thinking behind the Shareholder Value Theory has had a strong influence on the
way workers are treated. It all boils down to Douglas McGregor’s Theory X and Theory Y. Theory X assumes that
people don’t like work and will do as little as possible. Theory Y assumes the opposite: Most people are eager
to work and want to do a good job. The lean principle Energize Workers is solidly based on Theory Y—start with
the assumption that workers care about their company and their customers, and this will be a self-fulfilling
prophecy. The principle of reciprocity is at work here—if you treat workers well, they will treat customers well,
and customers will reward the company with their business.
Energize Workers (Poppendieck &
Poppendieck, 2013)
Energized Workers is based on the work of Mihaly Csikszentmihalyi, who found that the most energizing human experience is
pursuing a well-framed challenge. Energized workers have a purpose that is larger than the company and a direct line of sight
between their effort and achieving that purpose. They strive to reach their full potential through challenging work that requires
increasing skill and expertise. They thrive on the right kind of challenge—a challenge that is not so easy as to be boring and not so
hard as to be discouraging, a challenge that appeals to aspirations or to duty, depending on the “regulatory fit.”
Regulatory fit is a theory that says some people (and some companies—startups, for example) are biased toward action and
experimentation and respond well to aspirational challenges. Other people (and companies—big ones, for example) prefer to be safe
rather than sorry. For them, challenges that focus on duty and failure prevention are more inspiring. But either way, a challenge that
is well matched to the people and the situation is one of the best ways to energize workers.
One of the most important challenges in a lean environment is to Constantly Improve. Whether it is a long-term journey to improve
product development practices or an ongoing fault injection practice to hone emergency response skills, striving to constantly get
better engages teams and brings out the best in people.
Delight Customers (Poppendieck &
Poppendieck, 2013)
Delighted Customers urges readers to Focus on Customers, understand what they really need, and make sure that the right products
and services are developed. This is the first step in the quest to Eliminate Waste, especially in software development, where building
the wrong thing is the biggest waste of all.
Some products present extraordinary technical challenges—inventing the airplane or finding wicked problems in a large data
management system. Other products need insightful design in order to really solve customer problems. Before diving into
development, it is important to Learn First to understand the essential system issues and customer problems before attempting to
solve them.
When developing a product, it is important to look beyond what customers ask for, because working from a list of requirements is not
likely to create products that customers love. Instead, leaders like GE Healthcare’s Doug Dietz, who saw a terrified child approach his
MRI scanner, understand that a product is not finished until the customer experience is as well designed as the hardware and
software.
Great products are designed by teams that are able to empathize with customers, ask the right questions, identify critical problems,
examine multiple possibilities, and then develop products and services that delight customers.
Genuine Efficiency (Poppendieck &
Poppendieck, 2013)
Genuine Efficiency starts by emphasizing that authentic, sustainable efficiency does not mean layoffs, low costs, and controlling work
systems. Development is only a small portion of a product’s life cycle, but it has a massive influence on the product’s success. It is
folly to cut corners in development only to end up with costly or underperforming products in the end. Those who Optimize the
Whole understand that in product development, efficiency is first and foremost about building the right thing.
Two case studies from Ericsson Networks demonstrate that small batches, rapid flow, autonomous feature teams, and pull from the
market can dramatically increase both predictability and time to market on large products. Here we see the lean principles of Focus
on Customers, Deliver Fast, Energize Workers, and Build Quality In at work.
A case study from CareerBuilder further emphasizes how focusing on the principle of Deliver Fast leads to every other lean principle,
especially Build Quality In and Focus on Customers. A look at Lean Startup techniques shows that constant experiments by the
product team can rapidly refine the business model for a new product as well as uncover its most important features. Here the lean
principles of Optimize the Whole, Deliver Fast, and Keep Getting Better are particularly apparent.
Breakthrough Innovation (Poppendieck &
Poppendieck, 2013)
Breakthrough Innovation starts with a cautionary tale about how vulnerable businesses are—even
simple businesses like newspapers can lose their major source of revenue seemingly overnight. But
disruptive technologies don’t usually change things quite that fast; threatened companies are
usually blind to the threat until it’s too late. How can it be that industry after industry is overrun
with disruptive innovation and incumbent companies are unable to respond?
The problem, it seems, is too much focus on today’s operations—maybe even too much focus on
the lean principle of Eliminate Waste—and not enough focus on the bigger picture, on Optimize the
Whole. Too much focus on adding features for today’s customers and not enough focus on potential
customers who need lower prices and fewer features. Too much focus on predictability and not
enough focus on experimentation. Too much focus on productivity and not enough focus on impact.
Too much focus on the efficiency of centralization and not enough appreciation for the resiliency of
decentralization.
Lean Mindset (Poppendieck & Poppendieck,
2013)
• Lean organizations appreciate that the real knowledge resides at the place where work is done, in
the teams that develop the products, in the customers who are struggling with problems. Several
case studies—including Harman, Intuit, and GE Healthcare—show how the lean principles of
Focus on Customers, Energize Workers, Learn First, and Deliver Fast help companies develop
breakthrough innovations before they get blindsided by someone else’s disruptive innovations.
• Developing a lean mindset is a process that takes time and deliberate practice, just like
developing any other kind of expertise. No matter how well you “know” the ideas presented in
this book, actually using them in your work on a day-to-day basis requires that you spend time
trying the ideas out, experimenting with them, making mistakes, and learning.
…than a tactic. (Byrne, 2016) Every person takes ownership for problem-solving and learning in order to…
Thinking (Dahl, 2020)
4. Conduct an experiment
5. Analyze the data to draw conclusions
6. Document the methods and findings and…repeat
It’s a process that systematically challenges everything continuously, and it must be performed throughout the entire Lean enterprise.
Modern Lean
Framework
(Dahl, 2020)
Seven Things That Must Be…
• Understand that leaders are both born and made.
• Expect some degree of failure.
Lean Six Sigma to Reduce Cost (George, 2010)
• Feeding higher-quality leads into the sales funnel at a fraction of the cost.
• Reducing developmental timelines for new products by 20 to 50 percent while nearly eliminating the high cost of defects.
• Slicing away complexity and variability throughout the supply chain to yield 10 to 30 percent cost savings while shortening process lead time by as much as 80 percent.
Lean Six Sigma (George, 2010)
Lean Six Sigma is the synthesizing agent of business performance improvement that, like an alloy, is the unification of proven tools, methodologies, and concepts, which forms a unique approach to deliver rapid and sustainable cost reduction.
Lean v Six Sigma (Bentley & Davis, 2009)
Ironically, Six Sigma and Lean have often been regarded as rival initiatives—Lean enthusiasts noting that Six Sigma pays little attention to anything related to speed and flow, Six Sigma supporters pointing out that Lean fails to address key concepts like customer needs and variation. Both sides are right. Yet these arguments are more often used to advocate choosing one over the other, rather than to support the more logical conclusion that we need to blend Lean and Six Sigma.
Six Sigma
• emphasizes the need to recognize opportunities and eliminate defects as defined by customers
• recognizes that variation hinders our ability to reliably deliver high-quality services
• requires data-driven decisions and incorporates a comprehensive set of quality tools under a powerful framework for effective problem solving
• provides a highly prescriptive cultural infrastructure effective in obtaining sustainable results
• when implemented correctly, promises and delivers $500,000+ of improved operating profit per Black Belt per year (a hard dollar figure many companies consistently achieve)
Six Sigma (Motiwani, 2012)
Six Sigma is a highly disciplined approach used to reduce process variation to such a great extent that the level of defects is drastically reduced to less than 3.4 per million process, product or service opportunities. The approach relies heavily on statistical tools which, though known earlier, were primarily limited to use by statisticians and quality professionals.
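The 3.4 defects-per-million figure can be reproduced from the standard normal distribution, using the conventional assumption of a 1.5-sigma long-term shift. A minimal sketch in Python (the function name and the shift default are illustrative, not from the slides):

```python
from statistics import NormalDist

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Long-term defects per million opportunities for a given
    short-term sigma level, using the conventional 1.5-sigma shift."""
    z = sigma_level - shift
    return (1.0 - NormalDist().cdf(z)) * 1_000_000

# A 6-sigma process yields about 3.4 defects per million opportunities;
# a 3-sigma process yields about 66,807.
print(round(dpmo(6.0), 1))   # → 3.4
print(round(dpmo(3.0)))      # → 66807
```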
Lean Six Sigma (Urdhwareshe, 2011)
Michael George defines Lean Six Sigma as “a methodology that maximizes shareholder value by achieving the fastest rate of improvement in customer satisfaction, cost, quality, process speed and invested capital”. One of the important principles of Lean Six Sigma is that “the activities that cause hindrances in the customer's critical-to-quality (CTQ) requirements and create the longest time delays in any process offer the greatest opportunity for improvement”.
Lean
• focuses on maximizing process velocity
• provides tools for analyzing process flow and delay times at each activity in a process
• centers on the separation of "value-added" from "non-value-added" work with tools to eliminate the root causes of non-value-added activities and their cost
• provides a means for quantifying and eliminating the cost of complexity
(Wedgwood, 2016)
Six Sigma is a systematic methodology to home in on the key factors that drive the performance of a process, set them at the best levels, and hold them there for all time.
Lean is a systematic methodology to reduce the complexity and streamline a process by identifying and eliminating sources of waste in the process—waste that typically causes a lack of flow.
Lean Six Sigma (Wedgwood, 2016)
• The two methodologies interact and reinforce one another, such that percentage gains in Return on Invested Capital (ROIC%) are much faster if Lean and Six Sigma are implemented together. (Some people might question whether ROIC is a valuable metric for service businesses, and the answer is yes: many service businesses—hotels, airlines, restaurants, health care—are very capital intensive. In most other service businesses—software development, financial services, government, etc.—the biggest costs are salaries/benefits, so invested capital is really the "cost of people.")
• In short, what sets Lean Six Sigma apart from its individual components is
the recognition that you can't do "just quality" or "just speed."
(George,
Rowlands &
Kastle, 2005)
The Strategic Imperative of Investing in…
In manufacturing businesses, a significant investment in equipment may be required to improve labor productivity. In contrast, service operations are primarily driven by intellectual capital. According to Warren Buffett, "the best kind of investment to make is one in which a huge return results from a very small increment of invested capital" (Berkshire Hathaway, 1984).
By application of Lean Six Sigma, the numerator of the ROIC equation can be increased without increasing financial investment. At Lockheed Martin's … At Stanford Hospital and Clinics, big savings came from bringing together a group of surgeons without any capital investment at all (details were provided earlier in this chapter). If Buffett likes this kind of investment, so will your shareholders.
The concept of linking Lean Six Sigma efforts to shareholder value is critically important but seldom discussed. If the link isn't made, your organization may realize some gains, but it will be a crapshoot as to whether your investment in Lean Six Sigma will help drive your strategic goals.
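The ROIC arithmetic behind this argument is simple: raise the numerator (operating profit) while holding invested capital constant and the ratio improves. A hypothetical sketch (the dollar figures are illustrative, not from the text):

```python
def roic(operating_profit: float, invested_capital: float) -> float:
    """Return on invested capital: operating profit divided by
    the capital invested to produce it."""
    return operating_profit / invested_capital

# Baseline: $2M profit on $20M of invested capital → 10% ROIC
baseline = roic(2_000_000, 20_000_000)

# Lean Six Sigma raises the numerator with no new capital → 12.5% ROIC
improved = roic(2_500_000, 20_000_000)

print(baseline, improved)   # → 0.1 0.125
```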
Lean Six Sigma for services is about getting results rapidly: the kind of results that can be tracked to the bottom line in support of strategic objectives, the kind that leaves delighted customers wanting to do more business, that creates value for your shareholders, and that energizes employees. (George, 2003)
Lean Six Sigma also incorporates the Six Sigma view of the evils of variation and reduces its impact on queue times. Finally, Lean Six Sigma uniquely attacks the hidden costs of complexity of your offering.
Combine Lean Six Sigma's ability to achieve service improvements with its focus on shareholder value and you have a powerful tool for executing the CEO's strategy, and a tactical tool for P&L managers to achieve their annual and quarterly goals. How you do that is the subject of the rest of this book.
Respect for People
From the start, the promoters of Lean assumed they knew what business leaders wanted. They had hypotheses about leaders, their wants and needs, that turned out to be wrong. Why? Because they lacked a factual understanding of how leaders think and what is most important to them. Simply put, they did not bother to understand the current state of the people who lead organizations. The assumption, still in use today, has proven to be a big mistake. - Bob Emiliani
https://bobemiliani.com/lean-got-a-bad-start/
Lean Six Sigma
• The first principle you need to get when selecting a Lean Six Sigma
project is criticality, meaning you have to select and define a project that
is critical to the satisfaction of both your customers and your business.
Otherwise, it’s not worth your time, effort, or resources.
Why Service Functions Need Lean Six Sigma
1. Service processes are usually slow processes, which are expensive processes. Slow processes are prone to poor quality… which drives costs up… and drives down customer satisfaction and hence revenue. The result of slow processes: more than half the cost in service applications is non-value-add waste.
2. Service processes are slow because there is far too much "work-in-process" (WIP), often the result of unnecessary complexity in the service/product offering. It doesn't matter whether the WIP is reports waiting on a desk, emails in an electronic in-box, or sales orders in a database. When there is too much WIP, work can spend more than 90% of its time waiting, which doesn't help your customers at all and, in fact, creates or inflicts substantial waste (non-value-add costs) in the process.
3. In any slow process, 80% of the delay is caused by less than 20% of the activities. We need only find and improve the speed of 20% of the process steps to effect an 80% reduction in cycle time and achieve greater than 99% on-time delivery.
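The link between WIP and waiting in point 2 is commonly formalized with Little's Law (average lead time = WIP ÷ average completion rate), which the slides do not name explicitly. A minimal sketch, with illustrative numbers:

```python
def lead_time_days(wip_items: float, completion_rate_per_day: float) -> float:
    """Little's Law: the average time an item spends in a process equals
    the work-in-process divided by the average completion rate."""
    return wip_items / completion_rate_per_day

# 200 orders queued, 10 completed per day → each order spends ~20 days in process
print(lead_time_days(200, 10))   # → 20.0

# Halving WIP at the same completion rate halves the lead time
print(lead_time_days(100, 10))   # → 10.0
```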
Core Elements of the Six Sigma…
1) CEO & managerial engagement.
2) Allocate appropriate resources (staff and time commitments) to high-priority projects.
Equipment: The various items needed for the work. Items can be as simple as a stapler or as complicated as a lathe used in manufacturing.
Consider whether you have the right equipment, located in an appropriate and convenient place, and being properly maintained and serviced.
Method: How the work needs to be actioned – the process steps, procedures, tasks and activities involved.
Materials: The things necessary to do the work – for example, the raw materials needed to make a product.
Environment: The working area – perhaps a room or surface needs to be dust-free, or room temperature must be within defined parameters.
Considering the Key Principles
of Lean Six Sigma (Morgan &
Brenig-Jones, 2016)
1. Focus on the customer
2. Identify and understand how the work gets done
3. Manage, improve and smooth the process flow
4. Remove non-value-added steps and waste
5. Manage by fact and reduce variation
6. Involve and equip the people in the process
7. Undertake improvement activity in a systematic way
Lean Strategy harnesses that power and delivers a new way of creating
value from lean. Leading lean experts address popular misconceptions
about the basics of lean/TPS, showing the true purpose of tools,
methods, and attitudes that leverage the intelligence of every
employee doing the work. You’ll learn how to think—and then act—
differently, tapping the power of every person in your organization in a
disciplined manner that generates unparalleled, sustainable success
that is responsive to today’s most pressing challenges.
1. Design robustness: Features need to prove their robustness before being included in the final
product, which means largely relying on known engineering standards and a cautious approach to
innovation. Toyota is known as a “fast follower” because it follows the market in terms of
innovation, and it is very cautious in adding innovative features until they have been fully tested and
mastered. The built-in quality frame applies at all levels of the system from the largest—don’t put
on the market a feature you’re not 100 percent sure of—to the most detailed—don’t pass on to the
next worker a job you’re not 100 percent sure of.
Lean’s promise is that aligning individual success and company success makes for a
better-performing business model. By supporting individual employees in writing
their own story and in helping you with your overall objectives for the business, you
can change the story of the business and, by putting pressure on your competitors,
the story of the industry as a whole. Becoming a market leader through challenging
yourself (and forcing your competitors to catch up) is the key to sustainable and
profitable growth, even in times of heightened disruption. Lean thinking is a
structured method to learn how to do this.
Change your Mind (Balle, Jones, Chaize &
Fiume, 2017)
Lean thinking doesn’t focus exclusively on the study of work (Taylorism), nor on the
study of people (as with motivation programs), nor indeed the study of financial
management. Instead, Lean thinking focuses on looking specifically at
the relationship employees have with their work. How do employees understand
their work? Its purpose? How do they feel about it? How do they cope with
problems that appear? How well do they collaborate with their colleagues? This is
hard. As we look at any work situation, we have been trained to look at one of
these:
• The process: For example, how well does it flow, where are the obstacles, and
what is its cost?
• The people: For example, what is their attitude, are they competent, are they
motivated, how experienced are they, and what are their personality traits?
But we rarely look at how the people think and feel about the job—the invisible
cartoon bubble on the top of their heads that explains what they’re going to do
next.
Change your Mind (Balle, Jones, Chaize &
Fiume, 2017)
Changing the focus of management to seeing how people think about their work and deepening
their relationship with their job redefines the role to quite an extent:
• Is the work environment conducive to doing good work? Or is the friction of day-to-day obstacles
to getting the job done such that people give up on doing the best they can?
• Are teams stable and supportive? Do people look forward to seeing their coworkers in the
morning, and do they feel they can be themselves, drop the “company face,” and discuss issues and
take initiatives with their colleagues without risk of blame or criticism?
• Is there a clear line of sight to the greater purpose and overall plan? Beyond giving meaning to
the job, a clear understanding of desired outcomes (as opposed to immediate output) enables
autonomous decision-making, initiatives in unexpected conditions, and creative thinking for
improvement that contributes to the overall result.
• Can they learn and progress while doing their daily job? Do people find opportunities to practice
the skills they’re interested in or new skills they want to acquire in an environment that recognizes
their effort and hard work and tutors them into deepening their own mastery of their job?
• Do leaders learn from local improvements? And do they demonstrate to all how their efforts
contribute to the common good? Is their contribution and effort recognized and fairly rewarded?
Change your Mind (Balle, Jones, Chaize &
Fiume, 2017)
Towards a Waste Free Society (Balle, Jones,
Chaize & Fiume, 2017)
The Lean strategy involves a revolutionary mindset change. For more
than a quarter of a century, strategy has been shaped by five key
questions captured by Michael Porter:
1. How do you respond to the bargaining power of customers?
2. How do you increase your bargaining power over suppliers?
3. How do you counter the threat of substitute products or services?
4. How do you deal with the threat of new entrants?
5. How do you better jockey for position among current competitors?
Towards a Waste Free Society (Balle, Jones,
Chaize & Fiume, 2017)
Toyota has grown to dominate the automotive market by following a
radically different approach—indeed, a “non-strategy” according to Wall
Street analysts. Toyota’s leaders chose to respond to five different questions:
1. How do you increase customer satisfaction to build brand loyalty?
2. How do you develop individual know-how to increase labor productivity?
3. How do you improve collaboration across functions (and other partners)
to boost organizational productivity?
4. How do you encourage problem solving to better engage employees and
grow human capital?
5. How do you support an environment conducive to mutual trust and
developing great teams to nurture social capital?
Towards a Waste Free Society (Balle, Jones,
Chaize & Fiume, 2017)
This strategic learning occurs at three levels:
• Team-based agility makes it easier to pick up customer signals and respond faster, adapting processes continuously to customers' real-life usage.
• Kaizen activities and managed learning curves create a reflexive learning environment where development topics are tackled deliberately in order to reinvest gains from improvement into technical and teamwork learning.
• These two levels of learning activity build up a strategic capability for learning to learn: how to go to market with experiments and follow them up quickly if they work, or backtrack without losses if they don't, but also the ability to face difficult issues and explore new domains to keep offering new, undiscovered value to customers.
Towards a Waste Free Society (Balle, Jones,
Chaize & Fiume, 2017)
A 20-year study of lean transformation efforts has shown that CEOs who succeed with Lean all share three rare traits:
1. They find a sensei: After experimenting with improvement workshops and
projects, at some point they look for and find a sensei with whom they can work.
2. They accept the exercises the sensei prescribes: Although some of the tasks seem
counterintuitive or do not relate easily to what is currently perceived as urgent,
they agree to explore and find out, and get involved in practical exercises to learn
by doing.
3. They commit explicitly to learning—their teams and themselves: At some point, they look beyond immediate results from kaizen activities (which remain important as a sign of better response by the teams) and they look for learning (whether the experiment has succeeded or failed, what has this taught whom, and what is the next step?).
The Lean Strategy: Using Lean to Create Competitive Advantage,
Unleash Innovation, and Deliver Sustainable Growth
• Indirect payroll. The time devoted by executives, team members, process owners, and others to
such activities as measurement, voice of the customer data gathering, and improvement projects.
• Training and consulting. Teaching people Six Sigma skills and getting advice (from folks like us) on
how to make the effort successful can be a significant investment, too.
• Improvement implementation costs. Expenses to install new solutions or process designs can
range from a few thousand dollars to millions—especially for IT-driven solutions. (But keep in
mind that you’re likely already making these investments, and their payoff may not be very
positive.)
KEYS TO SUCCESS (Cavanagh, Pande &
Neuman, 2014)
• Tie Six Sigma Improvement Efforts to Business Strategy and Priorities
• Link Customers, Process, Data, and Innovation to Build the Six Sigma System
Why?
What?
How?
Who?
BIG QUESTION 1: WHY? (Watson-Hemphill,
2016)
Here are three examples that illustrate different ways in which the why question has been answered:
A financial services company had grown rapidly through acquisition and needed a methodology to unify the culture and streamline its
core processes. Lean Six Sigma provided a foundation and methodology that established a common language concerning process
improvement and product design. It also helped the company create a culture of continuous improvement. By using Lean Six Sigma to
remove waste, the company was able to decouple its cost curve from its growth curve—meaning that it could generate much more
value for the amount of effort invested, significantly increasing profitability.
A manufacturing company was experiencing significant margin pressure as a result of overseas competition. The leadership team
chose to deploy Lean Six Sigma to reduce operational expenses and leveraged the design methodologies to enhance the customer’s
experience and introduce new products. They also were able to use Lean Six Sigma to increase the skill level of the workforce and
prepare people for future leadership positions within the company.
A large hospital was interested in improving its patient satisfaction scores. Using Lean Six Sigma, the employees focused their
improvement efforts on patient-facing processes that historically had caused patient frustration and dissatisfaction. The initiative was
driven not by a need to reduce costs, but by a desire to better satisfy the hospital’s customers. Ratings for the hospital improved in
every category by 20 to 40 percent.
BIG QUESTION 2: WHAT? (Watson-Hemphill,
2016)
The most robust project-selection processes incorporate identification of diverse criteria, for both benefit and effort, that can be used to score the projects against the business priorities.
To identify the highest-value projects for its Lean Six Sigma rollout, a multinational manufacturer
conducted a series of assessments at multiple plants, in the supply chain operations, in the sales
organization, and in the regulatory functions. They tied their project selection and measurements to
established operational targets involving revenue generation, quality, cost, and productivity. Looking
across multiple areas for ideas that met the criteria led to a diversity of projects. For example, high-
value opportunities were identified in improving plant efficiencies, streamlining the human resource
process, reducing days’ sales outstanding, and optimizing the sales process.
A healthcare company began by identifying their core processes. They then linked their annual goals
to improvement needs in those core processes as a way to select target project opportunities. Some
projects directly improved patient care. Others improved satisfaction with the call centers. Some
streamlined bureaucratic processes that delayed payment to all parties. Still others reduced costs,
such as the cost of expired materials and obsolete medical supplies.
BIG QUESTION 3: HOW? (Watson-Hemphill,
2016)
Knowledge about when to use which method and which road map is more sophisticated today, and
organizations that fail to take the differences into account can waste time and effort. The principles
we recommend are:
Develop expertise in both the Lean and Six Sigma methodologies so that you understand which
toolset is appropriate when. A few people still hold out for a pure Lean or a pure Six Sigma
approach, but they are in the minority.
Use the DMAIC (Define–Measure–Analyze–Improve–Control) structure as the road map for problem
solving and process improvement. There are two basic types of DMAIC projects:
The traditional Lean Six Sigma project team approach, where a group meets regularly over a period
of time to solve a difficult problem that has no obvious solution. The emphasis is typically on tools
that focus on understanding the voice of the customer, collecting the right data, and analyzing the
data with statistical methods to identify the true root cause of the problem.
BIG QUESTION 3: HOW? (Watson-Hemphill,
2016)
Kaizen projects, where a group of selected team members are brought together for
an intense one-week period to complete a cycle of rapid improvement on a
problem with a smaller scope. This structure is best used in situations where
process waste or inefficiencies are a problem and the Lean toolset is more
appropriate.
Organizations that follow these guidelines have reaped a secondary benefit: the experience with
Lean Six Sigma has helped their managers and managerial candidates become better and more
capable leaders. Here are three examples:
• A large hotel chain trained their manager candidates in Lean Six Sigma methods to ensure that
future leaders were more data-driven, process-driven, and customer-focused.
• A pharmaceutical company wanted to broaden the skills of its high-potential employees and
better enable them to work cross-functionally across the business. Lean Six Sigma gave them a
new toolkit, and also provided opportunities for professional development.
• A financial services company that was experiencing rapid expansion needed to develop more of a
process focus to keep up with its growth. The CEO selected Lean Six Sigma to develop a
foundation of greater analytic abilities and data-based decision making within his management
team.
Leading and Managing Lean (Fliedner, 2015)
Whether the setting is manufacturing, service, administration, health
care, education, politics, or something else, it must be understood that
lean management must possess a systems perspective. A survey of
practitioners suggested that the single most important lean skill,
knowledge, or expertise item is the possession of a systems view and
thinking. Over many years, this result has maintained its consistency in
conversations with practitioners. Companies that have implemented
successful lean programs have commonly taken into account the entire
enterprise, ranging from suppliers to customers and everything in
between.
Leading and Managing Lean (Fliedner, 2015)
Lean must be viewed as a comprehensive system consisting of
leadership, culture, team, and practices and tools. A system is simply a
set of integrated parts sharing a clearly defined goal. In a system, if
changes are made to optimal values for only a few elements, the
system will not likely come close to achieving all the benefits that are
available through a fully coordinated move and may even have negative
payoffs. A firm must implement lean as part of a systematic and
comprehensive transformation of production and operation
procedures. If only a select few of the system elements reach optimal
levels, then the full benefits of change might be diminished.
Leading and Managing Lean (Fliedner, 2015)
Lean management must be viewed as an integral system of
four, interdependent elements: leadership, culture, team, and practices
and tools. Each of these necessary components affects the
effectiveness of the other components. For example, lean leaders must
be able to rely upon a supportive organizational culture. Lean leaders
are responsible for creating that culture. In order for a transformation
process to produce value and to eliminate waste, it takes an immediate
response from every functional discipline (accounting, finance,
purchasing, and so on) when opportunities or issues arise. It takes a
coordinated effort of a team to achieve goals. Respect for team,
people, and their ideas for improvement are a necessary component of
lean management.
Leading and Managing Lean (Fliedner, 2015)
In the survey of practitioners, the second-most important lean
system element was “human relations skills,” identified as
consisting of leadership, change management, and team problem
solving. This was followed by real-world knowledge and experience,
lean culture, and then lean practices and tools, among many others.
Lean & Change Management (Whiton,
Protzman & Protzman, 2018)
Change management is not only a large part of Lean; it is an essential part of
becoming Lean. As you move from accepting new ideas to transitioning
them into reality, one should not underestimate the importance of this
component to successfully disseminate, deploy, and sustain Lean. We have a saying when
implementing Lean: 50% is task and 50% is people. Fifty percent is applying the
Lean tools to any process, which is the scientific management part of Lean. The
other 50% is what we call the people piece, or change management. There must be
a balance between these two pieces. Listed below are various change tools to
consider when implementing Lean systems. Keep in mind that people mainly fear
changes they perceive as negative. None of us resist changes we perceive as
positive. Would any of you object to the change of increasing your pay by 10%?
Even if the change was made without telling you ahead of time? We would all view
this increase in pay as positive. Positive changes or changes that fit our paradigms
pass through our filters easily. It is only the changes we perceive as negative that
generate resistance.
The Change Acceleration Process (Whiton,
Protzman & Protzman, 2018)
The change acceleration process (CAP) model was popularized by GE
and used at AlliedSignal (now Honeywell). The model is:
Quality × Acceptance = Effectiveness, or Q × A = E.
If the change you are trying to deploy is not accepted, then you will not
achieve an effective result. One must understand (leverage
stakeholders analysis tools) and deal with resistance from key
stakeholders, build an effective influence strategy and communication
plan for the change, and determine its effectiveness. Again, the
relationship in the equation is multiplicative: if either component is
zero, the change will not be effective.
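The multiplicative nature of Q × A = E can be sketched in a few lines of Python. This is an illustrative sketch only; the 0 to 10 scoring scale is our assumption, not part of the CAP literature:

```python
def change_effectiveness(quality, acceptance):
    """CAP model E = Q x A: a technically sound change (high Q)
    still fails completely if stakeholders do not accept it (A = 0)."""
    return quality * acceptance

# A perfect solution nobody accepts is completely ineffective:
print(change_effectiveness(10, 0))  # 0
# A mediocre but well-accepted change still delivers something:
print(change_effectiveness(6, 7))   # 42
```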
The Change Acceleration Process (Whiton,
Protzman & Protzman, 2018)
This process involves building a predictive model by assessing each key
stakeholder’s anticipated resistance to change and building a strategy to increase
their acceptance. Following a more formal change roadmap will help ensure the
success of the change, aiding risk mitigation through ongoing communication
and barrier removal throughout the effort.
The change acceleration process (CAP) is outlined below:
1. Creating a shared need
2. Shaping a shared vision
3. Mobilizing commitment
4. Making change last
5. Monitoring progress
6. Changing systems and structures
Lean Impact (Chang & Ries, 2018)
Lean Impact is an approach to maximizing social benefit in the face of the complex challenges in our society. It
builds upon the best practices for innovation from the Lean Startup and beyond, while introducing new
techniques tailored to the unique nature of the mission-driven arena. By combining scientific rigor with
entrepreneurial agility, we can dramatically increase both the depth and breadth of our impact.
The essence of Lean Impact is captured by three core guiding principles. Throughout this book, I’ll demonstrate
the power of this new mindset and how to translate it into practical action to fuel social innovation.
Think big. Be audacious in the difference you aspire to make, basing your goals on the size of the real need in
the world rather than what seems incrementally achievable.
Start small. Between a desire to help people who are suffering today and pressure from funders to hit delivery
targets, interventions often scale too soon. Starting small and staying small makes it far easier to learn and
adapt – setting you on a path to greater impact over time.
Relentlessly seek impact. Whether due to excitement, attachment, or the requirements imposed by a funder,
we can become wedded to our intervention, technology, or institution. To make the biggest impact, fall in love
with the problem, not your solution.
Virtuous Cycle (Watson-Hemphill, 2016)
Behaviour (Charron, Harrington; Voehl &
Mignosa 2013)
If we are to become true LSS practitioners, we need to gain a basic
understanding of the ninth waste, behavior waste, and how we apply
Kaizen to eliminate this and other wastes and facilitate change in our
organization. The ninth waste, behavior waste, teaches us that change
and improvement begin with us as individuals (personal waste) and
collectively (people waste) as members of an organization. Eliminating
non-Lean beliefs and behaviors across the organization requires the
sustained application of Kaizen.
Kaizen, Kaikaku & Kakushin (Charron, Harrington; Voehl & Mignosa 2013)
Most organizations are familiar with Kaizen (“small continuous” changes);
however, Kaikaku (“a transformation of mind”), which …
Practical implications – This survey answers the need for Lean and Six Sigma unified
methodology achievement in soft factors area and gives applicable results for
companies in the supply chain that produces low-volume, high-complexity products.
Research limitations/implications – The research was restricted to studying the impact of LSS on
the workflow and resource consumption of the MRD in Indian allopathic hospitals only. The validity
of the results can be improved by including more hospitals and more case studies from the
healthcare sector in different countries.
Practical implications – Using this study, practitioners can identify which LSS,
Quality System and ICT combination results in best performance and quick success.
On theoretical front, the study confirms impact of LSS and QMS on organisational
performance.
Research limitations/implications – This paper seeks to contribute to and broaden the limited body
of evidence of the applicability of Lean Six Sigma to the UK public sector and identifies areas for
further research and review.
Practical implications – Understanding the applicability of Lean Six Sigma affords opportunities to
public sector agencies in the current budget climate but additionally affords ways in which quality of
service can be enhanced. In some cases, it provides opportunities to meet new statutory
requirements around community empowerment.
Originality/value – The paper contributes to the body of evidence that demonstrates the
effectiveness of Lean Six Sigma within the public sector and suggests opportunity for those agencies
to meet funding challenges faced across the UK.
Exploring Lean Six Sigma implementation barriers in
Information Technology industry (Shamsi & Alam, 2018)
Purpose – The purpose of this paper is to present critical barriers and obstacles faced by Information Technology (IT)
industry in the implementation of Lean Six Sigma (LSS) as the business improvement methodology.
Design/methodology/approach – A literature review of peer-reviewed journal articles, master and doctoral theses,
paradigmatic books with managerial impact and survey reports was used to identify distinct barriers. An empirical
survey, using 400 self-administered questionnaires, was then conducted. Data about 11 LSS barriers from 256 usable
questionnaires, with a response rate of 64 per cent, were collected and analyzed by means of statistical data analysis
software.
Findings – The challenges of “part-time involvement in Lean Six Sigma projects”, “time consuming”, “staff turnover in
middle of project”, “difficulty in data collection” and “difficulty in identifying project scope” emerged as the most
critical barriers in the context of IT industry. This research work advocates the development of a strategy for
addressing the most critical barrier instead of focusing on all for successful implementation.
Originality/value – This paper will prove to be a fantastic resource for many researchers and practitioners who are
engaged in research and applications of LSS in the IT industry. Moreover, the scarcity of literature specific to LSS in IT
industry will be addressed to some extent.
Lean Six Sigma effect on Jordanian pharmaceutical industry’s
performance (Alkunsol, Sharabati, AlSalhi and El-Tamimi, 2019)
Findings – The results show that there is an agreement on high implementation of Lean Six Sigma variables among Jordanian
Pharmaceutical Manufacturing organizations; there are strong relationships among Lean Six Sigma variables, except between non-
utilized talent and transportation; there are strong relationships between Lean Six Sigma variables and business performance. All
Lean Six Sigma variables have effect on business performance, except extra processing and waiting time.
Research limitations/implications – This study was carried out on the pharmaceutical industry in Jordan, generalizing results of one
industry and/or one country to other industries and/or countries may be questionable. Extending the analyses to other industries and
countries represents future research opportunities.
Practical implications – Implementing Lean Six Sigma variables in all Jordanian Pharmaceutical Manufacturing organizations can
improve their business performance; also, it can be applied to other manufacturing industry.
Social implications – The aim of all organizations is to reduce waste, which leads to reserve the natural resources, which is considered
as a corporate social responsibility.
Originality/value – Only few studies related to Lean or Six Sigma have been carried out in pharmaceutical industry in Jordan.
Therefore, this study might be considered as an initiative study, which studies the effect of both Lean and Six Sigma on
pharmaceutical industry in Jordan.
Hospital management from a high reliability organizational change
perspective A Swedish case on Lean and Six Sigma (Eriksson, 2017)
Findings – The nurses perceived that Lean worked better than Six Sigma, because of its bottom-up
approach, and its similarities with nurses’ well-known work qualities. Nurses coordinate patients
care, collaborate in teams and take leadership roles. To maintain high reliability and to become
quality developers, nurses need stable resources. However, professional’s logic collides with
management’s logic. Expert knowledge (top-down approach) without nurses’ local knowledge
(bottom-up approach) can lead to problems. Healthcare quality methods are standardized but must
be used with flexibility. However, HROs ensue not only from method quality but also from work
attitudes, commitment and continuous work-improvement.
Originality/value – The study uses theoretical concepts from HROs, which were developed for
unexpected events, to explain the consequences of implementing Lean and Six Sigma in healthcare.
Lean Six-Sigma: the means to healing an ailing
NHS? (Bancroft, Saha, Li, Lukacs & Pierron 2018)
Findings – The model produced a robust positive impact when Lean Six-Sigma is adopted,
increasing the likelihood of A&E dependents meeting their performance objective to see
and treat patients in 4 h or less.
Research limitations/implications – Further variables such as staffing levels, A&E admission
type could be considered in future studies. Additionally, it would add further clarity to
analyse hospitals and trusts individually, to gauge which are struggling.
Practical implications – Should the NHS further its understanding and adoption of Lean Six-
Sigma, it is believed this could have significant improvements in productivity, patient care
and cost reduction.
Social implications – Productivity improvements will allow the NHS to do more with an
equal amount of funding, therefore improving capacity and patient care.
Originality/value – Through observing A&E and its ability to treat patients in a timely
fashion it is clear the NHS is struggling to meet its performance objectives, the
recommendation of Six-Sigma in A&E should improve the reliability and quality of care
offered to patients.
Lean Six Sigma applications in the textile industry: a
case study (Adikorley, Rothenberg & Guillory, 2017)
Findings – Three successful projects, two on changeover time reduction and one on
metal contamination, were completed. Additional findings from this study suggest
that strategic partnerships with other high-performing companies and storytelling
are two critical success factors. Also, it is critical for management to convey a clear
vision for LSS that can be operationalized within a company for successful
deployment of LSS textile projects.
Research limitations/implications – The findings from this case study cannot be
generalized.
Originality/value – The literature on LSS in small- and medium-sized businesses is
limited. The literature on the use of LSS in the textile and apparel industry is even
more limited. This paper shows various processes within the textile complex where
LSS has been deployed successfully, yielding economic impacts. By using qualitative
methods, the value of strategic partnerships, storytelling and a vision was seen.
Business survival and market performance through Lean Six Sigma in the chemical
manufacturing industry (Muganyi, Madanhire & Mbohwa, 2019)
Findings – The research findings were mainly based on the inferences obtained from a chemical
product manufacturing concern in South Africa, to distinguish the efficacy and relevance of Lean Six
Sigma as strategic business survival tool and imputing strategic resonance to corporate strategy.
Research limitations/implications – This research was limited to distinguishing Lean Six Sigma as a
business survival strategic tool and an ultimate enhancer of market performance for a chemical
product manufacturing entity. The implementation and evaluation of the Lean Six Sigma
methodology as a business survival strategic and market performance enhancement option for the
case study organization was entailed as the corollary of deductive resemblance to similar entities.
Practical implications – This study enables continuous improvement practitioners to evaluate the
Lean and Six Sigma practices. The advantages posed by the simultaneous and optimized application
of the two approaches versus individual application were assessed and verified to produce
enhanced continuous improvement. This poses further challenges to scholars and academics to
pursue further researches on the practicality of applying Lean Six Sigma as a strategic option.
Originality/value – The paper prompts the efficacy of well publicized methodologies and evaluates
their implementation for strategic performance for manufacturing organizations. The practical
application, constraints and resultant effects of deploying Lean Six Sigma were reviewed to give
impetus to the methodology.
Reducing medication errors using lean six sigma methodology in a Thai
hospital: an action research study (Trakulsunti, Antony, Dempsey &
Brennan, 2020)
Findings – The number of dispensing errors decreased from 6 to 2 incidents per
20,000 inpatient days per month between April 2018 and August 2019 representing
a 66.66% reduction. The project has improved the dispensing process performance
resulting in dispensing error reduction and improved patient safety. The
communication channels between the hospital pharmacy and the pharmacy
technicians have also been improved.
Research limitations/implications – This study was conducted in an inpatient
pharmacy of a teaching hospital in Thailand. Therefore, the findings from this study
cannot be generalized beyond the specific setting. However, the findings are
applicable in the case of similar contexts and/or situations.
Originality/value – This is the first study that employs a continuous improvement
methodology for the purpose of improving the dispensing process and the quality
of care in a hospital. This study contributes to an understanding of how the
application of action research can save patients’ lives, improve patient safety and
increase work satisfaction in the pharmacy service.
Resistance to Change (Charron, Harrington;
Voehl & Mignosa 2013)
Organizational change is often more difficult than it first appears. Why
is individual change so difficult? What is it that makes organizational
change so difficult? These are fundamental questions that must be at
the forefront of any LSS transformation process.
KAIKAKU—TRANSFORMATION OF MIND (Charron, Harrington; Voehl & Mignosa 2013)
Kaikaku: Change + radical.
Kai: To take apart and make new. Kaku: Radically alter.
Kaikaku: Transformation of mind.
Analyze: Analyze is the third step in the DMAIC problem-solving method. The measurements/data must be
analyzed to see if they are consistent with the problem definition and also to identify a root cause. A problem
solution is then identified. Sometimes, based on the analysis, it is necessary to go back and restate the problem
definition and start the process over.
Attribute Data : Attribute data are data that are not continuous, that fit into categories that can be described in
terms of words (attributes). Examples: "good" or "bad," "go" or "no-go," "pass" or "fail," and "yes" or "no."
Chi-Squared Test: This test is used on variables (decimal) data to see if there was a statistically significant
change in the sigma between the population data and the current sample data. This test is done only after the
data plots have indicated that there has been no radical change in the shape of the data plots.
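As a rough illustration of that definition, the chi-square statistic for comparing a sample sigma against a known population sigma can be computed as below. This sketch is ours, not Brussee's; the critical values 8.907 and 32.852 are the two-sided 95% values for df = 19 from a standard chi-square table:

```python
def chi_square_sigma_test(sample, pop_sigma, crit_low, crit_high):
    """Test whether the sample sigma differs from a known population
    sigma. crit_low/crit_high are two-sided critical values from a
    chi-square table for df = n - 1."""
    n = len(sample)
    mean = sum(sample) / n
    s2 = sum((x - mean) ** 2 for x in sample) / (n - 1)  # sample variance
    chi2 = (n - 1) * s2 / pop_sigma ** 2                 # test statistic
    changed = chi2 < crit_low or chi2 > crit_high        # sigma has shifted
    return chi2, changed

# 20 measurements that vary far less than the historical sigma of 1.0:
data = [9.8, 10.2, 10.1, 9.9, 10.0] * 4
stat, changed = chi_square_sigma_test(data, pop_sigma=1.0,
                                      crit_low=8.907, crit_high=32.852)
```

Here the statistic falls well below the lower critical value, signaling a significant reduction in sigma.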
Some Definitions (Brussee, 2004)
Child Distributions: This term is used when interfacing with quality department data. A
child distribution refers to the sample averages and the sigma of multiple sample averages.
These are labeled and .
Confidence Tests Between Groups of Data: These tests are used to determine if there is a
statistically significant change between samples or between a sample and a population.
These are normally done at a 95% confidence level.
Continuous Data (Variables Data): Continuous data can have any value in a continuum.
They are decimal data without "steps."
Control: Control is the final step in the DMAIC problem-solving method. A verification of
control must be implemented. A robust solution (like a part change) will be easier to keep
in control than a qualitative solution.
Some Definitions (Brussee, 2004)
Control Chart : A control chart is a tool for monitoring variance in a process over
time. A traditional control chart is a chart with upper and lower control limits on
which are plotted values of some statistical measure for a series of samples or
subgroups. A traditional control chart uses both an average chart and a sigma chart.
See Simplified Control Chart.
Correlation Testing: This tool uses historical data to find what variables changed at
the same time or position as the problem time or position. These variables are then
subjected to further tests or study.
F Test: This test is used on variables (decimal) data to see if there was a
statistically significant change in the sigma between two samples. This test is
done only after the data plots have indicated that there has been no radical
change in the shape of the data plots.
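The F statistic behind that test is simply the ratio of the two sample variances. A minimal stdlib sketch (ours, not Brussee's), with the larger variance on top so F ≥ 1:

```python
def sample_variance(xs):
    """Unbiased sample variance (divide by n - 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def f_statistic(sample1, sample2):
    """F ratio of two sample variances; compare against an F-table
    critical value for (n1 - 1, n2 - 1) degrees of freedom."""
    v1, v2 = sample_variance(sample1), sample_variance(sample2)
    return max(v1, v2) / min(v1, v2)

# Two samples whose spreads differ by a factor of 10:
f = f_statistic([1, 2, 3, 4, 5], [10, 20, 30, 40, 50])  # F = 100.0
```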
Some Definitions (Brussee, 2004)
Fishbone Diagram: This Six Sigma tool uses a representation of a fish skeleton to
help trigger identification of all the variables that can be contributing to a problem.
The problem is visually shown as the fish "head" and the variables are shown on
the "bones." Once all the variables are identified, the key two or three are
highlighted for further study.
Improve: Improve is the fourth step in the DMAIC problem-solving method. Once a
solution has been analyzed, the fix must be implemented. The expected results
must be verified with independent data after solution implementation.
Minimum Sample Size: The number of data points needed to enable statistically
valid comparisons or predictions.
N: This is the sample size or, in probability problems, the number of independent
trials, like the number of coin tosses, the number of parts measured, etc.
Need-Based Tolerances: This Six Sigma tool emphasizes that often tolerances are
not established based on the customer's real needs. A tolerance review offers
opportunity for both the customer and the supplier to save money.
Some Definitions (Brussee, 2004)
Normal Distributions: A bell-shaped distribution of data that is indicative of the distribution of data
from many things in nature. Information on this type of distribution is used to predict populations
based on samples of data.
Number s (or x) Successes: This is the total number of "successes" that you are looking for in a
probability problem, like getting exactly three heads. This is used in Excel's BINOMDIST.
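For a concrete sense of s and n, the probability of exactly s successes can be computed directly with the binomial formula; this stdlib sketch mirrors Excel's BINOMDIST with cumulative = FALSE:

```python
from math import comb

def binom_pmf(s, n, p):
    """P(exactly s successes in n independent trials with success
    probability p): C(n, s) * p^s * (1 - p)^(n - s)."""
    return comb(n, s) * p ** s * (1 - p) ** (n - s)

# Probability of exactly 3 heads in 10 fair coin tosses:
prob = binom_pmf(3, 10, 0.5)  # 120 / 1024 = 0.1171875
```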
Parent Populations: This term is used when interfacing with quality department data. A parent
population refers to the individual data and their related statistical descriptions, like average and
sigma. These are labeled and S.
Plot Data: Most processes with continuous data have data plot shapes that stay consistent unless a
major change to the process has occurred. If the shapes of the data plots have changed
dramatically, then the quantitative formulas can't be used to compare the processes.
Some Definitions (Brussee, 2004)
Probability Determination: This is the likelihood of an event happening by pure
chance.
Process Flow Diagram: The process flow diagram, and specifically the locations
where data are collected, may help pinpoint possible areas contributing to a
problem.
Some Definitions (Brussee, 2004)
Proportional Data: Proportional data are based on attribute inputs, such as "good" or "bad," "yes"
or "no," etc. Examples are the proportion of defects in a process, the proportion of "yes" votes for a
candidate, and the proportion of students failing a test.
RSS Tolerances: When establishing tolerances on stacked parts, the traditional method is to use
"worst-case" fit, even though the probability of this fit may be extremely low. The RSS method (root
sum-of-squares) of establishing tolerances takes this probability into consideration, resulting in
generally looser tolerances with no measurable reduction in quality.
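The contrast between worst-case and RSS stacking is easy to see numerically. An illustrative sketch (the two tolerance values are arbitrary):

```python
from math import sqrt

def worst_case_stack(tolerances):
    """Worst-case assembly tolerance: every part at its limit at once."""
    return sum(tolerances)

def rss_stack(tolerances):
    """Root sum-of-squares stack: the statistically expected tolerance,
    since all parts rarely sit at their limits simultaneously."""
    return sqrt(sum(t ** 2 for t in tolerances))

tols = [0.3, 0.4]
worst = worst_case_stack(tols)  # 0.7
rss = rss_stack(tols)           # 0.5: allows looser per-part tolerances
```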
Some Definitions (Brussee, 2004)
Sample Size, Proportional Data: This tool calculates the minimum sample size needed to get
representative attribute data on a process generating proportional data. Too small a sample may
cause erroneous conclusions. Excessive samples are expensive.
Sample Size, Variables Data: This tool calculates the minimum sample size needed to get
representative data on a process with variables (decimal) data. Too small a sample may cause
erroneous conclusions. Excessively large samples are often expensive.
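The standard textbook formulas behind both sample-size tools are n = (z·σ/E)² for variables data and n = z²·p(1−p)/E² for proportional data. This is our sketch of those general formulas, not Brussee's specific tables:

```python
from math import ceil

def n_variables(z, sigma, margin):
    """Minimum sample size so the confidence-interval half-width on a
    mean is within `margin`: n = (z * sigma / margin)^2, rounded up."""
    return ceil((z * sigma / margin) ** 2)

def n_proportional(z, p, margin):
    """Minimum sample size for a proportion (p = 0.5 is the most
    conservative guess): n = z^2 * p * (1 - p) / margin^2."""
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

# 95% confidence (z = 1.96), process sigma 5, mean wanted within +/-1:
n1 = n_variables(1.96, 5, 1)          # 97
# 95% confidence, unknown proportion, within +/-3 percentage points:
n2 = n_proportional(1.96, 0.5, 0.03)  # 1068
```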
Simplified Control Chart: A control chart—traditional or simplified—is a tool for monitoring variance
in a process over time. Traditional control charts have two graphs and are not intuitive. Simplified
control charts have one graph, are intuitive, and are operator-friendly. See Control Chart.
Simplified DOE: This Six Sigma tool enables tests on an existing process to establish optimum
settings on the key process input variables.
Some Definitions (Brussee, 2004)
Simplified FMEA: This Six Sigma tool is used to convert qualitative concerns on collateral damage to
a prioritized action plan. Unintentional collateral harm may occur to other processes due to a
planned process or product change.
Simplified Gauge Verification: This Six Sigma tool is used on variables data (decimals) to verify that
the gauge is capable of giving the required accuracy of measurements compared to the allowable
tolerance.
Simplified QFD: This Six Sigma tool is used to convert qualitative customer input into specific
prioritized action plans. The customer includes everyone who is affected by the product or process.
Simplified Transfer Function: The simplified transfer function shows the variation contribution of
each component to the total variation of an assembly or a process. This allows for component focus
to effect total variation reduction.
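Because variances of independent components add, each component's share of total assembly variation can be computed as below. An illustrative sketch of the idea, not the book's exact procedure:

```python
def variance_contributions(component_sigmas):
    """Fraction of total assembly variance contributed by each
    component, assuming independent components (variances add)."""
    total = sum(s ** 2 for s in component_sigmas)
    return [s ** 2 / total for s in component_sigmas]

# Two components with sigmas 3 and 4; the second dominates, so
# variation-reduction effort should focus there first:
shares = variance_contributions([3, 4])  # [0.36, 0.64]
```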
Some Definitions (Brussee, 2004)
t Test: This Six Sigma test is used to see if there was a statistically significant change
in the average between population data and the current sample data, or between
two samples. This test on variables data is done only after the data plots have
indicated that there has been no radical change in the shapes of the data plots and
the chi-squared test or F test shows no significant change in sigma.
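The one-sample form of that statistic is easy to compute with the stdlib. This sketch is ours, not Brussee's; the resulting |t| would be compared against a t-table critical value for df = n − 1:

```python
from math import sqrt

def one_sample_t(sample, pop_mean):
    """t = (sample mean - population mean) / (s / sqrt(n))."""
    n = len(sample)
    m = sum(sample) / n
    s = sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))  # sample sigma
    return (m - pop_mean) / (s / sqrt(n))

# Sample mean 13 vs historical mean 10, n = 5:
t = one_sample_t([11, 12, 13, 14, 15], 10)  # about 4.243
```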
Tolerance Stack-up Analysis: This is the process of evaluating the effect that the
dimensions of all components can have on an assembly. There are various methods
used, including worst case, RSS (root sum-of-squares), modified RSS, and Monte
Carlo simulations.
Variables Data: Variables data (continuous data) are generally in decimal form.
Theoretically you could look at enough decimal places to find that no two values
are exactly the same.
BASICS Model Baseline (B): (Protzman III, Keen &
Protzman, 2018)
Create the vision.
Train the leadership and implementation team in Lean.
Charter the team, scope the project.
Select the pilot area and team members.
Conduct five-day Lean training seminar.
Baseline metrics, identify the “gaps” and set targets.
Build a chronological file—take photos and videos of how it is today.
Health check.
Value-stream map: current, ideal, and future state.
Determine the customer demand and takt time (TT).
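Takt time in the last baseline step has a simple arithmetic definition: available work time divided by customer demand. A minimal sketch (the shift numbers are illustrative):

```python
def takt_time(available_time_min, customer_demand_units):
    """Takt time = available work time / customer demand: the pace at
    which one unit must be completed to exactly meet demand."""
    return available_time_min / customer_demand_units

# One 8-hour shift (480 min) minus 30 min of breaks, demand of 225 units:
tt = takt_time(480 - 30, 225)  # 2.0 minutes per unit
```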
BASICS Model Assess/Analyze (A): (Protzman III,
Keen & Protzman, 2018)
Involve all the staff to analyze the process.
Process-flow analysis (PFA)—become the customer or product. This
includes a point-to-point diagram of how the product flows.
Create process block diagram.
Group tech analysis (if required).
Workflow analysis (WFA)—become the operator. This includes a
spaghetti chart of how the operator works.
Setup/changeover analysis (SMED).
BASICS Model Suggest Solutions (S): (Protzman III,
Keen & Protzman, 2018)
Update the process block diagram—one-piece flow vision for the
process.
Create the optimal layout for the process.
The ten-step process for creating master layouts.
Design the work stations.
Create standard work.
Determine the capacity and labor requirements.
Make and approve recommendations.
Train staff in the new process.
BASICS Model Suggest Implement (I): (Protzman
III, Keen & Protzman, 2018)
Implement the new process—use pilots.
Start up the new line.
Update standard work.
Determine capacity and staffing (PPCS).
Implement line balancing.
Implement line metrics.
Visual management—Incorporate 5S, visual displays, and controls.
Implement Lean materials system.
Implement mistake-proofing.
Implement total productive maintenance (TPM).
BASICS Model Suggest Check (C): (Protzman III,
Keen & Protzman, 2018)
Do you know how to check?
Check using the visual-management system.
Heijunka and scheduling.
Mixed model production.
BASICS Model Suggest Sustain (S): (Protzman III,
Keen & Protzman, 2018)
Document the business case study and results.
Create the Lean culture.
Create a sustain plan.
Upgrade the organization.
Ongoing leadership coaching.
A startup is a temporary organization
designed to search for a repeatable and
scalable business model (under extreme
uncertainty).
- Steve Blank, The Startup Owner’s Manual
Five Major Lessons from Lean Startup Thinking
(Nir, 2018)
1) Start with the customer in mind.
2) Define and communicate the mission and vision.
3) Synthesize an integrative operating model.
4) Identify metrics that matter.
5) Pivot or persevere.
An innovation is the application of creative ideas,
knowledge, or new technology,
to a product/service, process, or business model that
is accepted by markets and society.
• Adapted from OECD 2005 and Wikipedia.
Innovation = Imagination +
Creativity +
Implementation (Execution) +
Value
The Lean Startup Principles
• Entrepreneurs are everywhere (even in well-established firms).
• Entrepreneurship is management (but different from managing
traditional firms).
• Validated learning (about customers).
• **Build-Measure-Learn feedback loop.
• Innovation accounting (using actionable metrics – mostly about
customer behaviors).
• Eric Ries, The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to
Create Radically Successful Businesses, 2011.
** http://theleanstartup.com/principles
Resources
• Why the Lean Startup Changes Everything
https://hbr.org/2013/05/why-the-lean-start-up-changes-everything
• Eric Ries, The Lean Startup: How Today's Entrepreneurs Use
Continuous Innovation to Create Radically Successful Businesses,
September 13, 2011. http://theleanstartup.com/principles
• **Lean Startup-Visual Summary (link)
• The Lean Approach (The Kauffman Founders School)
• http://www.entrepreneurship.org/Founders-School/The-Lean-Approach
• *Eric Ries: "The Lean Startup" | Talks at Google (video) ~ 1
hour
• **Steve Blank How to Build a Startup: The Lean LaunchPad
free course on Udacity (link) or (YouTube video list)
Case Studies
• Lean Startup Cases
• **IMVU Case Study (link)
• http://theleanstartup.com/casestudies
• Rent the Runway Case
• ***The Supply & Demand of Rent the Runway | Jennifer
Hyman https://www.youtube.com/watch?v=JYFWpJvq6DU (about
testing business ideas)
• Introduction Chapter of the Innovation Method
http://innovatorsdna.com/wp-
content/uploads/2014/09/Innovators-Method-Sample.pdf
Customer Development Process
http://www.spikelab.org/img/blog/4steps-f6b16b56.png
Read this
Steve Blank, "Why the Lean Start-up Changes Everything," HBR, May 2013. (link)
Lean Startup Cycle
[Figure: Lean Startup Cycle diagram. Legend: * marks a process step, ** an outcome.]
http://www.agileacademy.co/wp-content/uploads/2014/09/LeanStartupCycle.png
http://entrepreneurship.saddleback.edu/Resources/Lean%20Startup/Lean%20Startup-Visual%20Summary.pdf
Lean Startup: 3 Components
Nathan Furr and Jeff Dyer, The Innovator’s Method: Bringing the Lean Start-up into Your
Organization, Harvard Business Review Press, 2014. (Read the introduction — the Rent the
Runway case is a great example illustrating Lean Startup principles — and Chapter 1 at
http://innovatorsdna.com/wp-content/uploads/2014/09/Innovators-Method-Sample.pdf)
Business Plan
Business Plan would identify the customer need, describe the
product or service, estimate the size of the market, and
estimate the revenues and profits based on projections of
pricing, costs, and unit volume growth.
Show
**Rent the Runway: An inside look at the tech startup's success
https://www.youtube.com/watch?v=cyvfsi3MX-M
Fill out the Business Model Canvas based on the RTR case presented
in the video
Rent the Runway Case
• Rent the Runway Case
• ***The Supply & Demand of Rent the Runway | Jennifer
Hyman https://www.youtube.com/watch?v=JYFWpJvq6DU (about testing business
ideas)
• Introduction Chapter of the Innovation Method
http://innovatorsdna.com/wp-content/uploads/2014/09/Innovators-Method-
Sample.pdf
Business Model Canvas
https://fivewhys.files.wordpress.com/2012/02/canvas1.jpg
(Video)
Source: “Why the Lean Startup Changes Everything“ HBR 2013
Food on the Table
• The Concierge Minimum Viable Product (MVP)
• http://mealplanning.food.com/
• Food on the Table will ask you for your preferences—the types of
foods you like and dishes you enjoy, and the grocery stores you shop
at. Then it will recommend dishes for you for the week, helping you
get out of food ruts and avoid scrambling to figure out what to make
for dinner.
• The app helps you save money by listing the items on sale each week
at your chosen grocery stores and recipes based on those sale items.
Add the recipes to your meal plan and the foods to your grocery list.
The mobile app syncs up with your account on the website.
http://www.macworld.com/article/2109727/food-on-the-table-makes-meal-planning-and-grocery-shopping-a-piece-of-cake.html
Food On The Table’s Lean Startup
Food on the Table (FotT) began life with a single customer.
Instead of supporting thousands of grocery stores around the
country as it does today, FotT supported just one. How did
the company choose which store to support? The founders
didn’t—until they had their first customer. Similarly, they
began life with no recipes whatsoever—until their first
customer was ready to begin her meal planning. In fact, the
company served its first customer without building any
software, without signing any business development
partnerships, and without hiring any chefs.
(link)
MVP
In product development, the Minimum Viable Product
(MVP) is a strategy used for fast and quantitative
market testing of a product or product features. It is
an iterative process of idea generation, prototyping,
presentation, data collection, analysis and learning.
An MVP Example
Why is Dropbox more popular than other programs with
similar functionality?
•Well, let’s take a step back and think about the sync problem and what
the ideal solution for it would do:
• There would be a folder.
• You’d put your stuff in it.
• It would sync.
Source:
http://michaelrwolfe.com/2013/10/19/why-
is-dropbox-more-popular-than-other-
programs-with-similar-functionality/
MVP and Product/Market Fit
• MVP: A product that includes just enough features to allow
useful feedback from early adopters
• Customers will use the MVP and pay for it.
• Minimum Viable Product, or MVP, a bare-bones offering
that allows the team to collect customer feedback and to
validate concepts and assumptions that underlie the
business idea.
• Product/Market Fit—the idea that they were crafting a
solution the market wants, one that customers are willing to
pay for, and one that’ll scale into a large, profitable
business.
https://steveblank.com/2014/07/30/driving-corporate-innovation-design-thinking-customer-development/
Learning Cycle
http://startupclass.samaltman.com/ Lecture #1
http://startupclass.samaltman.com/ Lecture #1
http://startupclass.samaltman.com/ Lecture #1
Hypotheses Testing and Insight
1) Identify business model hypotheses.
2) Collect data/facts by conducting experiments.
3) Gain insights based on facts.
Lean Startup- Assumptions and Experiments Mapping
• Map the key assumptions & attach them to the various parts of your business
model.
• Develop quick and inexpensive experiments to validate or invalidate the
assumptions quickly. Pivot if assumptions are invalidated.
• This is the central idea of the Lean Startup Movement.
http://www.alexandercowan.com/business-model-canvas-templates/
Business Model Design Meets Customer Development
https://steveblank.com/2010/11/11/get-out-of-the-building-and-win-50000/
http://blog.startupinstitute.com/2015-3-3-what-does-pivot-mean/
What Is a Pivot?
• Pivot: A change to a business model component based on customer
feedback. A pivot is not a failure.
https://steveblank.com/2010/11/11/get-out-of-the-building-and-win-50000/
Learning and Assumptions Testing
Produce Evidence with a Call to Action (CTA)
Use experiments to test if customers are interested, what preferences
they have, and if they are willing to pay for what you have to offer. Get
them to perform a call to action (CTA) as much as possible in order to
engage them and produce evidence of what works and what doesn’t.
Use Experiments to Test
• Willingness to Pay
Test Customer Interest with Google AdWords
We use Google Ad Words to illustrate this technique because it’s
particularly well suited for testing based on its use of search terms for
advertising (other services such as LinkedIn and Facebook also work
well).
• Select search terms. Select search terms that best represent what you
want to test (e.g., the existence of a customer job, pain, or gain or the
interest for a value proposition).
• Design your ad/test. Design your test ad with a headline, link to a
landing page, and blurb. Make sure it represents what you want to
test.
• Launch your campaign. Define a budget for your ad/ testing campaign
and launch it. Pay only for clicks on your ad, which represent interest.
• Measure clicks. Learn how many people click on your ad. No clicks
may indicate a lack of interest.
Tracking User’s Actions
• “Fabricate” a unique link. Make a
unique and trackable link to more
detailed information about your ideas
(e.g., a download, landing page) with
a service such as goo.gl.
• Track if the customer used the link or
not. If the link wasn’t used, it may
indicate lack of interest or more
important jobs, pains, and gains than
those that your idea addresses.
Funnel
Split Testing
• Split testing, also known as A/B testing, is a technique to compare the
performance of two or more options.
• We can apply the technique to compare the performance of alternative
value propositions with customers or to learn more about jobs, pains,
and gains.
What to Test?
Here are some elements that you can easily test with A/B testing:
• Alternative features
• Pricing
• Discounts
• Copy text
• Packaging Website variations . . .
Call to Action: How many of the test subjects perform the CTA?
• Purchase /Rent
• E-mail sign-up
• Click on button
• Survey
• Completion of any other task
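A hedged sketch of how the CTA results of a split test might be compared statistically, using a standard two-proportion z-test (the visitor and sign-up counts below are invented):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for comparing two conversion rates
    (e.g. CTA completions per visitor in an A/B test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant A: 48 sign-ups from 1,000 visitors; variant B: 72 from 1,000.
z = two_proportion_z(48, 1000, 72, 1000)
# |z| > 1.96 -> the difference is significant at roughly the 95% level,
# evidence that variant B's value proposition performs better.
```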
http://andrewchen.co/when-has-a-consumer-startup-hit-productmarket-fit/
Customer Development Process & 3 Stages of Fit
Problem/Solution Fit → Product/Market Fit → Business Model Fit (Scaling)
OODA
https://en.wikipedia.org/wiki/OODA_loop
** https://foundrmag.com/3-proven-startup-strategies-for-success/
• Observation – collecting data with your senses
• Orientation – analyzing and synthesizing the information to form a perspective
• Decision – choosing a course of action based on that perspective, and
• Action – taking action on that decision.
http://www.idea-sandbox.com/blog/decision-making-like-a-fighter-pilot/
Nail It to Scale It
PSF → PMF → BMF (Problem/Solution Fit, Product/Market Fit, Business Model Fit)
http://www.nailthenscale.com/Nail-It-Then-Scale-It-Book-Graphics-PDF.pdf
http://www.nailthenscale.com/wp-content/uploads/2015/06/Big_Idea_Canvas_v7.4.pdf
The Minimum Feature Set is "...the smallest or least complicated problem the
customer will pay us to solve."
The MVP is "that version of a new product which allows a team to collect the
maximum amount of validated learning about customers with the least effort."
[Figure: the Apple I as a minimum feature set, the Apple II & VisiCalc as an MVP.]
http://jsahni.blogspot.com/2014_02_01_archive.html
Lean
Canvas
https://leanstack.com/why-lean-canvas/
Customer Development
Search: the process to search for the right business model. Execution: the
process to scale the sales & operations.
Lean Startup Cycle
Ideas/Business Model → Build → Product/Service → Measure → Data → Learn →
Pivot/Persevere. Minimize the total time & resources through the learning cycle.
• Ideas/Business Model: technologies, social trends
• Build (“think with your hands”): prototyping, minimum viable products
(MVP); a product/service that addresses the job to be done; find/get real
customers
• Measure: innovation accounting, Net Promoter Score, AARRR, fit (P-S, P-M),
split tests, observations, click streams
• Learn: insights, hypotheses, pains/gains, 5 Whys
Revising & Pivoting via Speedy Learning Cycles
Business Model Canvas version X → Identify hypotheses → Design prototypes &
tests → Conduct test → Measure results & build metrics → Analyze data &
obtain insights → Revise, redesign, and pivot current BMC → Business Model
Canvas version x+1
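Net Promoter Score, one of the measures that appears in the Measure step above, is simple to compute; the survey responses below are invented:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    from 0-10 'how likely are you to recommend us?' survey responses."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 9, 8, 7, 10, 6, 5, 9, 10]  # hypothetical survey data
score = nps(responses)  # 6 promoters, 2 detractors out of 10 -> NPS of 40
```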
A Balancing Act of Art (Creativity) & Science
• In reality, executing a startup is a balance between
creativity/intuition/instinct and the scientific method: hypothesize >
a/b test > conclude > repeat.
• Inspiration will help you find a problem to solve.
• Creativity will allow you to brainstorm potential solutions to that
problem.
• The scientific method will guide you toward which of these solutions
will actually solve your customer’s problem.
https://blog.ycombinator.com/the-scientific-method-for-startups/
Build-Measure-Learn and MVP
• A core component of Lean Startup methodology is the build-measure-
learn feedback loop.
• The first step is figuring out the problem that needs to be solved and
then developing a minimum viable product (MVP) to begin the
process of learning as quickly as possible.
• Once the MVP is established, a startup can work on tuning the engine.
This will involve measurement and learning and must include
actionable metrics that can demonstrate cause and effect.
(link)
• Utilize an investigative development method called the "Five Whys":
asking simple questions to study and solve problems along the way
http://theleanstartup.com/principles
Testing Your Business Model
http://businessmodelalchemist.com/blog/2011/01/methods-for-the-business-model-generation-how-bmgen-and-custdev-fit-perfectly.html
Lean Innovation
https://steveblank.files.wordpress.com/2016/08/nga-lean-innovation.png
Start Your Business Lean and Quick:
The Lean Startup Approach
Introduce an innovative method to build a new business in
the context of lean startup approach.
The application and cases of deploying the Build-Measure-
Learn cycle to measure user feedback and to validate
assumptions in a startup’s business model will be discussed.
Individuals who have been trained to perform as members of Six Sigma Teams. They are used to
collect data, participate in problem solving, and assist in the implementation of the individual improvement
activities.
Individuals who have completed Six Sigma training, are capable of serving on Six Sigma project
teams, and managing simple Six Sigma projects.
Individuals who have had advanced training with specific emphasis on statistical applications and
problem-solving approaches. These individuals are highly competent to serve as on-site consultants and
trainers for application of Six Sigma methodologies.
Individuals who have had extensive experience in applying Six Sigma and who have mastered the Six
Sigma methodology. In addition, these individuals should be capable of teaching the Six Sigma methodology to
all levels of personnel and to deal with executive management in coaching them on culture change within the
organization.
(Voehl, Harrington, Mignosa & Rich Charron 2014)
Lean Six Sigma Master Black Belt
• Certifying LSSBB and Lean Six Sigma Green Belts (LSSGBs)
• Training LSSBBs and LSSGBs
• Developing new approaches
• Communicating best practices
• Taking action on projects that the LSSBB is having problems in defining the root
causes and implementing the change
• Conducting long-term LSS projects
• Identifying LSS opportunities
• Reviewing and approving LSSBB and LSSGB project justifications and project plans
• Working with the executive team to establish new behavioral patterns that reflect
a Lean culture throughout the organization
• (Voehl, Harrington, Mignosa & Rich Charron 2014)
Lean Six Sigma Black Belt (LSSBB)
One LSSBB for every 100 employees is the standard practice. (Example: A small
organization with only 100 employees needs only one LSSBB or two part-time
LSSBBs.)
Their responsibilities are to lead Lean Six Sigma Teams (LSSTs) and to define and
develop the right people to coordinate and lead the LSS projects. Candidates for
LSSBB should be experienced professionals who are already highly respected
throughout the organization. They should have experience as a change agent and
be very creative. LSSBBs should generate a minimum of US$1 million in savings per
year as a result of their direct activities. LSSBBs are not coaches. They are
specialists who solve problems and support the LSSGBs and LSSYBs. They are used
as LSST managers/leaders of complex, simple, and important projects. The position
of LSSBB is a full-time job; he/she is assigned to train, lead, and support the LSST.
They serve as internal consultants and instructors.
(Voehl, Harrington, Mignosa & Rich Charron 2014)
Lean Six Sigma Green Belt
Being an LSSGB is a part-time job. An LSSGB is assigned to manage a
project or work as a member of an LSST by the LSS champion and
his/her manager. Sometimes an LSSGB is the manager of the area that
is most involved in the problem. However, it is very difficult for
managers to lead or even serve on an LSST unless they are relieved of
their management duties. They will need to spend as much as 50% of
their time working on the LSS project. In most cases, it is preferable
that the LSSGB is a highly skilled professional who has a detailed
understanding of the area that is involved in the problem. LSSGBs work
as members of LSSTs that are led by LSSBBs or other LSSGBs. They also
will form LSSTs when projects are assigned to them.
(Voehl, Harrington, Mignosa & Rich Charron 2014)
Lean Six Sigma Yellow Belt (LSSYB)
One Lean Six Sigma Yellow Belt (LSSYB) for every five employees and four
LSSYBs for every LSSGB is the standard practice. (Example: A small
organization with 100 employees needs only 1 LSSBB, 5 LSSGBs, and 20
LSSYBs.)
LSSYBs will have a practical understanding of many of the basic problem-
solving tools and the DMAIC methodology. Team members are usually
classified as LSSYBs when they have completed the 2 or 3 days of LSSYB
training and passed an LSSYB exam. They will work part-time on the project
and still remain responsible for their normal work assignments. However,
they should have some of their workload re-assigned to give them time to
work on the LSST. They usually serve as the expert and coordinator on the
project for the area they are assigned to.
(Voehl, Harrington, Mignosa & Rich Charron 2014)
Understanding the People Issues (Morgan &
Brenig-Jones, 2012)
Six Sigma and Lean originated from industrial manufacturing backgrounds, with
early emphasis on tools and techniques. Now, however, most managers accept that
recognising and handling the people issues is the biggest challenge in implementing
Lean Six Sigma successfully.
Lean Six Sigma aims to make change happen in order to improve things. Human
beings, like most creatures, are cautious and sceptical about change – it spells
danger. Humans have an inbuilt resistance to change, especially if somebody tells
us it’s going to be ‘good for us’. Most people fear losing something they have as a
result of change.
Dealing with personal fear and loss is another big challenge in implementing Lean
Six Sigma, but few enthusiasts in statistical theory cover this in their extensive
training.
Understanding people is key to implementing a Lean Six Sigma project. Almost
always, if Six Sigma and Lean projects fail, people issues of one form or another are
the cause.
Working Right from the Start (Morgan &
Brenig-Jones, 2012)
Unfortunately, we don’t know an easy formula for solving the challenge
of managing people in a Lean Six Sigma project. However, in over 80
implementations of Lean Six Sigma, we have found a small number of
common factors that consistently stand out as critical for success.
Perhaps not surprisingly, leadership commitment is one of these critical
factors. Clinching buy-in at the beginning is the real challenge.
Heart of Change
(Kotter & Cohen,
2012)
Managing Change (Morgan & Brenig-Jones,
2012)
• George Eckes, a well-known writer on this subject, uses a simple but eloquent formula for gaining
acceptance for change and overcoming resistance, whether for a whole Lean Six Sigma programme or for
the changes resulting from part of a Lean Six Sigma project:
• E = Q × A
• E is the effectiveness of the change in practice: This represents the effectiveness of the implementation,
which depends on the quality of the solution and the level of acceptance.
• Q is the technical quality of the solution: The ‘hard’ tools of Lean and Six Sigma will have proven that the
solution works when tested. An ideal solution may have been identified, but its effectiveness will depend on
the degree to which it is accepted.
• A is the acceptance of the change by people: Having a high ‘score’ for A is as important as having a good-
quality solution.
• Some hardened practitioners believe that the A factor is more important than the Q factor and is the real
key to success in Lean Six Sigma. To understand how people perceive things and to win support, you need to
score well on both factors.
• If you’re in the early stages of deploying a Lean Six Sigma programme, the A factor is likely to start with
winning support from senior managers.
• Keep Q × A in mind as a simple shorthand for a highly complicated issue – dealing with the human mind.
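A worked illustration of why the A factor matters so much; the scores below are invented, on a notional 1–10 scale:

```python
def effectiveness(quality, acceptance):
    """Eckes' E = Q x A: the effectiveness of a change is the product of the
    technical quality of the solution and its acceptance by people."""
    return quality * acceptance

# A technically brilliant solution that people resist...
resisted = effectiveness(quality=9, acceptance=3)   # E = 27
# ...can be less effective than a merely good solution people embrace.
embraced = effectiveness(quality=6, acceptance=8)   # E = 48
```

Because E is a product rather than a sum, a near-zero acceptance score wipes out even a perfect technical solution.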
Overcoming resistance (Morgan & Brenig-
Jones, 2012)
1. Building relationships on mutual commitment.
4. Being open to change how we work.
5. Having an inspiring vision of the future.
6. Having clear goals and targets.
7. Having plans that are clear and well-communicated.
8. Having a winning strategy.
10. Dealing effectively with those resisting change.
12. Attracting and retaining world-class people.
14. Having clear and open communication.
15. Learning from each other.
16. Working well together across all functions.
18. Finding more productive ways of working.
19. Having consistency of purpose.
22. Continuously seeking to achieve competitive advantage.
23. Continuously building customer confidence.
24. Using measures to compare our performance with best practices.
25. Being honest and sincere.
26. Understanding our market, customers and competitors.
28. Rewarding the right behaviours.
30. Modifying systems and structures to support business assurance.
32. Doing what we say we will do.
33. Expecting our performance standards to increase continuously.
34. Encouraging expressions of different points of view.
36. Wanting to learn from our mistakes so we don’t repeat them.
The Cultural Web (Morgan & Brenig-Jones,
2012)
Change Reaction (Morgan & Brenig-Jones,
2012)
Comparing energy and attitude (Morgan &
Brenig-Jones, 2012)
Winners are open and responsive to change. They
want to do their best and have the energy to see
things through to the end.
[Figure: 2×2 stakeholder map plotting power (low vs. high) against attitude.
Quadrants range from strong support (champions) and weak support to weak
opposition and strong opposition (blockers); those with a negative attitude
are potential blockers.]
Realistic Approach to Lean Six Sigma (Bruce,
2015)
Step 1: Try And Demonstrate
It enables companies implementing Six Sigma to avoid the flaws in the
conventional deployment model, specifically, the widespread changes and
financial problems that seem inevitable in immediate large-scale
implementation.
The key point here is that demonstrating the power of Six Sigma gives people
reason to believe. Even some of the naysayers actually gained a keen interest
in Six Sigma. This demonstration started to create a pull system, as opposed
to a push system. In other words, people in the organization were requesting
(pulling) Six Sigma techniques and resources, in contrast with the usual
situation in which the powers that be force (push) Six Sigma on them.
Realistic Approach to Lean Six Sigma (Bruce,
2015)
Step 2: Align the Leaders
Leadership alignment means that the organization’s leaders actively
and visibly support the program, which makes the difference between
real success and halfway to failure. It’s critical to the program’s success,
a fundamental requirement. If you lack leadership alignment, I suggest
that you do a one-off demonstration for results purposes and do not go
to the next step.
Realistic Approach to Lean Six Sigma (Bruce,
2015)
Step 3: Get the Best Black Belt Candidates
The third step is to screen and select your best-of-the-best people to
participate in the Six Sigma journey. We did a study of hundreds of
black belts who were trained and practicing during the mid-1990s. We
reviewed the results obtained by each of those black belts in terms of
the financial outcomes of their projects and their ability to complete
tasks for project successes. In addition, we reviewed failed black belt
candidates and successful black belts. The outcome of the study was a
selection tool, which is presented later in this chapter, with a list of 11
criteria and a rating scale for ensuring maximum probability of success.
I strongly encourage you to use this tool, which has been proven to
work for more than 50 deployment clients.
Realistic Approach to Lean Six Sigma (Bruce,
2015)
Step 4: Choose Projects with High Financial Impact
The next step is to select projects to solve problems that will have a
high financial impact. Let’s assume you have a list of problems in your
business and a list of your specific business functions and, by the way,
you also know the financial impact that each project could make. Now,
match your highest-impact projects with your high-potential black belt
candidates.
DMAIC (Bruce, 2015)
Define. The Define phase determines the project’s purpose,
objectives, and scope; collects information on the customers
and the process involved; and specifies the project deliverables to
customers (internal and external).
Measure. In the Measure phase you learn exactly what is known about
the problem and what is unknown, select one or more metrics in the
process, map the process, make the necessary accurate and sufficient
measurements, and record the results to establish the current
capability—the baseline.
DMAIC (Bruce, 2015)
Analyze. The purpose of the Analyze phase is to sort through all the
potential Xs that are causing the costly defects. It’s like inputting all the Xs
through a funnel so that the resulting output is the vital few Xs that are
causing the defects versus the trivial many.
This approach (observation and experimentation), which has been used by science for
hundreds of years, is the key to advancing knowledge and improving our understanding of
our surroundings. We must be able to accurately observe our surroundings, document
what we see, investigate and analyze our observations to find out what is causing what we
see, and ultimately take effective action to improve our environment. —R.D. Laing
Algorithms are used to predict the risk that criminals will reoffend and to sentence them
accordingly. Is this more or less fair than allowing judges to determine the same?
“Data ethics” purports to provide answers to these questions, or at least a framework for
wrestling with them.
Well, let’s start with “what is ethics?” If you take the average of every definition you can
find, you end up with something like ethics is a framework for thinking about “right” and
“wrong” behavior. Data ethics, then, is a framework for thinking about right and wrong
behavior involving data.
Data Ethics (Grus, 2019)
Some people talk as if “data ethics” is (perhaps implicitly) a set of commandments about
what you may and may not do. Some of them are hard at work creating manifestos, others
crafting mandatory pledges to which they hope to make you swear. Still others are
campaigning for data ethics to be made a mandatory part of the data science curriculum.
You should care about ethics whatever your job. If your job involves data, you are free to
characterize your caring as “data ethics,” but you should care just as much about ethics in
the nondata parts of your job.
Perhaps what’s different about technology jobs is that technology scales, and that
decisions made by individuals working on technology problems (whether data-related or
not) have potentially wide-reaching effects.
A tiny change to a news discovery algorithm could be the difference between millions of
people reading an article and no one reading it.
Data Ethics (Grus, 2019)
A single flawed algorithm for granting parole that’s used all over the country
systematically affects millions of people, whereas a flawed-in-its-own-way
parole board affects only the people who come before it.
So yes, in general, you should care about what effects your work has on the
world. And the broader the effects of your work, the more you need to
worry about these things.
Instead, I’ll try to teach you just enough to be dangerous, and pique
your interest just enough that you’ll go off and learn more.
Experiments (Rumsey, 2019)
An experiment imposes one or more treatments on the participants in such a way that clear
comparisons can be made. Once the treatments are applied, the response is recorded. For example,
to study the effect of drug dosage on blood pressure, one group might take 10 mg of the drug, and
another group might take 20 mg. Typically, a control group is also involved, where subjects each
receive a fake treatment (a sugar pill, for example).
Experiments take place in a controlled setting, and are designed to minimize biases that might
occur. Some potential problems include: researchers knowing who got what treatment; a certain
condition or characteristic wasn’t accounted for that can affect the results (such as weight of the
subject when studying drug dosage); or lack of a control group. But when designed correctly, if a
difference in the responses is found when the groups are compared, the researchers can conclude a
cause and effect relationship.
It is perhaps most important to note that no matter what the study, it has to be designed so that the
original questions can be answered in a credible way.
Collecting Data (Rumsey, 2019)
If the data are categorical (where individuals are placed into groups, such as gender or political affiliation), they are
typically summarized using the number of individuals in each group (called the frequency) or the percentage of individuals
in each group (the relative frequency).
Numerical data represent measurements or counts, where the actual numbers have meaning (such as height and weight).
With numerical data, more features can be summarized besides the number or percentage in each group. Some of these
features include measures of center (in other words, where is the “middle” of the data?); measures of spread (how diverse
or how concentrated are the data around the center?); and, if appropriate, numbers that measure the relationship
between two variables (such as height and weight).
Some descriptive statistics are better than others, and some are more appropriate than others in certain situations. For
example, if you use codes of 1 and 2 for males and females, respectively, when you go to analyze that data, you wouldn’t
want to find the average of those numbers — an “average gender” makes no sense. Similarly, using percentages to
describe the amount of time until a battery wears out is not appropriate.
CHARTS AND GRAPHS (Rumsey, 2019)
Data are summarized in a visual way using charts and/or graphs. Some of the
basic graphs used include pie charts and bar charts, which break down
variables such as gender and which applications are used on teens’
cellphones. A bar graph, for example, may display opinions on an issue using
5 bars labeled in order from “Strongly Disagree” up through “Strongly
Agree.”
But not all data fit under this umbrella. Some data are numerical, such as
height, weight, time, or amount. Data representing counts or measurements
need a different type of graph that either keeps track of the numbers
themselves or groups them into numerical groupings. One major type of
graph that is used to graph numerical data is a histogram.
Analyzing Data (Rumsey, 2019)
After the data have been collected and described using pictures and
numbers, then comes the fun part: navigating through that black box
called the statistical analysis. If the study has been designed properly,
the original questions can be answered using the appropriate analysis,
the operative word here being appropriate. Many types of analyses
exist; choosing the wrong one will lead to wrong results.
Central Tendencies (Grus, 2019)
Usually, we’ll want some notion of where our data is centered. Most
commonly we’ll use the mean (or average), which is just the sum of the data
divided by its count.
If you have two data points, the mean is simply the point halfway between
them. As you add more points, the mean shifts around, but it always
depends on the value of every point. For example, if you have 10 data points,
and you increase the value of any of them by 1, you increase the mean by
0.1.
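The mean described above can be sketched in a few lines of Python (a minimal illustration; the variable names are mine, not the author's):

```python
def mean(xs):
    """Arithmetic mean: the sum of the data divided by its count."""
    return sum(xs) / len(xs)

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
before = mean(data)   # 5.5
data[3] += 1          # increase any one of the 10 points by 1
after = mean(data)    # 5.6: the mean moves by exactly 1/10
```

Because every point enters the sum, bumping one of n points by 1 always shifts the mean by 1/n.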
Central Tendencies (Grus, 2019)
We’ll also sometimes be interested in the median, which is the middle-most
value (if the number of data points is odd) or the average of the two middle-
most values (if the number of data points is even).
For instance, if we have five data points in a sorted vector x, the median is
x[5 // 2] or x[2]. If we have six data points, we want the average of x[2] (the
third point) and x[3] (the fourth point).
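The median rule above (middle value for odd counts, average of the two middle values for even counts) can be sketched as follows; this is an illustrative helper, not the author's code:

```python
def median(xs):
    """Middle value (odd count) or average of the two middle values (even count)."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2  # for n = 5 this is index 2, i.e. x[5 // 2]
    return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2

odd_case = median([9, 1, 5, 2, 7])      # sorted: [1, 2, 5, 7, 9] -> 5
even_case = median([9, 1, 5, 2, 7, 3])  # sorted: [1, 2, 3, 5, 7, 9] -> (3 + 5) / 2 = 4.0
```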
The range is zero precisely when the max and min are equal, which can only happen if the
elements of x are all the same, which means the data is as undispersed as possible.
Conversely, if the range is large, then the max is much larger than the min and the data is
more spread out.
Like the median, the range doesn’t really depend on the whole dataset. A dataset whose
points are all either 0 or 100 has the same range as a dataset whose values are 0, 100, and
lots of 50s. But it seems like the first dataset “should” be more spread out.
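The weakness of the range described above is easy to demonstrate (a minimal sketch with made-up data):

```python
def data_range(xs):
    """A crude dispersion measure: max minus min."""
    return max(xs) - min(xs)

extremes_only = [0] * 50 + [100] * 50   # every point at an extreme
mostly_middle = [0, 100] + [50] * 98    # almost all points in the middle

spread_a = data_range(extremes_only)    # 100
spread_b = data_range(mostly_middle)    # also 100, though far less dispersed
```

Both datasets report the same range, which is why measures that use every point (such as variance and standard deviation) are usually preferred.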
Correlation and Causation (Grus, 2019)
You have probably heard at some point that “correlation is not causation,” most likely from
someone looking at data that posed a challenge to parts of his worldview that he was
reluctant to question. Nonetheless, this is an important point—if x and y are strongly
correlated, that might mean that x causes y, that y causes x, that each causes the other,
that some third factor causes both, or nothing at all.
Consider the relationship between number of friends and daily minutes. It’s possible that
having more friends on the site causes DataSciencester users to spend more time on the
site. This might be the case if each friend posts a certain amount of content each day,
which means that the more friends you have, the more time it takes to stay current with
their updates.
However, it’s also possible that the more time users spend arguing in the DataSciencester
forums, the more they encounter and befriend like-minded people. That is, spending more
time on the site causes users to have more friends.
Correlation and Causation (Grus, 2019)
A third possibility is that the users who are most passionate about data
science spend more time on the site (because they find it more
interesting) and more actively collect data science friends (because
they don’t want to associate with anyone else).
For our purposes you should think of probability as a way of quantifying the
uncertainty associated with events chosen from some universe of events.
Rather than getting technical about what these terms mean, think of rolling
a die. The universe consists of all possible outcomes. And any subset of these
outcomes is an event; for example, “the die rolls a 1” or “the die rolls an
even number.”
Probability (Grus, 2019)
Notationally, we write P(E) to mean “the probability of the event E.”
One could, were one so inclined, get really deep into the philosophy of
what probability theory means. (This is best done over beers.) We
won’t be doing that.
Dependence and Independence (Grus, 2019)
Roughly speaking, we say that two events E and F are dependent if knowing something about whether E
happens gives us information about whether F happens (and vice versa). Otherwise, they are independent.
For instance, if we flip a fair coin twice, knowing whether the first flip is heads gives us no information about
whether the second flip is heads. These events are independent. On the other hand, knowing whether the first
flip is heads certainly gives us information about whether both flips are tails. (If the first flip is heads, then
definitely it’s not the case that both flips are tails.) These two events are dependent.
Mathematically, we say that two events E and F are independent if the probability that they both happen is the
product of the probabilities that each one happens:
P(E, F) = P(E) P(F)
In the example, the probability of “first flip heads” is 1/2, and the probability of “both flips tails” is 1/4, but the
probability of “first flip heads and both flips tails” is 0.
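The coin-flip example can be checked by enumerating the equally likely universe (a small sketch; the event names are mine):

```python
from itertools import product

# Universe: the four equally likely outcomes of two fair coin flips.
universe = list(product("HT", repeat=2))

def prob(event):
    """P(E) as favorable outcomes over total, for an equally likely universe."""
    return sum(1 for outcome in universe if event(outcome)) / len(universe)

first_heads = lambda o: o[0] == "H"
second_heads = lambda o: o[1] == "H"
both_tails = lambda o: o == ("T", "T")

p_e = prob(first_heads)        # 0.5
p_f = prob(both_tails)         # 0.25
p_ef = prob(lambda o: first_heads(o) and both_tails(o))  # 0.0, not 0.5 * 0.25 -> dependent
p_both_heads = prob(lambda o: first_heads(o) and second_heads(o))  # 0.25 = 0.5 * 0.5 -> independent
```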
Random Variables (Grus, 2019)
A random variable is a variable whose possible values have an
associated probability distribution. A very simple random variable
equals 1 if a coin flip turns up heads and 0 if the flip turns up tails. A
more complicated one might measure the number of heads you
observe when flipping a coin 10 times or a value picked from range(10)
where each number is equally likely.
Continuous Distributions (Grus, 2019)
A coin flip corresponds to a discrete distribution—one that associates
positive probability with discrete outcomes. Often we’ll want to model
distributions across a continuum of outcomes. (For our purposes, these
outcomes will always be real numbers, although that’s not always the
case in real life.) For example, the uniform distribution puts equal
weight on all the numbers between 0 and 1.
The Normal Distribution (Grus, 2019)
The normal distribution is the classic bell curve–shaped distribution
and is completely determined by two parameters: its mean μ (mu) and
its standard deviation σ (sigma). The mean indicates where the bell is
centered, and the standard deviation how “wide” it is.
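The two-parameter bell curve can be written down directly from its standard density formula (a sketch, not the source's code):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution with mean mu and standard deviation sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

peak = normal_pdf(0)                        # ~0.3989: the bell is tallest at the mean
is_symmetric = normal_pdf(1) == normal_pdf(-1)  # True: symmetric about mu
wider = normal_pdf(0, sigma=2)              # larger sigma -> wider, flatter bell
```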
KEY TERMS FOR DATA TYPES (Bruce, Gedeck &
Bruce, 2020)
Numeric: Data that are expressed on a numeric scale
Continuous: Data that can take on any value in an interval. (Synonyms: interval, float, numeric)
Discrete: Data that can take on only integer values, such as counts. (Synonyms: integer, count)
Categorical: Data that can take on only a specific set of values representing a set of possible
categories. (Synonyms: enums, enumerated, factors, nominal)
Binary: A special case of categorical data with just two categories of values, e.g. 0/1, true/false.
(Synonyms: dichotomous, logical, indicator, boolean)
Ordinal: Categorical data that has an explicit ordering. (Synonyms: ordered factor)
KEY TERMS FOR ESTIMATES OF LOCATION
(Bruce, Gedeck & Bruce, 2020)
Mean: The sum of all values divided by the number of values. Synonyms - average
Weighted median: The value such that one-half of the sum of the weights lies above and below the sorted data.
Data quality is often more important than data quantity, and random
sampling can reduce bias and facilitate quality improvement that would
be prohibitively expensive.
Selection Bias (Bruce, Gedeck & Bruce, 2020)
Yogi Berra, “If you don’t know what you’re looking for, look hard enough and
you’ll find it.”
Selection bias: Bias resulting from the way in which observations are
selected.
Sample statistic: A metric calculated for a sample of data drawn from a larger population.
Sampling distribution: The frequency distribution of a sample statistic over many samples or resamples.
Central limit theorem: The tendency of the sampling distribution to take on a normal shape as sample size
rises.
Standard error: The variability (standard deviation) of a sample statistic over many samples (not to be
confused with standard deviation, which, by itself, refers to variability of individual data values).
Central Limit Theorem (Bruce, Gedeck &
Bruce, 2020)
The means drawn from multiple samples will resemble the familiar bell-shaped
normal curve (see “Normal Distribution”), even if the source population is not
normally distributed, provided that the sample size is large enough and the
departure of the data from normality is not too great. The central limit theorem
allows normal-approximation formulas like the t-distribution to be used in
calculating sampling distributions for inference—that is, confidence intervals and
hypothesis tests.
The central limit theorem receives a lot of attention in traditional statistics texts
because it underlies the machinery of hypothesis tests and confidence intervals,
which themselves consume half the space in such texts. Data scientists should be
aware of this role, but, since formal hypothesis tests and confidence intervals play a
small role in data science, and the bootstrap is available in any case, the central
limit theorem is not so central in the practice of data science.
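The central limit theorem is easy to see by simulation: draw sample means from a flat (uniform) population and watch them cluster in a bell shape around the population mean. This is an illustrative sketch under my own chosen parameters, not from the source:

```python
import random

random.seed(0)

def sample_mean(n):
    """Mean of n draws from a uniform(0, 1) population -- not normal at all."""
    return sum(random.random() for _ in range(n)) / n

means = [sample_mean(50) for _ in range(2000)]
grand_mean = sum(means) / len(means)   # close to the population mean, 0.5

# SE = sqrt(1/12) / sqrt(50) ~ 0.0408; roughly 95% of the sample means
# land within two standard errors of 0.5, as the normal shape predicts.
within_2se = sum(1 for m in means if abs(m - 0.5) < 2 * 0.0408) / len(means)
```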
Standard Error (Bruce, Gedeck & Bruce, 2020)
The standard error is a single metric that sums up the variability in the sampling distribution for a statistic. The
standard error can be estimated using a statistic based on the standard deviation s of the sample values, and
the sample size n:
Standard error = SE = s / √n
As the sample size increases, the standard error decreases, corresponding to what was observed in Figure 2-6.
The relationship between standard error and sample size is sometimes referred to as the square-root of n rule:
in order to reduce the standard error by a factor of 2, the sample size must be increased by a factor of 4.
The validity of the standard error formula arises from the central limit theorem. In fact, you don’t need to rely
on the central limit theorem to understand standard error. Consider the following approach to measuring
standard error:
1. Collect a number of brand new samples from the population.
2. For each new sample, calculate the statistic (e.g., mean).
3. Calculate the standard deviation of the statistics computed in step 2; use this as your estimate of standard error.
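That resampling recipe can be sketched directly and compared against the s/√n formula; the population here is synthetic (my assumption, for illustration only):

```python
import math
import random

random.seed(0)
# Hypothetical population with mean ~100 and standard deviation ~15.
population = [random.gauss(100, 15) for _ in range(100_000)]

def resampled_se(population, n=50, n_samples=500):
    """Standard error of the mean, measured by drawing brand-new samples."""
    means = []
    for _ in range(n_samples):
        sample = random.sample(population, n)
        means.append(sum(sample) / n)
    grand = sum(means) / len(means)
    return math.sqrt(sum((m - grand) ** 2 for m in means) / (len(means) - 1))

empirical = resampled_se(population)       # close to the formula value
by_formula = 15 / math.sqrt(50)            # SE = s / sqrt(n), about 2.12
quadrupled_n = resampled_se(population, n=200)  # ~half the SE: the square-root-of-n rule
```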
Confidence Intervals (Bruce, Gedeck & Bruce,
2020)
Frequency tables, histograms, boxplots, and standard errors are all ways to
understand the potential error in a sample estimate.
Degrees of freedom
A parameter that allows the t-distribution to adjust to different sample
sizes, statistics, and number of groups.
Note (Bruce, Gedeck & Bruce, 2020)
What do data scientists need to know about the t-distribution and the
central limit theorem? Not a whole lot. These distributions are used in
classical statistical inference, but are not as central to the purposes of
data science. Understanding and quantifying uncertainty and variation
are important to data scientists, but empirical bootstrap sampling can
answer most questions about sampling error. However, data scientists
will routinely encounter t-statistics in output from statistical software
and statistical procedures in R, for example in A/B tests and regressions,
so familiarity with its purpose is helpful.
KEY IDEAS (Bruce, Gedeck & Bruce, 2020)
The t-distribution is actually a family of distributions resembling the
normal distribution, but with thicker tails.
With the Weibull, the estimation task now includes estimation of both parameters, β and
η. Software is used to model the data and yield an estimate of the best-fitting Weibull
distribution.
A/B Testing (Bruce, Gedeck & Bruce, 2020)
An A/B test is an experiment with two groups to establish which of two treatments, products,
procedures, or the like is superior. Often one of the two treatments is the standard existing
treatment, or no treatment. If a standard (or no) treatment is used, it is called the control. A typical
hypothesis is that a new treatment is better than control.
A proper A/B test has subjects that can be assigned to one treatment or another. The subject might
be a person, a plant seed, a web visitor; the key is that the subject is exposed to the treatment.
Ideally, subjects are randomized (assigned randomly) to treatments. In this way, you know that any
difference between the treatment groups is due to one of two things:
Luck of the draw in which subjects are assigned to which treatments (i.e., the random assignment
may have resulted in the naturally better-performing subjects being concentrated in A or B)
KEY TERMS FOR A/B TESTING (Bruce, Gedeck &
Bruce, 2020)
Treatment: Something (drug, price, web headline) to which a subject is exposed.
Subjects: The items (web visitors, patients, etc.) that are exposed to treatments.
Test statistic: The metric used to measure the effect of the treatment.
Hypothesis Tests (Bruce, Gedeck & Bruce, 2020)
Hypothesis tests, also called significance tests, are ubiquitous in the traditional statistical
analysis of published research. Their purpose is to help you learn whether random chance
might be responsible for an observed effect.
An A/B test (see “A/B Testing”) is typically constructed with a hypothesis in mind. For
example, the hypothesis might be that price B produces higher profit. Why do we need a
hypothesis? Why not just look at the outcome of the experiment and go with whichever
treatment does better?
The answer lies in the tendency of the human mind to underestimate the scope of natural
random behavior. One manifestation of this is the failure to anticipate extreme events, or
so-called “black swans” (see “Long-Tailed Distributions”). Another manifestation is the
tendency to misinterpret random events as having patterns of some significance. Statistical
hypothesis testing was invented as a way to protect researchers from being fooled by
random chance.
KEY TERMS (Bruce, Gedeck & Bruce, 2020)
Null hypothesis: The hypothesis that chance is to blame.
One-way test: Hypothesis test that counts chance results only in one
direction.
The other side of this coin, so to speak, is that when we do see the real-world
equivalent of six Hs in a row (e.g., when one headline outperforms another by
10%), we are inclined to attribute it to something real, not just chance.
The Null Hypothesis (Bruce, Gedeck & Bruce,
2020)
Hypothesis tests use the following logic: “Given the human tendency to react to unusual
but random behavior and interpret it as something meaningful and real, in our
experiments we will require proof that the difference between groups is more extreme
than what chance might reasonably produce.” This involves a baseline assumption that the
treatments are equivalent, and any difference between the groups is due to chance. This
baseline assumption is termed the null hypothesis. Our hope is then that we can, in fact,
prove the null hypothesis wrong, and show that the outcomes for groups A and B are more
different than what chance might produce.
Null = “no difference between the means of group A and group B,” alternative = “A is
different from B” (could be bigger or smaller)
Taken together, the null and alternative hypotheses must account for all possibilities. The
nature of the null hypothesis determines the structure of the hypothesis test.
One-Way, Two-Way Hypothesis Test (Bruce,
Gedeck & Bruce, 2020)
Often, in an A/B test, you are testing a new option (say B), against an established default option (A) and the
presumption is that you will stick with the default option unless the new option proves itself definitively better.
In such a case, you want a hypothesis test to protect you from being fooled by chance in the direction favoring
B. You don’t care about being fooled by chance in the other direction, because you would be sticking with A
unless B proves definitively better. So you want a directional alternative hypothesis (B is better than A). In such
a case, you use a one-way (or one-tail) hypothesis test. This means that extreme chance results in only one
direction count toward the p-value.
If you want a hypothesis test to protect you from being fooled by chance in either direction, the alternative
hypothesis is bidirectional (A is different from B; could be bigger or smaller). In such a case, you use a two-way
(or two-tail) hypothesis. This means that extreme chance results in either direction count toward the p-value.
A one-tail hypothesis test often fits the nature of A/B decision making, in which a decision is required and one
option is typically assigned “default” status unless the other proves better. Software, however, including R and
scipy in Python, typically provides a two-tail test in its default output, and many statisticians opt for the more
conservative two-tail test just to avoid argument. One-tail versus two-tail is a confusing subject, and not that
relevant for data science, where the precision of p-value calculations is not terribly important.
Resampling (Bruce, Gedeck & Bruce, 2020)
Resampling in statistics means to repeatedly sample values from observed
data, with a general goal of assessing random variability in a statistic. It can
also be used to assess and improve the accuracy of some machine-learning
models (e.g., the predictions from decision tree models built on multiple
bootstrapped data sets can be averaged in a process known as bagging: see
“Bagging and the Random Forest”).
There are two main types of resampling procedures: the bootstrap and
permutation tests. The bootstrap is used to assess the reliability of an
estimate; it was discussed in the previous chapter (see “The Bootstrap”).
Permutation tests are used to test hypotheses, typically involving two or
more groups.
Key Terms (Bruce, Gedeck & Bruce, 2020)
Permutation test
The procedure of combining two or more samples together, and randomly (or exhaustively)
reallocating the observations to resamples.
Synonyms
Randomization test, random permutation test, exact test.
Resampling
Drawing additional samples (“resamples”) from an observed data set.
1) Combine the results from the different groups into a single data set.
2) Shuffle the combined data, then randomly draw (without replacement) a resample of the same size as
group A (clearly it will contain some data from the other groups).
3) From the remaining data, randomly draw (without replacement) a resample of the same size as group B.
4) Do the same for groups C, D, and so on. You have now collected one set of resamples that mirror the sizes
of the original samples.
5) Whatever statistic or estimate was calculated for the original samples (e.g., difference in group
proportions), calculate it now for the resamples, and record; this constitutes one permutation iteration.
6) Repeat the previous steps R times to yield a permutation distribution of the test statistic.
Exhaustive Permutation Test (Bruce, Gedeck & Bruce, 2020)
In an exhaustive permutation test, instead of just randomly shuffling
and dividing the data, we actually figure out all the possible ways it
could be divided. This is practical only for relatively small sample sizes.
With a large number of repeated shufflings, the random permutation
test results approximate those of the exhaustive permutation test, and
approach them in the limit. Exhaustive permutation tests are also
sometimes called exact tests, due to their statistical property of
guaranteeing that the null model will not test as “significant” more
than the alpha level of the test (see “Statistical Significance and P-
Values”).
Bootstrap Permutation Test (Bruce, Gedeck & Bruce, 2020)
In a bootstrap permutation test, the draws outlined in steps 2 and 3 of
the random permutation test are made with replacement instead of
without replacement. In this way the resampling procedure models not
just the random element in the assignment of treatment to subject, but
also the random element in the selection of subjects from a
population. Both procedures are encountered in statistics, and the
distinction between them is somewhat convoluted and not of
consequence in the practice of data science.
KEY IDEAS (Bruce, Gedeck & Bruce, 2020)
In a permutation test, multiple samples are combined, then shuffled.
The shuffled values are then divided into resamples, and the statistic of
interest is calculated.
This issue is related to the problem of overfitting in data mining, or “fitting the
model to the noise.” The more variables you add, or the more models you run, the
greater the probability that something will emerge as “significant” just by chance.
In supervised learning tasks, a holdout set where models are assessed on data that
the model has not seen before mitigates this risk. In statistical and machine
learning tasks not involving a labeled holdout set, the risk of reaching conclusions
based on statistical noise persists.
Degrees of Freedom (Bruce, Gedeck & Bruce,
2020)
In the documentation and settings to many statistical tests and probability
distributions, you will see a reference to “degrees of freedom.” The concept is
applied to statistics calculated from sample data, and refers to the number of
values free to vary. For example, if you know the mean for a sample of 10 values,
there are 9 degrees of freedom (once you know 9 of the sample values, the 10th
can be calculated and is not free to vary). The degrees of freedom parameter, as
applied to many probability distributions, affects the shape of the distribution.
The number of degrees of freedom is an input to many statistical tests. For
example, degrees of freedom is the name given to the n – 1 denominator seen in
the calculations for variance and standard deviation. Why does it matter? When
you use a sample to estimate the variance for a population, you will end up with an
estimate that is slightly biased downward if you use n in the denominator. If you
use n – 1 in the denominator, the estimate will be free of that bias.
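The downward bias of the n denominator can be demonstrated by simulation (an illustrative sketch with a synthetic population of my choosing):

```python
import random

random.seed(0)
# Hypothetical population with variance ~100 (standard deviation 10).
population = [random.gauss(0, 10) for _ in range(100_000)]
pop_mean = sum(population) / len(population)
true_var = sum((x - pop_mean) ** 2 for x in population) / len(population)

def sample_var(sample, denom_offset):
    """Sample variance using n - denom_offset in the denominator."""
    n = len(sample)
    m = sum(sample) / n
    return sum((x - m) ** 2 for x in sample) / (n - denom_offset)

# Average many small-sample estimates: n in the denominator is biased low;
# n - 1 recovers the population variance (nearly) without bias.
samples = [random.sample(population, 10) for _ in range(2000)]
with_n = sum(sample_var(s, 0) for s in samples) / len(samples)
with_n_minus_1 = sum(sample_var(s, 1) for s in samples) / len(samples)
```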
KEY TERMS & Ideas (Bruce, Gedeck & Bruce,
2020)
Key Terms
n or sample size
The number of observations (also called rows or records) in the data.
d.f.
Degrees of freedom.
Key Ideas
• The number of degrees of freedom (d.f.) forms part of the calculation to
standardize test statistics so they can be compared to reference
distributions (t-distribution, F-distribution, etc.).
• The concept of degrees of freedom lies behind the factoring of categorical
variables into n – 1 indicator or dummy variables when doing a regression
(to avoid multicollinearity).
ANOVA (Bruce, Gedeck & Bruce, 2020)
Suppose that, instead of an A/B test, we had a comparison of multiple
groups, say A-B-C-D, each with numeric data. The statistical procedure
that tests for a statistically significant difference among the groups is
called analysis of variance, or ANOVA.
Key Terms (Bruce, Gedeck & Bruce, 2020)
Pairwise comparison: A hypothesis test (e.g., of means) between two groups among
multiple groups.
Omnibus test: A single hypothesis test of the overall variance among multiple group
means.
F-statistic: A standardized statistic that measures the extent to which differences among
group means exceed what might be expected in a chance model.
Expectation or expected: How we would expect the data to turn out under
some assumption, typically the null hypothesis.
Chi-Square Test: Statistical Theory (Bruce,
Gedeck & Bruce, 2020)
Asymptotic statistical theory shows that the distribution of the chi-
square statistic can be approximated by a chi-square distribution (see
“Chi-Square Distribution”). The appropriate standard chi-square
distribution is determined by the degrees of freedom (see “Degrees of
Freedom”). For a contingency table, the degrees of freedom are related
to the number of rows (r) and columns (s) as follows:
degreesoffreedom=(r−1)×(c−1)
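The degrees-of-freedom rule for a contingency table is a one-liner (a small illustrative helper, not from the source):

```python
def chi_square_df(rows, cols):
    """Degrees of freedom for an r x c contingency table: (r - 1) * (c - 1)."""
    return (rows - 1) * (cols - 1)

df_2x2 = chi_square_df(2, 2)  # 1: with the margins fixed, a 2x2 table has one free cell
df_3x4 = chi_square_df(3, 4)  # 6
```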
Power (Bruce, Gedeck & Bruce, 2020)
If you run a web test, how do you decide how long it should run (i.e., how many
impressions per treatment are needed)? Despite what you may read in many
guides to web testing, there is no good general guidance—it depends, mainly, on
the frequency with which the desired goal is attained.
Effect size: The minimum size of the effect that you hope to be able to detect in a
statistical test, such as “a 20% improvement in click rates”.
Power: The probability of detecting a given effect size with a given sample size.
Significance level: The statistical significance level at which the test will be
conducted.
Sample Size (Bruce, Gedeck & Bruce, 2020)
The most common use of power calculations is to estimate how big a sample
you will need.
• You must specify the minimum size of the effect that you want to detect.
• You must also specify the required probability of detecting that effect size
(power).
• Finally, you must specify the significance level (alpha) at which the test will
be conducted.
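Given those three inputs, power for a chosen sample size can be estimated by simulation, and the sample size grown until the target power is reached. The sketch below uses a one-sided two-proportion z-test with rates and sizes I have invented for illustration:

```python
import math
import random

def simulated_power(p_control, p_treatment, n_per_group, n_trials=300, seed=0):
    """Share of simulated one-sided two-proportion z-tests reaching significance."""
    random.seed(seed)
    z_crit = 1.645  # one-sided critical value for a 5% significance level
    hits = 0
    for _ in range(n_trials):
        a = sum(random.random() < p_control for _ in range(n_per_group))
        b = sum(random.random() < p_treatment for _ in range(n_per_group))
        pooled = (a + b) / (2 * n_per_group)
        se = math.sqrt(2 * pooled * (1 - pooled) / n_per_group)
        z = ((b - a) / n_per_group) / se
        if z > z_crit:
            hits += 1
    return hits / n_trials

# Power depends on effect size: a real 10% -> 15% lift is usually detected at
# n = 500 per group, while a zero effect triggers only ~alpha of the time.
power_real_effect = simulated_power(0.10, 0.15, n_per_group=500)
power_no_effect = simulated_power(0.10, 0.10, n_per_group=500)
```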
KEY TERMS FOR SIMPLE LINEAR REGRESSION
(Bruce, Gedeck & Bruce, 2020)
Response: The variable we are trying to predict. Synonyms - dependent variable, Y-variable, target, outcome
Regression coefficient: The slope of the regression line. Synonyms - slope, b1, β1, parameter estimates, weights
Y = b0 + b1X
We read this as “Y equals b1 times X, plus a constant b0.” The symbol b0 is known as the
intercept (or constant), and the symbol b1 as the slope for X. Both appear in R output as
coefficients, though in general use the term coefficient is often reserved for b1. The Y
variable is known as the response or dependent variable since it depends on X. The X
variable is known as the predictor or independent variable. The machine learning
community tends to use other terms, calling Y the target and X a feature vector.
Throughout this book, we will use the terms predictor and feature interchangeably.
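Fitting b0 and b1 by least squares can be sketched from the standard closed-form estimates (an illustration with made-up data, not the authors' implementation):

```python
def fit_line(xs, ys):
    """Least-squares estimates of the intercept b0 and slope b1 in Y = b0 + b1*X."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
          / sum((x - mean_x) ** 2 for x in xs))
    b0 = mean_y - b1 * mean_x
    return b0, b1

xs = [1, 2, 3, 4, 5]
ys = [3 + 2 * x for x in xs]   # data generated exactly as Y = 3 + 2X
b0, b1 = fit_line(xs, ys)      # recovers (3.0, 2.0)
```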
Multiple Linear Regression (Bruce, Gedeck &
Bruce, 2020)
When there are multiple predictors, the equation is simply extended to
accommodate them:
Y = b0 + b1X1 + b2X2 + ... + bpXp + e
Instead of a line, we now have a linear model—the relationship between
each coefficient and its variable (feature) is linear.
All of the other concepts in simple linear regression, such as fitting by least
squares and the definition of fitted values and residuals, extend to the
multiple linear regression setting. For example, the fitted values are given by:
Ŷi = b̂0 + b̂1X1,i + b̂2X2,i + ... + b̂pXp,i
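Computing a fitted value from estimated coefficients is a direct translation of that equation (the coefficient values here are hypothetical, for illustration only):

```python
def fitted_value(x_row, betas):
    """Y-hat for one observation: b0 + b1*x1 + ... + bp*xp."""
    intercept, coeffs = betas[0], betas[1:]
    return intercept + sum(b * x for b, x in zip(coeffs, x_row))

betas = [1.0, 2.0, -0.5]                  # hypothetical b0, b1, b2
y_hat = fitted_value([3.0, 4.0], betas)   # 1 + 2*3 - 0.5*4 = 5.0
residual = 5.8 - y_hat                    # residual = observed minus fitted
```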
Key Terms (Bruce, Gedeck & Bruce, 2020)
Root mean squared error: The square root of the average squared error of the regression (this is the most widely used metric to compare regression models). Synonyms - RMSE
Residual standard error: The same as the root mean squared error, but adjusted for degrees of freedom. Synonyms - RSE
R-squared: The proportion of variance explained by the model, from 0 to 1. Synonyms - coefficient of determination, R2
t-statistic: The coefficient for a predictor, divided by the standard error of the coefficient, giving a metric to compare the importance of variables in the model. See “t-Tests”.
Extrapolation: Extension of a model beyond the range of the data used to fit it.
Interpreting the Regression Equation (Bruce,
Gedeck & Bruce, 2020)
In data science, the most important use of regression is to predict some
dependent (outcome) variable. In some cases, however, gaining insight
from the equation itself to understand the nature of the relationship
between the predictors and the outcome can be of value. This section
provides guidance on examining the regression equation and
interpreting it.
Key Terms (Bruce, Gedeck & Bruce, 2020)
Correlated variables: When the predictor variables are highly correlated, it is difficult to interpret
the individual coefficients.
Multicollinearity: When the predictor variables have perfect, or near-perfect, correlation, the
regression can be unstable or impossible to compute.
Synonyms - collinearity
Confounding variables: An important predictor that, when omitted, leads to spurious relationships
in a regression equation.
Main effects: The relationship between a predictor and the outcome variable, independent from
other variables.
Interactions: An interdependent relationship between two or more predictors and the response.
Regression Diagnostics (Bruce, Gedeck &
Bruce, 2020)
In explanatory modeling (i.e., in a research context), various steps, in
addition to the metrics mentioned previously (see “Assessing the
Model”), are taken to assess how well the model fits the data; most are
based on analysis of the residuals. These steps do not directly address
predictive accuracy, but they can provide useful insight in a predictive
setting.
KEY TERMS FOR REGRESSION DIAGNOSTICS
(Bruce, Gedeck & Bruce, 2020)
Standardized residuals: Residuals divided by the standard error of the residuals.
Outliers: Records (or outcome values) that are distant from the rest of the data (or the predicted outcome).
Influential value: A value or record whose presence or absence makes a big difference in the regression equation.
Leverage: The degree of influence that a single record has on a regression equation. Synonyms - hat-value
Non-normal residuals: Non-normally distributed residuals can invalidate some technical requirements of regression, but are usually not a concern in data science.
Heteroskedasticity: When some ranges of the outcome experience residuals with higher variance (may indicate a predictor missing from the equation).
Partial residual plots: A diagnostic plot to illuminate the relationship between the outcome variable and a single predictor. Synonyms - added variables plot
Define Stage (Shaffie & Shahbazi, 2012)
Given these steps, the main deliverables
in this phase are:
The identification of a product,
process, or service that is in need of
improvement
The identification of a customer that is
driving the need for improvement and
defining their expectations
A team charter with a problem
statement, goal, financial benefits,
scope, and required resources
An outline of areas of risk
The receipt of approval or sign-off
from the project champion or
executive sponsor
The basic requirements are:
SMART Criteria (Shaffie & Shahbazi, 2012)
S: Specific
M: Measurable
A: Attainable
R: Relevant
T: Time-bound
Problem Statement (Shaffie & Shahbazi,
2012)
The problem statement is an objective (quantifiable) description of the
“pain” experienced by internal and/or external customers as a result of
a poorly performing process or service. When writing the
problem statement, consider the following questions:
• What is wrong or not meeting the customer’s need or expectations?
• When and where do the problems occur?
• How big is the problem, and what is its impact?
Problem Statement (Shaffie & Shahbazi,
2012)
Here is an example of a poorly written problem statement:
“Our customers are angry with us, resulting in poor customer retention.”
Here is a well-written version:
“In the second quarter (when), 15 percent of our customers have left the
bank for another institution (what). The current rate of attrition is up from 8
percent to the current level of 15 percent (magnitude). This negatively
affects our operating cash flow (impact or consequence).”
Project Scope (Shaffie & Shahbazi, 2012)
The three critical questions that are considered when developing the
scope are:
1. What process, service, or product will the team focus on?
2. Is there anything that is outside the scope?
3. Are there any constraints (process, IT, resources, and so on) that
must be considered?
Project Champion (Shaffie & Shahbazi, 2012)
The champion is the individual who has initiated the improvement. Either the
project idea has come from her or she sees the need for change. The champion
creates the sense of urgency for the improvement. She is also known as the barrier
buster. At every stage of project execution, the project leader may be faced with
resistance—team members may be reluctant to help, area managers may not
assign the required resources to the project, functional managers may reject
improvement ideas, and so on. The central role of the project champion is to work
within her sphere of influence to set the direction for the project and to support
the Green Belt or Black Belt in its execution. She is the liaison between the project
team and management, helping to create the need for change, setting the direction
for the project, assigning the required resources, allocating the required funding,
and removing any and all barriers to project execution. During the course of the
project, she reviews progress frequently. Furthermore, once the project has ended
and changes have been implemented, the project champion ensures that those
changes recommended by the team are not cast aside after a few months.
Project Leader (Shaffie & Shahbazi, 2012)
The project leader is typically a Black Belt and is allocated to the project
full-time. The Black Belt is the key planner for the project and is
ultimately responsible for the deliverables: he defines the direction for
project execution, determines the deliverables in each phase, creates
team assignments, follows progress, and acts as a link to management.
The project leader is responsible for maintaining project execution
momentum—any potential risks should be escalated to the Master
Black Belt or project champion. The project leader is also responsible
for keeping the champion and the key stakeholders updated on the
status of the project.
Key Stakeholders (Shaffie & Shahbazi, 2012)
Key stakeholders are individuals whose areas, processes, and teams will
be directly affected by the project. They are typically part of
management. It is critical to identify them during the Define phase,
because without their buy-in and involvement, the probability of long-
term success will be reduced. It is the responsibility of the project
leader to always keep the key stakeholders abreast of the project
status. Any signs of resistance from the key stakeholders should be
escalated to the project champion for quick resolution.
Teams (Shaffie & Shahbazi, 2012)
Core Cross-Functional Team
The members of the core team are expected to allocate 10 to 15 percent of
their time to supporting the project. These are functional area experts,
representing the processes, services, or products that are within the scope of
the project.
Supplementary Team
The members of the supplemental team act as support for the project on an
as-needed basis. The time commitment from these members is minimal.
Examples of supplementary team members may include HR, whose expertise
will be needed during the Control phase to help ensure sustainability,
Compliance to sign off on documentation or process changes, or IT to
support the extraction of data from the various databases.
Financial Benefits (Shaffie & Shahbazi, 2012)
There will be situations in which projects that do not provide significant
financial benefits will be selected for execution, for example, those
dealing with compliance or regulatory issues. However, for the most
part, Lean Six Sigma projects should have financial benefits associated
with them. If you are focused on reducing cycle time, this should yield a
reduction in touch time, and hence a reduction in labor. A reduction in
cycle time also reduces inventory, such as unprocessed applications.
The faster an account is opened, the faster revenue can be realized.
Calculating the expected financial benefits helps prioritize
improvement opportunities and creates motivation for the team.
Business Case Example (Shaffie & Shahbazi, 2012)
Characteristics of a well-defined business case include:
• A clearly defined problem and goal statement
• Clearly understood defect and opportunity definitions
• No presupposed solution
• A need for improvement related to the customer’s requirements
• Alignment of the project with a business strategy
• A manageable project scope—it can yield results within four to six months
• Identifiable and measurable impact
• Adequate resources assigned to the project
• A data-driven project!
SIPOC (Shaffie & Shahbazi, 2012)
Since the objective is to define a high-level view of the process in its “as-is” state, we must
have the core cross-functional team involved in the development of a SIPOC.
1. As the first step, the team must agree on the start and end points of the process. The
business case can help guide this discussion.
2. Working backward, list the Customers. Identify each customer’s CTQ (accurate, timely,
simple, and so on) and the primary Output (loans, calls, x-rays, or something else) that the
customer receives from the process.
3. With the C and O of the SIPOC defined, using brainstorming techniques, the team should
outline the five to seven high-level Process steps that result in the outputs. Process steps
typically start with a verb.
4. Once the team has agreed on the process steps, the critical Inputs that affect the quality
of the process can be identified.
5. The last step is to list all the Suppliers that provide inputs to the process.
6. The SIPOC should then be validated by walking the actual process.
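As a sketch, the SIPOC built by steps 1 through 6 can be captured in a small data structure. All process and party names below are hypothetical, using a generic loan-application process as the example:

```python
from dataclasses import dataclass, field

# A minimal SIPOC record, populated in the order the steps recommend:
# Customers and Outputs first, then Process steps, Inputs, and Suppliers.
@dataclass
class SIPOC:
    customers: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    process_steps: list = field(default_factory=list)  # five to seven steps, verb-first
    inputs: list = field(default_factory=list)
    suppliers: list = field(default_factory=list)

sipoc = SIPOC(
    customers=["loan applicant"],
    outputs=["approved or declined loan"],
    process_steps=["Receive application", "Verify documents",
                   "Assess credit", "Approve or decline", "Notify applicant"],
    inputs=["application form", "credit report"],
    suppliers=["applicant", "credit bureau"],
)
# Sanity check from step 3: five to seven high-level process steps.
assert 5 <= len(sipoc.process_steps) <= 7
```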
STEP 4: DEFINE AND EXECUTE
A CHANGE MANAGEMENT STRATEGY
The main benefits of a well-defined and well-executed change management strategy are
that it:
• Helps an organization start and successfully complete Lean Six Sigma projects with
shorter cycle time.
• Creates a shared vision of the goal and deliverables of the project.
• Ensures that you have the buy-in of key stakeholders prior to starting the project.
– Helps identify the key stakeholders.
– Outlines their required level of support if the project is to be a success.
• Allows for efficient implementation of solutions.
– Increases access to required resources and data.
• Ensures that the Control portion of the project is sustainable—that there is an easy
handoff.
(Shaffie & Shahbazi, 2012)
Characteristics (Shaffie & Shahbazi, 2012)
1. Identifying change leadership. Every project will need a champion who will
sponsor the change.
2. Developing a shared need. The project leader and champion have to clearly
articulate the need for change. It is best to leverage the business case. The need for
change can be driven by a threat or opportunity, or it can be instilled within the
organization and widely shared through data, demonstration, or demand.
3. Shaping a vision. The desired outcome of the project is clear, legitimate, widely
understood, and shared.
4. Mobilizing commitment. There is a strong commitment from key constituents to
invest in change. The team leader is to demand and receive management attention.
5. Institutionalizing the change and monitoring progress. Once change is started, it
endures, and best practices are transferred throughout the organization.
Key Stakeholder Analysis (Shaffie & Shahbazi, 2012)
Key Stakeholder (Shaffie & Shahbazi, 2012)
• Responsible for the final decision
• Likely to be affected, positively or negatively, by the outcomes you
want
• In a position to assist or block your achievement of the outcomes
• An expert or special resource that could substantially affect the
quality of your end product or service
• Able to have influence over other stakeholders
Important (DeCarlo, 2007)
♦ An organization can only afford to focus its improvement efforts on
processes that are critical to customer satisfaction or the viability of the
business.
♦ There are many ways to generate Lean Six Sigma project ideas, and many
different sources for project ideas.
♦ Two of the most important and useful ways to generate project ideas are
through a CT Flow Down and the Voice of the Customer (VOC).
♦ Before you actually choose a Lean Six Sigma project, you have to separate
your viable ideas from those that are not worth your time and effort.
♦ You can use a Prioritization Matrix to methodically determine which
projects are best to focus on right away.
DEFINE PHASE SUMMARY
The Define phase is one of the more challenging steps as it requires the
Green Belt or Black Belt to clearly outline the need for change. Uncovering
an improvement opportunity is not always a comfortable topic of discussion
for all members of leadership—it inherently brings to light issues that have
been overcompensated or ignored in the past. However, the project leader
can better facilitate discussion and remove emotion by using data in problem
and goal statements. The more precise and accurate the definition of the
opportunity, the better the chances that the project will be prioritized
accordingly. It should also be noted that it is typically in the Define phase
that a project is either accepted or rejected by the leadership team. And any
rejections should not be viewed as failure by the Green Belt or Black Belt;
rather, it is cause for celebration. Thanks to her due diligence, the company
avoided investing time and money on an issue that is noncritical or blown
out of proportion.
(Voehl, Harrington & Charron, 2014)
“We don’t know what we don’t know. We can’t act on what we don’t
know. We won’t know until we search. We won’t search for what we
don’t question. We don’t question what we don’t measure.”
—Dr. Mikel Harry
I often say that when you can measure what you are speaking about
and express it in numbers, you know something about it; but when you
cannot measure it, when you cannot express it in numbers, your
knowledge is of a meagre and unsatisfactory kind.
—William Thomson, Lord Kelvin (1824–1907)
In God we trust. All
others must bring data.
—W. Edwards Deming
DMAIC (Voehl, Harrington & Charron, 2014)
Define: Select an appropriate project and define the problem, especially in
terms of customer-critical demands.
Measure: Assemble measurable data about process performance and
develop a quantitative problem statement.
Analyze: Analyze the causes of the problem and verify suspected root
cause(s).
Improve: Identify actions to reduce defects and variation caused by root
cause(s) and implement selected actions, while evaluating the measurable
improvement (if not evident, return to step 1, Define).
Control: Control the process to ensure continued, improved performance
and determine if improvements can be transferred elsewhere. Identify
lessons learned and next steps.
DMADV (Voehl, Harrington & Charron, 2014)
Define: Define design goals that are consistent with customer
demands.
Measure: Identify and measure product characteristics that are critical
to quality (CTQ).
Analyze: Analyze to develop and design alternatives, create a high-level
design, and evaluate design capability to select the best design.
Design: Complete design details, optimize the design, and plan for
design verification.
Verify: Verify the design, set up pilot runs, implement the production
process, and hand it over to the process owners.
Purpose of Measure Phase (Brue, 2015)
In the Measure phase, the team focuses the improvement effort by
gathering information about the current situation. The other
goals of this phase are to:
Define one or more CTQ characteristics (dependent variables);
Map the process in detail;
Evaluate the measurement systems;
Assess the current level of process performance to establish a
baseline capability and the short- and long-term process sigma
capabilities; and
Quantify the problem.
The objectives of the Measure phase (Keller, 2011)
Process definition at a detailed level to understand the decision points
and detailed functionality within the process.
♦ Discrete measures are those where you can sort items into distinct,
separate, non-overlapping categories. Examples: types of aircraft, categories
of different types of vehicles, types of credit cards. Discrete measures include
artificial scales like the ones on surveys, where people are asked to rate a
product or service on a scale of 1 to 5. Discrete measures are sometimes
called attribute measures because they count items or incidences that have a
particular attribute or characteristic that sets them apart from things with a
different attribute or characteristic.
Measurement Concept #3: Measure for a Reason
(Neuman, Cavanagh & Pande, 2001)
Ever notice how much useless data gets collected at work? It’s probably
because the computer has made it easy to collect tons of numbers,
however trivial. But don’t let your team get sucked into that quagmire.
Unless there’s a clear reason to collect data—a key variable you want to
track—don’t bother. There are basically two reasons for collecting data:
There’s a “Catch 22” to developing a good sampling plan: You already have to
know something about the data you’re collecting before you collect it. As a
result, your first measures won’t be as reliable as you’d like because they’re
based on educated “guesstimates.” The longer you take measures, however,
the better you’ll know your process and the better your sampling plan will
be.
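One way to turn an educated "guesstimate" into a first-pass sampling plan is the standard sample-size formula for continuous data, n = (z·s/E)². The sketch below assumes a rough standard deviation and a target precision; both numbers are made up, and the plan should be revisited as real measurements accumulate, exactly as the text describes.

```python
import math
from statistics import NormalDist

confidence = 0.95
s_guess = 4.0   # rough standard deviation from early measurements (assumed)
E = 1.0         # want the estimated mean within plus or minus 1 unit (assumed)

# z-value for the chosen two-sided confidence level (about 1.96 for 95%).
z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
n = (z * s_guess / E) ** 2
print(f"collect at least {math.ceil(n)} data points")
```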
Check Sheet (Neuman, Cavanagh & Pande,
2001)
♦ Keep it simple. If the form is cluttered, hard to read, or confusing, there’s a risk of
errors or nonconformance.
♦ Label it well. Make sure there is no question about where data should go on the form.
♦ Include space for date, time, and collector’s name. These obvious details are often
omitted, causing headaches later.
♦ Organize the data collection form and compiling sheet (the spreadsheet you’ll use to
compile all the data) consistently.
♦ Include key factors to stratify the data.
Common types of checksheets include …
♦ Defect or Cause Checksheet. Used to record types of defects or causes of
defects. Examples: reasons for field repair calls, types of operating log discrepancies,
causes of late shipments.
♦ Data Sheet. Captures readings, measures or counts. Examples: transmitter power level,
number of people in line, temperature readings.
Check Sheet (Neuman, Cavanagh & Pande,
2001)
♦ Frequency Plot Checksheet. Records a measure of an item along a scale
or continuum. Examples: gross income of loan applicants, cycle time for
shipped orders, weight of packages.
♦ Concentration Diagram Checksheet. Shows a picture of an object or
document being observed on which collectors mark where defects actually
occur. Examples: damage done to rental cars, noting errors on application
forms.
♦ Traveler Checksheet. Any checksheet that actually travels through the
process along with the product or service being produced. The check-sheet
lists the process steps down one column, then has additional columns for
documenting process data. Some examples of traveler checksheet uses are
capturing cycle time data for each step in an engineering change order,
noting time or number of people working on a part as it is assembled,
tracking rework on an insurance claim form.
Keep an Eye on (Neuman, Cavanagh & Pande,
2001)
♦ Accuracy: How precise is the measurement: hours, minutes,
seconds, millimeters, two decimal places?
♦ Repeatability: If the same person measures the same unit with the
same measuring device, will they repeat the same results every time
they do it? How much variation is there between measurements?
♦ Reproducibility: If two or more people or devices measure the
same thing, will they produce the same results? What is the variation?
♦ Stability: How much do accuracy, repeatability, and reproducibility
change over time? Do we get the same variation in measures that we
did a week ago? A month ago?
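The repeatability and reproducibility questions above can be checked with a quick calculation. The sketch below is a deliberate simplification, not a full Gage R&R study (which would use ANOVA across multiple parts and operators); the operator names and readings are invented.

```python
import statistics

# Hypothetical gage study: three operators each measure the same part five times.
measurements = {
    "operator_a": [10.1, 10.2, 10.0, 10.1, 10.2],
    "operator_b": [10.4, 10.5, 10.3, 10.4, 10.5],
    "operator_c": [10.1, 10.0, 10.2, 10.1, 10.0],
}

# Repeatability: variation when the SAME person repeats the measurement.
# Approximated here as the pooled within-operator standard deviation.
within_vars = [statistics.variance(vals) for vals in measurements.values()]
repeatability = statistics.mean(within_vars) ** 0.5

# Reproducibility: variation BETWEEN people measuring the same thing.
# Approximated here as the standard deviation of the operator means.
operator_means = [statistics.mean(vals) for vals in measurements.values()]
reproducibility = statistics.stdev(operator_means)

print(f"repeatability  = {repeatability:.3f}")
print(f"reproducibility = {reproducibility:.3f}")
```

In this made-up data the operators agree with themselves but not with each other, so reproducibility dominates, which would point the team at calibration or method differences rather than the measuring device.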
Measurement (Neuman, Cavanagh & Pande,
2001)
Collect the Data
Implement your plans. Remember that part of your plan includes the
“sample size,” that is, the number of data points you have to collect.
Your data collection should stop when you’ve reached the appropriate
sample size, unless there were problems with some of that data. Do
not continue to collect data unless there are plans to make it a
standard part of the process.
Monitor Accuracy and Refine Procedures as Appropriate
Throughout the data collection, be sure to monitor both the
procedures and devices (if any) used to collect the data.
Measurement (Neuman, Cavanagh & Pande,
2001)
After you implement and refine the measurement process, you should have data
in hand that:
♦ Meet your data collection priorities.
♦ Were sampled according to your plan.
♦ Reflect accurate, repeatable, reproducible, and reliable measurement practices.
The Key Definitions: Units, Defects, and Defect
Opportunities (Neuman, Cavanagh & Pande, 2001)
The Six Sigma team needs to understand a few key terms both to collect and analyze data used to
determine the capability of its process:
♦ Unit: An item being processed, or the final product or service being delivered either to internal
customers (other employees working for the same company as the team) or external customers
(the paying customers). Examples: a car, a mortgage loan, a computer platform, a medical diagnosis,
a hotel stay, or a credit card invoice.
♦ Defect: Any failure to meet a customer requirement or performance standard. Examples: a poor
paint job, a delay in closing a mortgage loan, the wrong prescription, a lost reservation, or a
statement error.
♦ Defect Opportunity: A chance that a product or service might fail to meet a customer
requirement or performance standard.
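The unit, defect, and defect-opportunity counts feed directly into the standard DPMO and sigma-level arithmetic. A minimal sketch, using made-up counts and the conventional 1.5-sigma shift between long-term and short-term performance:

```python
from statistics import NormalDist

# Hypothetical counts: 500 invoices inspected, 6 defect opportunities each,
# 57 defects found. (Numbers are made up for illustration.)
units, opportunities_per_unit, defects = 500, 6, 57

# Defects Per Million Opportunities.
dpmo = defects / (units * opportunities_per_unit) * 1_000_000
print(f"DPMO = {dpmo:,.0f}")

# Long-term yield, converted to a short-term sigma level using the
# conventional 1.5-sigma shift.
yield_rate = 1 - defects / (units * opportunities_per_unit)
sigma_level = NormalDist().inv_cdf(yield_rate) + 1.5
print(f"sigma level = {sigma_level:.2f}")
```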
Two guidelines prevent the use of defect
opportunities from becoming a nightmare:
♦ First, you need to focus on defects that are important to the customer.
Consider a bank that regularly makes two kinds of mistakes: (1) mailing out
monthly statements a day late, and (2) entering interest payments a day late.
Which of these defects do you think is really important to most customers?
So we link opportunities in most cases to a CTQ Tree (see p. 135).
♦ Do customers served by new waitstaff complain more often than other customers? (frequency of
the problem)
♦ What do customers of new waitstaff complain about? How does that compare to complaints
received from customers of experienced wait-staff? (type of problem observed)
♦ If customers of new waitstaff complain more, does it mean they are less likely to return? (impact
of the problem)
Your team needs to have a deep understanding of the problem in order to make sound choices
about where to spend its time and where to implement solutions. Otherwise you can end up
wasting three months fixing a problem that occurs infrequently or that has no impact on customers.
(Neuman, Cavanagh & Pande, 2001)
• Do the defects clump up in categories?
♦ First, they help you think logically about potential causes of a problem; you will
still need to gather data to verify which are the real causes of a problem.
♦ Second, their effectiveness is directly related to the creativity and depth of the
thinking that goes into creating them. That’s why these tools are best used with
your team as a whole—you want many minds brainstorming ideas so you have a
broad and deep list of potential causes.
Cause and effect Analysis (Neuman, Cavanagh & Pande, 2001)
Cause-and-effect analysis lets a group start with an “effect”—a problem or, in
some cases, a desired effect or result—and create a structured list of
possible causes for it.
Benefits of cause-and-effect diagrams include:
♦ It’s a great tool for gathering group ideas and input, being basically a
“structured brainstorming” method.
♦ By establishing categories of potential causes, it helps ensure a group
thinks of many possibilities, rather than focusing on a few typical areas (e.g.,
people, bad materials).
♦ Using a cause-and-effect diagram to identify some “prime suspect”
causes gives focus to help begin process and data analysis.
♦ They help get the Analyze phase started, or keep the thought processes
moving after an initial exploration of data and the process.
Analyzing Complex Systems: The Relations
Diagram (Neuman, Cavanagh & Pande, 2001)
A relations diagram (sometimes called an interrelations
diagram or digraph) may be a more appropriate cause analysis tool
than a cause-and-effect diagram.
Interpreting a relations diagram is a matter of counting the number of
“in” and “out” arrows for each potential cause: those with the most
“out” arrows are underlying or potential root causes. In this banking
team’s example, two of the top boxes—low pay scale and lack of
opportunities for advancement—have the most “out” arrows, and
therefore warrant further investigation. (In contrast, “poor job
satisfaction” and “stressful work environment” have a lot of “in”
arrows—that means they are the effect of other underlying causes.)
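The arrow-counting interpretation described above is mechanical enough to sketch in code. The edges below are hypothetical, loosely echoing the banking example: each pair is an arrow going out of a cause and in to an effect.

```python
# Hypothetical relations-diagram edges: (cause, effect) pairs.
edges = [
    ("low pay scale", "poor job satisfaction"),
    ("low pay scale", "high staff turnover"),
    ("lack of advancement", "poor job satisfaction"),
    ("lack of advancement", "high staff turnover"),
    ("high staff turnover", "stressful work environment"),
    ("poor job satisfaction", "stressful work environment"),
]

out_count, in_count = {}, {}
for cause, effect in edges:
    out_count[cause] = out_count.get(cause, 0) + 1
    in_count[effect] = in_count.get(effect, 0) + 1

# Boxes with the most "out" arrows are candidate root causes;
# boxes with mostly "in" arrows are effects of deeper causes.
nodes = set(out_count) | set(in_count)
for node in sorted(nodes, key=lambda n: -out_count.get(n, 0)):
    print(f"{node:28s} out={out_count.get(node, 0)}  in={in_count.get(node, 0)}")
```

With these made-up edges, "low pay scale" and "lack of advancement" surface as the boxes with the most outgoing arrows, matching the interpretation in the text.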
Statistical Verification of Causes (Neuman,
Cavanagh & Pande, 2001)
There are two basic approaches to using statistics to determine cause-
and-effect relationships:
♦ Judging the degree to which a cause (X) and an output (Y) are
correlated. This can be an approximate assessment using the patterns
seen in a scatter plot (see below), or an actual calculation using
regression and correlation formulas (see the end of this chapter).
Putting together a detailed process map is always a lively and rewarding experience
for team members. They come to respect the amount of work other people do in
the process, and they also begin to realize just how much variation there is in the
methods that people use, particularly in service processes.
Another effective tool at this stage is a deployment flowchart, which adds a unique
element to a normal flowchart: who is responsible for which activities. Deployment
flowcharts are particularly helpful when working in a process that has a lot of
handoffs between individuals or groups
First-Level Analysis: Identifying Obvious Process
Problem (Neuman, Cavanagh & Pande, 2001)
♦ Disconnects: Steps in the process where there are breakdowns in communications
between shifts, between customers and suppliers, or manager and employees.
♦ Bottlenecks: Points in the process where volume of work often overwhelms capacity,
slowing the entire process downstream. (If work has to wait for someone to return from
vacation, you’re probably looking at a bottleneck.)
♦ Redundancies: Steps in the process that duplicate activities or results elsewhere in the
process; for example, the same information coming from two steps and going to the same
place.
♦ Rework loops: Points where units with missing parts or information have to be sent
back upstream or delayed at one step until the necessary work is done. Inspection steps
often trigger rework loops.
♦ Decision/Inspection points: Process steps where a variety of checks and appraisals are
made, creating delays and rework loops. In organizations plagued by high defect levels,
these points abound.
Advanced Analytical Tools (Neuman,
Cavanagh & Pande, 2001)
These kinds of questions confront Six Sigma teams when they need to:
♦ Test that there is a meaningful difference between sets of data.
♦ Create a valid hypothesis about the cause of the problem.
♦ Validate or disprove various hypotheses about causes.
♦ Prove the level of correlation and causation to someone who insists on
numbers, not just graphs.
Advanced Analytical Tools (Neuman,
Cavanagh & Pande, 2001)
To answer these questions, statisticians have come up with standards
that apply to most Six Sigma projects: they have operationally defined
as “statistically significant” anything that has less than a 5% probability
of happening by chance alone. To test for this probability, statisticians
have devised a number of tests, including …
♦ chi-square tests
♦ t-tests
♦ analysis of variance (ANOVA)
♦ multivariate analysis
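As an illustration of the 5% standard, here is a hand-rolled chi-square test of independence on a made-up 2×2 table (defects by shift). The only inputs are the observed counts and the 5% critical value for one degree of freedom, 3.841; a statistic above that threshold is "statistically significant" in the sense used here.

```python
# Hypothetical counts: defects vs. non-defects for two shifts.
observed = [[30, 970],   # shift A: defects, non-defects
            [55, 945]]   # shift B: defects, non-defects

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Chi-square statistic: sum of (observed - expected)^2 / expected.
chi_sq = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi_sq += (obs - expected) ** 2 / expected

# Critical value for df = 1 at the 5% level is 3.841.
print(f"chi-square = {chi_sq:.2f}, significant = {chi_sq > 3.841}")
```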
Advanced Analytical Tools (Neuman,
Cavanagh & Pande, 2001)
• Another set of tools that Six Sigma teams might use to test causal
theories are correlation and regression analysis. These are tests to
show the numerical measure of correlation between X variables and Y
outputs. If the team has paired data, regression analysis can help
measure the degree to which different variables influence the
outcomes. For example, a Six Sigma team finds an apparent positive
correlation between the speed that phone orders are taken by
salespeople and number of defects in the orders taken. By calculating
the “correlation coefficient,” the team discovers that only about 25%
of the defects correlate to the speed with which the orders are taken.
This is a powerful clue, but the team will need to keep probing for
other causes.
GETTING READY FOR IMPROVE (Neuman,
Cavanagh & Pande, 2001)
By the end of the Analyze phase of DMAIC, your team should have confirmed
the root causes of problems with your processes, products, and services. And
now that you’ve pinpointed the cause, you stand a better chance of
implementing changes that will have a lasting effect. Before you move on,
however, there are a few last tasks to complete:
1. Document the verified causes.
2. Update your project storyboard.
3. Create a plan for Improve.
4. Prepare for the tollgate review by your Sponsor or Leadership Council.
5. Celebrate.
Celebrate (Neuman, Cavanagh & Pande,
2001)
As before, take time to celebrate the work and progress on your Six
Sigma projects. Be sure to point out particular challenges that the team
handled well in its Analyze work, for example:
♦ Innovative data analyses that might set a precedent for other teams
in your organization.
♦ Patience in working through process analyses.
♦ People maintaining their commitment, carrying through on
assignments.
Once these steps are complete, your team is ready for Improve.
The Six Sigma Way Team Fieldbook: An Implementation Guide for Process Improvement Teams
There are two kinds of progress the team should be making by the time
it gets to Analyze. The first has to do with progress on the Six Sigma
project and how to get the word out to others in the organization. The
other has to do with the progress the team has made internally,
evolving from a group of people into something resembling a team.
Why this happens: Of all the challenges facing Six Sigma team members, reaching this
step without having already made up their minds about the cause of the problem/defect may be
one of the toughest! After all, team members were most likely chosen because they have some
relevant experience or knowledge about the process or problem—which means they’ve likely been
living with the “defect” for some time, and naturally have their own theories about what’s going on.
And in the long run, having well-informed ideas about causes is what’s going to lead to a permanent
solution.
How to prevent it: While you can’t really prevent people from making up their minds early in the
project, you can prevent their decisions from harming the project by insisting that the team have
data to back up its conclusions. Remind the team that they will be asked by their Sponsor and
others to show what led them to their conclusions. Make sure that you encourage open-minded
thinking during brainstorming discussions, especially when it comes to creating a cause-and-effect
diagram, for instance.
Improve Stage
Remember the following guidelines:
(Neuman, Cavanagh & Pande, 2001)
♦ Whatever the team selects as a solution should address the root causes
of the problem and the goal the team set for itself in the Project Charter.
♦ Although the team will brainstorm many possible solutions, one or two
will be better than the others; the team must decide which are the best
options and determine what it will take to make them work.
♦ The solutions must not cost so much or be so disruptive that the
expenses outweigh the benefits in the long run.
♦ The chosen solutions must be tested to prove their effectiveness before
they are completely implemented.
But even before the team reaches the point of implementing solutions to
make improvement, it must struggle with the poverty of imagination that
often follows the brilliance of analysis, as the following vignette shows.
STEPS TO WORKABLE, EFFECTIVE SOLUTIONS
(Neuman, Cavanagh & Pande, 2001)
There are five steps for reaching that goal:
1. Generate creative solution ideas.
2. Cook the raw ideas.
3. Select a solution.
4. Pilot test.
5. Implement full-scale.
Generate creative solution ideas. (Neuman,
Cavanagh & Pande, 2001)
1. Be clear about what your brainstorm is supposed to produce. The team needs to be focused on the target.
2. Set a quota of ideas. Left alone to brainstorm, most people come up with three or four ideas and quit. If
asked for 15 ideas, most people double the “average” output. Go for quantity!
3. Play off other people’s suggestions. Not easy to do this when you’re trying to be brilliant, but even Edison
got many good tips from other people. Listen carefully!
4. List ideas without comment, discussion, or criticism. Tearing up each idea as it gets written down is not only
time-consuming, it’s depressing. Rather than spending 10 minutes talking about the first idea to pop out, get
10 ideas and discuss them afterward. Having people write down 10 ideas on separate sticky notes before
slapping them on the wall will discourage premature discussion. Keep the storm moving!
5. Challenge assumptions and go a little crazy. Easier said than done, unless you can think like a three-year-old
and keep asking “why” about everything you’re working on. Why are tires underneath the car? Why can’t
they go on top of the car? That would save wear and tear on the treads, especially if they were filled with
helium! Get crazy before you get practical!
6. Brainstorm one day; check back the next day. The old notion of “let me sleep on it” makes a lot of sense
around brainstorming. Ideas just seem to get a little crazier and better if you come back to them the next day.
Identifying Process Changes (Neuman,
Cavanagh & Pande, 2001)
Simplification
Straight-line processing
Parallel processing
Alternative paths
Bottleneck management
Front-loaded decision making
Standardized options
Single point of contact or multiple contacts
STEP 2: COOK THE RAW IDEAS: SYNTHESIZING
SOLUTION IDEAS (Neuman, Cavanagh & Pande,
2001)
A. Refine the brainstormed list.
B. Identify what portions of the problem each of the individual solution
ideas will address.
C. Use the information from B to generate “complete solution” ideas.
D. Document the “full solution” ideas.
Step 3: Select a Solution (Neuman, Cavanagh & Pande, 2001)
When you’re unpacking the new computer, which one do you read, the two-inch thick manual or the 10-page “Highlights for
Operating Startup”? (Or maybe you don’t read either one!) Keep it brief!
Anticipate problems and warning signs that cropped up during the pilot phase. These are the document equivalent of those glass
panels that say “Break in Case of Fire and Remove Axe.”
If someone has to take the company shuttle over to the third floor of building 16 or spend an hour on the computer to dig out
required procedures, they probably won’t bother, and undesired variation will creep back into the process.
If the process is too complicated, people will stop updating what they’re doing, until what the documents say and what people
actually do become very different things. Some companies handle this problem by having a department specially devoted to
“documentation control,” but we recommend you keep control of documentation in the hands of people who actually manage the
process and know what needs to be updated or dropped.
PART 3. KEEPING SCORE: ESTABLISHING ONGOING
PROCESS MEASURES (Neuman, Cavanagh & Pande,
2001)
There are three obvious starting points where your team should look for measures:
a. Examine a SIPOC map of the improved process. As usual, customer
requirements are the starting point. Obviously the Six Sigma team must measure
process outcomes for conformance to customer requirements, defects, and
process variation.
b. Decide which upstream process measures are linked to the improvements,
measures that will predict problems downstream in the outputs. For example, if
“on time” is a key customer requirement and process measurement shows that two
of the key process steps are taking longer and longer, the team needs to find and
eliminate the cause of the trend before it causes the output to be late—a defect.
c. Look at critical input measures that help to predict the quality of process steps
and key outputs
PART 4. GOING THE NEXT STEP: BUILDING A
PROCESS MANAGEMENT PLAN (Neuman,
Cavanagh & Pande, 2001)
A process management plan can cover:
♦ Current process map. The manager of the improved process needs to see at a glance the flow of
activities and decisions in the process. A concise process map provides this visual aid.
♦ Action alarms. Develop a process response plan that clearly marks the points in the process
where measures can take the pulse of inputs, process operation, and outputs. With thresholds
indicating when the quality of any of these is degrading, the process response plan lets the process
manager know when to take action. For example, the failure to complete three customer orders on
time could signal the need to go to a contingency plan for deliveries.
♦ Emergency fixes. Once an action alarm goes off, it’s important to have emergency fixes or back-
up plans already spelled out so that employees don’t have to improvise them. Contrary to the old
saying, very few people work well under pressure.
♦ Plans for continuous improvement. By tracking the occurrence of problems in the process, the
process response plan also gives a basis for deciding on the need to have a Six Sigma team overhaul
weak parts of the process. Once enough process response plans are in place, and information from
them is collected and analyzed, managers will have an extensive pool of projects to be attacked by
future Six Sigma teams.
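The action-alarm idea above can be sketched in a few lines. The threshold of three late orders comes from the example in the text; the function and the sample data are illustrative assumptions:

```python
# Sketch of an "action alarm": flag when late orders cross a threshold,
# signalling the need to switch to the contingency delivery plan.
LATE_ORDER_THRESHOLD = 3  # from the text's example: three late orders

def check_action_alarm(order_outcomes, threshold=LATE_ORDER_THRESHOLD):
    """Return True when the count of late orders reaches the threshold."""
    late = sum(1 for on_time in order_outcomes if not on_time)
    return late >= threshold

# Usage: True/False flags for each order being completed on time
recent_orders = [True, True, False, True, False, False, True]
alarm = check_action_alarm(recent_orders)  # three late orders -> True
```

In practice the process response plan would pair each such alarm with a pre-defined emergency fix, so nobody has to improvise under pressure.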
ENDING THE PROJECT (Neuman, Cavanagh &
Pande, 2001)
As tired as team members get of the project, sometimes it’s hard to
bring everything to a close. After all, by this time most teams have a
high level of camaraderie and are likely working very well as a unit. Still,
even good things must come to an end. In this case, that means:
• 1. Completing your storyboard.
• 2. Preparing for the final tollgate review.
• 3. Celebrating the end!
CONTROL DO’S (Neuman, Cavanagh & Pande,
2001)
Do
• Document the improved process. Keep the documents brief, clear,
and visual wherever possible.
• Balance your measures. Select key input and process measures to
predict and interpret output measures.
• Create channels of information back from the customer to you.
CONTROL DON’TS (Neuman, Cavanagh &
Pande, 2001)
• File the updated process documents away. If you do, people will
follow their own ideas, variation will increase, problems will occur …
you know the story.
• Expect the data you collect to confirm your assumptions. It’s easy to
accept data that confirms your pet theories. Be prepared—and open-
minded—when the data refute what you expected to see.
• Forget the process maps. They are worth thousands and thousands
of unread words in three-ring binders.
Failure Modes and Effects Analysis (FMEA)
(George, 2002)
Like several other tools described previously in this chapter, failure modes and effects analysis
(FMEA) is primarily a tool of focus. FMEA is used to prioritize risks to the project and document
recommended actions. Each potential type of failure of a product or process is assessed relative to
three criteria on a scale of 1 to 10:
• The likelihood that something will go wrong (1 = not likely; 10 = almost certain).
• The severity of a failure (1 = little impact; 10 = extreme impact, such as personal injury or high
financial loss).
• The likelihood that the failure will be detected (1 = almost certain to be detected; 10 = unlikely
to be detected).
The three scores for each potential failure are multiplied together to produce a combined rating
known as the Risk Priority Number (RPN): those with the highest RPNs provide the focus for
further process/redesign efforts.
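The RPN arithmetic can be shown as a short sketch. The failure modes and ratings below are hypothetical; the three ratings follow the 1-10 scales described above:

```python
def rpn(likelihood, severity, detectability):
    """Risk Priority Number: the product of the three 1-10 ratings."""
    for score in (likelihood, severity, detectability):
        if not 1 <= score <= 10:
            raise ValueError("each rating must be between 1 and 10")
    return likelihood * severity * detectability

# Rank hypothetical failure modes by RPN, highest first: the top of the
# list is where process/redesign effort should focus.
failure_modes = {
    "seal leaks": rpn(4, 8, 6),       # 192
    "motor overheats": rpn(2, 10, 5), # 100
    "label smudged": rpn(7, 2, 3),    # 42
}
ranked = sorted(failure_modes.items(), key=lambda kv: kv[1], reverse=True)
```

Note how a moderate-likelihood failure can still top the ranking when its severity and poor detectability multiply together.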
Gage Repeatability and Reproducibility (R&R)
(George, 2002)
In some ways, it might be argued that gage repeatability and
reproducibility (gage R&R) should appear first on everyone's tool list,
because it's of fundamental importance. Implicit in our discussion is
the assumption that the measurements being taken are accurate and
consistent. But this assumption is not always true. Gage R&R is the
method by which physical measurement processes are studied and
adjusted to improve their reliability. "Repeatability" means that
someone taking the same measurement on the same item with the
same instrument will get the same answer. "Reproducibility" means
that different people measuring the same item with the same
instrument will get the same answer.
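The two terms can be illustrated with a minimal sketch, assuming made-up measurements and a naive variance split (a real gage R&R study uses the average-and-range method or ANOVA):

```python
import statistics

# Hypothetical data: two operators each measure the same part three times
# with the same instrument.
measurements = {
    "operator_A": [10.01, 10.03, 10.02],
    "operator_B": [10.06, 10.08, 10.07],
}

# Repeatability: variation when the SAME person repeats the measurement
# (here, the average within-operator variance).
repeatability_var = statistics.mean(
    statistics.pvariance(trials) for trials in measurements.values()
)

# Reproducibility: variation between DIFFERENT people measuring the same
# item (here, the variance of the operator means).
operator_means = [statistics.mean(trials) for trials in measurements.values()]
reproducibility_var = statistics.pvariance(operator_means)
```

In this made-up data each operator is quite consistent (good repeatability) but the two disagree with each other, so reproducibility dominates the measurement error.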
Run Charts (George, 2002)
By definition, a process is something that is repeated
periodically over time. It stands to reason, therefore, that much of the data a
team collects will be produced over time as well—such as key process
measures taken each shift, number of defects produced per hour or per day,
total lead time each day, and so on.
There is a special subset of tools useful for displaying and analyzing data that
is time-ordered, the simplest of which is called a run chart.
You can learn a lot simply by plotting data in time order, such as …
The general range of scatter (variation) in the points.
Whether the data points are generally stable around some mean or if there
are clear trends upward or downward.
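The two things to look for on a run chart (the range of scatter, and whether points are stable around the mean or trending) can be computed as a minimal sketch; the defect counts are hypothetical:

```python
import statistics

# Hypothetical time-ordered data: defects produced per day
defects_per_day = [5, 7, 6, 8, 6, 9, 7, 10, 8, 11]

center = statistics.mean(defects_per_day)             # centerline of the run chart
spread = max(defects_per_day) - min(defects_per_day)  # general range of scatter

# A crude trend check: are the later points mostly above the overall mean?
second_half = defects_per_day[len(defects_per_day) // 2:]
trending_up = statistics.mean(second_half) > center
```

Plotting these points in time order would make the upward drift visible at a glance, which is the whole point of a run chart.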
Control Charts (George, 2002)
Control charts are the high-power version of run charts. The purpose of a control chart is to
help a team determine whether the variation seen in the data points is a normal part of the process (known as
"chance" or "common cause" variation) or if something different or noticeable is happening ("special cause" or
"assignable" variation). There are different improvement strategies depending on which type of variation is
present (common or special cause), so it is important for a team to know the difference. There are several
simple statistical rules used to analyze the patterns and trends on the control chart to determine whether
special cause variation is present.
The basic structure of a control chart is always the same. The charts show the following:
Data points plotted in time order.
A centerline that indicates the average.
Control limits (lines drawn approximately 3 standard deviations from the average) that indicate the expected
amount of variation in the process.
What differs from chart to chart is the type of data plotted on the chart and the specific formulas used to
calculate the control limits. Knowing what kind of data to collect and the best way to calculate
control limits is a skill that a black belt will develop only through special training or under the guidance of a
master black belt or other statistical expert.
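The structure described above (time-ordered points, a centerline, limits about 3 standard deviations out) can be sketched for individuals data. The moving-range estimate of sigma (with the d2 constant 1.128 for ranges of size 2) is one common convention; the data are hypothetical:

```python
import statistics

# Hypothetical individuals data with one unusual point
data = [24, 26, 25, 27, 24, 25, 26, 25, 40, 25]

center = statistics.mean(data)  # centerline: the average

# Estimate sigma from the average moving range (d2 = 1.128 for n = 2),
# which keeps a single outlier from inflating the limits as much as the
# ordinary standard deviation would.
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
sigma_hat = statistics.mean(moving_ranges) / 1.128

ucl = center + 3 * sigma_hat  # upper control limit
lcl = center - 3 * sigma_hat  # lower control limit

# Points beyond the limits suggest special (assignable) cause variation
special = [x for x in data if x > ucl or x < lcl]
```

Here the value 40 falls outside the upper control limit, so it would be investigated as potential special cause variation rather than treated as ordinary process noise.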
Control Charts (George, 2002)
It takes time and effort to create a control chart, so the first and most important decision to make is
when to create one. When control charts are used as part of a DMAIC project, that decision should
be fairly clear: you want to monitor variation in characteristics of the process and/or its output that
are critical to quality in terms of your project goals. In other words, don't have black belts create
control charts just because they can! Pick and choose where to use these tools.
In and of itself, creating a control chart does you no good. You have to understand what the chart is
telling you and take appropriate action. We create control charts for one purpose: to help us
distinguish between two types of variation: common cause variation and special cause variation
(also known as assignable variation).
Common cause variation is inherent in the process; it is present all the time to a greater or lesser
extent.
Special cause variation is a change that occurs because of something different or unusual in the
process.
As mentioned above, the reason we need to tell the difference between special and common cause
variation is because there are different strategies for reacting to them.
DFSS (Morgan & Brenig-Jones, 2012)
Design is a funny word. Some people think Design means how it looks. But, of course, if you
dig deeper, it’s really how it works.
—Steve Jobs
Design for Six Sigma (DfSS) is a philosophy for designing new products, services and
processes often with high customer involvement from the outset, though that won’t
always be so – consider inventions by people such as Jobs and Dyson, for example.
When redesigning a process, we focus on the customer. When designing a new service or
product, there may not be a customer yet. In this case, we concentrate on the demands of
the (potential) marketplace.
Where the customer is involved, we mean both end user customers and internal business
stakeholders and users. Customer requirements and the resulting CTQs (Critical To Quality
– see Chapter 4) are established early on and the DMADV framework rigorously ensures
that these requirements are satisfied in the final product, service or process.
SIMPLIFIED FMEA (Brussee, 2012)
Manufacturing. Before implementing any new design, process, or change, do a simplified FMEA. An
FMEA converts qualitative concerns into specific actions. You need input on what possible negative
effects could occur.
Sales and marketing. A change in a sales or marketing strategy can affect other products or lead to
an aggressive response by a competitor.
A simplified FMEA is one way to make sure that all the possible ramifications are understood and
addressed.
Accounting and software development. The introduction of a new software package or a different
accounting procedure sometimes causes unexpected problems for those affected. A simplified
FMEA will reduce unforeseen problems and trauma.
Receivables. How receivables are handled can affect future sales. A simplified FMEA will help
everyone involved to understand the concerns of both customers and internal salespeople and to
identify approaches that minimize future sales risks while reducing overdue receivables.
Insurance. The balance between profits and servicing customers with insurance claims is dynamic. A
simplified FMEA helps keep people attuned to the risks associated with any actions under
consideration.
SIMPLIFIED FMEA (Brussee, 2012)
A simplified FMEA is a method for reviewing things that can go wrong even if a
proposed project, task, or modification is completed as expected. Often a project
generates so much support and enthusiasm that it lacks a healthy number of
skeptics, especially with regard to any effects that the project may have on things
that are not directly related to it. Everyone is working on the details of getting the
project going, and little effort is being spent on looking at the ramifications beyond
the specific task!
The simplified FMEA form is a way of taking a critical look at a project before it is
implemented; it often saves a lot of cost and embarrassment. In doing a simplified
FMEA, it is assumed that all inherent components of the direct project will be done
correctly. (They should have been covered in regular project reviews.) The
emphasis in a simplified FMEA is on identifying affected components or issues
downstream or in tangentially related processes in which issues may arise because
of the project.
SIMPLIFIED FMEA
(Brussee, 2012)
A negative number means that the solution actually makes the concern worse.
SIMPLIFIED FMEA (Brussee, 2012)
Enter this value in the upper half of the block beneath the solution
item and opposite the concern. After these ratings are complete,
multiply each rating by the concern value on the left. Enter this product
in the lower half of each box. Add all the values in the lower half of the
boxes in each column and enter the sum in the Totals row indicated
near the bottom of the form. These are then prioritized, with the
solution with the highest value being the number one consideration for
implementation.
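The column arithmetic described above can be sketched as follows; the concerns, weights, and ratings are illustrative assumptions:

```python
# Simplified FMEA scoring sketch: each concern carries a weight; each
# candidate solution gets a rating against each concern (a negative rating
# means the solution makes that concern worse). Column totals rank the
# solutions, highest first.
concerns = {"downtime": 5, "training burden": 3, "customer confusion": 4}

ratings = {  # solution -> {concern: rating}
    "new software": {"downtime": 3, "training burden": -2, "customer confusion": 4},
    "extra staff":  {"downtime": 2, "training burden": 4,  "customer confusion": 1},
}

# Lower-half-of-the-box products, summed down each column into the Totals row
totals = {
    solution: sum(concerns[c] * r for c, r in scores.items())
    for solution, scores in ratings.items()
}
best = max(totals, key=totals.get)  # number one consideration for implementation
```

The negative training-burden rating for "new software" shows how a solution to one concern can work against another, exactly as the form is designed to expose.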
[FMEA form layout: columns for No., Potential failure modes, Potential failure effect, Severity, Occurrence, Detectability, and RPN, followed by a second set of Severity, Occurrence, Detectability, and RPN columns for re-scoring during the Analysis, Improvement, and Control phases.]
• QFD is fundamentally a quality planning and management process to drive to the best possible
product and service solutions.
• QFD helps enable management to evaluate whether the product plans are worth the investment.
Working through the QFD process together provides the important benefit of alignment—within
the project team, and to management desires.
Quality Function Deployment (Cohen &
Ficalora, 2009)
Any place there are voices to be heard, analyzed, and driven into development activities, QFD can
be utilized. In general, there are four broad categories of voices in the Six Sigma community that
may be linked to development activities involving QFD:
1. Voice of the Customer (VOC): the voices of those who buy or receive the output of a process,
clearly an important voice in product and process development
2. Voice of the Business (VOB): the voices of funding and sponsoring managers for marketing and
development activities
3. Voice of the Employee (VOE): the voices of those employees who work in your company and/or
develop new products, services, and technologies
4. Voice of the Market (VOM): the voices of trendsetting lead-user or early-adopter market
segments, or market-defining volume purchasers
Quality Function Deployment (Brussee, 2012)
Here are some examples of how a QFD is used.
Manufacturing. Use the simplified QFD to get input from customers on their needs at the start of
every new design or before any change in process or equipment.
Sales and marketing. Before any new sales initiative, do a simplified QFD, inviting potential
customers, salespeople, advertisement suppliers, and other such groups to give input.
Accounting and software development. Before implementing a new program language or software
package, do a simplified QFD. A customer’s input is essential for a seamless introduction.
Receivables. Do a simplified QFD on whether your approach to collecting receivables is optimized.
In addition to those who are directly involved in collections, invite customers who are overdue on
receivables to participate. (You may have to give them some debt relief to get their cooperation.)
Insurance. Do a simplified QFD with customers to see what they look for when they pick an
insurance company or what it would take to make them switch. (Brussee, 2012)
Quality Function Deployment (Brussee, 2012)
QFD originally stood for quality function deployment. Years ago, when quality departments were
generally much larger than they are now, quality engineers were “deployed” to the customers to
rigorously probe a customer’s needs and then create a series of forms that made the transition from
those customer needs to a set of actions that the supplier could take. The simplified QFD attempts
to accomplish the same task in a condensed manner.
What is presented here is a simplified version of the QFDs that are likely to be described in many Six
Sigma classes. Some descriptions of these traditional QFDs and the rationale for the simplification
will be given later in this chapter. The simplified QFD is usually used in the Define or the Improve
step of the DMAIC process.
A simplified QFD is a Six Sigma tool that does not require any statistics. However, it is usually
necessary to do a simplified QFD to understand what actions are needed to address a problem or
implement a project. The specific actions that are identified in the QFD, or in any of the other
qualitative tools, are often what trigger the application of the statistically based Six Sigma tools.
Quality Function Deployment (Brussee, 2012)
Many product, process, and service issues are caused by not incorporating
inputs from customers and/or from suppliers of components and raw
materials early in a design or process change. Often the manufacturers’
decision makers just assume that they and the people from whom they
source the components and raw materials already know what the customers
want.
The customers in this case include everyone who will touch the product
while or after it is made. This includes employees in production, packaging,
shipping, and sales as well as the end users. They are all influenced by any
design or process change. The people who operate equipment, who do
service work, or who implement the change can be both customers and suppliers.
Quality Function Deployment (Brussee, 2012)
The most difficult (and important) step in doing any QFD is getting the
suppliers, operators, and customers together to prepare the required
QFD form(s). Every group that is affected by the project should be
represented. The desires of one group will sometimes lead to
limitations on others, and simultaneous discussions among the factions
will often identify options that were not previously considered, to
arrive at the best possible overall solution.
Quality Function Deployment (Brussee, 2012)
The simplified QFD form is a way of quantifying design options, always measuring these options
against customer needs. The first step in preparing the simplified QFD form is to make a list of the
customer needs. Then, assign a value from 1 to 5 to each need:
A negative number means that it is detrimental to that customer need. (A negative number
is not that unusual, since a solution to one need sometimes interferes with another need!)
Quality Function Deployment (Brussee, 2012)
Put the rating in the upper half of the block beneath the design item and opposite
the need. Then, multiply the design rating by the value assigned to the
corresponding customer need. Enter this result in the lower half of the square
under the design action item rating. These values will have a possible high of 25.
Once all the design items have been rated against every customer need, add up the
values in the lower half of the boxes under each design item and enter the sums
into the Totals row at the bottom of the sheet. The solutions with the highest
values are usually the preferred design solutions to address the customer needs.
Upon reviewing these totals, someone may feel that something is awry and want to
go back and look again at some ratings or design solutions. This second (or third)
review is extremely valuable. Also, the needs that customers rated 5 should be discussed one
at a time to make sure that they are being addressed sufficiently.
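The scoring mechanics can be sketched in a few lines. The needs, design options, and ratings below are illustrative assumptions (loosely echoing a license-plate-cover example), with each cell capped at the 25 maximum the text describes:

```python
# Simplified QFD scoring sketch: customer needs weighted 1-5, each design
# option rated 1-5 against each need (negatives allowed when an option is
# detrimental to a need). Cell value = rating * need weight (max 25);
# column totals rank the design options.
needs = {"easy to read plate": 5, "low cost": 4, "durable": 3}

design_options = {  # option -> {need: rating}
    "cast plastic cover": {"easy to read plate": 5, "low cost": 4,  "durable": 3},
    "steel casting":      {"easy to read plate": 4, "low cost": -2, "durable": 5},
}

# Totals row at the bottom of the form
totals = {
    option: sum(needs[n] * r for n, r in scores.items())
    for option, scores in design_options.items()
}
top = max(totals, key=totals.get)  # usually the preferred design solution
```

After computing the totals, the team would still walk through each need weighted 5 to confirm the winning option actually serves it, as the review step above recommends.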
Quality Function Deployment (Brussee, 2012)
In the simplified QFD form, near the bottom, design action items are grouped when only
one of the options can be done. The priorities within a group are only among the items in
that group.
In this case, the priorities showed that the supplier should cast a plastic license plate cover
with a built-in plastic lens. This precludes the need for a separate lens, which is why NA
(not applicable) is shown for both items in that grouping. The unit should be mounted
using plastic screws, with holes for all plates cast in. Gold or silver plating is an option that
can be applied to the rim of the plastic.
Note that some of the luxury items (like steel casting) weren’t picked because other factors
were deemed to be more important. Having the customers there for these discussions is
very critical, so that they don’t feel that the supplier is giving them something other than
what they really want. Often customers will start out with a wish list of items that is then
trimmed down to the critical few.
Quality Function Deployment (Brussee, 2012)
The form can be tweaked to make it more applicable to the product,
process, or service that it is addressing. What is important is getting
involvement from as many affected parties as possible and using the
simplified QFD form to drive the direction of the design.
The simplified QFD form should be completed for all new designs or for
process modifications. The time and cost involved in holding the
required meetings will be more than offset by making the correct
decisions up front, rather than having to make multiple changes later.