
Lean Six Sigma
Green Belt
Mohammad Ali Toosy
Lean Six Sigma Master Black Belt
Trap of Success (Hays, 2018)
In Lean Six Sigma, avoiding the tendency to jump to conclusions and make assumptions about things is crucial.

What are your reasons?
• You’re contemplating applying Lean Six Sigma in your business or organisation, and you need to understand what you’re getting yourself into.
• Your business is implementing Lean Six Sigma and you need to get up to speed. Perhaps you’ve been lined up to participate in the programme in some way.
• Your business has already implemented either Lean or Six Sigma and you’re intrigued by what you might be missing.
• You’re considering a career or job change and feel that your CV or resume will look much better if you can somehow incorporate Lean or Six Sigma into it.
• You’re a student in business, operations or industrial engineering, for example, and you realise that Lean Six Sigma could help shape your future.
(Morgan & Brenig-Jones, 2016)
Learning Objectives
The learning objectives for this training are to enable you to:
• Understand why a company should utilize the Six Sigma and Lean
methodologies
• Learn the roles of measurement and statistics in Six Sigma
• Gain exposure to a range of tools, from simple to advanced
• Understand the value of combining Six Sigma with Lean methodology
• Understand when and why to apply Six Sigma and Lean tools
• Engage in a step-by-step application of the methodology and tools
(Fliedner, 2015)

There is nothing more difficult to plan, more doubtful of success, nor more dangerous to manage than the creation of a new system. For the initiator has the enmity of all who profit by the preservation of the old system and merely lukewarm defenders in those who would gain by the new one.
Machiavelli (1469–1527)
Walter Shewhart (1891-1967)
Working at Western Electric Company, Shewhart developed control chart techniques that helped to distinguish between "assignable-cause" and "chance-cause" variation. Shewhart stressed that bringing a production process into a state of "statistical control" is necessary to predict future output and to manage a process economically.
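Shewhart's distinction can be sketched numerically. The snippet below is a minimal illustration of one common Shewhart chart (an individuals chart with 3-sigma limits estimated from the average moving range); the measurement data are made-up numbers, not from any source cited here.

```python
# Sketch of a Shewhart individuals chart, with illustrative data.
# Common-cause spread is estimated from the average moving range (MRbar),
# and 3-sigma limits separate "chance-cause" from "assignable-cause" variation.
baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 10.1]

center = sum(baseline) / len(baseline)
moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)
sigma_hat = mr_bar / 1.128  # d2 constant for moving ranges of size 2

ucl = center + 3 * sigma_hat  # upper control limit
lcl = center - 3 * sigma_hat  # lower control limit

def in_control(x):
    """True if a new observation shows only chance-cause variation."""
    return lcl <= x <= ucl

print(f"center={center:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}")
print(in_control(10.05), in_control(11.2))  # 11.2 signals an assignable cause
```

A point outside the limits is only a signal to investigate; the chart itself cannot say what the assignable cause is.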
W. EDWARDS DEMING’S 14 POINTS
1. Create constancy of purpose for improving products and services.
2. Adopt the new philosophy.
3. Cease dependence on inspection to achieve quality.
4. End the practice of awarding business on price alone; instead, minimize total cost by working with a single supplier.
5. Improve constantly and forever every process for planning, production and service.
6. Institute training on the job.
7. Adopt and institute leadership.
8. Drive out fear.
9. Break down barriers between staff areas.
10. Eliminate slogans, exhortations and targets for the workforce.
11. Eliminate numerical quotas for the workforce and numerical goals for management.
12. Remove barriers that rob people of pride of workmanship, and eliminate the annual rating or merit system.
13. Institute a vigorous program of education and self-improvement for everyone.
14. Put everybody in the company to work accomplishing the transformation.
Joseph Juran (1904-2008)
Juran worked at Western Electric with Deming and Shewhart. His contributions included the development and implementation of strategic quality planning, including:
· Quality improvement
· Quality planning
· Quality control

Juran's ten steps of the quality improvement process are:
1. Build awareness of the need and opportunity for improvement.
2. Set goals for improvement.
3. Organize to reach the goals.
4. Provide training throughout the organization.
5. Carry out projects to solve problems.
6. Report progress.
7. Give recognition.
8. Communicate results.
9. Keep score.
10. Maintain momentum by making annual improvement part of the regular systems and processes of the company.
Steve Jobs on
Joseph Juran
and Quality
Genichi Taguchi (1924-2012)
• The Robust Design Method, which emphasizes building quality in through design rather than through inspection
• The Quality Loss Function, which assigns a financial value to customers’ increasing dissatisfaction as product performance falls below the desired target, and to the increasing costs incurred as product performance rises above that target
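Taguchi's loss function is commonly written L(y) = k(y - m)², where m is the target performance and k is a cost constant. The sketch below is a minimal illustration; the target and the constant k are made-up figures chosen only to show the shape of the curve.

```python
# Sketch of Taguchi's Quality Loss Function: L(y) = k * (y - m)**2.
# Loss grows quadratically as performance y deviates from the target m,
# in either direction. The target and k below are illustrative assumptions.
def quality_loss(y, target=10.0, k=50.0):
    """Financial loss (e.g. dollars per unit) for performance value y."""
    return k * (y - target) ** 2

print(quality_loss(10.0))  # on target: no loss
print(quality_loss(9.0))   # one unit below target
print(quality_loss(11.0))  # one unit above target: same loss
```

The symmetry is the key point: unlike a pass/fail specification limit, any deviation from target carries a cost, which is why Taguchi ties quality to design rather than inspection.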
Dr. Kaoru Ishikawa (1915-1989)
Ishikawa is considered the father of Japanese quality control efforts and was involved with the Japanese quality movement from its inception. He was instrumental in making the quality movement a nationwide phenomenon through his educational efforts and his work with the Union of Japanese Scientists and Engineers. Ishikawa states that quality control is the practice of developing, designing, producing and servicing a quality product that is most economical, most useful and always satisfactory to the consumer. Ishikawa contributed to the development of a cause-and-effect analysis diagram known as the "fishbone diagram" or "Ishikawa diagram".
Armand Feigenbaum (1920-2014)
His quality philosophy emphasizes the need for everyone in the organization to focus
obsessively on serving the external and internal customers. To this end, Total Quality
Control provides four management fundamentals of total quality:

1. Make quality a full and equal partner, with innovation starting from the inception
of product development.
2. Emphasize getting high-quality product design and process matches upstream,
before manufacturing planning has frozen the alternatives.
3. Make full-service suppliers a quality partner at the beginning of design rather
than implementing a quality surveillance program later.
4. Make the acceleration of new product introduction a primary measure of the
effectiveness of a company’s quality program.
Philip Crosby (1926-2001)
 He was one of ITT’s first vice presidents of corporate quality, and gained prominence in the quality field after publishing Quality Is Free in 1979. Subsequently, he founded Philip Crosby Associates, a quality management consulting firm, and the Quality College, an institute that provides quality training for top management.

 One of Crosby’s major contributions was making quality meaningful and accessible to American executives. He promoted addressing quality problems through existing management and organizational structures rather than from a statistical basis.
Jack Welch (1935-2020)
Welch is the former CEO of General Electric who transformed the corporation by bringing in Mikel Harry together with Six Sigma as a core business strategy.
Mikel Harry (1951-2017)
Founder of the Six Sigma Academy, Harry is also known as "the principal architect of Six Sigma".
Bob Emiliani on Lean
Any employee who has experienced REAL Lean knows that it:
1. Humanizes the workplace and improves cooperation, communication, and enthusiasm for work
2. Focuses and energizes me
3. Adds skills to my repertoire
4. Increases my knowledge
5. Increases my creativity
6. Makes my job more valuable and secure
If these are not the outcomes, then you’re not doing Lean management. And you’re causing harm to people. That is not in your job description. So why do you do it?
https://bobemiliani.com/eliminating-the-six-criticisms-of-lean/
Common Sense (Kelly, 2018)

The three great essentials to achieve anything worthwhile are: hard work, stick-to-itiveness, and common sense.
Thomas A. Edison

Common sense is instinct. Enough of it is genius.
George Bernard Shaw

Application of common sense is intelligence.
Ramana Pemmaraju

Common sense is seeing things as they are; and doing things as they ought to be.
Harriet Beecher Stowe
DO WE NEED LEAN IN SERVICES? (Ross & Liker, 2016)
Has anyone reading this had a service horror story you would like to share? Put another way, is there anyone who has never had a terrible, frustrating experience? Recalling these abysmal experiences is easy. We wonder how many examples of excellent service our readers can come up with. Have you called your cable service provider, or your electric company, or the Internal Revenue Service, and quickly talked to a real person who graciously and efficiently solved your problem? The answer may be that service was excellent when you were dealing with a salesperson, but terrible once you made the purchase and then had a problem.
DO WE NEED LEAN IN SERVICES? (Ross & Liker, 2016)
Do power outages get efficiently resolved? Is road repair efficient? When was the last time you remember passing a road repair operation and seeing people actually working? Anyone waited on the plane parked on the tarmac after the pilot explained there was a minor issue that would take minutes to fix, only to find that nobody had arrived at the plane for 30 minutes or more? (I [Jeffrey] am doing that as I write this.) Read any interesting magazines in an office of a doctor, dentist, or your auto repair facility while you looked at your watch wondering when you would be served? Waited for a contractor or inspector on that exciting new construction project?
DO WE NEED LEAN IN SERVICES? (Ross & Liker, 2016)
Accenture conducted an enlightening study of the life insurance industry—that bastion of service excellence. According to the study, about $470 billion of insurance business globally will be up for grabs because of unhappy customers. According to a survey of more than 13,000 customers in 33 countries, only 16 percent (fewer than 1 in 6) said they would “definitely buy more products from their current insurer.” Only 27 percent rated their insurance provider’s “trustworthiness” highly.
(Peters, 2010)
Excellent firms don’t believe in excellence—only in constant improvement and constant change.
(Attolico, 2018)
He that does not foresee things in the distant future exposes himself to unhappiness in the near future.
Confucius
Lean Transformation
(Kelly, 2018)
• So, what is a “transformation”? Generically, a transformation is a thorough or dramatic change in the form or appearance of an item.

• Therefore, an Enterprise Lean Transformation is the end-to-end transformation of a company’s order fulfillment process: i.e. a Lean Transformation of the order fulfillment process would maximize customer value while minimizing waste throughout the process and sub-processes.
Lean Transformation
(Kelly, 2018)
• The focus of the Lean Transformation should be to provide maximum value to the customer through perfect processes that consist of zero waste.

• A Lean Culture Transformation changes the focus from the optimization of segregated activities, technologies, assets, and vertically structured functional departments (i.e. departmental silos) to the optimization of the flow of products and/or services through value streams that flow horizontally to customers across activities, technologies, assets, and departments.
Lean Transformation (Kelly, 2018)
To transform your business, the focus should be on the processes, not the individual functions, as a value stream is no more than a series of related processes that deliver a product or a service. The best way to ensure that you deliver a quality product or service is to control the process, and the first step in the Lean Transformation is to identify and remove inefficiencies (a.k.a. waste) from the process (and organization). Eliminating waste throughout the value streams, instead of at isolated activities, creates processes that require fewer operators and assets and less space and time to make products, or that enable the delivery of services at far lower costs and with far fewer defects compared with traditional business thinking. Lean Transformed companies are more responsive to changing customer desires and deliver higher quality, lower cost, and shorter throughput times. In addition, information management becomes simpler and more accurate.
Enterprise Lean Transformation Model (Kelly, 2018)

Appetite for Change (Kelly, 2018)
Maximising Value for Customers (Kelly, 2018)
 raising quality levels – zero defects, Six Sigma;
 raising delivery and service levels – 100% on-time-in-full (OTIF), 100% supply reliability, transparent collaboration;
 reducing costs – win–win cost reduction sharing, managing supply chain inventory and cost;
 reducing response time – shortening process lead time, shortening overall order fulfillment times, shortening replenishment times.
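OTIF is usually computed as the share of orders delivered both on time and in full. The sketch below is a minimal illustration; the order records and field names (`on_time`, `in_full`) are made-up assumptions, not from any system cited here.

```python
# Sketch of an on-time-in-full (OTIF) calculation over illustrative orders.
# An order counts toward OTIF only if it is both on time AND complete.
orders = [
    {"on_time": True,  "in_full": True},
    {"on_time": True,  "in_full": False},  # delivered on time but short
    {"on_time": False, "in_full": True},   # complete but late
    {"on_time": True,  "in_full": True},
]

otif_count = sum(1 for o in orders if o["on_time"] and o["in_full"])
otif_pct = 100 * otif_count / len(orders)
print(f"OTIF: {otif_pct:.0f}%")  # 2 of 4 orders qualify -> 50%
```

Note the compound condition: an order that is on time but incomplete, or complete but late, fails OTIF, which is why the metric is stricter than on-time delivery alone.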
(Shaffie & Shahbazi, 2012)
Since the 1980s, organizations have attempted to introduce quality methodologies with varying degrees of success. The reasons for their success or failure are numerous, but one reason stands out more than any other: the lack of a sustainable quality culture. With Lean Six Sigma (LSS), an organization is setting out on a distinct journey using common operating mechanisms, training, an organizational structure, objectives, and a common language. The fact that LSS is increasingly the approach to quality chosen by both large and small organizations is in large part because it has staying power; it is a proven methodology that has produced measurable financial results over two decades. If an organization and all its various divisions have accepted the introduction of LSS, are all working toward a common strategic goal, and all understand the path to attaining this goal, then the effectiveness of LSS is greatly enhanced.
The Lean Mindset: Ask the Right Question (Poppendieck & Poppendieck, 2013)
 Start with an inspiring purpose, and overcome the curse of short-term thinking
 Energize teams by providing well-framed challenges, larger purposes, and a direct line of sight between their work and the achievement of those purposes
 Delight customers by gaining unprecedented insight into their real needs, and building products and services that fully anticipate those needs
 Achieve authentic, sustainable efficiency without layoffs, rock-bottom cost focus, or totalitarian work systems
 Develop breakthrough innovations by moving beyond predictability to experimentation, beyond globalization to decentralization, beyond productivity to impact
The Purpose of Business (Poppendieck & Poppendieck, 2013)

The Purpose of Business emphasizes the principle Optimize the Whole, taking the Shareholder Value Theory to
task for the short-term thinking it produces. The alternative is to Focus on Customers, whose loyalty
determines the long-term success of any business. It is one thing for business leaders to have a vision of who
their customers are, but quite another to put the work systems in place to serve those customers well. In the
end, the front-line workers in a company are the ones who make or break the customer experience.

It turns out that the “rational” thinking behind the Shareholder Value Theory has had a strong influence on the
way workers are treated. It all boils down to Douglas McGregor’s Theory X and Theory Y. Theory X assumes that
people don’t like work and will do as little as possible. Theory Y assumes the opposite: Most people are eager
to work and want to do a good job. The lean principle Energize Workers is solidly based on Theory Y—start with
the assumption that workers care about their company and their customers, and this will be a self-fulfilling
prophecy. The principle of reciprocity is at work here—if you treat workers well, they will treat customers well,
and customers will reward the company with their business.
Energize Workers (Poppendieck &
Poppendieck, 2013)
Energized Workers is based on the work of Mihaly Csikszentmihalyi, who found that the most energizing human experience is
pursuing a well-framed challenge. Energized workers have a purpose that is larger than the company and a direct line of sight
between their effort and achieving that purpose. They strive to reach their full potential through challenging work that requires
increasing skill and expertise. They thrive on the right kind of challenge—a challenge that is not so easy as to be boring and not so
hard as to be discouraging, a challenge that appeals to aspirations or to duty, depending on the “regulatory fit.”

Regulatory fit is a theory that says some people (and some companies—startups, for example) are biased toward action and
experimentation and respond well to aspirational challenges. Other people (and companies—big ones, for example) prefer to be safe
rather than sorry. For them, challenges that focus on duty and failure prevention are more inspiring. But either way, a challenge that
is well matched to the people and the situation is one of the best ways to energize workers.

One of the most important challenges in a lean environment is to Constantly Improve. Whether it is a long-term journey to improve
product development practices or an ongoing fault injection practice to hone emergency response skills, striving to constantly get
better engages teams and brings out the best in people.
Delight Customers (Poppendieck &
Poppendieck, 2013)
Delighted Customers urges readers to Focus on Customers, understand what they really need, and make sure that the right products
and services are developed. This is the first step in the quest to Eliminate Waste, especially in software development, where building
the wrong thing is the biggest waste of all.

Some products present extraordinary technical challenges—inventing the airplane or finding wicked problems in a large data
management system. Other products need insightful design in order to really solve customer problems. Before diving into
development, it is important to Learn First to understand the essential system issues and customer problems before attempting to
solve them.

When developing a product, it is important to look beyond what customers ask for, because working from a list of requirements is not
likely to create products that customers love. Instead, leaders like GE Healthcare’s Doug Dietz, who saw a terrified child approach his
MRI scanner, understand that a product is not finished until the customer experience is as well designed as the hardware and
software.

Great products are designed by teams that are able to empathize with customers, ask the right questions, identify critical problems,
examine multiple possibilities, and then develop products and services that delight customers.
Genuine Efficiency (Poppendieck &
Poppendieck, 2013)
Genuine Efficiency starts by emphasizing that authentic, sustainable efficiency does not mean layoffs, low costs, and controlling work
systems. Development is only a small portion of a product’s life cycle, but it has a massive influence on the product’s success. It is
folly to cut corners in development only to end up with costly or underperforming products in the end. Those who Optimize the
Whole understand that in product development, efficiency is first and foremost about building the right thing.

Two case studies from Ericsson Networks demonstrate that small batches, rapid flow, autonomous feature teams, and pull from the
market can dramatically increase both predictability and time to market on large products. Here we see the lean principles of Focus
on Customers, Deliver Fast, Energize Workers, and Build Quality In at work.

A case study from CareerBuilder further emphasizes how focusing on the principle of Deliver Fast leads to every other lean principle,
especially Build Quality In and Focus on Customers. A look at Lean Startup techniques shows that constant experiments by the
product team can rapidly refine the business model for a new product as well as uncover its most important features. Here the lean
principles of Optimize the Whole, Deliver Fast, and Keep Getting Better are particularly apparent.
Breakthrough Innovation (Poppendieck &
Poppendieck, 2013)
Breakthrough Innovation starts with a cautionary tale about how vulnerable businesses are—even
simple businesses like newspapers can lose their major source of revenue seemingly overnight. But
disruptive technologies don’t usually change things quite that fast; threatened companies are
usually blind to the threat until it’s too late. How can it be that industry after industry is overrun
with disruptive innovation and incumbent companies are unable to respond?

The problem, it seems, is too much focus on today’s operations—maybe even too much focus on
the lean principle of Eliminate Waste—and not enough focus on the bigger picture, on Optimize the
Whole. Too much focus on adding features for today’s customers and not enough focus on potential
customers who need lower prices and fewer features. Too much focus on predictability and not
enough focus on experimentation. Too much focus on productivity and not enough focus on impact.
Too much focus on the efficiency of centralization and not enough appreciation for the resiliency of
decentralization.
Lean Mindset (Poppendieck & Poppendieck,
2013)
• Lean organizations appreciate that the real knowledge resides at the place where work is done, in
the teams that develop the products, in the customers who are struggling with problems. Several
case studies—including Harman, Intuit, and GE Healthcare—show how the lean principles of
Focus on Customers, Energize Workers, Learn First, and Deliver Fast help companies develop
breakthrough innovations before they get blindsided by someone else’s disruptive innovations.

• Developing a lean mindset is a process that takes time and deliberate practice, just like
developing any other kind of expertise. No matter how well you “know” the ideas presented in
this book, actually using them in your work on a day-to-day basis requires that you spend time
trying the ideas out, experimenting with them, making mistakes, and learning.

• Cultivating a lean mindset—especially in an organization—is a continuing journey. We hope this book brings you another step along the path.
Lean Six Sigma can help an organization achieve quantifiable improvements (Shaffie & Shahbazi, 2012)
Creating a sustainable quality culture
Bringing clarity to “invisible” processes and enhancing control
Providing a corporate strategy for differentiation
Becoming a world-class service or product provider at the lowest cost
Providing customers with what they value
Reducing the hidden costs associated with poor-quality products or services
Disproving the perception that Lean Six Sigma applies only in manufacturing
LSS’s strength in promoting sustainability (Shaffie & Shahbazi, 2012)
1. LSS projects are typically linked to business-critical issues. This ensures that the LSS teams are assigned to address challenging issues and deliver quantifiable benefits, and that they get the level of attention and support required for long-term success.
2. LSS provides a standard approach to problem solving. Management and executives can be sure that the appropriate level of rigor has been applied and that the team has worked to find the root cause. In short, LSS stops employees from jumping from problem statement to solution.
Lean (Voehl, Harrington & Charron, 2013)
The Lean methodology is an operational philosophy with a focus on identifying and eliminating all waste in an organization. Lean principles include zero inventory, batch to flow, cutting batch size, line balancing, zero wait time, pull instead of push production control systems, work area layout, time and motion studies, and cutting cycle time. The concepts are applied to production, support, and service applications. Lean focuses on eliminating waste from processes and increasing process speed by focusing on what customers actually consider quality, and working backwards from that.
Six Sigma (Voehl, Harrington & Charron, 2013)
The Six Sigma methodology is a business-management strategy designed to improve the quality of process outputs by minimizing variation and causes of defects in processes. It is a subset of the TQM methodology with a heavy focus on statistical applications used to reduce costs and improve quality. It sets up a special infrastructure within the organization of people specifically trained in statistical methods and problem-solving approaches who serve as the experts in these approaches. The two approaches that these experts use in their problem analysis and solution activities are Define, Measure, Analyze, Improve, and Control (DMAIC) and Define, Measure, Analyze, Design, and Verify (DMADV). Six Sigma aims to eliminate process variation and make process improvements based on the customer definition of quality, and by measuring process performance and process change effects.
Lean Six Sigma (Voehl, Harrington & Charron, 2013)
Lean Six Sigma (LSS): The LSS methodology is an organization-wide operational philosophy that combines two of today’s most popular performance improvement methodologies: Lean methods and the Six Sigma approach. The objective of these approaches is to eliminate nine kinds of waste (classified as defects, overproduction, transportation, waiting, inventory, motion, overprocessing, underutilized employees, and behavior waste) and provide goods and services at a rate of 3.4 defects per million opportunities (DPMO).
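The 3.4 DPMO figure can be made concrete with the standard calculation: DPMO = (defects × 1,000,000) / (units × opportunities per unit), and the corresponding sigma level is conventionally reported with a 1.5-sigma shift added. The example figures below (defects, units, opportunities) are illustrative assumptions.

```python
# Sketch of the standard DPMO and sigma-level calculations.
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects * 1_000_000 / (units * opportunities_per_unit)

def sigma_level(dpmo_value):
    """Short-term sigma level, using the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# e.g. 17 defective fields found across 500 invoices, 10 checks per invoice
rate = dpmo(defects=17, units=500, opportunities_per_unit=10)
print(f"{rate:.0f} DPMO")           # 3400 DPMO
print(f"{sigma_level(rate):.1f}")   # roughly a 4.2-sigma process
```

Plugging 3.4 into `sigma_level` returns approximately 6.0, which is where the "Six Sigma" name comes from.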
Lean Six Sigma is…
• Rigorous adherence to proven methodologies, tools and techniques that will enable a leader to confidently solve process problems using data-driven approaches so that the problems stay solved. (Game Change, UK)

• Lean Six Sigma is a synergistic process that creates a value stream map of the process identifying value-add and non-value-add costs, and captures the Voice of the Customer to define the customer Critical To Quality issues. Projects within the process are then prioritized based on the delay time they inject. This prioritization process inevitably pinpoints activities with high defect rates (Six Sigma tools) or long setups and downtime (Lean tools), with the result often yielding savings of $250,000 and a payback ratio between 4:1 and 20:1. (Michael George)
Lean Six Sigma is…
• Lean Six Sigma is the synthesizing agent of business performance
improvement that, like an alloy, is the unification of proven tools,
methodologies, and concepts, which forms a unique approach to
deliver rapid and sustainable cost reduction. (George, 2010)
(Oppenheim, Felbur, 2014)
Lean originated in car production at Toyota. In the 1990s, it came to the
United States and other Western countries, first in manufacturing, where it
has established itself as the new paradigm, and soon expanded to many
other industries—some having nothing to do with manufacturing such as
administration and government [Carter, 2008], government finance [Reagan,
2011], healthcare delivery [Graban, 2009], supply chain management
[Bozdogan, 2004], education [Emiliani, 2004], project management
[Oehmen, 2012], product development [Ward, 2007], engineering [Murman,
2002], systems engineering [Oppenheim, 2011], defense work [LAI], and
enterprise management [Jones, 2006]. We now know with absolute certainty
that Lean applies to any type of work, in any industry, nonprofit or
government organization, in any country, and in any economic and political
system. Lean can be applied to any human activity because it is grounded in
the commonality of human behavior.
(Williams & Duray, 2012)
Current View of Six Sigma (Snee & Hoerl,
2018)
We live in a very different world from 1987 when Six Sigma was first introduced by
Motorola, and from the early 2000s when the first books on Six Sigma and Lean Six Sigma
deployment appeared. How should one think about continuous improvement in a modern
world? Is Lean Six Sigma the best approach to take for all problems, including large,
complex, unstructured problems, such as climate change or the Millennium Development
Goals?
improvement to a new level in the world in which we now find ourselves. Snee and Hoerl
refer to this paradigm as Holistic Improvement, and suggest Lean Six Sigma 2.0, the next
version of Lean Six Sigma, as the best methodology based on this paradigm. Holistic
improvement incorporates a diverse array of improvement methods, beyond Lean and Six
Sigma, to ensure that organizations can apply the most relevant improvement method to a
specific problem.
Lean is more than a tactic (Byrne, 2016)
Dramatically boosts profit margins, earnings, and ultimately enterprise value by
 Engaging every employee in a culture of continuous improvement, where
 Every person takes ownership for problem-solving and learning in order to
 Deliver more value to the customer by identifying and removing waste—permanently.
The Toyota Production System (Dahl, 2020)
TPS is a holistic process that entails five steps that start with the customer:
 Listen to the customer.
 Design the product.
 Coordinate the supply chain.
 Produce the product from order to delivery.
 Manage the combined enterprise.
These five fairly simple steps make up the “Lean” way of manufacturing. For Lean manufacturing to work, a company needs to adopt the broad principles of the Lean mindset, which entails truly changing the way you think about your products/services.
Lean Mindset (Dahl, 2020)
The Lean mindset emphasizes four important principles:
• Listening to your customers and respecting others
• Removing bottlenecks and waste in the system
• Committing to relentless continuous improvement
• Creating value for the customer and company
Possessing a Lean mindset pushes Lean leaders to create innovative new
products/services that create customer-driven value, introducing disruption and
pushing others to also continuously improve and evolve to stay relevant. The Lean
mindset is a result of practicing a more specific strategy known as Lean thinking.
Lean Thinking (Dahl, 2020)
• Exploiting iterative and incremental value creation
• Learning through short feedback loops that encourage ingenuity and passion
• Creating uninterrupted flow through the elimination of waste
• Achieving uncompromised quality
• Emphasizing the use of empirical scientific methods and measurements to gauge progress
Lean Thinking (Dahl, 2020)
Lean thinking is scientific thinking, which means using scientific methods grounded in empirical fact and evidence-based processes to:
1. Observe a problem
2. Form a question around the problem
3. Develop the hypothesis
4. Conduct an experiment
5. Analyze the data to draw conclusions
6. Document the methods and findings
…and repeat.
It’s a process that systematically challenges everything continuously, and it must be performed throughout the entire Lean enterprise.
Modern Lean Framework (Dahl, 2020)
Seven Things That Must Be True to Develop Yourself into a Lean Leader (Dahl, 2020)
 Understand that leaders are both born and made.
 Expect some degree of failure.
 Shift your mindset from training to developing yourself and others.
 Make leadership development an objective, measurable process.
 Empower others to lead.
 Build a culture that encourages Lean leaders to flourish.
 Realize that your system of leadership at every level is a competitive advantage.
Very Successful Cases vs. Less Successful Cases (Hoerl & Snee, 2018)
• We can't solve problems using the same kind of thinking which created them.
Albert Einstein

• We will not put into our establishment anything useless.
Henry Ford

• Lean is defined as the elimination of all non-value activity, or waste.
A 2009 American Society for Quality study of 77 hospitals (Arthur, 2011)
 53 percent use some form of Lean.
 Only 4 percent had fully implemented Lean.
 42 percent had some form of Six Sigma.
 Only 8 percent had fully implemented Six Sigma.
 11 percent were unfamiliar with either Lean or Six Sigma.
Cost to USA Economy (Arthur, 2011)
“The U.S. healthcare system wastes $700 billion annually on the kinds of systemic inefficiencies that would make a quality management guru cringe,” says Robert Kelly, vice president of healthcare analytics at Thomson Reuters. “About one-third of the country’s total healthcare spending may be for unnecessary treatments, medical errors, redundant tests, administrative inefficiencies and fraud.” And that’s just the cost to the healthcare industry; it doesn’t include the cost to patients, their families, and society, which is perhaps 10 times higher ($10 trillion). Hospitals represent almost a third of healthcare costs. Hospitals are expected to deliver $113 billion of the $196 billion in savings mandated by the healthcare reform bill. Each of the nation’s hospitals must cut $2.6 million a year.
Strategic Significance
"So what is the strategic significance of Lean Six Sigma? I want us to invest in the knowledge in people's heads. I'm not asking for capital or computers. I'm asking for an investment in people so we can have long-term sustainability of the kinds of results we've seen already."
Mike Joyce, VP of LM21, Lockheed Martin (George, 2003)

"The lack of initial Six Sigma emphasis in the non-manufacturing areas was a mistake that cost Motorola at least $5 billion over a four-year period."
Bob Galvin, former CEO of Motorola (George, 2003)
Lean Six Sigma Methodology
• An organization-wide operational philosophy that combines two of today’s most popular performance improvement methodologies: Lean methods and the Six Sigma approach. The objective of these approaches is to eliminate nine kinds of wastes (classified as defects, overproduction, transportation, waiting, inventory, motion, overprocessing, underutilized employees, and behavior waste) and provide goods and services at a rate of 3.4 defects per million opportunities (DPMO).
(Voehl, Harrington & Charron, 2013)
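The DPMO figure quoted above is simply the number of defects divided by the total number of defect opportunities, scaled to one million. A minimal sketch in Python (the function name and the invoice example are illustrative, not from the source):

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities: defects / total opportunities x 1,000,000."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Illustrative example: 12 defects found across 500 invoices,
# each invoice having 8 opportunities for error.
print(dpmo(12, 500, 8))  # 3000.0 DPMO
```

A process at the Six Sigma target would show 3.4 DPMO on the same calculation.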
Why Choose Lean Six Sigma to Reduce Cost (George, 2010)
The more we have tested and implemented the central tenets, tactics, and tools of the combined Lean Six Sigma methodology, the more convinced we've become that both are essential to rapid and sustainable cost-cutting. The integration of Lean and Six Sigma is one of the most effective methods for consistently improving cost, speed, and quality, with broad successes in service as well as manufacturing functions. Companies have experienced unprecedented cost savings in diverse areas:
• Feeding higher-quality leads into the sales funnel at a fraction of the cost.
• Reducing developmental timelines for new products by 20 to 50 percent while nearly eliminating the high cost of defects.
• Slicing away complexity and variability throughout the supply chain to yield 10 to 30 percent cost savings while shortening process lead time by as much as 80 percent.
Lean Six Sigma (George, 2010)
Lean Six Sigma is the synthesizing agent of business performance improvement that, like an alloy, is the unification of proven tools, methodologies, and concepts, which forms a unique approach to deliver rapid and sustainable cost reduction.
Lean V Six Sigma (Bentley & Davis, 2009)
Ironically, Six Sigma and Lean have often been regarded as rival initiatives—Lean enthusiasts noting that Six Sigma pays little attention to anything related to speed and flow, Six Sigma supporters pointing out that Lean fails to address key concepts like customer needs and variation. Both sides are right. Yet these arguments are more often used to advocate choosing one over the other, rather than to support the more logical conclusion that we need to blend Lean and Six Sigma.
Six Sigma
• emphasizes the need to recognize opportunities and eliminate defects as defined by customers
• recognizes that variation hinders our ability to reliably deliver high-quality services
• requires data-driven decisions and incorporates a comprehensive set of quality tools under a powerful framework for effective problem solving
• provides a highly prescriptive cultural infrastructure effective in obtaining sustainable results
• when implemented correctly, promises and delivers $500,000+ of improved operating profit per Black Belt per year (a hard dollar figure many companies consistently achieve)
Six Sigma
Six Sigma is a highly disciplined approach used to reduce the process variation to such a great extent that the level of defects is drastically reduced to less than 3.4 per million process, product or service opportunities. The approach relies heavily on statistical tools which, though known earlier, were primarily limited to use by statisticians and quality professionals. (Motiwani, 2012)
Lean Six Sigma
Michael George defines Lean Six Sigma as “a methodology that maximizes shareholder value by achieving the fastest rate of improvement in customer satisfaction, cost, quality, process speed and invested capital”. One of the important principles of Lean Six Sigma is that “the activities that cause hindrances in customer's critical-to-quality (CTQ) requirements and create the longest time delays in any process offer the greatest opportunity for improvement”.
(Urdhwareshe, 2011)
Lean
• focuses on maximizing process velocity
• provides tools for analyzing process flow and delay times at each activity in a process
• centers on the separation of "value-added" from "non-value-added" work with tools to eliminate the root causes of non-value-add activities and their cost
• provides a means for quantifying and eliminating the cost of complexity
Lean Six Sigma (Wedgwood, 2016)
Six Sigma is a systematic methodology to home in on the key factors that drive the performance of a process, set them at the best levels, and hold them there for all time.
Lean is a systematic methodology to reduce the complexity and streamline a process by identifying and eliminating sources of waste in the process—waste that typically causes a lack of flow.
• The two methodologies interact and reinforce one another, such that percentage gains in Return on Invested Capital (ROIC%) are achieved much faster if Lean and Six Sigma are implemented together. (Some people might question whether ROIC is a valuable metric for service businesses, and the answer is yes: Many service businesses—hotels, airlines, restaurants, health care—are very capital intensive. In most other service businesses—software development, financial services, government, etc.—the biggest costs are salaries/benefits, so invested capital is really the "cost of people.")
• In short, what sets Lean Six Sigma apart from its individual components is the recognition that you can't do "just quality" or "just speed."
(George, Rowlands & Kastle, 2005)
The Strategic Imperative of Investing in Lean Six Sigma (Berkshire Hathaway, 1984)
In manufacturing businesses, a significant investment in equipment may be required to improve labor productivity. In contrast, service operations are primarily driven by intellectual capital. According to Warren Buffett, "the best kind of investment to make is one in which a huge return results from a very small increment of invested capital".
By application of Lean Six Sigma, the numerator of the ROIC equation can be increased without increasing financial investment. At Lockheed Martin's procurement center, for example, the key investment that enabled a reduction of 50% of procurement cost had a 5 month payback.
At Stanford Hospital and Clinics, big savings came from bringing together a group of surgeons without any capital investment at all (details were provided earlier in this chapter). If Buffett likes this kind of investment, so will your shareholders.
The concept of linking Lean Six Sigma efforts to shareholder value is critically important but seldom discussed. If the link isn't made, your organization may realize some gains, but it will be a crapshoot as to whether your investment in Lean Six Sigma will help drive your strategic goals.
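The text refers to "the numerator of the ROIC equation" without stating it. The conventional definition is operating profit after tax (NOPAT) divided by invested capital, so raising profit at constant capital raises ROIC. A minimal sketch with made-up numbers (none of the figures below are from the source):

```python
def roic(nopat: float, invested_capital: float) -> float:
    """Return on Invested Capital = operating profit after tax / invested capital."""
    return nopat / invested_capital

# Illustrative only: waste elimination lifts profit from 8 to 12
# on the same 100 of invested capital.
before = roic(nopat=8.0, invested_capital=100.0)   # 0.08, i.e. 8%
after = roic(nopat=12.0, invested_capital=100.0)   # 0.12, i.e. 12%
print(before, after)
```

The point of the passage is that the denominator stays fixed: the improvement comes from the numerator, not from new capital spending.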
Lean Six Sigma for Services (George, 2003)
Lean Six Sigma for services is about getting results rapidly. The kind of results that can be tracked to the bottom line in support of strategic objectives. The kind that leaves delighted customers wanting to do more business, that creates value for your shareholders, and that energizes employees.
What accounts for the rapid results? Lean Six Sigma incorporates Lean's principles of speed and immediate action into the Six Sigma improvement process itself, increasing the velocity of improvement projects and hence results. Lean Six Sigma also incorporates the Six Sigma view of the evils of variation and reduces its impact on queue times. Finally, Lean Six Sigma uniquely attacks the hidden costs of complexity of your offering.
Combine Lean Six Sigma's ability to achieve service improvements with its focus on shareholder value and you have a powerful tool for executing the CEO's strategy, and a tactical tool for P&L managers to achieve their annual and quarterly goals. How you do that is the subject of the rest of this book.
Respect for People
From the start, the promoters of Lean assumed they knew what business leaders wanted. They had hypotheses about leaders, their wants and needs, that turned out to be wrong. Why? Because they lacked a factual understanding of how leaders think and what is most important to them. Simply put, they did not bother to understand the current state of the people who lead organizations. The assumption, still in use today, has proven to be a big mistake. - Bob Emiliani
https://bobemiliani.com/lean-got-a-bad-start/
Lean Six Sigma (DeCarlo, 2007)
The basic idea behind Lean Six Sigma is to blend the two root methodologies into one approach that optimizes the quality, speed, and cost of doing business.
You should not view Lean and Six Sigma as separate methodologies achieving different objectives—although, generally speaking, Lean is time-driven, and Six Sigma is quality-driven. Picture the wheel of Lean and Six Sigma spinning so fast that the two methods become blurred into one approach.
• The first principle you need to grasp when selecting a Lean Six Sigma project is criticality, meaning you have to select and define a project that is critical to the satisfaction of both your customers and your business. Otherwise, it’s not worth your time, effort, or resources.
Why Service functions need Lean Six Sigma
1. Service processes are usually slow processes, which are expensive processes. Slow processes are prone to poor quality… which drives costs up… and drives down customer satisfaction and hence revenue. The result of slow processes: more than half the cost in service applications is non-value-add waste.
2. Service processes are slow because there is far too much "work-in-process" (WIP), often the result of unnecessary complexity in the service/product offering. It doesn't matter whether the WIP is reports waiting on a desk, emails in an electronic in-box, or sales orders in a database. When there is too much WIP, work can spend more than 90% of its time waiting, which doesn't help your customers at all and, in fact, creates or inflicts substantial waste (non-value-add costs) in the process.
3. In any slow process, 80% of the delay is caused by less than 20% of the activities. We only need to find and improve the speed of 20% of the process steps to effect an 80% reduction in cycle time and achieve greater than 99% on-time delivery.
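The WIP-and-waiting argument in point 2 is usually quantified with Little's Law, which the source does not name: average lead time equals WIP divided by the average completion rate. A sketch with illustrative numbers (not from the source):

```python
def lead_time_days(wip_items: float, completed_per_day: float) -> float:
    """Little's Law: average lead time = work-in-process / throughput."""
    return wip_items / completed_per_day

# Illustrative only: 200 orders in process, 10 completed per day.
print(lead_time_days(200, 10))  # 20.0 days of lead time
# Cutting WIP to 40 orders, at the same throughput, cuts lead time to 4 days.
print(lead_time_days(40, 10))   # 4.0 days
```

This is why reducing WIP, rather than working faster, is the primary Lean lever on service process speed.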
Core Elements of the Six Sigma Prescription
1) CEO & Managerial Engagement.
2) You have to allocate appropriate resources (= staff and time commitments) to high-priority projects.
3) Everyone affected by or involved in Six Sigma should receive some level of training.
4) Variation has to be eliminated.
Understanding the Process Basics
(Morgan & Brenig-Jones, 2016)
People: Those working in or around the process. Do you have the right number in the right place, at the right time and possessing the right skills
for the job? And do they feel supported and motivated?

Equipment: The various items needed for the work. Items can be as simple as a stapler or as complicated as a lathe used in manufacturing.
Consider whether you have the right equipment, located in an appropriate and convenient place, and being properly maintained and serviced.

Method: How the work needs to be actioned – the process steps, procedures, tasks and activities involved.

Materials: The things necessary to do the work – for example, the raw materials needed to make a product.

Environment: The working area – perhaps a room or surface needs to be dust-free, or room temperature must be within defined parameters.
Considering the Key Principles
of Lean Six Sigma (Morgan &
Brenig-Jones, 2016)
1. Focus on Customer
2. Identify and understand how the work gets done
3. Manage Improve and Smooth the process flow
4. Remove non value added steps and waste
5. Manage by fact and reduce variation
6. Involve and equip people in the process
7. Undertake improvement activity in a Systematic way
Less is usually more. Tackle problems in bite-sized chunks and never jump to conclusions or solutions.
Kano Model (Morgan & Brenig-Jones, 2016)
Must-bes: These are sometimes referred to as the unspoken customer requirements. To the customer, they’re so obviously required that she doesn’t expect to have to spell them out. Meeting these requirements will not increase customer satisfaction; they’re the absolute minimum the customer is expecting – so miss them at your peril. The must-be requirements are also sometimes referred to as dissatisfiers.
One-dimensionals: The more of these requirements that are met, the higher the customer satisfaction. The requirements might relate to product features or elements of service delivery, or both. One-dimensionals are sometimes referred to as satisfiers.
Delighters: Here, the customer is surprised and delighted by something you’ve done, a wow factor, and her satisfaction increases, even if some other elements haven’t been delivered as well as they might.
Organization Structure for LSS
Implementation (Shaffie & Shahbazi, 2012)
The LSSBB methodology has been designed to provide a growth path for the
people using it. The following outlines this growth path along with the
prerequisites for each step in the path, starting at the beginning level:
Six Sigma White Belt—No prerequisite
Six Sigma Yellow Belt—No prerequisite
Six Sigma Green Belt—Yellow Belt prerequisite
Lean Green Belt—No prerequisite
Six Sigma Black Belt—Six Sigma Green Belt or equivalent
Lean Six Sigma Black Belt—Six Sigma Green Belt and Lean Green Belt or
equivalent
Master Black Belt—Lean Six Sigma Black Belt or equivalent
Organization Structure for LSS Implementation
(Shaffie & Shahbazi, 2012)
Four Pillars of LSS Quality Culture (Shaffie & Shahbazi, 2012)
Organizational Commitment (Shaffie &
Shahbazi, 2012)
This implies buy-in from everyone, from top-level management all the
way down to the salaried employee. Commitment means that
everyone believes in the quality journey, understands its importance,
and uses the methodology to solve everyday business issues, and that
management monitors the benefits attained from the projects and
activities. Commitment also means that management and executives
understand their roles and responsibilities in making the continuous
improvement culture live well beyond their tenure, so they have to
demand excellence and reward people when it is attained.
Becoming a data-rich organization (Shaffie &
Shahbazi, 2012)
In order to make business decisions based on data, management and
employees need access to timely and accurate process and customer
data. Getting to this point can be a tedious process. While there may be
copious amounts of data in an organization, the accuracy of those data
may be in question, or they may not be exactly what is needed to
monitor the particular issue that is being analyzed. Lean Six Sigma
projects can help build the required infrastructure to get to the right
data. Collection of accurate and current data requires robust business
processes
Developing unbiased and predictable business
metrics (Shaffie & Shahbazi, 2012)
Metrics translate the strategic goals of the business into measurable
activities from the highest to the lowest level of the organization. If
developed correctly, metrics allow an organization to drive a culture of
accountability. The biggest challenge with developing metrics is lack of
the required data.
Creating a culture of accountability and
acceptance (Shaffie & Shahbazi, 2012)
Accountability gives meaning and focus to the metrics of the business.
However, to mitigate the risks of cultural backlash, the assistance of the
human resources department should be sought. HR can help in the
development and implementation of the required compensation,
evaluation, training, and communication plans to achieve employee
acceptance.
Transformation from Traditional to Lean Six
Sigma Thinking (Shaffie & Shahbazi, 2012)
Start from the need (Graban, 2016)
Taiichi Ohno, one of the creators of the Toyota Production System, wrote that organizations must “start from need” and that “needs and opportunities are always there.” In 2014, John Shook, CEO of the Lean Enterprise Institute and the first American to work for Toyota in Japan, said we should start by asking, “What is the purpose of the change and what problem are we trying to solve?” John Toussaint, MD, former CEO at ThedaCare (Wisconsin), emphasizes that Lean activity must be “focused on a … problem that is important to the organization.”
Lean Concepts (Mignosa, Voehl, Harrington &
Charron 2013)
Standard work: a systematic way to complete value-added activities. Having standard work activities
is a fundamental requirement of Lean Six Sigma organizations.
Value stream: A conceptual path horizontally across your organization that encompasses the entire
breadth of your external customer response activities. That is, anything that transpires from the
time your organization realizes you have an external customer request until that external customer
receives its product or service.
Value stream management: A systematic and standardized management approach utilizing Lean Six
Sigma concepts and tools. Value stream management results in an external customer-focused
response to managing value-added activities.
Continuous flow: The Holy Grail of manufacturing, often referred to as make one-move one. One-piece flow or continuous flow processing is a concept that means items are processed and moved directly from one processing step to the next, one piece at a time. One-piece flow helps to maximize utilization of resources, shorten lead times, and improve problem identification and communication between operations. During any process improvement activity, the first thing on your mind should be: “Is what I’m about to do going to increase flow?” Achieving continuous flow typically requires the least amount of resources (materials, labor, facilities, and time) to add value for the customer. Achieving continuous flow has been credited with the highest levels of quality, productivity, and profitability.
Lean Concepts (Mignosa, Voehl, Harrington &
Charron 2013)
Pull systems: Systems that only replenish materials consumed by external customer
demand. These systems naturally guide purchasing and production activities, directing
employees to only produce what the external customer is buying. The Kanban tool is used
to achieve this Lean concept.
Point-of-use storage (POUS): Locating materials at the point of value-adding activities.
Quality @ source: Building quality into value-adding processes as they are completed. This
is in contrast to trying to “inspect in quality,” which only catches mistakes after they have
been made. An effective quality @ source campaign can minimize or eliminate much of the
expense associated with traditional quality control or quality assurance programs.
Takt time: The demand rate of your external customer for your products. It signifies how
fast you have to make products to meet your customer demand. Once you calculate Takt
time, you can effectively set your value-added processes to meet this customer demand. In
essence, Takt time is used to pace lines in the production environments. Takt time is an
essential component of cellular manufacturing.
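Takt time, as described above, is the available production time in a period divided by the customer demand in that period. A minimal sketch with made-up numbers (the shift and demand figures are illustrative, not from the source):

```python
def takt_time_minutes(available_minutes: float, units_demanded: float) -> float:
    """Takt time = available production time / customer demand for that period."""
    return available_minutes / units_demanded

# Illustrative only: a 450-minute shift (480 minutes less breaks)
# must satisfy demand for 225 units.
print(takt_time_minutes(450, 225))  # 2.0 minutes per unit
```

A cell paced to this takt must complete one unit every 2 minutes to exactly match demand; a slower pace underproduces and a faster one builds inventory.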
Lean Concepts (Mignosa, Voehl, Harrington &
Charron 2013)
Just-in-time (JIT): A concept that espouses materials sourcing and consumption to meet external customer
demand. Properly executed, it helps to eliminate several wastes, including excess inventory, waiting, motion,
and transportation.
Kaizen: The Japanese term for improvement; continuing improvement involving everyone—managers and workers. In manufacturing, Kaizen relates to finding and eliminating waste in machinery, labor, or production methods. Kaizen is a versatile and systematic approach to change and improve all processes. In virtually all Lean Six Sigma organizations the concept of Kaizen is practiced by all employees at all levels of the organization.
Materials, machines, manpower, methods, and measurements (5M’s): The five key process inputs. The pursuit
of Lean Six Sigma is an exercise in the effective use of the 5M’s to achieve customer requirements and overall
organization performance.
Lean accounting: A method of accounting that is aligned horizontally across your organization with the value
stream. Traditional costing structures can be a significant obstacle to Lean Six Sigma deployment. A value
stream costing methodology simplifies the accounting process to give everyone real information in a basic
understandable format. By isolating all fixed costs along with direct labor we can easily apply manufacturing
resources as a value per square footage utilized by a particular cell or value stream. This methodology of
factoring gives a true picture of cellular consumption to value-added throughput for each value stream
company-wide. Now you can easily focus improvement Kaizen events where actual problems exist for faster
calculated benefits and sustainability.
Lean Concepts (Mignosa, Voehl, Harrington &
Charron 2013)
Lean supply chain: The process of extending your Lean Six Sigma activities to your supply
chain by partnering with suppliers to adopt one or more of the Lean concepts or tools.
Lean metric: Lean metrics allow companies to measure, evaluate, and respond to their
performance in a balanced way, without sacrificing the quality to meet quantity objectives,
or increasing inventory levels to achieve machine efficiencies. The type of lean metric
depends on the organization and can be of the following categories: financial performance,
behavioral performance, and core process performance.
Toyota Production System: The Toyota Production System is a technology of
comprehensive production management. The basic idea of this system is to maintain a
continuous flow of products in factories in order to flexibly adapt to demand changes. The
realization of such production flow is called just-in-time production, which means
producing only necessary units in a necessary quantity at a necessary time. As a result, the
excess inventories and the excess workforce will be naturally diminished, thereby achieving
the purposes of increased productivity and cost reduction.
Lean Tools (Mignosa, Voehl, Harrington &
Charron 2013)
5S: A methodology for organizing, cleaning, developing, and sustaining a productive work
environment. Improved safety, ownership of work space, improved productivity, and
improved maintenance are some of the benefits of the 5S program.
Overall equipment effectiveness (OEE): An effective tool to assess, control, and improve
equipment availability, performance, and quality. This is especially important if there is a
constraining piece of equipment.
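The source does not give the OEE formula, but OEE is conventionally computed as the product of three rates: availability, performance, and quality. A sketch with illustrative figures (not from the source):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall equipment effectiveness = availability x performance x quality."""
    return availability * performance * quality

# Illustrative only: equipment available 90% of planned time,
# running at 95% of ideal speed, producing 98% good parts.
print(oee(0.90, 0.95, 0.98))  # about 0.838, i.e. roughly 84% effective
```

Splitting effectiveness into three factors shows where the loss is: downtime, slow cycles, or defects, which is what makes OEE useful on a constraining piece of equipment.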
Mistake (error) proofing: A structured approach to ensure a quality and error-free
manufacturing environment. Error proofing assures that defects will never be passed to the
next operation. This tool drives the organization toward the concept of quality @ source.
Cellular manufacturing: A tool used to produce your product in the least amount of time
using the least amount of resources. When applying the cellular manufacturing tool, you
group products by value-adding process steps, assess the customer demand rate (Takt
time), and then configure the cell using Lean Six Sigma concepts and tools. This is a
powerful tool to allow the use of many Lean concepts and tools together to achieve
dramatic process improvements. (See cell example at the end of this chapter.)
Lean Tools (Mignosa, Voehl, Harrington &
Charron 2013)
Kanban: A Kanban is a “signal” for employees to take action. It can be a
card with instructions of what product to make in what quantity, a cart
that needs to be moved to a new location, or the absence of a cart that
indicates that an action needs to be taken to replenish a product. This
is a fundamental tool used to establish a “more continuous flow.”
Kanban is a simple parts-movement system that depends on cards and
boxes/containers to take parts from one workstation to another on a
production line. The essence of the Kanban concept is that a supplier
or the warehouse should only deliver components to the production
line as and when they are needed, so that there is no storage in the
production area. Kanban can also be effective in Lean supply chain
management.
Lean Tools (Mignosa, Voehl, Harrington &
Charron 2013)
Visual controls: Visual controls are tools that tell employees “what to do
next,” what actions are required. These often eliminate the need for complex
standard operating procedures and promote continuous flow by eliminating
conditions that would interrupt flow before it happens.
Single-minute exchange of dies (SMED), or quick changeover: SMED is an approach to reduce output and quality losses due to changeovers. Quick changeover is a technique to analyze and reduce resources needed for equipment setup, including exchange of tools and dies.
Total Productive Maintenance (TPM): TPM is a maintenance program
concept that brings maintenance into focus in order to minimize downtimes
and maximize equipment usage. The goal of TPM is to avoid emergency
repairs and keep unscheduled maintenance to a minimum. TPM programs
are typically coupled with OEE activities, which identify where to focus your
TPM activities.
Visual Controls (Mignosa, Voehl, Harrington &
Charron 2013)
Lean Concepts (Mignosa, Voehl, Harrington &
Charron 2013)
Waste: Anything in your processes that your customer is unwilling to pay for: extra space, time, materials, quality issues, etc.
Value-added: All activities that create value for the external customer.
No-value-added: All activities that create no value for the external customer.
Business-value-added: Activities that do not create value to the external customer, but are
required to maintain your business operations.
Waste identification: The primary fundamental Lean concept is the ability to see waste in
your organization. This encompasses being able to readily identify the nine wastes and
consistently manage to avoid these occurrences. More importantly, from a management
standpoint, it means possessing the ability to proactively lead and mentor employees to
conduct waste-free activities daily. To do this the Lean manager must master the concept
of waste identification by understanding waste creation in all its forms.
Waste elimination: The ability to apply Lean Six Sigma concepts and tools to eliminate
identified wastes. This entails continuous learning of how to apply concepts and tools
either independently or in groups to achieve process improvement results.
The Lean Strategy: Using Lean to Create Competitive Advantage, Unleash Innovation, and Deliver Sustainable Growth
Lean Strategy harnesses that power and delivers a new way of creating value from lean. Leading lean experts address popular misconceptions about the basics of lean/TPS, showing the true purpose of tools, methods, and attitudes that leverage the intelligence of every employee doing the work. You’ll learn how to think—and then act—differently, tapping the power of every person in your organization in a disciplined manner that generates unparalleled, sustainable success that is responsive to today’s most pressing challenges.
(Balle, Jones, Chaize & Fiume, 2017)
Lean Strategy
Lean strategy is about learning to compete—adopting a fundamentally different way of thinking at the workplace, one that is all about developing the capacity to discover and learn. Daily practice of this approach, at every level, creates resilient organizations that are better able to adapt and grow with a greater mindfulness in all things large and small.
(Balle, Jones, Chaize & Fiume, 2017)
Think Differently (Balle, Jones, Chaize &
Fiume, 2017)
Lead From the Ground Up (Balle, Jones, Chaize & Fiume, 2017)
• Lean leadership fights these bureaucratic challenges daily by sustaining local kaizen and learning within day-to-day activities. The goal of a Lean thinking leader is to engage all individuals in their organization in making their job better. How? By knowing what to ask for, and knowing how to ask. This means always practicing Lean in a visible manner every day and keeping people focused on a clear improvement strategy based on these principles:
1. Improve customer satisfaction.
2. Improve the flow of work.
3. Make it easier to get tasks done right the first time.
4. Improve relationships.
Lead From the Ground Up (Balle, Jones, Chaize & Fiume, 2017)
These improvement directions are navigated by Lean leaders with the following attitudes:
1. Go and see for yourself. Go to the source, and see the facts firsthand with customers using the product or service and the people doing the work. In doing so, you can make sure that people agree on the problem before jumping to solutions. Better observation and better discussion lead to better outcomes.
2. Put problems first. Leaders who are adamant about putting problems first and do not blame people for them are living proof of respect for people and their opinions and experience. Listening without blaming people, making an effort to hear their point of view (no matter how weird or biased), is a foundational attitude in building mutual trust.
3. Try, see, think, try, see, think, to learn faster. Actively seek out new ideas, and whenever someone does come up with one, encourage her to try small, somewhere, with materials at hand or scrounged resources and see what the impact is like, and then think about it. After she has tried, you commit to help her convince her colleagues and get the financing if necessary to go bigger when appropriate. By encouraging small steps and supporting people through the struggle of learning by doing, you can engage staff in doing something without succumbing to the temptation to invest in untested ideas. Making it easy for workplace employees to try out ideas is more than an attitude. It’s a skill—and a stepping stone to your own discovery of the reality of the workplace.
4. Intensify collaboration. The quality of problem solving, initiatives, and indeed, the ability to reach real breakthroughs is largely linked to the quality of the collaboration. Collaboration is the ability to bounce off of each other’s ideas, to take different perspectives in and take them one step further, and to do quick back-and-forth until something difficult finally works. To better collaborate, teams need both greater clarity in the purpose of what they’re asked to do and in the confidence that the team is a safe environment to speak out in one’s own voice, propose new ideas, and challenge existing problems.
Framing for Learning (Balle, Jones, Chaize &
Fiume, 2017)
Establish a list of key indicators to reflect the safety, quality, lead
time, productivity, energy performance, and morale challenges in
each plant.
Build pull systems to handle variety through better planning at takt
time, create continuous flow cells, and implement internal logistics
pull with a small train and kanban cards.
Work with each production manager to stop at every quality problem
and to make equipment and materials progressively more reliable to
assemble more seamlessly.
Work on the basic stability of each of the production halls, through
5S, team stability, and daily problem solving.
Framing for Learning (Balle, Jones, Chaize &
Fiume, 2017)
Built-in quality has two essential dimensions:

1. Design robustness: Features need to prove their robustness before being included in the final product, which means largely relying on known engineering standards and a cautious approach to innovation. Toyota is known as a “fast follower” because it follows the market in terms of innovation, and it is very cautious in adding innovative features until they have been fully tested and mastered. The built-in quality frame applies at all levels of the system from the largest—don’t put on the market a feature you’re not 100 percent sure of—to the most detailed—don’t pass on to the next worker a job you’re not 100 percent sure of.

2. Stop-at-every-defect: In assembly, every operation is inspected through a “finishing touch,” and whenever the operator has a doubt or the machine’s auto test shows a slight problem, the line is stopped, and the problem studied so that every product comes out confidently within standards. This, in turn, creates a huge database of what kind of problems to expect at every stage of the process, which feeds back both into engineering and into training and inspection checklists for further assembly.
Framing for Learning (Balle, Jones, Chaize &
Fiume, 2017)
1. You cannot force people to have ideas. You can only encourage them to do so and support them when they do.
2. You cannot tell people what kind of ideas they should have. You can only show them the sort of suggestions that fit within the wider scope of what you’re trying to do.
3. You cannot focus on error avoidance alone. It is useful, in order to delve into technical issues, but people also need some thinking space for initiative and creativity.
4. You can’t expect people to experiment and learn during their normal daily work if you don’t create specific space for them to think and try new things.
Organise for Learning (Balle, Jones, Chaize &
Fiume, 2017)
Formula for Growth (Balle, Jones, Chaize &
Fiume, 2017)
A strategy is a high-level plan to achieve competitive advantage in conditions
of uncertainty. In this sense, the Lean strategy to grow a business is clear,
and it rests on three main strategic intents:
1. Challenge yourself to halve the bad and double the good. Whatever the
current situation is, challenge yourself to find the operational levers to
improve the performance of your business radically. By facing key problems
and choosing dimensions for improvement, we can offer customers and
society at large attractive alternatives competitors will have to follow.
Challenging oneself beyond the minimal need to “stay in the game” is the
key to managing one’s own learning curve, and putting pressure on
competitors in the process (as they have to catch up, they’ll find managing
their own learning curve harder and costlier).
Formula for Growth (Balle, Jones, Chaize &
Fiume, 2017)
2. Create a culture of “problems first.” Problems are the day-to-day material
that Lean managers use to run their area, from business-level problems
expressed as challenges, to detailed, workplace-level obstacles that
employees encounter. Problems are what Lean works with. Managers are
taught to go to the source to find facts and listen directly to what customers
and employees have to say. They have to admit that they don’t know
everything, and they have to be willing to have their long-held assumptions
about how things work overturned. They are also taught how to visualize
issues and ask “why?” repeatedly until root causes are surfaced and
countermeasures implemented. This means that unfavorable information
must be welcomed rather than concealed, and managers must thank
employees for the problems they bring to light rather than blame the
messenger and dismiss or ignore people’s concerns. This also means that
improvement initiatives must be nurtured and supported and that managers
must learn to learn from them.
Formula for Growth (Balle, Jones, Chaize &
Fiume, 2017)
3. Free capacity to develop new products and/or services. By solving problems that
create waste, you free up capacity (people, machines, and spaces) to be able to
grow without adding new capacity. As the business grows and that growth is
satisfied with existing resources, the only significant added cost of the next unit of
sales is its material content. The traditional return on investment (ROI) calculation
multiplies operational efficiency (that is, margin) by capital efficiency (that is, assets
used to create output). A Lean strategy changes the financial frame of how to
improve the ROI by solving problems through continuous kaizen experiments,
which leads to continuous learning, which leads to continuous simultaneous
improvements in both operational efficiency and capital efficiency. This freed-up
capacity creates space in which to introduce new products and try innovations
without carrying the financial risk of dedicated production facilities. Eliminating
waste from increasing flexibility (and solving all problems that entails) is the key to
concretely sustaining growth with a steady stream of innovative new offerings.
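The ROI framing above (operational efficiency multiplied by capital efficiency, in the DuPont style the text alludes to) can be sketched numerically. The figures below are invented for illustration, not taken from the source:

```python
def roi(margin: float, asset_turnover: float) -> float:
    """ROI = operational efficiency (margin) x capital efficiency (asset turnover)."""
    return margin * asset_turnover

# Baseline: 10% operating margin, sales of 1.2x the asset base.
baseline = roi(0.10, 1.2)        # 0.12 -> 12% ROI

# After kaizen frees capacity: growth is served with existing assets,
# so turnover rises (more sales per asset) and margin improves
# (extra units cost little beyond their material content).
after_kaizen = roi(0.12, 1.5)    # 0.18 -> 18% ROI

print(f"baseline ROI:    {baseline:.0%}")
print(f"post-kaizen ROI: {after_kaizen:.0%}")
```

Both factors improve simultaneously, which is the point of the passage: freed-up capacity lifts ROI without new investment.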
Reusable Learning for Continuously Growing Value
(Balle, Jones, Chaize & Fiume, 2017)
Lean embodies a fundamentally different way of thinking, a cognitive
revolution that changes how one organizes, finances, and acts. The
elements of a Lean strategy differ radically from conventional wisdom
in everything from a different formula for growth to a different way of
leading and managing. In particular, the elements of a Lean strategy
include the following:
1. Increased perceived quality to drive sales
2. Intensity of kaizen efforts to reduce costs
3. New product introduction pursued as key to sustainable growth
4. Reduced lead times as key to increasing margins and generating cash
Accelerate the Gains (Balle, Jones, Chaize &
Fiume, 2017)
Unfortunately, traditional Taylorist companies and consultants have misinterpreted
Lean, largely missing what is innovative, new, and frankly, exciting, and they have
reduced it to just another operational excellence program based on three broad
operational efforts:
1. A program of specialist-led improvement activities, frequently aimed at
“reducing cost through removing waste,” rarely at improving quality or reducing
lead time, with the twin purpose of (a) generating savings and (b) training
work-level teams to manage their performance and solve problems
2. Mindless (that is, not mindful) overuse of management routines such as daily
five-minute stand-up meetings and workplace maintenance practices such as 5S
(keeping the workplace organized at all times)
3. Progress reviews and maturity audits to measure both accrued savings from cost-
reduction activities and progress in acquisition of “best practices” across the
company
Accelerate the Gains (Balle, Jones, Chaize &
Fiume, 2017)
Lean thinking is the key to sustainable growth as it creates the basis for
these:
• Better individual products that are more robust and include more features
as they evolve organically by improving the same product base step by step
at a regular cadence, in step with what customers go for (or don’t)
• The opportunity to push some really innovative features (as opposed to
innovative “products” as a whole) on stable products and thus create original
offers to customers that are new and reliable
• A wider, clearer range of products where customers can find what they
need according to their use (the same customer can well buy several distinct
products) without cannibalization and with the protection of the margin of
each individual product
From Kaizen to Innovation (Balle, Jones,
Chaize & Fiume, 2017)
Experiments come in two forms:
• Assimilation: Putting new information into the context of what is
already known to extend the domain of what we know
• Accommodation: Changing the assumptions underlying what we
know (or think we know) in order to accommodate the new
information
From Kaizen to Innovation (Balle, Jones,
Chaize & Fiume, 2017)
From a customer value perspective, an innovative product or service
will do the following:
1. Help customers do better something they want to get done
2. In a way that hasn’t been done before and is
3. Pleasing to use and cost-effective
4. And ultimately benefits society
From Kaizen to Innovation (Balle, Jones,
Chaize & Fiume, 2017)
Change your Mind (Balle, Jones, Chaize &
Fiume, 2017)
Lean thinking creates meaning by linking individual and team-level kaizen with the
strategic challenges of the business as a whole. Doing so, however, requires a deep
mental managerial shift in the way we think about jobs, in terms of how we look at
people and work, and how we look at companies and markets.
Lean’s promise is that aligning individual success and company success makes for a
better-performing business model. By supporting individual employees in writing
their own story and in helping you with your overall objectives for the business, you
can change the story of the business and, by putting pressure on your competitors,
the story of the industry as a whole. Becoming a market leader through challenging
yourself (and forcing your competitors to catch up) is the key to sustainable and
profitable growth, even in times of heightened disruption. Lean thinking is a
structured method to learn how to do this.
Change your Mind (Balle, Jones, Chaize &
Fiume, 2017)
Lean thinking doesn’t focus exclusively on the study of work (Taylorism), nor on the
study of people (as with motivation programs), nor indeed the study of financial
management. Instead, Lean thinking focuses on looking specifically at
the relationship employees have with their work. How do employees understand
their work? Its purpose? How do they feel about it? How do they cope with
problems that appear? How well do they collaborate with their colleagues? This is
hard. As we look at any work situation, we have been trained to look at one of
these:
• The process: For example, how well does it flow, where are the obstacles, and
what is its cost?
• The people: For example, what is their attitude, are they competent, are they
motivated, how experienced are they, and what are their personality traits?
But we rarely look at how the people think and feel about the job—the invisible
cartoon bubble on the top of their heads that explains what they’re going to do
next.
Change your Mind (Balle, Jones, Chaize &
Fiume, 2017)
Changing the focus of management to seeing how people think about their work and deepening
their relationship with their job redefines the role to quite an extent:
• Is the work environment conducive to doing good work? Or is the friction of day-to-day obstacles
to getting the job done such that people give up on doing the best they can?
• Are teams stable and supportive? Do people look forward to seeing their coworkers in the
morning, and do they feel they can be themselves, drop the “company face,” and discuss issues and
take initiatives with their colleagues without risk of blame or criticism?
• Is there a clear line of sight to the greater purpose and overall plan? Beyond giving meaning to
the job, a clear understanding of desired outcomes (as opposed to immediate output) enables
autonomous decision-making, initiatives in unexpected conditions, and creative thinking for
improvement that contributes to the overall result.
• Can they learn and progress while doing their daily job? Do people find opportunities to practice
the skills they’re interested in or new skills they want to acquire in an environment that recognizes
their effort and hard work and tutors them into deepening their own mastery of their job?
• Do leaders learn from local improvements? And do they demonstrate to all how their efforts
contribute to the common good? Is their contribution and effort recognized and fairly rewarded?
Change your Mind (Balle, Jones, Chaize &
Fiume, 2017)
Towards a Waste Free Society (Balle, Jones,
Chaize & Fiume, 2017)
The Lean strategy involves a revolutionary mindset change. For more
than a quarter of a century, strategy has been shaped by five key
questions captured by Michael Porter:
1. How do you respond to the bargaining power of customers?
2. How do you increase your bargaining power over suppliers?
3. How do you counter the threat of substitute products or services?
4. How do you deal with the threat of new entrants?
5. How do you better jockey for position among current competitors?
Towards a Waste Free Society (Balle, Jones,
Chaize & Fiume, 2017)
Toyota has grown to dominate the automotive market by following a
radically different approach—indeed, a “non-strategy” according to Wall
Street analysts. Toyota’s leaders chose to respond to five different questions:
1. How do you increase customer satisfaction to build brand loyalty?
2. How do you develop individual know-how to increase labor productivity?
3. How do you improve collaboration across functions (and other partners)
to boost organizational productivity?
4. How do you encourage problem solving to better engage employees and
grow human capital?
5. How do you support an environment conducive to mutual trust and
developing great teams to nurture social capital?
Towards a Waste Free Society (Balle, Jones,
Chaize & Fiume, 2017)
This strategic learning occurs at three levels:
• Team-based agility makes it easier to pick up customer signals and
respond faster to adapt processes to customers’ real-life usage.
• Kaizen activities and managing learning curves create a reflexive learning
environment where development topics are tackled deliberately in order to
reinvest gains from improvement into technical and teamwork learning.
• These two levels of learning activities build up a strategic capability for
learning to learn: how to go to the market with experiments, and follow
them up quickly if they work or backtrack without losses if they don’t, but
also the ability to face difficult issues and explore new domains to keep
offering new, undiscovered value to customers.
Towards a Waste Free Society (Balle, Jones,
Chaize & Fiume, 2017)
A 20-year study of lean transformation efforts has shown that CEOs who succeed
with Lean all share three rare traits:
1. They find a sensei: After experimenting with improvement workshops and
projects, at some point they look for and find a sensei with whom they can work.
2. They accept the exercises the sensei prescribes: Although some of the tasks seem
counterintuitive or do not relate easily to what is currently perceived as urgent,
they agree to explore and find out, and get involved in practical exercises to learn
by doing.
3. They commit explicitly to learning—their teams and themselves: At some point,
they look beyond immediate results from kaizen activities (which remain important
as a sign of better response by the teams) and they look for learning (whether the
experiment has succeeded or failed, what has this taught whom and what is the
next step?)
The Lean Strategy: Using Lean to Create Competitive Advantage,
Unleash Innovation, and Deliver Sustainable Growth
A lean strategy is about gaining a competitive edge by offering better quality products at competitive prices and making a sustainable profit by eliminating waste through engaging employees in discovering deeper ways to think about their own jobs and smarter ways of working together. In its current form, lean has been radically effective, but its true powers have yet to be harnessed.

(Balle, Jones, Chaize & Fiume, 2017)
WHEN SIX SIGMA IS NOT RIGHT FOR AN
ORGANIZATION (Cavanagh, Pande & Neuman,
2014)
• You already have in place a strong, effective performance and process improvement effort.

• Current changes already are overwhelming your people and/or resources.

• The potential gains are not there.
SUMMARIZING THE ASSESSMENT: THREE KEY
QUESTIONS (Cavanagh, Pande & Neuman, 2014)
1. Is change (whether broad or targeted) a critical business need now,
based on bottom-line, cultural, or competitive needs?
2. Can we come up with a strong strategic rationale for applying Six
Sigma to our business? (Which is another way of saying, “Will it get and
hold the commitment of business leadership?”)
3. Will our existing improvement systems and methods be capable of
achieving the degree of change needed to keep us a successful,
competitive organization?
If your answers are Yes, Yes, and No, you may well be ready to explore
further how to adopt Six Sigma in your organization.
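As a minimal sketch, the Yes/Yes/No screen above can be expressed as a single predicate. The function and argument names are my own, not from Cavanagh, Pande & Neuman:

```python
def ready_to_explore_six_sigma(change_is_critical: bool,
                               strong_strategic_rationale: bool,
                               existing_systems_sufficient: bool) -> bool:
    """Three-question readiness screen: proceed only on Yes, Yes, No."""
    return (change_is_critical
            and strong_strategic_rationale
            and not existing_systems_sufficient)

print(ready_to_explore_six_sigma(True, True, False))  # True  -> explore Six Sigma
print(ready_to_explore_six_sigma(True, True, True))   # False -> existing systems suffice
```

Any other combination of answers suggests either that change is not pressing or that current improvement systems are already adequate.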
The Costs of Six Sigma Implementation
(Cavanagh, Pande & Neuman, 2014)
• Direct payroll. Individuals dedicated to the effort full time. (See Chapter 8, Preparing Black Belts and Other Key Roles.)

• Indirect payroll. The time devoted by executives, team members, process owners, and others to such activities as measurement, voice of the customer data gathering, and improvement projects.

• Training and consulting. Teaching people Six Sigma skills and getting advice (from folks like us) on how to make the effort successful can be a significant investment, too.

• Improvement implementation costs. Expenses to install new solutions or process designs can range from a few thousand dollars to millions—especially for IT-driven solutions. (But keep in mind that you’re likely already making these investments, and their payoff may not be very positive.)
KEYS TO SUCCESS (Cavanagh, Pande &
Neuman, 2014)
• Tie Six Sigma Improvement Efforts to Business Strategy and Priorities

• Position Six Sigma as an Improved Way to Manage for Today

• Keep the Message Simple and Clear

• Develop Your Own Path to Six Sigma

• Focus on Short-Term Results

• Focus on Long-Term Growth and Development
KEYS TO SUCCESS (Cavanagh, Pande &
Neuman, 2014)
• Publicize Results, Admit Setbacks, and Learn from Both

• Make an Investment to Make It Happen

• Use Six Sigma Tools Wisely

• Link Customers, Process, Data, and Innovation to Build the Six Sigma System

• Make Top Leaders Responsible and Accountable

• Make Learning an Ongoing Activity
Is Six Sigma Right for Us Now? (Cavanagh,
Pande & Neuman, 2014)
Embarking on a Six Sigma initiative begins with a decision to change—
specifically, to learn and adopt methods that can boost the performance of
your organization. In its most ambitious applications, Six Sigma can be a
more fundamental change than, say, a major acquisition or a new systems
implementation, because Six Sigma affects how you run the business. The
depth of impact on your management processes and skills will vary, of
course, with how extensively you want to apply Six Sigma tools and the
results you are seeking.
The starting point in gearing up for Six Sigma is to verify that you’re ready
to—or need to—embrace a change that says “There’s a better way to run
our organization.” This decision shouldn’t be a rote, number-crunching-
based decision; you will need to consider a number of essential questions
and facts in making your readiness assessment
(Cavanagh, Pande & Neuman, 2014)
1. Assess the Outlook and Future Path of the Business
2. Evaluate Your Current Performance
3. Review Systems and Capacity for Change and Improvement
The Big Four Questions That
Shape a Deployment (Watson-
Hemphill, 2016)
There are hundreds of decisions that go into building the foundation for a strong Lean Six Sigma program. The most important ones fall into the Big Four categories—basic and seemingly simple questions whose answers define the framework for a deployment:

• Why?
• What?
• How?
• Who?
BIG QUESTION 1: WHY? (Watson-Hemphill,
2016)
Here are three examples that illustrate different ways in which the why question has been answered:

• A financial services company had grown rapidly through acquisition and needed a methodology to unify the culture and streamline its core processes. Lean Six Sigma provided a foundation and methodology that established a common language concerning process improvement and product design. It also helped the company create a culture of continuous improvement. By using Lean Six Sigma to remove waste, the company was able to decouple its cost curve from its growth curve—meaning that it could generate much more value for the amount of effort invested, significantly increasing profitability.

• A manufacturing company was experiencing significant margin pressure as a result of overseas competition. The leadership team chose to deploy Lean Six Sigma to reduce operational expenses and leveraged the design methodologies to enhance the customer’s experience and introduce new products. They also were able to use Lean Six Sigma to increase the skill level of the workforce and prepare people for future leadership positions within the company.

• A large hospital was interested in improving its patient satisfaction scores. Using Lean Six Sigma, the employees focused their improvement efforts on patient-facing processes that historically had caused patient frustration and dissatisfaction. The initiative was driven not by a need to reduce costs, but by a desire to better satisfy the hospital’s customers. Ratings for the hospital improved in every category by 20 to 40 percent.
BIG QUESTION 2: WHAT? (Watson-Hemphill,
2016)
The most robust project identification processes incorporate:

• Both top-down and bottom-up methods for identifying potential projects

• Identification of diverse criteria for both benefit and effort that can be used to score the projects against the business priorities

• A prioritization process that evaluates each project against the weighted criteria
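A minimal sketch of such a weighted prioritization matrix follows. The criteria, weights, and scores are all invented for illustration; the project names echo the examples later in this section:

```python
# Each project is scored 1-10 against weighted criteria; higher totals rank first.
# "effort_required" is scored so that a HIGHER number means LESS effort (easier).
WEIGHTS = {"revenue_impact": 0.4, "cost_savings": 0.3, "effort_required": 0.3}

projects = {
    "Reduce days' sales outstanding": {"revenue_impact": 8, "cost_savings": 5, "effort_required": 6},
    "Streamline HR onboarding":       {"revenue_impact": 3, "cost_savings": 6, "effort_required": 8},
    "Improve plant efficiency":       {"revenue_impact": 7, "cost_savings": 8, "effort_required": 4},
}

def weighted_score(scores: dict) -> float:
    """Sum of criterion score times criterion weight."""
    return sum(WEIGHTS[criterion] * s for criterion, s in scores.items())

# Rank candidate projects by their weighted total, best first.
ranked = sorted(projects.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{weighted_score(scores):4.1f}  {name}")
```

In practice the candidate list comes from both the top-down and bottom-up methods described above, and the weights reflect current business priorities.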
BIG QUESTION 2: WHAT? (Watson-Hemphill,
2016)
As you can probably predict, the focus of the project identification process will vary by organization. Here are two examples illustrating key differences:

• To identify the highest-value projects for its Lean Six Sigma rollout, a multinational manufacturer conducted a series of assessments at multiple plants, in the supply chain operations, in the sales organization, and in the regulatory functions. They tied their project selection and measurements to established operational targets involving revenue generation, quality, cost, and productivity. Looking across multiple areas for ideas that met the criteria led to a diversity of projects. For example, high-value opportunities were identified in improving plant efficiencies, streamlining the human resource process, reducing days’ sales outstanding, and optimizing the sales process.

• A healthcare company began by identifying their core processes. They then linked their annual goals to improvement needs in those core processes as a way to select target project opportunities. Some projects directly improved patient care. Others improved satisfaction with the call centers. Some streamlined bureaucratic processes that delayed payment to all parties. Still others reduced costs, such as the cost of expired materials and obsolete medical supplies.
BIG QUESTION 3: HOW? (Watson-Hemphill,
2016)
Knowledge about when to use which method and which road map is more sophisticated today, and organizations that fail to take the differences into account can waste time and effort. The principles we recommend are:

• Develop expertise in both the Lean and Six Sigma methodologies so that you understand which toolset is appropriate when. A few people still hold out for a pure Lean or a pure Six Sigma approach, but they are in the minority.

• Use the DMAIC (Define–Measure–Analyze–Improve–Control) structure as the road map for problem solving and process improvement. There are two basic types of DMAIC projects:

– The traditional Lean Six Sigma project team approach, where a group meets regularly over a period of time to solve a difficult problem that has no obvious solution. The emphasis is typically on tools that focus on understanding the voice of the customer, collecting the right data, and analyzing the data with statistical methods to identify the true root cause of the problem.
BIG QUESTION 3: HOW? (Watson-Hemphill,
2016)
– Kaizen projects, where a group of selected team members are brought together for an intense one-week period to complete a cycle of rapid improvement on a problem with a smaller scope. This structure is best used in situations where process waste or inefficiencies are a problem and the Lean toolset is more appropriate.

• Use the DMEDI (Define–Measure–Explore–Develop–Implement) road map for situations in which you need to design or substantially redesign a product, service, or process. The emphasis in DMEDI is on understanding customer needs, creatively innovating and exploring design alternatives, and optimizing the design.

• Consider using business process management (BPM) to establish a solid foundation of knowledge about the needs and functions of core processes. BPM emphasizes measurement, documentation, and control.
BIG QUESTION 4: WHO? (Watson-Hemphill, 2016)
Organizations that follow these guidelines have reaped a secondary benefit: the experience with Lean Six Sigma has helped their managers and managerial candidates become better and more capable leaders. Here are three examples:

• A large hotel chain trained their manager candidates in Lean Six Sigma methods to ensure that future leaders were more data-driven, process-driven, and customer-focused.

• A pharmaceutical company wanted to broaden the skills of its high-potential employees and better enable them to work cross-functionally across the business. Lean Six Sigma gave them a new toolkit, and also provided opportunities for professional development.

• A financial services company that was experiencing rapid expansion needed to develop more of a process focus to keep up with its growth. The CEO selected Lean Six Sigma to develop a foundation of greater analytic abilities and data-based decision making within his management team.
Leading and Managing Lean (Fliedner, 2015)
Whether the setting is manufacturing, service, administration, health
care, education, politics, or something else, it must be understood that
lean management must possess a systems perspective. A survey of
practitioners suggested that the single most important lean skill,
knowledge, or expertise item is the possession of a systems view and
thinking. Over many years, this result has maintained its consistency in
conversations with practitioners. Companies that have implemented
successful lean programs have commonly taken into account the entire
enterprise, ranging from suppliers to customers and everything in
between.
Leading and Managing Lean (Fliedner, 2015)
Lean must be viewed as a comprehensive system consisting of
leadership, culture, team, and practices and tools. A system is simply a
set of integrated parts sharing a clearly defined goal. In a system, if
changes are made to optimal values for only a few elements, the
system will not likely come close to achieving all the benefits that are
available through a fully coordinated move and may even have negative
payoffs. A firm must implement lean as part of a systematic and
comprehensive transformation of production and operation
procedures. If only a select few of the system elements reach optimal
levels, then the full benefits of change might be diminished.
Leading and Managing Lean (Fliedner, 2015)
Lean management must be viewed as an integral system of
four, interdependent elements: leadership, culture, team, and practices
and tools. Each of these necessary components affects the
effectiveness of the other components. For example, lean leaders must
be able to rely upon a supportive organizational culture. Lean leaders
are responsible for creating that culture. In order for a transformation
process to produce and to eliminate waste, it takes an immediate
response from every functional discipline, accounting, finance,
purchasing, and so on, when opportunities or issues arise. It takes a
coordinated effort of a team to achieve goals. Respect for team,
people, and their ideas for improvement are a necessary component of
lean management.
Leading and Managing Lean (Fliedner, 2015)
In the survey of practitioners, the second-most important lean system
element was “human relations skills,” which was identified as consisting
of leadership, change management, and team problem solving. This was
followed by real-world knowledge and experiences, lean culture, and then
lean practices and tools, among many others.
Lean & Change Management (Whiton,
Protzman & Protzman, 2018)
Change management is not only a large part of Lean; it is an essential part of
becoming Lean. As you move from accepting new ideas to transitioning
them into reality, one should not underestimate the importance of this component
to successfully disseminate, deploy, and sustain Lean. We have a saying when
implementing Lean: 50% is task and 50% is people. Fifty percent is applying the
Lean tools to any process, which is the scientific management part of Lean. The
other 50% is what we call the people piece, or change management. There must be
a balance between these two pieces. Listed below are various change tools to
consider when implementing Lean systems. Keep in mind that people mainly fear
changes they perceive as negative. None of us resist changes we perceive as
positive. Would any of you object to the change of increasing your pay by 10%?
Even if the change was made without telling you ahead of time? We would all view
this increase in pay as positive. Positive changes or changes that fit our paradigms
pass through our filters easily. It is only the changes we perceive as negative that
generate resistance.
The Change Acceleration Process (Whiton,
Protzman & Protzman, 2018)
The change acceleration process (CAP) model was popularized by GE
and used at AlliedSignal (now Honeywell). The model is:
Quality × Acceptance = Effectiveness, or Q × A = E.

If the change you are trying to deploy is not accepted, then you will not
achieve an effective result. One must understand (leveraging stakeholder
analysis tools) and deal with resistance from key stakeholders, build an
effective influence strategy and communication plan for the change, and
determine its effectiveness. Again, the relationship in the equation is
multiplicative, which means that if any of the components is zero, the
change will not be effective.
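A tiny numeric sketch of the multiplicative Q × A = E relationship. The 0–10 rating scale is my own assumption, not something the CAP model prescribes:

```python
def effectiveness(quality: float, acceptance: float) -> float:
    """CAP model: Effectiveness = Quality x Acceptance (both rated 0-10 here)."""
    return quality * acceptance

print(effectiveness(9, 7))  # strong solution, decent buy-in -> 63
print(effectiveness(9, 0))  # technically perfect, zero acceptance -> 0
```

The second call makes the point of the passage: however high the technical quality, zero acceptance multiplies out to zero effectiveness.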
The Change Acceleration Process (Whiton,
Protzman & Protzman, 2018)
This process involves building a predictive model by assessing each key
stakeholder’s anticipated resistance to change and building a strategy to increase
their acceptance. Following a more formal change roadmap will help ensure the
success of the change. The roadmap helps by guiding deployment and aiding risk
mitigation through ongoing communication and barrier removal throughout the effort.
The change acceleration process (CAP) is outlined below:
1. Creating a shared need
2. Shaping a shared vision
3. Mobilizing commitment
4. Making change last
5. Monitoring progress
6. Changing systems and structures
Lean Impact (Chang & Ries, 2018)
Lean Impact is an approach to maximizing social benefit in the face of the complex challenges in our society. It builds upon the best practices for innovation from the Lean Startup and beyond, while introducing new techniques tailored to the unique nature of the mission-driven arena. By combining scientific rigor with entrepreneurial agility, we can dramatically increase both the depth and breadth of our impact.

The essence of Lean Impact is captured by three core guiding principles. Throughout this book, I’ll demonstrate the power of this new mindset and how to translate it into practical action to fuel social innovation.

• Think big. Be audacious in the difference you aspire to make, basing your goals on the size of the real need in the world rather than what seems incrementally achievable.

• Start small. Between a desire to help people who are suffering today and pressure from funders to hit delivery targets, interventions often scale too soon. Starting small and staying small makes it far easier to learn and adapt – setting you on a path to greater impact over time.

• Relentlessly seek impact. Whether due to excitement, attachment, or the requirements imposed by a funder, we can become wedded to our intervention, technology, or institution. To make the biggest impact, fall in love with the problem, not your solution.
Virtuous Cycle (Watson-Hemphill, 2016)
Behaviour (Charron, Harrington; Voehl &
Mignosa 2013)
If we are to become true LSS practitioners, we need to gain a basic
understanding of the ninth waste, behavior waste, and how we apply
Kaizen to eliminate this and other wastes and facilitate change in our
organization. The ninth waste, behavior waste, teaches us that change
and improvement begin with us as individuals (personal waste) and
collectively (people waste) as members of an organization. Eliminating
non-Lean beliefs and behaviors across the organization requires the
sustained application of Kaizen.
Kaizen, Kaikaku & Kakushin (Charron, Harrington; Voehl & Mignosa 2013)
Most organizations are familiar with Kaizen (“small continuous” changes);
however, Kaikaku (“a transformation of mind”), which demonstrates large or
radical change, and Kakushin (“innovation”) are equally important in your Lean
transformation.
Employees Factors importance in Lean Six
Sigma Concept (Brkic & Tomic, 2016)
Findings – The first finding of this study is that the reward system and training are
significant predictors of Lean Six Sigma activities. The second part of findings shows
that Lean Six Sigma dimensions, such as Define, Measure, Analyze, Improve, and
Control/Define, Measure, Analyze, Design, and Validate, 5S and Kanban positively
influence employees’ performance, described by employee satisfaction, absenteeism,
salaries and benefits, employees’ commitment, and employee turnover rate.

Practical implications – This survey answers the need for Lean and Six Sigma unified
methodology achievement in soft factors area and gives applicable results for
companies in the supply chain that produces low-volume, high-complexity products.

Originality/value – Original and valuable conclusion is that employees’ factors are
both predictor and response variables of the Lean Six Sigma concept application.
Lean Six Sigma for the healthcare
sector (Bhat, Antony, Gijo & Cudney, 2020)
Findings – Patients perceive that waiting in queue harms their health, which can be rectified by
addressing the cycle time of the system. The research also found that effective leadership,
availability of data, involvement of cross-functional team and effective communication are critical to
the success of LSS projects. In addition, control charts, cause and effect diagram, 5S, gemba,
two-sample t-test, standardization, waste analysis and value stream mapping are some of the common
tools used to improve healthcare systems.

Research limitations/implications – The research was restricted to studying the impact of LSS on
the workflow and resource consumption of the MRD in Indian allopathic hospitals only. The validity
of the results can be improved by including more hospitals and more case studies from the
healthcare sector in different countries.

Originality/value – The findings will enable researchers, academicians and practitioners to
incorporate the results of the study in LSS implementation within the healthcare system to increase
the likelihood of successful deployment. This will provide greater stimulus across other departments
in the hospital sector for wider and broader application of LSS for creating and sustaining process
improvements.
Impact of Industry 4.0/ICTs, Lean Six Sigma and quality management
systems on organisational performance (Yadav, Shankar & Singh, 2020)

Findings – The study confirmed statistically significant differences among 20
organisational performance indicators under different combinations of QMS, LSS
and ICTs. These indicators include quality performance, delivery performance, sales
turnover, inventory level and so forth. However, for two indicators, namely,
absenteeism and throughput, significant difference in responses was not
established.

Practical implications – Using this study, practitioners can identify which LSS,
Quality System and ICT combination results in best performance and quick success.
On theoretical front, the study confirms impact of LSS and QMS on organisational
performance.

Originality/value – This study evaluates organisational performance under several
possible combinations of QMS, LSS, and emerging ICTs, which was so far
unexplored.
Lean Six Sigma for public sector organizations: is it a
myth or reality? (Antony, Rodgers & Cudney,2017)
Findings – LSS methodology can be embraced by all public sector organizations to create efficient
and effective processes to provide enhanced customer experience and value at reduced operational
costs.
Research limitations/implications – This paper seeks to contribute to and broaden the limited body
of evidence of the applicability of LSS to public sector organizations and identifies areas for further
research and review.
Practical implications – LSS will continue to grow across many public sector organizations in Europe
and other parts of the world over the forthcoming years. However, what will eventually determine if
LSS is viewed by public sector organizations as just a passing management fad or not largely
depends on the leadership and success of its execution. If LSS is deployed in its true sense across
the public sector organizations at a global level, the hard cash savings generated can reach several
billions.
Originality/value – The paper yields an immense value to both research scholars and practitioners
who are engaged in the introduction of LSS as a business process improvement strategy to achieve
and sustain competitive advantage. Moreover, this paper makes an attempt to dispel the myths of
LSS which have been quite prevalent in many public sector organizations around us today.
Can Lean Six Sigma make UK public sector organisations more
efficient and effective? (Antony, Rodgers & Gijo, 2016)
Findings – This paper concludes that while Lean Six Sigma is applicable to the UK public sector,
additional work is required to better evidence the benefits and return on investment that can be
delivered as well as considering more holistic approaches on an agency wide basis.

Research limitations/implications – This paper seeks to contribute to and broaden the limited body
of evidence of the applicability of Lean Six Sigma to the UK public sector and identifies areas for
further research and review.

Practical implications – Understanding the applicability of Lean Six Sigma affords opportunities to
public sector agencies in the current budget climate but additionally affords ways in which quality of
service can be enhanced. In some cases, it provides opportunities to meet new statutory
requirements around community empowerment.

Originality/value – The paper contributes to the body of evidence that demonstrates the
effectiveness of Lean Six Sigma within the public sector and suggests opportunity for those agencies
to meet funding challenges faced across the UK.
Exploring Lean Six Sigma implementation barriers in
Information Technology industry (Shamsi & Alam, 2018)
Purpose – The purpose of this paper is to present critical barriers and obstacles faced by Information Technology (IT)
industry in the implementation of Lean Six Sigma (LSS) as the business improvement methodology.

Design/methodology/approach – A literature review of peer-reviewed journal articles, master and doctoral theses,
paradigmatic books with managerial impact and survey reports was used to identify distinct barriers. An empirical
survey, using 400 self-administered questionnaires, was then conducted. Data about 11 LSS barriers from 256 usable
questionnaires, with a response rate of 64 per cent, were collected and analyzed by means of statistical data analysis
software.

Findings – The challenges of “part-time involvement in Lean Six Sigma projects”, “time consuming”, “staff turnover in
middle of project”, “difficulty in data collection” and “difficulty in identifying project scope” emerged as the most
critical barriers in the context of IT industry. This research work advocates the development of a strategy for
addressing the most critical barrier instead of focusing on all for successful implementation.

Originality/value – This paper will prove to be a fantastic resource for many researchers and practitioners who are
engaged in research and applications of LSS in the IT industry. Moreover, the scarcity of literature specific to LSS in IT
industry will be addressed to some extent.
Lean Six Sigma effect on Jordanian pharmaceutical industry’s
performance (Alkunsol, Sharabati, AlSalhi and El-Tamimi, 2019)
Findings – The results show that there is an agreement on high implementation of Lean Six Sigma variables among Jordanian
Pharmaceutical Manufacturing organizations; there are strong relationships among Lean Six Sigma variables, except between
non-utilized talent and transportation; there are strong relationships between Lean Six Sigma variables and business performance. All
Lean Six Sigma variables have effect on business performance, except extra processing and waiting time.

Research limitations/implications – This study was carried out on the pharmaceutical industry in Jordan, generalizing results of one
industry and/or one country to other industries and/or countries may be questionable. Extending the analyses to other industries and
countries represents future research opportunities.

Practical implications – Implementing Lean Six Sigma variables in all Jordanian Pharmaceutical Manufacturing organizations can
improve their business performance; also, it can be applied to other manufacturing industry.

Social implications – The aim of all organizations is to reduce waste, which leads to reserve the natural resources, which is considered
as a corporate social responsibility.

Originality/value – Only few studies related to Lean or Six Sigma have been carried out in pharmaceutical industry in Jordan.
Therefore, this study might be considered as an initiative study, which studies the effect of both Lean and Six Sigma on
pharmaceutical industry in Jordan.
Hospital management from a high reliability organizational change
perspective A Swedish case on Lean and Six Sigma (Eriksson, 2017)
Findings – The nurses perceived that Lean worked better than Six Sigma, because of its bottom-up
approach, and its similarities with nurses’ well-known work qualities. Nurses coordinate patients
care, collaborate in teams and take leadership roles. To maintain high reliability and to become
quality developers, nurses need stable resources. However, professional’s logic collides with
management’s logic. Expert knowledge (top-down approach) without nurses’ local knowledge
(bottom-up approach) can lead to problems. Healthcare quality methods are standardized but must
be used with flexibility. However, HROs ensue not only from method quality but also from work
attitudes, commitment and continuous work-improvement.

Practical implications – Management can support personnel in developmental work with:
continuous education, training, teamwork, knowledge sharing and cooperation. Authoritarian
method structures that limit the healthcare professionals’ autonomy should be softened or
abandoned.

Originality/value – The study uses theoretical concepts from HROs, which were developed for
unexpected events, to explain the consequences of implementing Lean and Six Sigma in healthcare.
Lean Six-Sigma: the means to healing an ailing
NHS? (Bancroft, Saha, Li, Lukacs & Pierron 2018)
Findings – The model produced a robust positive impact when Lean Six-Sigma is adopted,
increasing the likelihood of A&E departments meeting their performance objective to see
and treat patients in 4 h or less.
Research limitations/implications – Further variables such as staffing levels, A&E admission
type could be considered in future studies. Additionally, it would add further clarity to
analyse hospitals and trusts individually, to gauge which are struggling.
Practical implications – Should the NHS further its understanding and adoption of Lean
Six-Sigma, it is believed this could deliver significant improvements in productivity, patient care
and cost reduction.
Social implications – Productivity improvements will allow the NHS to do more with an
equal amount of funding, therefore improving capacity and patient care.
Originality/value – Through observing A&E and its ability to treat patients in a timely
fashion, it is clear the NHS is struggling to meet its performance objectives; the
recommendation of Six-Sigma in A&E should improve the reliability and quality of care
offered to patients.
Lean Six Sigma applications in the textile industry: a
case study (Adikorley, Rothenberg & Guillory, 2017)
Findings – Three successful projects, two on changeover time reduction and one on
metal contamination, were completed. Additional findings from this study suggest
that strategic partnerships with other high-performing companies and storytelling
are two critical success factors. Also, it is critical for management to convey a clear
vision for LSS that can be operationalized within a company for successful
deployment of LSS textile projects.
Research limitations/implications – The findings from this case study cannot be
generalized.
Originality/value – The literature on LSS in small- and medium-sized businesses is
limited. The literature on the use of LSS in the textile and apparel industry is even
more limited. This paper shows various processes within the textile complex where
LSS has been deployed successfully, yielding economic impacts. By using qualitative
methods, the value of strategic partnerships, storytelling and a vision was seen.
Business survival and market performance through Lean Six Sigma in the chemical
manufacturing industry (Muganyi, Madanhire & Mbohwa, 2019)

Findings – The research findings were mainly based on the inferences obtained from a chemical
product manufacturing concern in South Africa, to distinguish the efficacy and relevance of Lean Six
Sigma as strategic business survival tool and imputing strategic resonance to corporate strategy.
Research limitations/implications – This research was limited to distinguishing Lean Six Sigma as a
business survival strategic tool and an ultimate enhancer of market performance for a chemical
product manufacturing entity. The implementation and evaluation of the Lean Six Sigma
methodology as a business survival strategic and market performance enhancement option for the
case study organization was entailed as the corollary of deductive resemblance to similar entities.
Practical implications – This study enables continuous improvement practitioners to evaluate the
Lean and Six Sigma practices. The advantages posed by the simultaneous and optimized application
of the two approaches versus individual application were assessed and verified to produce
enhanced continuous improvement. This poses further challenges to scholars and academics to
pursue further researches on the practicality of applying Lean Six Sigma as a strategic option.
Originality/value – The paper prompts the efficacy of well publicized methodologies and evaluates
their implementation for strategic performance for manufacturing organizations. The practical
application, constraints and resultant effects of deploying Lean Six Sigma were reviewed to give
impetus to the methodology.
Reducing medication errors using lean six sigma methodology in a Thai
hospital: an action research study (Trakulsunti, Antony, Dempsey &
Brennan, 2020)
Findings – The number of dispensing errors decreased from 6 to 2 incidents per
20,000 inpatient days per month between April 2018 and August 2019 representing
a 66.66% reduction. The project has improved the dispensing process performance
resulting in dispensing error reduction and improved patient safety. The
communication channels between the hospital pharmacy and the pharmacy
technicians have also been improved.
Research limitations/implications – This study was conducted in an inpatient
pharmacy of a teaching hospital in Thailand. Therefore, the findings from this study
cannot be generalized beyond the specific setting. However, the findings are
applicable in the case of similar contexts and/or situations.
Originality/value – This is the first study that employs a continuous improvement
methodology for the purpose of improving the dispensing process and the quality
of care in a hospital. This study contributes to an understanding of how the
application of action research can save patients’ lives, improve patient safety and
increase work satisfaction in the pharmacy service.
Resistance to Change (Charron, Harrington;
Voehl & Mignosa 2013)
Organizational change is often more difficult than it first appears. Why
is individual change so difficult? What is it that makes organizational
change so difficult? These are fundamental questions that must be at
the forefront of any LSS transformation process.

There are three fundamental aspects that prevent, hinder, or inhibit
employees from participating in change management. First, there is
fear of the unknown. Second is the measurement system. Finally, there
are individual beliefs and collective organizational beliefs that present
resistance barriers.
Fear of the Unknown (Charron, Harrington;
Voehl & Mignosa 2013)
When we say fear, we don’t necessarily mean fear of retribution from a
manager or another employee in the organization. What employees
typically fear is the unknown associated with any change to their
environment. The change from where they are now and what they are
familiar with to where the organization is going can cause trepidation
on the part of many employees. Often changes are not clearly or
completely defined. When there is uncertainty, people may be hesitant
to walk into the unknown. Addressing this employee fear is a
component of any successful LSS organization.
Measurement Systems (Charron, Harrington;
Voehl & Mignosa 2013)
It has often been said that measures drive behavior and bad measures drive
bad behavior, or non-Lean measures drive non-Lean behavior. In either event,
the measurement system can be a significant source of resistance among
employees, departments, or divisions of the organization. When embarking on
any LSS project, understanding the nature of the measurement system that’s in
place is a critical factor to the success of the project. A review of measures
may help in the formulation of your LSS project. Some types of measures that
should be considered include:
• Individual employee performance measures
• Department measures
• Division measures
• Bonus performance measures
• Process measures
• Productivity measures
• Customer-related performance measures
• Cost measures
• Corporate measures
• Stakeholder measures
• Regulatory agency measures
• Government regulation measures
• Risk management measures
• Liability measures
• Contract measures
Whether the ability to change these measures is real or just perceived, the
above list can have a substantial influence on the resistance to change that
you face on your LSS project. Clearly, many of those described can deter an
employee on an LSS team from taking action.
Beliefs (Charron, Harrington; Voehl & Mignosa
2013)
Human beings have a fundamental desire to feel that the actions they
are taking every day are correct. These actions are derived from each
individual’s beliefs. We normally don’t look at ourselves as individuals
who are deliberately doing something that is wrong. We generally
believe that our behaviors and actions are “the right thing to do.”
Because of this, people tend to hold on to their beliefs and behaviors
tightly, making it difficult to change a behavior pattern that is not
producing effective performance of the organization.
Kaizen (Charron, Harrington; Voehl & Mignosa
2013)
Where to start Kaizen? If you are not sure where to start Kaizen in your
organization, use the four K’s of Kaizen; observe your organization, and
start with one of the following:
• Kusai: Things that smell bad.
• Kitsui: Things that are hard to do or are in dark areas.
• Kitanai: Things that are dirty.
• Kiken: Things that are dangerous.
Kaizen and You Method (Charron, Harrington;
Voehl & Mignosa 2013)
Kaizen is change + improve. That means making simple small incremental
improvements that any employee can complete. That’s all there is to it. With
the Kaizen and You method each employee can begin today to “change your point
of view,” “change the way you work,” and “change the way you think.” How can
you start Kaizen today in your work area?
• Stop: Stop doing unnecessary things.
• Reduce: If you can’t stop, reduce them somehow.
• Change: Try another way.
There are three laws that make the simple concept of Kaizen work, but you
absolutely have to do all three or you will never be good at Kaizen:
• Surface: Write the idea down.
• Implement: You make the change.
• Share: Post it, review it, and talk about it.
With Kaizen you can change yourself and you can change your workplace. Kaizen
should be done to benefit you. The person who gains the most from the Kaizen
is the person who does the Kaizen. Start doing Kaizen today!
Kaizen for Process Troubleshooting (Charron,
Harrington; Voehl & Mignosa 2013)
Everyone in your organization should completely understand and be able to
successfully complete Kaizen using the following five-step process.
Employees who cannot complete this process cannot improve their
individual areas, and consequently, you cannot improve the organization.
• Step 1: When a problem (abnormality) arises, first, go to Gemba.
• Step 2: Check the Gembutsu (relevant objects surrounding the problem).
• Step 3: Take temporary countermeasures “on the spot.”
• Step 4: Find the root cause(s).
• Step 5: Standardize to prevent recurrence.
Step 1: Go to Gemba (Charron, Harrington;
Voehl & Mignosa 2013)
Gemba is the most important place in your company. It is where all of
the value is created for your customers. This concept was so important
that Soichiro Honda, founder of Honda Motor Company, did not have a
president’s office. He was always found somewhere in Gemba. Gemba
means the “real place,” or where the “action” is. When a problem
arises, go to Gemba first and look to solve specific problems.

Step 2: Conduct Gembutsu (Charron,
Harrington; Voehl & Mignosa 2013)
Gembutsu means to assess all the relevant information in Gemba that
surrounds the problem. Interview several employees; ask questions of
what was happening when the problem occurred. Seek information in a
nonthreatening way; the employees in the area want the problem to be
resolved as much as you do. Don’t place blame, insinuate wrongdoing,
or belittle an employee’s work performance. This is a search for the
facts of what happened. Remember to gather information regarding all
5M’s: materials, machinery, manpower, measurements, and methods.
Step 3: Take Temporary Countermeasures “on the
Spot” (Charron, Harrington; Voehl & Mignosa
2013)
There is nothing more reassuring to an employee than knowing that
management will support employees when problems arise. This is best
demonstrated by a manager that takes sound action immediately. Your
ability to remain calm in a critical situation and gather relevant
information, understand the situation, and take action on the spot is
one sign of good leadership and will be respected by all employees.
Most importantly, it should be recognized by all, especially
management, that these are temporary measures. The most common
mistake is that organizations stop here at step 3; they never find the
root causes, and consequently the organization lives with many “band
aid” solutions that never get resolved and result in constant nagging
poor-quality and poor-performance issues.
Step 4: Find Root Causes (Charron, Harrington;
Voehl & Mignosa 2013)
Once temporary measures are in place, root cause analysis must be
conducted. It is imperative that the root causes are found and
eliminated. These can be conducted using techniques such as the five
whys, cause-and-effect analysis, or failure mode and effects analysis
(FMEA). This is a critical step, for if the root cause(s) is not found, the
organization is doomed to repeatedly revisit the problem.
Step 5: Standardize to Prevent Recurrence
(Charron, Harrington; Voehl & Mignosa 2013)
Standardize means to put in a control system to prevent the problem from
appearing again. Depending on the nature of the problem, this often
requires management tools such as a revised maintenance schedule, revised
standard operation procedures or visual work instructions, or process control
charts. These preventative measures must be reviewed regularly to assure
that the problem has been eliminated. During the standardize step you must:
• Eliminate root causes
• Implement a permanent solution
• Confirm the effectiveness of the permanent solution
• Standardize the use of the new procedure
KAIKAKU—TRANSFORMATION OF MIND (Charron, Harrington; Voehl & Mignosa 2013)
Similar to Kaizen, Kaikaku has been defined in many ways. Several definitions
follow to give you a range of interpretations of the term Kaikaku.
• Kaikaku: Change + radical.
• Kai: To take apart and make new. Kaku: Radically alter.
• Kaikaku: Transformation of mind.
• Kaikaku: Can also mean innovation, although the authors will discuss
innovation later in this chapter as Kakushin.
How Do We Recognize Kaikaku (Transformation of
Mind)? (Charron, Harrington; Voehl & Mignosa
2013)
Kaikaku is the result of successive Lean learning and Lean doing until Lean
becomes a part of you. Looking back, I am not even sure when it first
occurred in me. One day I was training in an organization when it became
clear to me that my mind was completely rewired with Lean beliefs and
behaviors. Since that day I have been on a mission to help employees at all
levels transition from a traditional belief system to a Lean belief system.
In Kaizen we talked about small incremental changes. Many of the tools
presented in this handbook are used in Kaizen. Some, however, are tailored
to Kaikaku in that they require a more complex set of activities, a more
comprehensive set of Lean beliefs and behaviors, or simply put, a
“transformation of mind” has taken place in order to properly deploy the
Lean tool. This entire handbook is about trying to bring about this
transformation of mind in all your employees.
How Do We Recognize Kaikaku (Transformation of
Mind)? (Charron, Harrington; Voehl & Mignosa
2013)
The misconception of management in most Western companies is that Lean
is just a set of tools. Management and employees learn a few Lean tools,
conduct some Kaizen, and believe that their organization is Lean. Until the
transformation of mind has occurred in employees at all levels, your
organization cannot reap the true benefits of Lean.
It is only when an employee reaches a significant point in the Lean journey
that he/she begins to recognize Kaikaku in himself or herself. Their own
personal transformation of mind from traditional beliefs to Lean beliefs is
occurring on a daily basis. Those in your organization still practicing
traditional beliefs will neither recognize your efforts nor consider them to
have any value. Do not let this deter you. You must continue on your journey
and help them to see and live Lean beliefs and behaviors. Two examples of
Lean tools that can reflect the presence of Kaikaku are cell design and facility
layout.
KAKUSHIN (INNOVATION) (Charron,
Harrington; Voehl & Mignosa 2013)
Virtually all process improvement requires some form of change that could be
described as innovation, or at a minimum, creativity.
To compete, organizations either attempt to differentiate themselves from their
competition or attempt to achieve some relative low-cost position, and in both
cases, innovation is the key. The journey from novice to expert in any field begins
by understanding these essentials, practicing them, mastering them at one level,
and then moving on toward the limits of your potential. At some point in the
process, the best innovators rise above their profession in a multidisciplinary
manner.
Each of the six essentials of the 20-20 innovation process represents a bundle of
habits, skills, and knowledge that come together in innovation-driven problem-
solving personalities, and each personality draws its strengths from a variety of
specialties. A select number have been identified and mapped against the six
essentials as a basic guide to the interdisciplinary skill set.
The laws of Lean Six Sigma (George, Rowlands
& Kastle, 2005)
 Law #1: The Law of the Market— Customer needs define quality and are the highest priority for
improvement. You can't get sustained revenue growth without this.
 Law #2: The Law of Flexibility— The speed of any process is proportional to its flexibility (that is, how easily
people can switch between different types of tasks). If you want to be fast, you have to get rid of anything
that causes a loss of productivity anytime people want to stop what they're doing and start on something
new. (On the shop floor, inflexibility is seen in long set-up or changeover times. In service areas, inflexibility
is seen when people have to track down missing information, change from one computer system to another,
and so on.)
 Law #3: The Law of Focus— Data shows that 20% of the activities in a process cause 80% of the problems
and delay. So you'll make the most progress if you focus your efforts on those 20% (what you may hear some
people call "Time Traps").
 Law #4: The Law of Velocity (Little's Law)— The speed of any process is inversely related to the amount of
WIP (work- or things-in-process). So as WIP goes up, speed goes down. As WIP goes down, a process speeds
up. (Lesson: to make a process faster, cut down on how much work there is in process at any given time.)
 Law #5: The Law of Complexity and Cost— The complexity of the service or product offering generally adds
more costs and WIP than either poor quality (low Sigma) or slow speed (un-Lean) process problems. So one
of your early improvement targets may well be reducing the numbers or varieties of products and services
your work group is involved in. (This is a management decision that has to be based on good financial and
market information.)
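Law #4 can be stated as Little's Law: average lead time = WIP ÷ average completion rate. A minimal Python sketch (the work-item numbers are hypothetical, chosen only to illustrate the relationship):

```python
def lead_time(wip: float, completion_rate: float) -> float:
    """Little's Law: average lead time = WIP / average completion rate.

    wip: average number of items in process at any given time
    completion_rate: average items completed per unit time
    """
    return wip / completion_rate

# A team with 30 open work items completing 10 per week takes,
# on average, 3 weeks to finish any one item:
print(lead_time(30, 10))  # 3.0 weeks
# Cutting WIP in half (at the same completion rate) halves the lead time:
print(lead_time(15, 10))  # 1.5 weeks
```

This is the arithmetic behind the lesson in Law #4: to make a process faster, cut down on how much work is in process at any given time.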
Some Definitions (Brussee, 2004)
Accuracy: Accuracy is a measurement concept involving the correctness of the average reading. It is the extent
to which the average of the measurements taken agrees with a true value.

Analyze: Analyze is the third step in the DMAIC problem-solving method. The measurements/data must be
analyzed to see if they are consistent with the problem definition and also to identify a root cause. A problem
solution is then identified. Sometimes, based on the analysis, it is necessary to go back and restate the problem
definition and start the process over.

Attribute: An attribute is a qualitative characteristic that can be counted.

Attribute Data : Attribute data are data that are not continuous, that fit into categories that can be described in
terms of words (attributes). Examples: "good" or "bad," "go" or "no-go," "pass" or "fail," and "yes" or "no."

Chi-Squared Test: This test is used on variables (decimal) data to see if there was a statistically significant
change in the sigma between the population data and the current sample data. This test is done only after the
data plots have indicated that there has been no radical change in the shape of the data plots.
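A minimal sketch of the idea, using invented sample data and critical values taken from standard chi-squared tables rather than a statistics library:

```python
import statistics

# Chi-squared test sketch: has the sample sigma changed from a known
# population sigma? The sample data and population sigma are invented.
population_sigma = 2.0
sample = [10.2, 12.1, 9.5, 11.8, 10.9, 13.0, 9.9, 11.2, 10.5, 12.4]

n = len(sample)
s = statistics.stdev(sample)
chi_sq = (n - 1) * s**2 / population_sigma**2   # test statistic, df = n - 1

# Two-sided 95% critical values for df = 9 (standard chi-squared tables).
lower, upper = 2.700, 19.023
sigma_changed = not (lower <= chi_sq <= upper)
print(round(chi_sq, 3), sigma_changed)
```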
Some Definitions (Brussee, 2004)
Child Distributions: This term is used when interfacing with quality department data. A
child distribution refers to the sample averages and the sigma of multiple sample averages.
These are labeled x̄ and s.

Confidence Tests Between Groups of Data: These tests are used to determine if there is a
statistically significant change between samples or between a sample and a population.
These are normally done at a 95% confidence level.

Continuous Data (Variables Data): Continuous data can have any value in a continuum.
They are decimal data without "steps."

Control: Control is the final step in the DMAIC problem-solving method. A verification of
control must be implemented. A robust solution (like a part change) will be easier to keep
in control than a qualitative solution.
Some Definitions (Brussee, 2004)
Control Chart : A control chart is a tool for monitoring variance in a process over
time. A traditional control chart is a chart with upper and lower control limits on
which are plotted values of some statistical measure for a series of samples or
subgroups. A traditional control chart uses both an average chart and a sigma chart.
See Simplified Control Chart.
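As a rough sketch of the control-limit idea (subgroup averages below are invented; a real average chart would use the appropriate control-chart constants rather than a bare 3-sigma rule):

```python
import statistics

# Sketch of Shewhart-style control limits: center line +/- 3 sigma.
# The subgroup averages are made-up sample data.
subgroup_means = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0, 9.7]

center = statistics.mean(subgroup_means)
sigma = statistics.stdev(subgroup_means)   # sample standard deviation
ucl = center + 3 * sigma                   # upper control limit
lcl = center - 3 * sigma                   # lower control limit

out_of_control = [x for x in subgroup_means if not lcl <= x <= ucl]
print(round(center, 3), round(ucl, 3), round(lcl, 3), out_of_control)
```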

Correlation Testing: This tool uses historical data to find what variables changed at
the same time or position as the problem time or position. These variables are then
subjected to further tests or study.

Cumulative: In probability problems, this is the sum of the probabilities of getting
"the number of successes or fewer," like getting three or fewer heads on five flips
of a coin. This option is used on "less-than" and "more-than" problems.
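This is what Excel's BINOMDIST computes with its cumulative flag set to TRUE. A standard-library equivalent, shown on the coin-flip example from the definition:

```python
from math import comb

# Cumulative binomial probability, equivalent to Excel's
# BINOMDIST(s, n, p, TRUE): probability of s successes or fewer.
def binom_cdf(s, n, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(s + 1))

# Probability of three or fewer heads in five fair coin flips.
print(binom_cdf(3, 5, 0.5))  # 0.8125
```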
Some Definitions (Brussee, 2004)
Define: This is the overall problem definition step in the DMAIC problem-
solving method. This definition should be as specific as possible.

DMAIC Problem-Solving Method: DMAIC (Define, Measure, Analyze, Improve, Control) is
the Six Sigma problem-solving approach used by green belts. This is the road map that is
followed for all projects and process improvements, with the Six Sigma tools applied as
needed.

F Test: This test is used on variables (decimal) data to see if there was a
statistically significant change in the sigma between two samples. This test is
done only after the data plots have indicated that there has been no radical
change in the shape of the data plots.
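A minimal sketch with invented before/after samples, using a critical value from standard F tables instead of a statistics library:

```python
import statistics

# F-test sketch: compare the variances (sigmas) of two samples.
# The "before" and "after" data are invented for illustration.
before = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 4.7]
after  = [5.0, 5.6, 4.4, 5.9, 4.2, 5.5, 4.6, 5.8]

var_b = statistics.variance(before)
var_a = statistics.variance(after)
f_stat = max(var_a, var_b) / min(var_a, var_b)   # larger variance on top

# Two-sided 95% critical value for df = (7, 7), from standard F tables.
critical = 4.99
sigma_changed = f_stat > critical
print(round(f_stat, 2), sigma_changed)
```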
Some Definitions (Brussee, 2004)
Fishbone Diagram: This Six Sigma tool uses a representation of a fish skeleton to
help trigger identification of all the variables that can be contributing to a problem.
The problem is visually shown as the fish "head" and the variables are shown on
the "bones." Once all the variables are identified, the key two or three are
highlighted for further study.

Improve: Improve is the fourth step in the DMAIC problem-solving method. Once a
solution has been analyzed, the fix must be implemented. The expected results
must be verified with independent data after solution implementation.

Labeling Averages and Standard Deviations: We label the average of a population X̄
and the sample averages x̄. Similarly, the standard deviation (sigma) of the
population is labeled S and the sample standard deviations (sigma) are labeled s.
Some Definitions (Brussee, 2004)
Measure: Accurate and sufficient measurements/data are needed in this second
step of the DMAIC problem-solving method.

Minimum Sample Size: The number of data points needed to enable statistically
valid comparisons or predictions.

N: This is the sample size or, in probability problems, the number of independent
trials, like the number of coin tosses, the number of parts measured, etc.

Need-Based Tolerances: This Six Sigma tool emphasizes that often tolerances are
not established based on the customer's real needs. A tolerance review offers
opportunity for both the customer and the supplier to save money.
Some Definitions (Brussee, 2004)
Normal Distributions: A bell-shaped distribution of data that is indicative of the distribution of data
from many things in nature. Information on this type of distribution is used to predict populations
based on samples of data.

Number s (or x Successes): This is the total number of "successes" that you are looking for in a
probability problem, like getting exactly three heads. This is used in Excel's BINOMDIST.

Parent Populations: This term is used when interfacing with quality department data. A parent
population refers to the individual data and their related statistical descriptions, like average and
sigma. These are labeled X̄ and S.

Plot Data: Most processes with continuous data have data plot shapes that stay consistent unless a
major change to the process has occurred. If the shapes of the data plots have changed
dramatically, then the quantitative formulas can't be used to compare the processes.
Some Definitions (Brussee, 2004)
Probability Determination: This is the likelihood of an event happening by pure
chance.

Probability p (or Probability s): Probability p or s is the probability of a "success" on
each individual trial, like the likelihood of a head on one coin flip or a defect on one
part. This is always a proportion and generally shown as a decimal, like 0.0156.

Process Flow Diagram: The process flow diagram, and specifically the locations
where data are collected, may help pinpoint possible areas contributing to a
problem.
Some Definitions (Brussee, 2004)
Proportional Data: Proportional data are based on attribute inputs, such as "good" or "bad," "yes"
or "no," etc. Examples are the proportion of defects in a process, the proportion of "yes" votes for a
candidate, and the proportion of students failing a test.

Repeatability: Repeatability is the consistency of measurements obtained when one person
measures the same parts or items multiple times using the same instrument and techniques.

Reproducibility: Reproducibility is the consistency of average measurements obtained when two or
more people measure the same parts or items using the same measuring technique.

RSS Tolerances: When establishing tolerances on stacked parts, the traditional method is to use
"worst-case" fit, even though the probability of this fit may be extremely low. The RSS method (root
sum-of-squares) of establishing tolerances takes this probability into consideration, resulting in
generally looser tolerances with no measurable reduction in quality.
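The contrast between worst-case and RSS stacking is simple arithmetic. The four component tolerances below are illustrative, not from the text:

```python
from math import sqrt

# Worst-case vs RSS (root sum-of-squares) tolerance stack-up for a
# four-part assembly. Component tolerances (+/-) are invented.
tolerances = [0.010, 0.008, 0.006, 0.004]

worst_case = sum(tolerances)                  # straight sum: 0.028
rss = sqrt(sum(t**2 for t in tolerances))     # statistically realistic fit

print(round(worst_case, 4), round(rss, 4))    # RSS stack is much tighter
```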
Some Definitions (Brussee, 2004)
Sample Size, Proportional Data: This tool calculates the minimum sample size needed to get
representative attribute data on a process generating proportional data. Too small a sample may
cause erroneous conclusions. Excessive samples are expensive.

Sample Size, Variables Data: This tool calculates the minimum sample size needed to get
representative data on a process with variables (decimal) data. Too small a sample may cause
erroneous conclusions. Excessively large samples are often expensive.
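Both sample-size ideas can be sketched with the usual normal-approximation formulas. The choice of 95% confidence (z ≈ 1.96), the expected defect rate, the sigma estimate, and the margins of error below are all assumptions made for illustration:

```python
from math import ceil

# Minimum sample size sketches at 95% confidence (z ~= 1.96).
Z = 1.96

def n_proportional(p_expected, margin):
    """Sample size for proportional (attribute) data."""
    return ceil(Z**2 * p_expected * (1 - p_expected) / margin**2)

def n_variables(sigma_estimate, margin):
    """Sample size for variables (continuous) data."""
    return ceil((Z * sigma_estimate / margin)**2)

print(n_proportional(0.05, 0.02))  # e.g., ~5% defect rate, +/-2% margin
print(n_variables(2.0, 0.5))       # e.g., sigma ~2.0, +/-0.5 margin
```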

Simplified Control Chart: A control chart—traditional or simplified—is a tool for monitoring variance
in a process over time. Traditional control charts have two graphs and are not intuitive. Simplified
control charts have one graph, are intuitive, and are operator-friendly. See Control Chart.

Simplified DOE: This Six Sigma tool enables tests on an existing process to establish optimum
settings on the key process input variables.
Some Definitions (Brussee, 2004)
Simplified FMEA: This Six Sigma tool is used to convert qualitative concerns on collateral damage to
a prioritized action plan. Unintentional collateral harm may occur to other processes due to a
planned process or product change.

Simplified Gauge Verification: This Six Sigma tool is used on variables data (decimals) to verify that
the gauge is capable of giving the required accuracy of measurements compared to the allowable
tolerance.

Simplified QFD: This Six Sigma tool is used to convert qualitative customer input into specific
prioritized action plans. The customer includes everyone who is affected by the product or process.

Simplified Transfer Function: The simplified transfer function shows the variation contribution of
each component to the total variation of an assembly or a process. This allows for component focus
to effect total variation reduction.
Some Definitions (Brussee, 2004)
t Test: This Six Sigma test is used to see if there was a statistically significant change
in the average between population data and the current sample data, or between
two samples. This test on variables data is done only after the data plots have
indicated that there has been no radical change in the shapes of the data plots and
the chi-squared test or F test shows no significant change in sigma.
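A one-sample version can be sketched with the standard library. The sample data and population mean are invented, and the critical value comes from standard t tables:

```python
import statistics
from math import sqrt

# One-sample t-test sketch: did the average shift from the population
# mean? Sample data and population mean are illustrative.
population_mean = 50.0
sample = [51.2, 49.8, 52.1, 50.6, 51.5, 50.9, 52.4, 51.0]

n = len(sample)
x_bar = statistics.mean(sample)
s = statistics.stdev(sample)
t_stat = (x_bar - population_mean) / (s / sqrt(n))

# Two-sided 95% critical value for df = 7 (standard t tables).
critical = 2.365
average_changed = abs(t_stat) > critical
print(round(t_stat, 2), average_changed)
```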

Tolerance Stack-up Analysis: This is the process of evaluating the effect that the
dimensions of all components can have on an assembly. There are various methods
used, including worst case, RSS (root sum-of-squares), modified RSS, and Monte
Carlo simulations.

Variables Data: Variables data (continuous data) are generally in decimal form.
Theoretically you could look at enough decimal places to find that no two values
are exactly the same.
BASICS Model Baseline (B): (Protzman III, Keen &
Protzman, 2018)
Create the vision.
Train the leadership and implementation team in Lean.
Charter the team, scope the project.
Select the pilot area and team members.
Conduct five-day Lean training seminar.
Baseline metrics, identify the “gaps” and set targets.
Build a chronological file—take photos and videos of how it is today.
Health check.
Value-stream map: current, ideal, and future state.
Determine the customer demand and takt time (TT).
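Takt time, mentioned in the last step, is a one-line calculation. The shift length and demand figures below are invented:

```python
# Takt time = available work time / customer demand. Numbers invented.
available_minutes = 7.5 * 60   # one shift, net of breaks
daily_demand = 90              # units the customer pulls per day

takt_time = available_minutes / daily_demand
print(takt_time)  # 5.0 minutes per unit
```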
BASICS Model Assess/Analyze (A): (Protzman III,
Keen & Protzman, 2018)
Involve all the staff to analyze the process.
Process-flow analysis (PFA)—become the customer or product. This
includes a point-to-point diagram of how the product flows.
Create process block diagram.
Group tech analysis (if required).
Workflow analysis (WFA)—become the operator. This includes a
spaghetti chart of how the operator works.
Setup/changeover analysis (SMED).
BASICS Model Suggest Solutions (S): (Protzman III,
Keen & Protzman, 2018)
Update the process block diagram—one-piece flow vision for the
process.
Create the optimal layout for the process.
The ten-step process for creating master layouts.
Design the work stations.
Create standard work.
Determine the capacity and labor requirements.
Make and approve recommendations.
Train staff in the new process.
BASICS Model Implement (I): (Protzman III, Keen & Protzman, 2018)
Implement the new process—use pilots.
Start up the new line.
Update standard work.
Determine capacity and staffing (PPCS).
Implement line balancing.
Implement line metrics.
Visual management—Incorporate 5S, visual displays, and controls.
Implement Lean materials system.
Implement mistake-proofing.
Implement total productive maintenance (TPM).
BASICS Model Check (C): (Protzman III, Keen & Protzman, 2018)
Do you know how to check?
Check using the visual-management system.
Heijunka and scheduling.
Mixed model production.
BASICS Model Sustain (S): (Protzman III, Keen & Protzman, 2018)
Document the business case study and results.
Create the Lean culture.
Create a sustain plan.
Upgrade the organization.
Ongoing leadership coaching.
A startup is a temporary organization
designed to search for a repeatable and
scalable business model (under extreme
uncertainty).
- Steve Blank, The Startup Owner’s Manual
Five major lessons from Lean Startup thinking
(Nir, 2018)
1) Start with the customer in mind.
2) Define and communicate the mission and vision.
3) Synthesize an integrative operating model.
4) Identify metrics that matter.
5) Pivot or persevere.
An innovation is the application of creative ideas,
knowledge, or new technology,
to a product/service, process, or business model that
is accepted by markets and society.
• Adapted from OECD 2005 and Wikipedia.

Innovation = Imagination +
Creativity +
Implementation (Execution) +
Value
The Lean Startup Principles
• Entrepreneurs are everywhere (even in well-established firms).
• Entrepreneurship is management (but different from managing
traditional firms).
• Validated learning (about customers).
• **Build-Measure-Learn feedback loop.
• Innovation accounting (using actionable metrics – mostly about
customer behaviors).
• Eric Ries, The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to
Create Radically Successful Businesses, 2011.

** http://theleanstartup.com/principles
Resources
• Why the Lean Startup Changes Everything
https://hbr.org/2013/05/why-the-lean-start-up-changes-everything
• Eric Ries, The Lean Startup: How Today's Entrepreneurs Use
Continuous Innovation to Create Radically Successful Businesses,
September 13, 2011. http://theleanstartup.com/principles
• **Lean Startup-Visual Summary (link)
• The Lean Approach (The Kauffman Founders School)
• http://www.entrepreneurship.org/Founders-School/The-Lean-Approach
• *Eric Ries: "The Lean Startup" | Talks at Google (video) ~ 1
hour
• **Steve Blank How to Build a Startup: The Lean LaunchPad
free course on Udacity (link) or (YouTube video list)
Case Studies
• Lean Startup Cases
• **IMVU Case Study (link)
• http://theleanstartup.com/casestudies
• Rent the Runway Case
• ***The Supply & Demand of Rent the Runway | Jennifer
Hyman https://www.youtube.com/watch?v=JYFWpJvq6DU (about
testing business ideas)
• Introduction Chapter of the Innovation Method
http://innovatorsdna.com/wp-content/uploads/2014/09/Innovators-Method-Sample.pdf
Customer Development Process

http://www.spikelab.org/img/blog/4steps-f6b16b56.png

Steve Blank video at http://www.youtube.com/watch?v=6t0t-CXPpyM


The Lean Start-Up

Read this: Steve Blank, "Why The Lean Start-up Changes Everything," HBR, May 2013. (link)
Lean Startup Cycle
(Build-Measure-Learn cycle diagram; the legend distinguishes processes from outcomes)
http://www.agileacademy.co/wp-content/uploads/2014/09/LeanStartupCycle.png
http://entrepreneurship.saddleback.edu/Resources/Lean%20Startup/Lean%20Startup-Visual%20Summary.pdf
Lean Startup: 3 Components

The Lean Startup method consists of three parts:


• The Business Model Canvas – to frame hypotheses.
• Customer Development – to test those hypotheses
in front of customers.
• Agile Engineering – to build Minimum Viable
Products to maximize learning.
Rent the
Runway Case

Todd Heisler/The New York Times

Nathan Furr and Jeff Dyer, The Innovator’s Method: Bringing the Lean Start-up into Your
Organization, Harvard Business Review Press, 2014. (Read the introduction (the Rent the
Runway case is a great example illustrating Lean Startup principles) and Chapter 1 at
http://innovatorsdna.com/wp-content/uploads/2014/09/Innovators-Method-Sample.pdf
Business Plan
A business plan would identify the customer need, describe the
product or service, estimate the size of the market, and
estimate the revenues and profits based on projections of
pricing, costs, and unit volume growth.
Show
**Rent the Runway: An inside look at the tech startup's success
https://www.youtube.com/watch?v=cyvfsi3MX-M

Fill out the Business Model Canvas based on the RTR case presented
in the video
Rent the Runway Case
• Rent the Runway Case
• ***The Supply & Demand of Rent the Runway | Jennifer
Hyman https://www.youtube.com/watch?v=JYFWpJvq6DU (about testing business
ideas)
• Introduction Chapter of the Innovation Method
http://innovatorsdna.com/wp-content/uploads/2014/09/Innovators-Method-Sample.pdf
Business Model Canvas

https://fivewhys.files.wordpress.com/2012/02/canvas1.jpg
(Video)
Source: “Why the Lean Startup Changes Everything“ HBR 2013
Food on the Table
• The Concierge Minimum Viable Product (MVP)
• http://mealplanning.food.com/

• Food on the Table will ask you for your preferences—the types of
foods you like and dishes you enjoy, and the grocery stores you shop
at. Then it will recommend dishes for you for the week, helping you
get out of food ruts or scrambling to figure out what to make for
dinner.
• The app helps you save money by listing the items on sale each week
at your chosen grocery stores and recipes based on those sale items.
Add the recipes to your meal plan and the foods to your grocery list.
The mobile app syncs up with your account on the website.
http://www.macworld.com/article/2109727/food-on-the-table-makes-meal-planning-and-grocery-shopping-a-piece-of-cake.html
Food On The Table’s Lean Startup
Food on the Table (FotT) began life with a single customer.
Instead of supporting thousands of grocery stores around the
country as it does today, FotT supported just one. How did
the company choose which store to support? The founders
didn’t—until they had their first customer. Similarly, they
began life with no recipes whatsoever—until their first
customer was ready to begin her meal planning. In fact, the
company served its first customer without building any
software, without signing any business development
partnerships, and without hiring any chefs.

This case exemplifies Paul Graham’s argument that at the beginning, a startup company
should “Do Things That Don't Scale”
http://paulgraham.com/ds.html
Do Things That Don't Scale
• Do Things That Don't Scale http://paulgraham.com/ds.html
• Startup Class Lecture 8 http://startupclass.samaltman.com/ or
https://www.youtube.com/watch?v=oQOC-qy-GDY
Paul Graham article (link)
MVP
In product development, the Minimum Viable Product
(MVP) is a strategy used for fast and quantitative
market testing of a product or product features. It is
an iterative process of idea generation, prototyping,
presentation, data collection, analysis and learning.
An MVP Example
Why is Dropbox more popular than other programs with
similar functionality?
•Well, let’s take a step back and think about the sync problem and what
the ideal solution for it would do:
• There would be a folder.
• You’d put your stuff in it.
• It would sync.

Source:
http://michaelrwolfe.com/2013/10/19/why-is-dropbox-more-popular-than-other-programs-with-similar-functionality/
MVP and Product/Market Fit
• MVP: A product that includes just enough features to allow
useful feedback from early adopters
• Customers will use the MVP and pay for it.
• Minimum Viable Product, or MVP, a bare-bones offering
that allows the team to collect customer feedback and to
validate concepts and assumptions that underlie the
business idea.
• Product/Market Fit—the idea that they were crafting a
solution the market wants, one that customers are willing to
pay for, and one that’ll scale into a large, profitable
business.
https://steveblank.com/2014/07/30/driving-corporate-innovation-design-thinking-customer-development/
Learning Cycle

http://startupclass.samaltman.com/ Lecture #1
http://startupclass.samaltman.com/ Lecture #1
http://startupclass.samaltman.com/ Lecture #1
Hypotheses Testing and Insight
(Cycle diagram:) Identify business model hypotheses → collect data/facts by conducting
experiments → gain insights based on facts.
Lean Startup- Assumptions and Experiments Mapping
• Map the key assumptions and attach them to the various parts of your business
model.
• Develop quick and inexpensive experiments to validate or invalidate the
assumptions quickly. Pivot if assumptions are invalidated.
• This is the central idea of the Lean Startup Movement.

http://www.alexandercowan.com/business-model-canvas-templates/
Business Model Design Meets Customer Development

https://steveblank.com/2010/11/11/get-out-of-the-building-and-win-50000/
http://blog.startupinstitute.com/2015-3-3-what-does-pivot-mean/
What Is Pivot?
• Pivot: A change to a business model component based on
customer feedback. A pivot is not a failure.

• Pivot is a structured course correction designed to test a new fundamental
hypothesis about the product, strategy, engine of growth, etc.
• Eric Ries, The Lean Startup, 2013.
• Pivots are vision driven.
Pivot a Lean Startup
• The popular view of a real entrepreneur is someone
with a big vision, and a stubborn determination to
charge straight ahead through any obstacle and
make it happen.
• The vision part is fine, but successful entrepreneurs
have found that the extreme uncertainty of a new
product or service usually requires many course
corrections, or “pivots” to find a successful formula.
• Use continuous innovation to create radically successful businesses.

Source: Top 10 Ways Entrepreneurs Pivot a Lean Startup (link)


Top Ten Types of Pivots
1. Zoom-in pivot
2. Zoom-out pivot
3. Customer segment pivot
4. Customer need pivot
5. Platform pivot
6. Business architecture pivot
7. Value capture pivot
8. Engine of growth pivot
9. Channel pivot
10. Technology pivot

Source: Top 10 Ways Entrepreneurs Pivot a Lean Startup (link)


Pivot
• This startup search process is the business model / customer development /
agile development solution stack.
• This solution stack proposes that entrepreneurs should first
map their assumptions in their business model and then
test whether these hypotheses are accurate, outside in the
field (customer development) and then use an iterative and
incremental development methodology (agile development)
to build the product.
• When founders discover their assumptions are wrong, as
they inevitably will, the result isn’t a crisis, it’s a learning
event called a pivot — and an opportunity to revise/update
the business model.

https://steveblank.com/2010/11/11/get-out-of-the-building-and-win-50000/
Learning and Assumptions Testing
Produce Evidence with a Call to Action (CTA)
Use experiments to test if customers are interested, what preferences
they have, and if they are willing to pay for what you have to offer. Get
them to perform a call to action (CTA) as much as possible in order to
engage them and produce evidence of what works and what doesn’t.
Use Experiments to Test

• Interest and Relevance

• Priorities and Preferences

• Willingness to Pay
Test Customer Interest with Google AdWords
We use Google Ad Words to illustrate this technique because it’s
particularly well suited for testing based on its use of search terms for
advertising (other services such as LinkedIn and Facebook also work
well).
• Select search terms. Select search terms that best represent what you
want to test (e.g., the existence of a customer job, pain, or gain or the
interest for a value proposition).
• Design your ad/test. Design your test ad with a headline, link to a
landing page, and blurb. Make sure it represents what you want to
test.
• Launch your campaign. Define a budget for your ad/ testing campaign
and launch it. Pay only for clicks on your ad, which represent interest.
• Measure clicks. Learn how many people click on your ad. No clicks
may indicate a lack of interest.
Tracking User’s Actions
• “Fabricate” a unique link. Make a
unique and trackable link to more
detailed information about your ideas
(e.g., a download, landing page) with
a service such as goo.gl.
• Track if the customer used the link or
not. If the link wasn’t used, it may
indicate lack of interest or more
important jobs, pains, and gains than
those that your idea addresses.
Split Testing (Funnel)
• Split testing, also known as A/B testing, is a technique to compare the
performance of two or more options.
• We can apply the technique to compare the performance of alternative value
propositions with customers or to learn more about jobs, pains, and gains.
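A split test boils down to comparing two proportions. This sketch uses only the standard library and invented visitor/conversion counts; the normal-approximation z-test shown is one common choice, not the only way to analyze such a test:

```python
from math import sqrt, erf

# Two-proportion z-test sketch for an A/B (split) test.
# Visitor and conversion counts below are invented sample data.
def ab_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = ab_test(conv_a=40, n_a=1000, conv_b=70, n_b=1000)
print(round(z, 2), round(p, 4), p < 0.05)  # does variant B convert better?
```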
What to Test?
Here are some elements that you can easily test with A/B testing:
• Alternative features
• Pricing
• Discounts
• Copy text
• Packaging Website variations . . .
Call to Action: How many of the test subjects perform the CTA?
• Purchase /Rent
• E-mail sign-up
• Click on button
• Survey
• Completion of any other task

•Osterwalder, Alexander; Pigneur, Yves; Bernarda, Gregory; Smith, Alan (2015-


01-23). Value Proposition Design: How to Create Products and Services
Customers Want (Strategyzer) (Kindle Locations 3196-3202). Wiley. Kindle
Edition.
More startups fail from a lack of customers than
from a failure of product development

1. Try many times before you get it right.
2. It is OK to fail, so plan to learn from it.
3. Only move to the next stage when you learn enough and reach the “escape velocity”
http://www.ctinnovations.com/images/resources/Startup%20Owners%20Manual%20-%20BlankDorf.pdf
• Product/market fit (PMF) means being in a good market
with a product that can satisfy that market.
• You can always feel when product/market fit isn't
happening. The customers aren't quite getting value out of
the product, word of mouth isn't spreading, usage isn't
growing that fast, press reviews are kind of "blah", the sales
cycle takes too long, and lots of deals never close.
• I believe that the life of any startup can be divided into two
parts: before product/market fit (call this "BPMF") and after
product/market fit("APMF").
• Marc Andreessen ***(link)

http://andrewchen.co/when-has-a-consumer-startup-hit-productmarket-fit/
Customer Development Process & 3 Stages of Fit
Problem/Solution Fit → Product/Market Fit → Business Model Fit (Scaling)
OODA
https://en.wikipedia.org/wiki/OODA_loop

** https://foundrmag.com/3-proven-startup-strategies-for-success/
• Observation – collecting data with your senses
• Orientation – analyzing and synthesizing the information to form a perspective
• Decision – choosing a course of action based on that perspective
• Action – taking action on that decision.

http://www.idea-sandbox.com/blog/decision-making-like-a-fighter-pilot/
Nail It Then Scale It
PSF → PMF → BMF

http://www.nailthenscale.com/Nail-It-Then-Scale-It-Book-Graphics-PDF.pdf
http://www.nailthenscale.com/wp-content/uploads/2015/06/Big_Idea_Canvas_v7.4.pdf
The Minimum Feature Set is "...the smallest or least complicated problem
the customer will pay us to solve."

The MVP is "that version of a new product which allows a team to collect
the maximum amount of validated learning about customers with the least effort."

(Diagram: Apple I → Apple II & VisiCalc)

Think about the ecosystem!!!

http://jsahni.blogspot.com/2014_02_01_archive.html
Lean
Canvas

https://leanstack.com/why-lean-canvas/
Customer Development
• Search: the process to search for the right business model!
• Execution: the process to scale the sales & operations.
Lean Startup Cycle (diagram)
• Ideas/Business Model (pivot or persevere): technologies, social trends, pains/gains,
insights, hypotheses
• Build (“think with your hands”): prototyping, minimum viable products (MVP)
• Product/Service: address the job to be done; find/get real customers
• Measure (Data): innovation accounting (Net Promoter Score, AARRR), fit (P-S, P-M),
split tests, observations, click streams
• Learn: 5 Whys; minimize the total time & resources through the Learning Cycle
(Diagram) Business Model Canvas version X → Identify Hypotheses → Design Prototypes
& Tests → Conduct Test → Analyze Data & Obtain Insights → Measure Results & Build
Metrics → Business Model Canvas version x+1
Revising and pivoting via speedy learning cycles: revise, redesign, and pivot the
current BMC.
A Balancing Act of Art (Creativity) & Science
• In reality, executing a startup is a balance between
creativity/intuition/instinct and the scientific method: hypothesize >
a/b test > conclude > repeat.
• Inspiration will help you find a problem to solve.
• Creativity will allow you to brainstorm potential solutions to that
problem.
• The scientific method will guide you toward which of these solutions
will actually solve your customer’s problem.
https://blog.ycombinator.com/the-scientific-method-for-startups/
Build-Measure-Learn and MVP
• A core component of Lean Startup methodology is the build-measure-
learn feedback loop.
• The first step is figuring out the problem that needs to be solved and
then developing a minimum viable product (MVP) to begin the
process of learning as quickly as possible.
• Once the MVP is established, a startup can work on tuning the engine.
This will involve measurement and learning and must include
actionable metrics that can demonstrate cause and effect.
(link)
• Utilize an investigative development method called the "Five Whys":
asking simple questions to study and solve problems along the way

http://theleanstartup.com/principles
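Actionable metrics in the sense above can be as simple as per-stage cohort conversion rates (the AARRR "pirate metrics" mentioned earlier in these slides). The stage counts below are invented for illustration:

```python
# Sketch of actionable (AARRR / pirate) metrics for innovation
# accounting: per-stage conversion for one cohort. Counts are invented.
funnel = [("Acquisition", 5000), ("Activation", 1500),
          ("Retention", 600), ("Revenue", 150), ("Referral", 45)]

# Each stage's conversion relative to the previous stage.
rates = {stage: count / prev
         for (stage, count), (_, prev) in zip(funnel[1:], funnel)}

for stage, rate in rates.items():
    print(f"{stage}: {rate:.1%} of previous stage")
```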
Testing Your Business Model

http://businessmodelalchemist.com/blog/2011/01/methods-for-the-business-model-generation-how-bmgen-and-custdev-fit-perfectly.html
Lean Innovation

https://steveblank.files.wordpress.com/2016/08/nga-lean-innovation.png
Start Your Business Lean and Quick:
The Lean Startup Approach
Introduce an innovative method to build a new business in
the context of lean startup approach.
The application and cases of deploying the Build-Measure-Learn cycle to measure user
feedback and to validate assumptions in a startup’s business model will be discussed.

“The Lean Startup method teaches you how to drive a startup: how to steer, when
to turn, and when to persevere, and how to grow a business with maximum
acceleration.”
http://theleanstartup.com/principles
TECHNICAL COMPETENCY LEVELS

Yellow Belts: Individuals who have been trained to perform as members of Six Sigma Teams. They are used to
collect data, participate in problem solving, and assist in the implementation of the individual improvement
activities.

Green Belts: Individuals who have completed Six Sigma training, are capable of serving on Six Sigma project
teams, and managing simple Six Sigma projects.

Black Belts: Individuals who have had advanced training with specific emphasis on statistical applications and
problem-solving approaches. These individuals are highly competent to serve as on-site consultants and
trainers for application of Six Sigma methodologies.

Master Black Belts: Individuals who have had extensive experience in applying Six Sigma and who have mastered the Six
Sigma methodology. In addition, these individuals should be capable of teaching the Six Sigma methodology to
all levels of personnel and of coaching executive management on culture change within the
organization.
(Voehl, Harrington, Mignosa & Rich Charron 2014)
Lean Six Sigma Master Black Belt
• Certifying LSSBB and Lean Six Sigma Green Belts (LSSGBs)
• Training LSSBBs and LSSGBs
• Developing new approaches
• Communicating best practices
• Taking action on projects where the LSSBB is having problems defining the root
causes and implementing the change
• Conducting long-term LSS projects
• Identifying LSS opportunities
• Reviewing and approving LSSBB and LSSGB project justifications and project plans
• Working with the executive team to establish new behavioral patterns that reflect
a Lean culture throughout the organization
• (Voehl, Harrington, Mignosa & Rich Charron 2014)
Lean Six Sigma Black Belt (LSSBB)
One LSSBB for every 100 employees is the standard practice. (Example: A small
organization with only 100 employees needs only one LSSBB or two part-time
LSSBBs.)
Their responsibilities are to lead Lean Six Sigma Teams (LSSTs) and to define and
develop the right people to coordinate and lead the LSS projects. Candidates for
LSSBB should be experienced professionals who are already highly respected
throughout the organization. They should have experience as a change agent and
be very creative. LSSBBs should generate a minimum of US$1 million in savings per
year as a result of their direct activities. LSSBBs are not coaches. They are
specialists who solve problems and support the LSSGBs and LSSYBs. They are used
as LSST managers/leaders of complex, simple, and important projects. The position
of LSSBB is a full-time job; he/she is assigned to train, lead, and support the LSST.
They serve as internal consultants and instructors.
(Voehl, Harrington, Mignosa & Charron, 2014)
Lean Six Sigma Green Belt
Being an LSSGB is a part-time job. An LSSGB is assigned to manage a
project or work as a member of an LSST by the LSS champion and
his/her manager. Sometimes an LSSGB is the manager of the area that
is most involved in the problem. However, it is very difficult for
managers to lead or even serve on an LSST unless they are relieved of
their management duties. They will need to spend as much as 50% of
their time working on the LSS project. In most cases, it is preferable
that the LSSGB is a highly skilled professional who has a detailed
understanding of the area that is involved in the problem. LSSGBs work
as members of LSSTs that are led by LSSBBs or other LSSGBs. They also
will form LSSTs when projects are assigned to them.
(Voehl, Harrington, Mignosa & Charron, 2014)
Lean Six Sigma Yellow Belt (LSSYB)
One Lean Six Sigma Yellow Belt (LSSYB) for every five employees and four
LSSYBs for every LSSGB is the standard practice. (Example: A small
organization with 100 employees needs only 1 LSSBB, 5 LSSGBs, and 20
LSSYBs.)
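The staffing rules of thumb above can be sketched as a quick calculation. This is purely illustrative: the function name and the ceiling rounding are my own choices, not from the source.

```python
import math

# Rough staffing sketch from the rules of thumb above:
# 1 LSSBB per 100 employees, 1 LSSGB per 20 (5 per 100),
# and 4 LSSYBs per LSSGB (1 per 5 employees).
def belt_counts(employees: int) -> dict:
    return {
        "LSSBB": math.ceil(employees / 100),
        "LSSGB": math.ceil(employees / 20),
        "LSSYB": math.ceil(employees / 5),
    }

# belt_counts(100) -> {'LSSBB': 1, 'LSSGB': 5, 'LSSYB': 20}
```

For the 100-employee organization in the example above, this reproduces the 1 / 5 / 20 split stated in the text.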
LSSYBs will have a practical understanding of many of the basic problem-
solving tools and the DMAIC methodology. Team members are usually
classified as LSSYBs when they have completed the 2 or 3 days of LSSYB
training and passed an LSSYB exam. They will work part-time on the project
and still remain responsible for their normal work assignments. However,
they should have some of their workload re-assigned to give them time to
work on the LSST. They usually serve as the expert and coordinator on the
project for the area they are assigned to.
(Voehl, Harrington, Mignosa & Charron, 2014)
Understanding the People Issues (Morgan &
Brenig-Jones, 2012)
Six Sigma and Lean originated from industrial manufacturing backgrounds, with
early emphasis on tools and techniques. Now, however, most managers accept that
recognising and handling the people issues is the biggest challenge in implementing
Lean Six Sigma successfully.
Lean Six Sigma aims to make change happen in order to improve things. Human
beings, like most creatures, are cautious and sceptical about change – it spells
danger. Humans have an inbuilt resistance to change, especially if somebody tells
us it’s going to be ‘good for us’. Most people fear losing something they have as a
result of change.
Dealing with personal fear and loss is another big challenge in implementing Lean
Six Sigma, but few enthusiasts in statistical theory cover this in their extensive
training.
Understanding people is key to implementing a Lean Six Sigma project. Almost
always, if Six Sigma and Lean projects fail, people issues of one form or another are
the cause.
Working Right from the Start (Morgan &
Brenig-Jones, 2012)
Unfortunately, we don’t know an easy formula for solving the challenge
of managing people in a Lean Six Sigma project. However, in over 80
implementations of Lean Six Sigma, we have found a small number of
common factors that consistently stand out as critical for success.
Perhaps not surprisingly, leadership commitment is one of these critical
factors. Clinching buy-in at the beginning is the real challenge.
Heart of Change
(Kotter & Cohen,
2012)
Managing Change (Morgan & Brenig-Jones,
2012)
• George Eckes, a well-known writer on this subject, uses a simple but eloquent formula to express gaining
acceptance for change and overcoming resistance, whether for a whole Lean Six Sigma programme or for
the changes resulting from part of a Lean Six Sigma project:
• E=QxA
• E is the effectiveness of the change in practice: This represents the effectiveness of the implementation,
which depends on the quality of the solution and the level of acceptance.
• Q is the technical quality of the solution: The ‘hard’ tools of Lean and Six Sigma will have proven that the
solution works when tested. An ideal solution may have been identified, but its effectiveness will depend on
the degree to which it is accepted.
• A is the acceptance of the change by people: Having a high ‘score’ for A is as important as having a good-
quality solution.
• Some hardened practitioners believe that the A factor is more important than the Q factor and is the real
key to success in Lean Six Sigma. To understand how people perceive things and to win support, you need to
score well on both factors.
• If you’re in the early stages of deploying a Lean Six Sigma programme, the A factor is likely to start with
winning support from senior managers.
• Keep Q × A in mind as a simple shorthand for a highly complicated issue – dealing with the human mind.
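Eckes' formula can be made concrete with a tiny worked example. Assuming Q and A are each scored on a 0-to-1 scale (the scale is my assumption; Eckes does not prescribe one):

```python
# Illustrative sketch of Eckes' E = Q x A, assuming Q (technical
# quality) and A (acceptance) are scored on a 0-to-1 scale.
def change_effectiveness(q: float, a: float) -> float:
    """Effectiveness = technical quality x acceptance."""
    return q * a

# A near-perfect solution with weak buy-in loses to a decent
# solution with strong buy-in:
# change_effectiveness(0.9, 0.3) -> about 0.27
# change_effectiveness(0.6, 0.9) -> about 0.54
```

The multiplication is the point: if either factor is near zero, effectiveness is near zero no matter how high the other factor is.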
Overcoming resistance (Morgan & Brenig-
Jones, 2012)
1. Building relationships on mutual commitment.
2. Leading by example.
3. Making decisions based on facts.
4. Being open to change how we work.
5. Having an inspiring vision of the future.
6. Having clear goals and targets.
7. Having plans that are clear and well-communicated.
8. Having a winning strategy.
9. Establishing clear roles and responsibilities.
10. Dealing effectively with those resisting change.
11. Having no need to fire-fight.
12. Attracting and retaining world-class people.
13. Following world-class processes.
14. Having clear and open communication.
15. Learning from each other.
16. Working well together across all functions.
17. Encouraging teamwork.
18. Finding more productive ways of working.
19. Having consistency of purpose.
20. Having a high sense of urgency.
21. Making decisions quickly.
22. Continuously seeking to achieve competitive advantage.
23. Continuously building customer confidence.
24. Using measures to compare our performance with best practices.
25. Being honest and sincere.
26. Understanding our market, customers and competitors.
27. Facing up to problems quickly.
28. Rewarding the right behaviours.
29. Encouraging creativity and innovation.
30. Modifying systems and structures to support business assurance.
31. Achieving budgeted objectives.
32. Doing what we say we will do.
33. Expecting our performance standards to increase continuously.
34. Encouraging expressions of different points of view.
35. Only reporting relevant information.
36. Wanting to learn from our mistakes so we don’t repeat them.
The Cultural Web (Morgan & Brenig-Jones,
2012)
Change Reaction (Morgan & Brenig-Jones,
2012)
Comparing energy and attitude (Morgan &
Brenig-Jones, 2012)
Winners are open and responsive to change. They
want to do their best and have the energy to see
things through to the end.

Deadbeats have a negative attitude towards change
and low energy levels. For them, change is a
nuisance, and they undertake tasks with reluctance.

Spectators have very good intentions. They have a
positive attitude towards change, but low energy
levels. Typically, they say the right things, but find it
hard to follow through.

Terrorists have high levels of energy but their attitude
towards change is negative. Typically, terrorists have
their own agenda. They can be very outspoken in
their opinions, and their attitude can be summed up
as ‘that will never work’.
CM Model (Hays, 2018)
1. Recognizing the need for
change and starting the change
process
2. Diagnosing what needs to be
changed and formulating a
vision of a preferred future state
3. Planning how to intervene in
order to achieve the desired
change
4. Implementing plans and
reviewing progress
5. Sustaining the change
6. Leading and managing the people issues
7. Learning.
Errors made in the Past (Kotter, 2012)
Error #1: Allowing Too Much Complacency
Error #2: Failing to Create a Sufficiently Powerful Guiding Coalition
Error #3: Underestimating the Power of Vision
Error #4: Undercommunicating the Vision by a Factor of 10 (or 100 or
Even 1,000)
Error #5: Permitting Obstacles to Block the New Vision
Error #6: Failing to Create Short-Term Wins
Error #7: Declaring Victory Too Soon
Error #8: Neglecting to Anchor Changes Firmly in the Corporate Culture
Stakeholder Management (Hayes, 2018)
[Stakeholder grid: vertical axis runs from negative to positive attitude; horizontal axis from low to high power.]
• Positive attitude, high power: strong support (champions)
• Positive attitude, low power: weak support (potential sponsors)
• Negative attitude, high power: strong opposition (blockers)
• Negative attitude, low power: weak opposition (potential blockers)
Realistic Approach to Lean Six Sigma (Bruce,
2015)
Step 1: Try And Demonstrate
This first step enables companies implementing Six Sigma to avoid the flaws in the
conventional deployment model, specifically, the widespread changes and
financial problems that seem inevitable in immediate large-scale
implementation.
The key point here is that demonstrating the power of Six Sigma gives people
reason to believe. Even some of the naysayers actually gained a keen interest
in Six Sigma. This demonstration started to create a pull system, as opposed
to a push system. In other words, people in the organization were requesting
(pulling) Six Sigma techniques and resources, in contrast with the usual
situation in which the powers that be force (push) Six Sigma on them.
Realistic Approach to Lean Six Sigma (Bruce,
2015)
Step 2: Align the Leaders
Leadership alignment means that the organization’s leaders actively
and visibly support the program, which makes the difference between
real success and halfway to failure. It’s critical to the program’s success,
a fundamental requirement. If you lack leadership alignment, I suggest
that you do a one-off demonstration for results purposes and do not go
to the next step.
Realistic Approach to Lean Six Sigma (Bruce,
2015)
Step 3: Get the Best Black Belt Candidates
The third step is to screen and select your best-of-the-best people to
participate in the Six Sigma journey. We did a study of hundreds of
black belts who were trained and practicing during the mid-1990s. We
reviewed the results obtained by each of those black belts in terms of
the financial outcomes of their projects and their ability to complete
tasks for project successes. In addition, we reviewed failed black belt
candidates and successful black belts. The outcome of the study was a
selection tool, which is presented later in this chapter, with a list of 11
criteria and a rating scale for ensuring maximum probability of success.
I strongly encourage you to use this tool, which has been proven to
work for more than 50 deployment clients.
Realistic Approach to Lean Six Sigma (Bruce,
2015)
Step 4: Choose Projects with High Financial Impact
The next step is to select projects to solve problems that will have a
high financial impact. Let’s assume you have a list of problems in your
business and a list of your specific business functions and, by the way,
you also know the financial impact that each project could make. Now,
match your highest-impact projects with your high-potential black belt
candidates.
DMAIC (Bruce, 2015)
Define. The Define phase determines the project’s purpose,
objectives, and scope; collects information on the customers
and the process involved; and specifies the project deliverables to
customers (internal and external).

Measure. In the Measure phase you learn exactly what is known about
the problem and what is unknown, select one or more metrics in the
process, map the process, make the necessary accurate and sufficient
measurements, and record the results to establish the current
capability—the baseline.
DMAIC (Bruce, 2015)
Analyze. The purpose of the Analyze phase is to sort through all the
potential Xs that are causing the costly defects. It’s like inputting all the Xs
through a funnel so that the resulting output is the vital few Xs that are
causing the defects versus the trivial many.

Improve. The purpose of the Improve phase is to create the mathematical
relationship of the input Xs to the output of the process to eliminate defects.
The objective is to optimize the vital few Xs and their interrelationships to
predict certainty of the process capability to deliver expected defect-free
outputs. If the process is not predictable the probability of a defect will be
higher and as a result will be filled with uncertainty and require
improvement.
DMAIC (Bruce, 2015)
Control. After each phase comes a phase-gate, or tollgate, review. This
is a critical checkpoint in which the black belt, team members, master
black belt, and champion meet with executive managers. The team
members report on their work in the phase they’re completing, and the
managers have the opportunity to discuss the work, ask questions, and
make suggestions. The point is to determine whether the team has
performed the activities specified for that phase of the project and
whether the team has achieved the stated objectives. If all has gone
properly, the team is authorized to move on to the next phase.
Lean Six Sigma Define
Process Basics
People: Those working in or around the process. Do you have the right number in
the right place, at the right time and possessing the right skills for the job? And do
they feel supported and motivated?
Equipment: The various items needed for the work. Items can be as simple as a
stapler or as complicated as a lathe used in manufacturing. Consider whether you
have the right equipment, located in an appropriate and convenient place, and
being properly maintained and serviced.
Method: How the work needs to be actioned – the process steps, procedures, tasks
and activities involved.
Materials: The things necessary to do the work – for example, the raw materials
needed to make a product.
Environment: The working area – perhaps a room or surface needs to be dust-free,
or room temperature must be within defined parameters.
(Morgan & Brenig-Jones, 2016)
A HIGH-LEVEL PROCESS MAP
Suppliers: The people, departments or organisations that provide you with the ‘inputs’ needed to
operate the process. When they send you an enquiry or order form, the external customer is also
included as a supplier in your process. Suppliers also include regulatory bodies providing
information, and companies providing you with equipment or raw materials.
Inputs: Forms or information, equipment or raw materials, or even the people you need to carry out
the work. For people, the supplier may be the human resources department or an employment
agency.
Process: In the SIPOC diagram, the P presents a picture of the process steps at a relatively high level,
usually Levels 2 or 3.
Outputs: A list of the things that your process provides to the internal and external customers in
seeking to meet their CTQs. Your outputs will become inputs to their processes.
Customers: The different internal and external customers who’ll receive your various process
outputs.
(Morgan & Brenig-Jones, 2016)
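The SIPOC structure above can be captured as a simple record. Every entry below is invented for illustration; it shows the shape of a SIPOC, not a real process.

```python
# A minimal SIPOC record for a hypothetical order-fulfilment process;
# all entries are invented for illustration, not taken from the text.
sipoc = {
    "Suppliers": ["External customer (order form)", "Warehouse", "IT department"],
    "Inputs": ["Order form", "Stock", "Customer details"],
    "Process": ["Receive order", "Pick items", "Pack order", "Dispatch"],
    "Outputs": ["Delivered goods", "Invoice", "Delivery confirmation"],
    "Customers": ["External customer", "Finance department"],
}

# The P column stays high level (Levels 2 or 3): a handful of steps,
# not a detailed procedure. The outputs listed here become inputs to
# the customers' own processes.
```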
Some Definitions (Voehl, Harrington &
Charron, 2014)
• A customer is any user of the business process’s output. Although we
usually think of customers as those to whom the corporation sells a
product or service, most customers of business processes actually
work within the same corporation and do not actually buy the
corporation’s products or services.
• Requirements are the statement of customer needs and expectations
that a product or service must satisfy
• Quality means conformance to customer requirements. In other
words, since requirements represent the customer’s needs and
expectations, a quality focus on the business process asks that every
business process meet the needs of its customers.
Some Definitions (Voehl, Harrington &
Charron, 2014)
• Definition: A business process is the organization of people,
equipment, energy, procedures, and material into the work activities
needed to produce a specified end result (work product). It is a
sequence of repeatable activities that have measurable inputs, value-
added activities, and measurable outputs.
• An effective process produces output that conforms to customer
requirements (the basic definition of quality). The lack of process
effectiveness is measured by the degree to which the process output
does not conform to customer requirements (that is, by the output’s
degree of defect). This is LSS’s first aim—to develop processes that
are effective.
Some Definitions (Voehl, Harrington &
Charron, 2014)
An efficient process produces the required output at the lowest possible
(minimum) cost. That is, the process avoids waste or loss of resources in
producing the required output. Process efficiency is measured by the ratio of
required output to the cost of producing that output. This cost is expressed
in units of applied resources (dollars, hours, energy, etc.). This is LSS’s second
aim—to increase process efficiency without loss of effectiveness.
An adaptable process is designed to maintain effectiveness and efficiency as
customer requirements and the environment change. The process is deemed
adaptable when there is agreement among the suppliers, owners, and
customers that the process will meet requirements throughout the strategic
(3- to 5-year outlook) period. This is LSS’s third aim—to develop processes
that can adapt to change without loss of effectiveness and efficiency.
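The efficiency ratio described above can be shown as a toy calculation. The function name, units and numbers are my own assumptions for illustration.

```python
# Toy illustration of the efficiency ratio described above:
# efficiency = required output / cost of producing that output,
# with cost expressed in units of applied resources.
def process_efficiency(required_output: float, cost: float) -> float:
    return required_output / cost

# 500 conforming units produced for 250 labor-hours:
# process_efficiency(500, 250) -> 2.0 units per labor-hour
```

Raising this ratio without letting output fall short of customer requirements is exactly the "efficiency without loss of effectiveness" aim stated above.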
Some Definitions (Voehl, Harrington &
Charron, 2014)
• Cross-functional focus is the effort to define the flow of work
products according to their sequence of activities, independent of
functional or operating-unit boundaries.
• Critical success factors are those few key areas of activity in which
favorable results are absolutely necessary for the process to achieve
its objectives. In other words, critical success factors are the things
that must go right or the entire process will fail. For example, payroll
as a process has numerous objectives and performs many activities,
but certainly its primary objective is to deliver paychecks on time.
NATURE OF LSS PROCESS IMPROVEMENT
(Voehl, Harrington & Charron, 2014)
LSS uses many accepted standards of management and professional excellence. Among these standards are:
• Management leadership
• Professionalism
• Attention to detail
• Excellence
• Basics of quality improvement
• Measurement and control
• Customer satisfaction

What is new in process management is the way LSS applies these standards to the core concepts of ownership and cross-functional focus.
However, it should be noted that identifying and defining a business process almost always precedes the selection of the process owner.
That is, you first identify the process, and then you identify the right person to own it. This seemingly out-of-sequence approach is taken in order to firmly establish and reinforce the key concepts of LSS before getting into how-to-do-it details.
• There is nothing so useless as doing efficiently that which should not
be done at all.
• Peter F. Drucker
(Voehl, Harrington & Charron, 2014)
Value Added (VA)
VA means all those activities that turn raw materials into value (the
product) for your customer.
VA means all those activities that are required to deliver the intended
service.
Non Value Added (NVA)
NVA is anything that the customer is not willing to pay for.
Necessary Non Value Added
An activity which does not add any value to the product or service but
is necessary.
Observation(Voehl, Harrington & Charron,
2014)
The range of what we think and do is limited by what we fail to notice. And, because we
fail to notice that we fail to notice, there is little we can do to change until we notice how
our failing to notice shapes our thought and deeds. —R.D. Laing

This approach (observation and experimentation), which has been used by science for
hundreds of years, is the key to advancing knowledge and improving our understanding of
our surroundings. We must be able to accurately observe our surroundings, document
what we see, investigate and analyze our observations to find out what is causing what we
see, and ultimately take effective action to improve our environment.

This emergence of the power of observation is a key ingredient in the formation of a
learning environment. The remainder of this chapter is about igniting the power of
observation in our employees. More importantly, it’s about learning to see waste and
variation with new eyes, eyes that know what to look for.
Practical Statistics
Data Ethics (Grus, 2019)
With the use of data comes the misuse of data. This has pretty much always been the case,
but recently this idea has been defined as “data ethics” and has featured somewhat
prominently in the news.

Algorithms are used to predict the risk that criminals will reoffend and to sentence them
accordingly. Is this more or less fair than allowing judges to determine the same?

“Data ethics” purports to provide answers to these questions, or at least a framework for
wrestling with them.

Well, let’s start with “what is ethics?” If you take the average of every definition you can
find, you end up with something like ethics is a framework for thinking about “right” and
“wrong” behavior. Data ethics, then, is a framework for thinking about right and wrong
behavior involving data.
Data Ethics (Grus, 2019)
Some people talk as if “data ethics” is (perhaps implicitly) a set of commandments about
what you may and may not do. Some of them are hard at work creating manifestos, others
crafting mandatory pledges to which they hope to make you swear. Still others are
campaigning for data ethics to be made a mandatory part of the data science curriculum.

You should care about ethics whatever your job. If your job involves data, you are free to
characterize your caring as “data ethics,” but you should care just as much about ethics in
the nondata parts of your job.

Perhaps what’s different about technology jobs is that technology scales, and that
decisions made by individuals working on technology problems (whether data-related or
not) have potentially wide-reaching effects.

A tiny change to a news discovery algorithm could be the difference between millions of
people reading an article and no one reading it.
Data Ethics (Grus, 2019)
A single flawed algorithm for granting parole that’s used all over the country
systematically affects millions of people, whereas a flawed-in-its-own-way
parole board affects only the people who come before it.

So yes, in general, you should care about what effects your work has on the
world. And the broader the effects of your work, the more you need to
worry about these things.

Unfortunately, some of the discourse around data ethics involves people
trying to force their ethical conclusions on you. Whether you should care
about the same things they care about is really up to you.
Statistics (Grus, 2019)
Statistics refers to the mathematics and techniques with which we
understand data.

It is a rich, enormous field, so our discussion will necessarily not be a
deep one.

Instead, I’ll try to teach you just enough to be dangerous, and pique
your interest just enough that you’ll go off and learn more.
Surveys (Rumsey, 2019)
An observational study is one in which data are collected on individuals
in a way that doesn’t affect them. The most common observational
study is the survey. Surveys are questionnaires that are presented to
individuals who have been selected from a population of interest.

A downside of surveys is that they can only report relationships
between variables that are found; they cannot claim cause and effect.
Experiments (Rumsey, 2019)
An experiment imposes one or more treatments on the participants in such a way that clear
comparisons can be made. Once the treatments are applied, the response is recorded. For example,
to study the effect of drug dosage on blood pressure, one group might take 10 mg of the drug, and
another group might take 20 mg. Typically, a control group is also involved, where subjects each
receive a fake treatment (a sugar pill, for example).

Experiments take place in a controlled setting, and are designed to minimize biases that might
occur. Some potential problems include: researchers knowing who got what treatment; a certain
condition or characteristic that wasn’t accounted for and can affect the results (such as weight of the
subject when studying drug dosage); or lack of a control group. But when designed correctly, if a
difference in the responses is found when the groups are compared, the researchers can conclude a
cause-and-effect relationship.

It is perhaps most important to note that no matter what the study, it has to be designed so that the
original questions can be answered in a credible way.
Collecting Data (Rumsey, 2019)
Once a study has been designed, be it a survey or an experiment, the
subjects are chosen and the data are ready to be collected. This phase
of the process is also critical to producing good data.

SELECTING A GOOD SAMPLE

AVOIDING BIAS IN YOUR DATA
Describing Data (Rumsey, 2019)
DESCRIPTIVE STATISTICS: Data are also summarized (most often in conjunction with charts and/or graphs) by using what
statisticians call descriptive statistics. Descriptive statistics are numbers that describe a data set in terms of its important
features.

If the data are categorical (where individuals are placed into groups, such as gender or political affiliation), they are
typically summarized using the number of individuals in each group (called the frequency) or the percentage of individuals
in each group (the relative frequency).

Numerical data represent measurements or counts, where the actual numbers have meaning (such as height and weight).
With numerical data, more features can be summarized besides the number or percentage in each group. Some of these
features include measures of center (in other words, where is the “middle” of the data?); measures of spread (how diverse
or how concentrated are the data around the center?); and, if appropriate, numbers that measure the relationship
between two variables (such as height and weight).

Some descriptive statistics are better than others, and some are more appropriate than others in certain situations. For
example, if you use codes of 1 and 2 for males and females, respectively, when you go to analyze that data, you wouldn’t
want to find the average of those numbers — an “average gender” makes no sense. Similarly, using percentages to
describe the amount of time until a battery wears out is not appropriate.
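The frequency and relative-frequency summaries described above are easy to sketch in code. The sample values below are made up for illustration.

```python
from collections import Counter

# A small sketch of frequency / relative frequency for categorical
# data, as described above; the sample values are invented.
def summarize_categorical(values):
    freq = Counter(values)                           # count per group
    n = len(values)
    rel_freq = {k: v / n for k, v in freq.items()}   # share per group
    return freq, rel_freq

freq, rel = summarize_categorical(["A", "B", "A", "A", "B"])
# freq -> Counter({'A': 3, 'B': 2}); rel -> {'A': 0.6, 'B': 0.4}
```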
CHARTS AND GRAPHS (Rumsey, 2019)
Data are summarized in a visual way using charts and/or graphs. Some of the
basic graphs used include pie charts and bar charts, which break down
variables such as gender and which applications are used on teens’
cellphones. A bar graph, for example, may display opinions on an issue using
5 bars labeled in order from “Strongly Disagree” up through “Strongly
Agree.”

But not all data fit under this umbrella. Some data are numerical, such as
height, weight, time, or amount. Data representing counts or measurements
need a different type of graph that either keeps track of the numbers
themselves or groups them into numerical groupings. One major type of
graph that is used to graph numerical data is a histogram.
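The grouping step behind a histogram can be sketched directly: map each value to its bucket and count. The heights and bucket size below are invented for illustration.

```python
from collections import Counter

# One simple way to group numerical data for a histogram: map each
# value to the lower edge of its bucket, then count per bucket.
def histogram_counts(values, bucket_size):
    return Counter(bucket_size * (v // bucket_size) for v in values)

heights = [61, 62, 65, 68, 68, 70, 71, 74]
# histogram_counts(heights, 5) -> Counter({65: 3, 70: 3, 60: 2})
```

Plotting the bucket counts as bars gives the histogram described above.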
Analyzing Data (Rumsey, 2019)
After the data have been collected and described using pictures and
numbers, then comes the fun part: navigating through that black box
called the statistical analysis. If the study has been designed properly,
the original questions can be answered using the appropriate analysis,
the operative word here being appropriate. Many types of analyses
exist; choosing the wrong one will lead to wrong results.
Central Tendencies (Grus, 2019)
Usually, we’ll want some notion of where our data is centered. Most
commonly we’ll use the mean (or average), which is just the sum of the data
divided by its count.

If you have two data points, the mean is simply the point halfway between
them. As you add more points, the mean shifts around, but it always
depends on the value of every point. For example, if you have 10 data points,
and you increase the value of any of them by 1, you increase the mean by
0.1.
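The mean is a one-liner, and the "increase one of ten points by 1, the mean rises by 0.1" property follows directly from the formula:

```python
# The mean as described above: sum of the data divided by its count.
def mean(xs):
    return sum(xs) / len(xs)

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# mean(data) -> 5.5
# Increasing one of the ten points by 1 raises the mean by 1/10:
# mean(data[:-1] + [data[-1] + 1]) -> 5.6
```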
Central Tendencies (Grus, 2019)
We’ll also sometimes be interested in the median, which is the middle-most
value (if the number of data points is odd) or the average of the two middle-
most values (if the number of data points is even).

For instance, if we have five data points in a sorted vector x, the median is
x[5 // 2] or x[2]. If we have six data points, we want the average of x[2] (the
third point) and x[3] (the fourth point).

Notice that—unlike the mean—the median doesn’t fully depend on every
value in your data. For example, if you make the largest point larger (or the
smallest point smaller), the middle points remain unchanged, which means
so does the median.
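The sorted-vector description above translates into a short function; the sample data are invented.

```python
# The median as described: sort first, then take the middle value
# (odd count) or the average of the two middle values (even count).
def median(xs):
    sorted_xs = sorted(xs)
    n = len(sorted_xs)
    mid = n // 2
    if n % 2 == 1:
        return sorted_xs[mid]
    return (sorted_xs[mid - 1] + sorted_xs[mid]) / 2

# median([9, 1, 5, 2, 7]) -> 5
# median([9, 1, 5, 2, 7, 100]) -> 6.0  (average of 5 and 7)
# Making the largest point much larger leaves the median unchanged:
# median([9, 1, 5, 2, 1000]) -> 5
```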
Central Tendencies (Grus, 2019)
Clearly, the mean is simpler to compute, and it varies smoothly as our data
changes. If we have n data points and one of them increases by some small
amount e, then necessarily the mean will increase by e / n. (This makes the
mean amenable to all sorts of calculus tricks.) In order to find the median,
however, we have to sort our data. And changing one of our data points by a
small amount e might increase the median by e, by some number less than
e, or not at all (depending on the rest of the data).

A generalization of the median is the quantile, which represents the value
under which a certain percentile of the data lies (the median represents the
value under which 50% of the data lies).
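A simple quantile can be computed by indexing into the sorted data. The index convention below is one common simple choice, not the only one, and the sample data are invented.

```python
# The quantile as a generalization of the median: the value below
# which (roughly) fraction p of the data lies.
def quantile(xs, p):
    index = int(p * len(xs))
    return sorted(xs)[index]

data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
# sorted data: [1, 1, 2, 3, 3, 4, 5, 5, 6, 9]
# quantile(data, 0.5) -> 4
# quantile(data, 0.9) -> 9
```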
Dispersion (Grus, 2019)
Dispersion refers to measures of how spread out our data is. Typically they’re statistics for
which values near zero signify not spread out at all and for which large values (whatever
that means) signify very spread out. For instance, a very simple measure is the range,
which is just the difference between the largest and smallest elements.

The range is zero precisely when the max and min are equal, which can only happen if the
elements of x are all the same, which means the data is as undispersed as possible.
Conversely, if the range is large, then the max is much larger than the min and the data is
more spread out.

Like the median, the range doesn’t really depend on the whole dataset. A dataset whose
points are all either 0 or 100 has the same range as a dataset whose values are 0, 100, and
lots of 50s. But it seems like the first dataset “should” be more spread out.
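The range is the simplest dispersion measure to code, and the limitation just described shows up immediately: both invented datasets below share a range of 100 even though the second is intuitively less spread out.

```python
# The range as defined above: the gap between the largest and
# smallest elements of the data.
def data_range(xs):
    return max(xs) - min(xs)

# data_range([0, 100]) -> 100
# data_range([0, 50, 50, 50, 100]) -> 100  (same range, less "spread")
```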
Correlation and Causation (Grus, 2019)
You have probably heard at some point that “correlation is not causation,” most likely from
someone looking at data that posed a challenge to parts of his worldview that he was
reluctant to question. Nonetheless, this is an important point—if x and y are strongly
correlated, that might mean that x causes y, that y causes x, that each causes the other,
that some third factor causes both, or nothing at all.

Consider the relationship between number of friends and daily minutes. It’s possible that
having more friends on the site causes DataSciencester users to spend more time on the
site. This might be the case if each friend posts a certain amount of content each day,
which means that the more friends you have, the more time it takes to stay current with
their updates.

However, it’s also possible that the more time users spend arguing in the DataSciencester
forums, the more they encounter and befriend like-minded people. That is, spending more
time on the site causes users to have more friends.
Correlation and Causation (Grus, 2019)
A third possibility is that the users who are most passionate about data
science spend more time on the site (because they find it more
interesting) and more actively collect data science friends (because
they don’t want to associate with anyone else).

One way to feel more confident about causality is by conducting randomized trials. If you can randomly split your users into two groups with similar demographics and give one of the groups a slightly different experience, then you can often feel pretty good that the different experiences are causing the different outcomes.
Probability (Grus, 2019)
The laws of probability, so true in general, so fallacious in particular. - Edward
Gibbon

It is hard to do data science without some sort of understanding of probability and its mathematics.

For our purposes you should think of probability as a way of quantifying the
uncertainty associated with events chosen from some universe of events.
Rather than getting technical about what these terms mean, think of rolling
a die. The universe consists of all possible outcomes. And any subset of these
outcomes is an event; for example, “the die rolls a 1” or “the die rolls an
even number.”
Probability (Grus, 2019)
Notationally, we write P(E) to mean “the probability of the event E.”

We’ll use probability theory to build models. We’ll use probability theory to evaluate models. We’ll use probability theory all over the place.

One could, were one so inclined, get really deep into the philosophy of
what probability theory means. (This is best done over beers.) We
won’t be doing that.
Dependence and Independence (Grus, 2019)
Roughly speaking, we say that two events E and F are dependent if knowing something about whether E
happens gives us information about whether F happens (and vice versa). Otherwise, they are independent.

For instance, if we flip a fair coin twice, knowing whether the first flip is heads gives us no information about
whether the second flip is heads. These events are independent. On the other hand, knowing whether the first
flip is heads certainly gives us information about whether both flips are tails. (If the first flip is heads, then
definitely it’s not the case that both flips are tails.) These two events are dependent.

Mathematically, we say that two events E and F are independent if the probability that they both happen is the
product of the probabilities that each one happens:

P(E,F)=P(E)P(F)

In the example, the probability of “first flip heads” is 1/2, and the probability of “both flips tails” is 1/4, but the
probability of “first flip heads and both flips tails” is 0.
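The coin-flip example can be verified by enumerating the four equally likely outcomes of two fair flips (an illustrative from-scratch sketch; the helper names are mine, not from any library):

```python
from itertools import product

# The universe: all four equally likely outcomes of two fair coin flips.
outcomes = list(product("HT", repeat=2))

def prob(event):
    """Probability of an event, given as a predicate on an outcome."""
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

p_first_heads = prob(lambda o: o[0] == "H")     # 0.5
p_second_heads = prob(lambda o: o[1] == "H")    # 0.5
p_both_tails = prob(lambda o: o == ("T", "T"))  # 0.25

# Independent events: the joint probability equals the product.
print(prob(lambda o: o[0] == "H" and o[1] == "H"))      # 0.25 = 0.5 * 0.5

# Dependent events: the joint probability (0) differs from the
# product 0.5 * 0.25 = 0.125.
print(prob(lambda o: o[0] == "H" and o == ("T", "T")))  # 0.0
```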
Random Variables (Grus, 2019)
A random variable is a variable whose possible values have an
associated probability distribution. A very simple random variable
equals 1 if a coin flip turns up heads and 0 if the flip turns up tails. A
more complicated one might measure the number of heads you
observe when flipping a coin 10 times or a value picked from range(10)
where each number is equally likely.
Continuous Distributions
A coin flip corresponds to a discrete distribution—one that associates
positive probability with discrete outcomes. Often we’ll want to model
distributions across a continuum of outcomes. (For our purposes, these
outcomes will always be real numbers, although that’s not always the
case in real life.) For example, the uniform distribution puts equal
weight on all the numbers between 0 and 1.
The Normal Distribution
The normal distribution is the classic bell curve–shaped distribution
and is completely determined by two parameters: its mean μ (mu) and
its standard deviation σ (sigma). The mean indicates where the bell is
centered, and the standard deviation how “wide” it is.
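The normal density can be written directly from its two parameters (the formula is the standard one; the function name is illustrative):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Normal density, fully determined by its mean mu and sd sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (
        sigma * math.sqrt(2 * math.pi))

print(round(normal_pdf(0.0), 4))                    # 0.3989: standard normal peak
print(normal_pdf(1.0) == normal_pdf(-1.0))          # True: symmetric about mu
print(normal_pdf(0.0, sigma=2) < normal_pdf(0.0))   # True: wider sigma, flatter bell
```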
KEY TERMS FOR DATA TYPES (Bruce, Gedeck &
Bruce, 2020)
Numeric: Data that are expressed on a numeric scale

Continuous: Data that can take on any value in an interval. (Synonyms: interval, float, numeric)

Discrete: Data that can take on only integer values, such as counts. (Synonyms: integer, count)

Categorical: Data that can take on only a specific set of values representing a set of possible
categories. (Synonyms: enums, enumerated, factors, nominal)

Binary: A special case of categorical data with just two categories of values, e.g. 0/1, true/false.
(Synonyms: dichotomous, logical, indicator, boolean)

Ordinal: Categorical data that has an explicit ordering. (Synonyms: ordered factor)
KEY TERMS FOR ESTIMATES OF LOCATION
(Bruce, Gedeck & Bruce, 2020)
Mean: The sum of all values divided by the number of values. Synonym: average

Weighted mean: The sum of all values times a weight divided by the sum of the weights. Synonym: weighted average

Trimmed mean: The average of all values after dropping a fixed number of extreme values. Synonym: truncated mean

Median: The value such that one-half of the data lies above and below. Synonym: 50th percentile

Weighted median: The value such that one-half of the sum of the weights lies above and below the sorted data.

Percentile: The value such that P percent of the data lies below. Synonym: quantile

Robust: Not sensitive to extreme values. Synonym: resistant

Outlier: A data value that is very different from most of the data. Synonym: extreme value
Sampling (Bruce, Gedeck & Bruce, 2020)
A popular misconception holds that the era of big data means the end
of a need for sampling.
In fact, the proliferation of data of varying quality and relevance
reinforces the need for sampling as a tool to work efficiently with a
variety of data and to minimize bias.
Even in a big data project, predictive models are typically developed
and piloted with samples.
Samples are also used in tests of various sorts .
Population Vs Sample (Bruce, Gedeck &
Bruce, 2020)
KEY TERMS FOR RANDOM SAMPLING (Bruce,
Gedeck & Bruce, 2020)
Sample: A subset from a larger data set.

Population: The larger data set or idea of a data set.

N (n): The size of the population (sample).

Random sampling: Drawing elements into a sample at random.

Stratified sampling: Dividing the population into strata and randomly sampling from each stratum.

Stratum (pl. strata): A homogeneous subgroup of a population with common characteristics.

Simple random sample: The sample that results from random sampling without stratifying the population.

Bias: Systematic error.

Sample bias: A sample that misrepresents the population.
Size versus Quality: When Does Size Matter?
(Bruce, Gedeck & Bruce, 2020)
In the era of big data, it is sometimes surprising that smaller is better. Time and effort spent on random
sampling not only reduce bias, but also allow greater attention to data exploration and data quality. For
example, missing data and outliers may contain useful information. It might be prohibitively expensive to track
down missing values or evaluate outliers in millions of records, but doing so in a sample of several thousand
records may be feasible. Data plotting and manual inspection bog down if there is too much data.
So when are massive amounts of data needed?
The classic scenario for the value of big data is when the data is not only big, but sparse as well. Consider the
search queries received by Google, where columns are terms, rows are individual search queries, and cell
values are either 0 or 1, depending on whether a query contains a term. The goal is to determine the best
predicted search destination for a given query. There are over 150,000 words in the English language, and
Google processes over 1 trillion queries per year. This yields a huge matrix, the vast majority of whose entries
are “0.”
This is a true big data problem—only when such enormous quantities of data are accumulated can effective
search results be returned for most queries. And the more data accumulates, the better the results. For
popular search terms this is not such a problem—effective data can be found fairly quickly for the handful of
extremely popular topics trending at a particular time. The real value of modern search technology lies in the
ability to return detailed and useful results for a huge variety of search queries, including those that occur only
with a frequency, say, of one in a million.
Sample Mean versus Population Mean (Bruce,
Gedeck & Bruce, 2020)
The symbol x¯ (pronounced x-bar) is used to represent the mean of a
sample from a population, whereas μ is used to represent the mean of
a population. Why make the distinction? Information about samples is
observed, and information about large populations is often inferred
from smaller samples. Statisticians like to keep the two things separate
in the symbology.
KEY IDEAS (Bruce, Gedeck & Bruce, 2020)
Even in the era of big data, random sampling remains an important
arrow in the data scientist’s quiver.

Bias occurs when measurements or observations are systematically in error because they are not representative of the full population.

Data quality is often more important than data quantity, and random
sampling can reduce bias and facilitate quality improvement that would
be prohibitively expensive.
Selection Bias (Bruce, Gedeck & Bruce, 2020)
Yogi Berra, “If you don’t know what you’re looking for, look hard enough and
you’ll find it.”

Selection bias: Bias resulting from the way in which observations are
selected.

Data snooping: Extensive hunting through data in search of something interesting.

Vast search effect: Bias or nonreproducibility resulting from repeated data modeling, or modeling data with large numbers of predictor variables.
Regression to the Mean (Bruce, Gedeck &
Bruce, 2020)
Regression to the mean refers to a phenomenon involving successive
measurements on a given variable: extreme observations tend to be
followed by more central ones. Attaching special focus and meaning to
the extreme value can lead to a form of selection bias.
Sampling Distribution of a Statistic (Bruce,
Gedeck & Bruce, 2020)
The term sampling distribution of a statistic refers to the distribution of some sample statistic, over many
samples drawn from the same population. Much of classical statistics is concerned with making inferences
from (small) samples to (very large) populations.

Sample statistic: A metric calculated for a sample of data drawn from a larger population.

Data distribution: The frequency distribution of individual values in a data set.

Sampling distribution: The frequency distribution of a sample statistic over many samples or resamples.

Central limit theorem: The tendency of the sampling distribution to take on a normal shape as sample size
rises.

Standard error: The variability (standard deviation) of a sample statistic over many samples (not to be
confused with standard deviation, which, by itself, refers to variability of individual data values).
Central Limit Theorem (Bruce, Gedeck &
Bruce, 2020)
The means drawn from multiple samples will resemble the familiar bell-shaped
normal curve (see “Normal Distribution”), even if the source population is not
normally distributed, provided that the sample size is large enough and the
departure of the data from normality is not too great. The central limit theorem
allows normal-approximation formulas like the t-distribution to be used in
calculating sampling distributions for inference—that is, confidence intervals and
hypothesis tests.

The central limit theorem receives a lot of attention in traditional statistics texts
because it underlies the machinery of hypothesis tests and confidence intervals,
which themselves consume half the space in such texts. Data scientists should be
aware of this role, but, since formal hypothesis tests and confidence intervals play a
small role in data science, and the bootstrap is available in any case, the central
limit theorem is not so central in the practice of data science.
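A small simulation illustrates the theorem: means of samples drawn from a decidedly non-normal uniform population pile up symmetrically around the population mean. The sample size, number of samples, and seed below are arbitrary choices:

```python
import random
import statistics

random.seed(0)  # arbitrary seed, for reproducibility

def sample_mean(n):
    """Mean of one sample of n draws from a Uniform(0, 1) population."""
    return statistics.mean(random.uniform(0, 1) for _ in range(n))

# The sampling distribution of the mean, approximated by 2,000 samples.
means = [sample_mean(50) for _ in range(2000)]

# Centered on the population mean 0.5, with standard deviation close to
# sigma / sqrt(n) = 0.289 / sqrt(50), i.e. roughly 0.041.
print(round(statistics.mean(means), 2))
print(round(statistics.stdev(means), 3))
```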
Standard Error (Bruce, Gedeck & Bruce, 2020)
The standard error is a single metric that sums up the variability in the sampling distribution for a statistic. The standard error can be estimated using a statistic based on the standard deviation s of the sample values, and the sample size n: SE = s / √n

As the sample size increases, the standard error decreases, corresponding to what was observed in Figure 2-6.
The relationship between standard error and sample size is sometimes referred to as the square-root of n rule:
in order to reduce the standard error by a factor of 2, the sample size must be increased by a factor of 4.

The validity of the standard error formula arises from the central limit theorem. In fact, you don’t need to rely
on the central limit theorem to understand standard error. Consider the following approach to measuring
standard error:

1) Collect a number of brand new samples from the population.

2) For each new sample, calculate the statistic (e.g., mean).

3) Calculate the standard deviation of the statistics computed in step 2; use this as your estimate of standard error.
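The empirical approach to standard error can be sketched alongside the s/√n formula (the synthetic population, sample size, and seed below are arbitrary illustrations):

```python
import random
import statistics

random.seed(1)  # arbitrary seed, for reproducibility

# An illustrative synthetic population with mean 100 and sd 15.
population = [random.gauss(100, 15) for _ in range(100_000)]
n = 40

# Steps 1-2: draw many fresh samples and compute each sample's mean.
sample_means = [statistics.mean(random.sample(population, n))
                for _ in range(1000)]

# Step 3: the standard deviation of those statistics estimates the SE.
empirical_se = statistics.stdev(sample_means)

# Formula-based estimate, SE = s / sqrt(n), from a single sample.
one_sample = random.sample(population, n)
formula_se = statistics.stdev(one_sample) / n ** 0.5

# Both should land near 15 / sqrt(40), roughly 2.4.
print(round(empirical_se, 2))
print(round(formula_se, 2))
```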
Confidence Intervals (Bruce, Gedeck & Bruce,
2020)
Frequency tables, histograms, boxplots, and standard errors are all ways to
understand the potential error in a sample estimate.

Confidence level: The percentage of confidence intervals, constructed in the same way from the same population, expected to contain the statistic of interest.
Interval endpoints: The top and bottom of the confidence interval.

There is a natural human aversion to uncertainty; people (especially experts) say, “I don’t know” far too rarely. Analysts and managers, while acknowledging
uncertainty, nonetheless place undue faith in an estimate when it is presented as a
single number (a point estimate). Presenting an estimate not as a single number
but as a range is one way to counteract this tendency. Confidence intervals do this
in a manner grounded in statistical sampling principles.
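A percentile bootstrap interval is one simple way to present an estimate as a range rather than a point. A sketch, with arbitrary synthetic data and a 90% confidence level:

```python
import random
import statistics

random.seed(2)  # arbitrary seed, for reproducibility

# An illustrative sample of 200 values with mean near 50.
sample = [random.gauss(50, 10) for _ in range(200)]

# Resample the sample with replacement, recording each resample's mean.
boot_means = sorted(
    statistics.mean(random.choices(sample, k=len(sample)))
    for _ in range(1000))

# The 5th and 95th percentiles of the bootstrap means bracket a 90% CI.
lower = boot_means[int(0.05 * len(boot_means))]
upper = boot_means[int(0.95 * len(boot_means))]
print(f"90% CI for the mean: ({lower:.1f}, {upper:.1f})")
```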
Normal Distribution (Bruce, Gedeck & Bruce,
2020)
The bell-shaped normal distribution is iconic in traditional
statistics. The fact that distributions of sample statistics are often
normally shaped has made it a powerful tool in the development of
mathematical formulas that approximate those distributions.
Key Terms (Bruce, Gedeck & Bruce, 2020)
Error: The difference between a data point and a predicted or average value.

Standardize: Subtract the mean and divide by the standard deviation.

z-score: The result of standardizing an individual data point.

Standard normal: A normal distribution with mean = 0 and standard deviation = 1.

QQ-Plot: A plot to visualize how close a sample distribution is to a specified distribution, e.g. the normal distribution.
Student’s t-Distribution (Bruce, Gedeck &
Bruce, 2020)
The t-distribution is a normally shaped distribution, but a bit thicker and
longer on the tails. It is used extensively in depicting distributions of sample
statistics. Distributions of sample means are typically shaped like a t-
distribution, and there is a family of t-distributions that differ depending on
how large the sample is. The larger the sample, the more normally shaped
the t-distribution becomes.

The t-distribution is often called Student’s t because it was published in 1908 in Biometrika by W. S. Gosset under the name “Student.” Gosset’s employer,
the Guinness brewery, did not want competitors to know that it was using
statistical methods, so insisted that Gosset not use his name on the article.
KEY TERMS FOR STUDENT’S T-DISTRIBUTION
(Bruce, Gedeck & Bruce, 2020)
n: Sample size.

Degrees of freedom: A parameter that allows the t-distribution to adjust to different sample sizes, statistics, and number of groups.
Note (Bruce, Gedeck & Bruce, 2020)
What do data scientists need to know about the t-distribution and the
central limit theorem? Not a whole lot. These distributions are used in
classical statistical inference, but are not as central to the purposes of
data science. Understanding and quantifying uncertainty and variation
are important to data scientists, but empirical bootstrap sampling can
answer most questions about sampling error. However, data scientists
will routinely encounter t-statistics in output from statistical software
and statistical procedures in R, for example in A-B tests and regressions,
so familiarity with its purpose is helpful.
KEY IDEAS (Bruce, Gedeck & Bruce, 2020)
The t-distribution is actually a family of distributions resembling the
normal distribution, but with thicker tails.

The t-distribution is widely used as a reference basis for the distribution of sample means, differences between two sample means, regression parameters, and more.
KEY TERMS FOR BINOMIAL DISTRIBUTION (Bruce,
Gedeck & Bruce, 2020)
Trial: An event with a discrete outcome (e.g., a coin flip).

Success: The outcome of interest for a trial. Synonyms: “1” (as opposed to “0”)

Binomial: Having two outcomes. Synonyms: yes/no, 0/1, binary

Binomial trial: A trial with two outcomes. Synonym: Bernoulli trial

Binomial distribution: Distribution of number of successes in x trials. Synonym: Bernoulli distribution
Binomial Distribution (Bruce, Gedeck & Bruce,
2020)
Yes/no (binomial) outcomes lie at the heart of analytics since they are often the
culmination of a decision or other process; buy/don’t buy, click/don’t click,
survive/die, and so on. Central to understanding the binomial distribution is the
idea of a set of trials, each trial having two possible outcomes with definite
probabilities.

The binomial distribution is the frequency distribution of the number of successes (x) in a given number of trials (n) with specified probability (p) of success in each trial. There is a family of binomial distributions, depending on the values of n and p. The binomial distribution would answer a question like:

If the probability of a click converting to a sale is 0.02, what is the probability of observing 0 sales in 200 clicks?
KEY IDEAS (Bruce, Gedeck & Bruce, 2020)
Binomial outcomes are important to model, since they represent,
among other things, fundamental decisions (buy or don’t buy, click or
don’t click, survive or die, etc.).

A binomial trial is an experiment with two possible outcomes: one with probability p and the other with probability 1 – p.

With large n, and provided p is not too close to 0 or 1, the binomial distribution can be approximated by the normal distribution.
Chi-Square Distribution (Bruce, Gedeck &
Bruce, 2020)
The chi-square distribution is the distribution of this statistic under
repeated resampled draws from the null model - see “Chi-Square Test”
for a detailed algorithm, and the chi-square formula for a data table. A
low chi-square value for a set of counts indicates that they closely
follow the expected distribution. A high chi-square indicates that they
differ markedly from what is expected. There are a variety of chi-square
distributions associated with different degrees-of-freedom (e.g.
number of observations, see “Degrees of Freedom”).
Key Ideas (Bruce, Gedeck & Bruce, 2020)
• The chi-square distribution is typically concerned with counts of
subjects or items falling into categories.
• The chi-square statistic measures the extent of departure from what
you would expect in a null model.
F Distribution (Bruce, Gedeck & Bruce, 2020)
The distribution of the F-statistic is the frequency distribution of all the
values that would be produced by randomly permuting data in which
all the group means are equal (i.e. a null model). There are a variety of
F-distributions associated with different degrees-of-freedom (e.g.
numbers of groups, see “Degrees of Freedom”). The calculation of F is
illustrated in the section on ANOVA (see “ANOVA”). The F statistic is
also used in linear regression to compare the variation accounted for by
the regression model to the overall variation in the data. F-statistics are
produced automatically by R and Python as part of regression and
ANOVA routines.
Key Ideas (Bruce, Gedeck & Bruce, 2020)
• The F-distribution is used with experiments and linear models
involving measured data.

• The F-statistic compares variation due to factors of interest to overall variation.
Poisson Distributions (Bruce, Gedeck & Bruce,
2020)
From prior data we can estimate the average number of events per unit of
time or space, but we might also want to know how different this might be
from one unit of time/space to another. The Poisson distribution tells us the
distribution of events per unit of time or space when we sample many such
units. It is useful when addressing queuing questions like “How much
capacity do we need to be 95% sure of fully processing the internet traffic
that arrives on a server in any 5-second period?”

The key parameter in a Poisson distribution is λ, or lambda. This is the mean number of events that occurs in a specified interval of time or space. The variance for a Poisson distribution is also λ.
Exponential Distribution (Bruce, Gedeck &
Bruce, 2020)
Using the same parameter λ that we used in the Poisson distribution, we can also
model the distribution of the time between events: time between visits to a
website or between cars arriving at a toll plaza. It is also used in engineering to
model time to failure, and in process management to model, for example, the time
required per service call. The R code to generate random numbers from an
exponential distribution takes two arguments, n (the quantity of numbers to be
generated), and rate, the number of events per time period.
A key assumption in any simulation study for either the Poisson or exponential
distribution is that the rate, λ, remains constant over the period being considered.
This is rarely reasonable in a global sense; for example, traffic on roads or data
networks varies by time of day and day of week. However, the time periods, or
areas of space, can usually be divided into segments that are sufficiently
homogeneous so that analysis or simulation within those periods is valid.
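A Python counterpart to the R call mentioned above is the standard library's `random.expovariate`, which draws inter-event times for a constant rate λ (the rate, sample size, and seed below are arbitrary):

```python
import random
import statistics

random.seed(3)  # arbitrary seed, for reproducibility

rate = 0.2  # e.g., an assumed 0.2 service calls per minute
waits = [random.expovariate(rate) for _ in range(10_000)]

# The mean time between events should be close to 1 / rate = 5 minutes.
print(round(statistics.mean(waits), 1))
```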
Weibull Distribution (Bruce, Gedeck & Bruce,
2020)
In many cases, the event rate does not remain constant over time. If the period over which
it changes is much longer than the typical interval between events, there is no problem;
you just subdivide the analysis into the segments where rates are relatively constant, as
mentioned before. If, however, the event rate changes over the time of the interval, the
exponential (or Poisson) distributions are no longer useful. This is likely to be the case in
mechanical failure—the risk of failure increases as time goes by. The Weibull distribution is
an extension of the exponential distribution, in which the event rate is allowed to change,
as specified by a shape parameter, β. If β > 1, the probability of an event increases over
time, if β < 1, it decreases. Because the Weibull distribution is used with time-to-failure
analysis instead of event rate, the second parameter is expressed in terms of characteristic
life, rather than in terms of the rate of events per interval. The symbol used is η, the Greek
letter eta. It is also called the scale parameter.

With the Weibull, the estimation task now includes estimation of both parameters, β and
η. Software is used to model the data and yield an estimate of the best-fitting Weibull
distribution.
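A simulation sketch using the standard library's `random.weibullvariate` (which takes the scale η first and the shape β second); the parameter values are arbitrary illustrations. One handy check: by the definition of characteristic life, about 63.2% of units fail before η regardless of β:

```python
import math
import random

random.seed(4)  # arbitrary seed, for reproducibility

beta = 2.0     # shape > 1: failure risk increases with age (assumed)
eta = 1000.0   # characteristic life in hours (assumed)
lifetimes = [random.weibullvariate(eta, beta) for _ in range(20_000)]

# Fraction of simulated units failing before the characteristic life.
frac_before_eta = sum(t < eta for t in lifetimes) / len(lifetimes)
print(round(frac_before_eta, 2))  # close to 1 - exp(-1), about 0.63
```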
A/B Testing (Bruce, Gedeck & Bruce, 2020)
An A/B test is an experiment with two groups to establish which of two treatments, products,
procedures, or the like is superior. Often one of the two treatments is the standard existing
treatment, or no treatment. If a standard (or no) treatment is used, it is called the control. A typical
hypothesis is that a new treatment is better than control.

A proper A/B test has subjects that can be assigned to one treatment or another. The subject might
be a person, a plant seed, a web visitor; the key is that the subject is exposed to the treatment.
Ideally, subjects are randomized (assigned randomly) to treatments. In this way, you know that any
difference between the treatment groups is due to one of two things:

 The effect of the different treatments

 Luck of the draw in which subjects are assigned to which treatments (i.e., the random assignment
may have resulted in the naturally better-performing subjects being concentrated in A or B)
KEY TERMS FOR A/B TESTING (Bruce, Gedeck &
Bruce, 2020)
Treatment: Something (drug, price, web headline) to which a subject is exposed.

Treatment group: A group of subjects exposed to a specific treatment.

Control group: A group of subjects exposed to no (or standard) treatment.

Randomization: The process of randomly assigning subjects to treatments.

Subjects: The items (web visitors, patients, etc.) that are exposed to treatments.

Test statistic: The metric used to measure the effect of the treatment.
Hypothesis Tests (Bruce, Gedeck & Bruce,
2020)
Hypothesis tests, also called significance tests, are ubiquitous in the traditional statistical
analysis of published research. Their purpose is to help you learn whether random chance
might be responsible for an observed effect.

An A/B test (see “A/B Testing”) is typically constructed with a hypothesis in mind. For
example, the hypothesis might be that price B produces higher profit. Why do we need a
hypothesis? Why not just look at the outcome of the experiment and go with whichever
treatment does better?

The answer lies in the tendency of the human mind to underestimate the scope of natural
random behavior. One manifestation of this is the failure to anticipate extreme events, or
so-called “black swans” (see “Long-Tailed Distributions”). Another manifestation is the
tendency to misinterpret random events as having patterns of some significance. Statistical
hypothesis testing was invented as a way to protect researchers from being fooled by
random chance.
KEY TERMS (Bruce, Gedeck & Bruce, 2020)
Null hypothesis: The hypothesis that chance is to blame.

Alternative hypothesis: Counterpoint to the null (what you hope to prove).

One-way test: Hypothesis test that counts chance results only in one
direction.

Two-way test: Hypothesis test that counts chance results in two directions.
MISINTERPRETING RANDOMNESS (Bruce,
Gedeck & Bruce, 2020)
You can observe the human tendency to underestimate randomness in this
experiment. Ask several friends to invent a series of 50 coin flips: have them write
down a series of random Hs and Ts. Then ask them to actually flip a coin 50 times
and write down the results. Have them put the real coin flip results in one pile, and
the made-up results in another. It is easy to tell which results are real: the real ones
will have longer runs of Hs or Ts. In a set of 50 real coin flips, it is not at all unusual
to see five or six Hs or Ts in a row. However, when most of us are inventing random
coin flips and we have gotten three or four Hs in a row, we tell ourselves that, for
the series to look random, we had better switch to T.

The other side of this coin, so to speak, is that when we do see the real-world
equivalent of six Hs in a row (e.g., when one headline outperforms another by
10%), we are inclined to attribute it to something real, not just chance.
The Null Hypothesis (Bruce, Gedeck & Bruce,
2020)
Hypothesis tests use the following logic: “Given the human tendency to react to unusual
but random behavior and interpret it as something meaningful and real, in our
experiments we will require proof that the difference between groups is more extreme
than what chance might reasonably produce.” This involves a baseline assumption that the
treatments are equivalent, and any difference between the groups is due to chance. This
baseline assumption is termed the null hypothesis. Our hope is then that we can, in fact,
prove the null hypothesis wrong, and show that the outcomes for groups A and B are more
different than what chance might produce.

One way to do this is via a resampling permutation procedure, in which we shuffle together the results from groups A and B and then repeatedly deal out the data in groups
of similar sizes, then observe how often we get a difference as extreme as the observed
difference. The combined shuffled results from groups A and B, and the procedure of
resampling from them, embodies the null hypothesis of groups A and B being equivalent
and interchangeable, and is termed the null model. See “Resampling” for more detail.
Alternative Hypothesis (Bruce, Gedeck &
Bruce, 2020)
Hypothesis tests by their nature involve not just a null hypothesis, but also an offsetting
alternative hypothesis. Here are some examples:

Null = “no difference between the means of group A and group B,” alternative = “A is
different from B” (could be bigger or smaller)

Null = “A ≤ B,” alternative = “A > B”

Null = “B is not X% greater than A,” alternative = “B is X% greater than A”

Taken together, the null and alternative hypotheses must account for all possibilities. The
nature of the null hypothesis determines the structure of the hypothesis test.
One-Way, Two-Way Hypothesis Test (Bruce,
Gedeck & Bruce, 2020)
Often, in an A/B test, you are testing a new option (say B), against an established default option (A) and the
presumption is that you will stick with the default option unless the new option proves itself definitively better.
In such a case, you want a hypothesis test to protect you from being fooled by chance in the direction favoring
B. You don’t care about being fooled by chance in the other direction, because you would be sticking with A
unless B proves definitively better. So you want a directional alternative hypothesis (B is better than A). In such
a case, you use a one-way (or one-tail) hypothesis test. This means that extreme chance results in only one
direction count toward the p-value.

If you want a hypothesis test to protect you from being fooled by chance in either direction, the alternative
hypothesis is bidirectional (A is different from B; could be bigger or smaller). In such a case, you use a two-way
(or two-tail) hypothesis. This means that extreme chance results in either direction count toward the p-value.

A one-tail hypothesis test often fits the nature of A/B decision making, in which a decision is required and one
option is typically assigned “default” status unless the other proves better. Software, however, including R and
scipy in Python, typically provides a two-tail test in its default output, and many statisticians opt for the more
conservative two-tail test just to avoid argument. One-tail versus two-tail is a confusing subject, and not that
relevant for data science, where the precision of p-value calculations is not terribly important.
Resampling (Bruce, Gedeck & Bruce, 2020)
Resampling in statistics means to repeatedly sample values from observed
data, with a general goal of assessing random variability in a statistic. It can
also be used to assess and improve the accuracy of some machine-learning
models (e.g., the predictions from decision tree models built on multiple
bootstrapped data sets can be averaged in a process known as bagging: see
“Bagging and the Random Forest”).

There are two main types of resampling procedures: the bootstrap and
permutation tests. The bootstrap is used to assess the reliability of an
estimate; it was discussed in the previous chapter (see “The Bootstrap”).
Permutation tests are used to test hypotheses, typically involving two or
more groups
Key Terms (Bruce, Gedeck & Bruce, 2020)
Permutation test
The procedure of combining two or more samples together, and randomly (or exhaustively)
reallocating the observations to resamples.

Synonyms
Randomization test, random permutation test, exact test.

Resampling
Drawing additional samples (“resamples”) from an observed data set.

With or without replacement
In sampling, whether or not an item is returned to the sample before the next draw.
Permutation Test (Bruce, Gedeck & Bruce,
2020)
In a permutation procedure, two or more samples are involved,
typically the groups in an A/B or other hypothesis test. Permute means
to change the order of a set of values. The first step in a permutation
test of a hypothesis is to combine the results from groups A and B (and,
if used, C, D, …) together. This is the logical embodiment of the null
hypothesis that the treatments to which the groups were exposed do
not differ. We then test that hypothesis by randomly drawing groups
from this combined set, and seeing how much they differ from one
another.
The permutation procedure is as follows:
(Bruce, Gedeck & Bruce, 2020)
1) Combine the results from the different groups into a single data set.

2) Shuffle the combined data, then randomly draw (without replacement) a resample of the same size as
group A (clearly it will contain some data from the other groups).

3) From the remaining data, randomly draw (without replacement) a resample of the same size as group B.

4) Do the same for groups C, D, and so on. You have now collected one set of resamples that mirror the sizes
of the original samples.

5) Whatever statistic or estimate was calculated for the original samples (e.g., difference in group
proportions), calculate it now for the resamples, and record; this constitutes one permutation iteration.

6) Repeat the previous steps R times to yield a permutation distribution of the test statistic.
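The six steps above can be sketched in Python. This is a minimal illustration with made-up measurements for two groups; the loop implements the shuffle-and-split resampling, and the final fraction is the permutation p-value:

```python
# A random permutation test for the difference in group means, following the
# six steps above. The two groups of measurements are invented for illustration.
import numpy as np

rng = np.random.default_rng(seed=42)
group_a = np.array([21.0, 23.5, 19.8, 22.1, 24.0, 20.5])
group_b = np.array([25.2, 26.1, 23.9, 27.0, 24.8])

observed_diff = group_b.mean() - group_a.mean()

combined = np.concatenate([group_a, group_b])  # step 1: combine the groups
n_a = len(group_a)

R = 10_000
perm_diffs = np.empty(R)
for i in range(R):
    shuffled = rng.permutation(combined)       # step 2: shuffle
    resample_a = shuffled[:n_a]                # step 2: resample of size A
    resample_b = shuffled[n_a:]                # step 3: remainder, size of B
    perm_diffs[i] = resample_b.mean() - resample_a.mean()  # step 5: statistic

# One-tail p-value: share of permutation diffs at least as large as observed.
p_value = np.mean(perm_diffs >= observed_diff)
print(f"observed diff = {observed_diff:.2f}, permutation p = {p_value:.4f}")
```

Because there are only two groups here, step 4 is a no-op; with more groups the remaining data would be split again for C, D, and so on.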
Exhaustive Permutation Test (Bruce, Gedeck &
Bruce, 2020)
In an exhaustive permutation test, instead of just randomly shuffling
and dividing the data, we actually figure out all the possible ways it
could be divided. This is practical only for relatively small sample sizes.
With a large number of repeated shufflings, the random permutation
test results approximate those of the exhaustive permutation test, and
approach them in the limit. Exhaustive permutation tests are also
sometimes called exact tests, due to their statistical property of
guaranteeing that the null model will not test as “significant” more
than the alpha level of the test (see “Statistical Significance and P-
Values”).
Bootstrap Permutation Test (Bruce, Gedeck &
Bruce, 2020)
In a bootstrap permutation test, the draws outlined in steps 2 and 3 of
the random permutation test are made with replacement instead of
without replacement. In this way the resampling procedure models not
just the random element in the assignment of treatment to subject, but
also the random element in the selection of subjects from a
population. Both procedures are encountered in statistics, and the
distinction between them is somewhat convoluted and not of
consequence in the practice of data science.
KEY IDEAS (Bruce, Gedeck & Bruce, 2020)
In a permutation test, multiple samples are combined, then shuffled.

The shuffled values are then divided into resamples, and the statistic of
interest is calculated.

This process is then repeated, and the resampled statistic is tabulated.

Comparing the observed value of the statistic to the resampled


distribution allows you to judge whether an observed difference
between samples might occur by chance.
Statistical Significance and P-Values (Bruce,
Gedeck & Bruce, 2020)
Statistical significance is how statisticians measure whether an
experiment (or even a study of existing data) yields a result more
extreme than what chance might produce. If the result is beyond the
realm of chance variation, it is said to be statistically significant.
Key Terms
P-value: Given a chance model that embodies the null hypothesis, the p-
value is the probability of obtaining results as unusual or extreme as the
observed results.

Alpha: The probability threshold of “unusualness” that chance results must
surpass, for actual outcomes to be deemed statistically significant.

Type 1 error: Mistakenly concluding an effect is real (when it is due to chance).

Type 2 error: Mistakenly concluding an effect is due to chance (when it is real).
t-Tests (Bruce, Gedeck & Bruce, 2020)
There are numerous types of significance tests, depending on whether the
data comprises count data or measured data, how many samples there are,
and what’s being measured. A very common one is the t-test, named after
Student’s t-distribution, originally developed by W. S. Gosset to approximate
the distribution of a single sample mean (see “Student’s t-Distribution”).
All significance tests require that you specify a test statistic to measure the
effect you are interested in, and help you determine whether that observed
effect lies within the range of normal chance variation. In a resampling test
(see the discussion of permutation in “Permutation Test”), the scale of the
data does not matter. You create the reference (null hypothesis) distribution
from the data itself, and use the test statistic as is.
Key Terms (Bruce, Gedeck & Bruce, 2020)
Test statistic: A metric for the difference or effect of interest.

t-statistic: A standardized version of common test statistics such as means.

t-distribution: A reference distribution (in this case derived from the
null hypothesis), to which the observed t-statistic can be compared.
Key Ideas (Bruce, Gedeck & Bruce, 2020)
Before the advent of computers, resampling tests were not practical
and statisticians used standard reference distributions.

A test statistic could then be standardized and compared to the
reference distribution.

One such widely used standardized statistic is the t-statistic.


Multiple Testing (Bruce, Gedeck & Bruce,
2020)
As we’ve mentioned previously, there is a saying in statistics: “torture the data long
enough, and it will confess.” This means that if you look at the data through enough
different perspectives, and ask enough questions, you can almost invariably find a
statistically significant effect.

This issue is related to the problem of overfitting in data mining, or “fitting the
model to the noise.” The more variables you add, or the more models you run, the
greater the probability that something will emerge as “significant” just by chance.

In supervised learning tasks, a holdout set where models are assessed on data that
the model has not seen before mitigates this risk. In statistical and machine
learning tasks not involving a labeled holdout set, the risk of reaching conclusions
based on statistical noise persists.
Degrees of Freedom (Bruce, Gedeck & Bruce,
2020)
In the documentation and settings to many statistical tests and probability
distributions, you will see a reference to “degrees of freedom.” The concept is
applied to statistics calculated from sample data, and refers to the number of
values free to vary. For example, if you know the mean for a sample of 10 values,
there are 9 degrees of freedom (once you know 9 of the sample values, the 10th
can be calculated and is not free to vary). The degrees of freedom parameter, as
applied to many probability distributions, affects the shape of the distribution.
The number of degrees of freedom is an input to many statistical tests. For
example, degrees of freedom is the name given to the n – 1 denominator seen in
the calculations for variance and standard deviation. Why does it matter? When
you use a sample to estimate the variance for a population, you will end up with an
estimate that is slightly biased downward if you use n in the denominator. If you
use n – 1 in the denominator, the estimate will be free of that bias.
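This bias is easy to verify by simulation. The sketch below draws many samples from a population with known variance (the parameters are chosen for illustration); numpy's `ddof` argument switches between the n and n − 1 denominators:

```python
# Simulation showing why the variance denominator is n − 1: averaged over
# many samples, the n denominator is biased low while n − 1 is not.
import numpy as np

rng = np.random.default_rng(seed=0)
true_var = 4.0                                      # population sd = 2
n = 10
samples = rng.normal(0.0, 2.0, size=(50_000, n))    # 50,000 samples of size 10

var_n = samples.var(axis=1, ddof=0).mean()          # denominator n (biased low)
var_n_minus_1 = samples.var(axis=1, ddof=1).mean()  # denominator n − 1 (unbiased)

print(f"true = {true_var}, n-denominator ≈ {var_n:.3f}, "
      f"(n−1)-denominator ≈ {var_n_minus_1:.3f}")
```

With n = 10, the n-denominator estimate averages about 9/10 of the true variance, while the n − 1 version lands on the true value.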
KEY TERMS & Ideas (Bruce, Gedeck & Bruce,
2020)
Key Terms
n or sample size
The number of observations (also called rows or records) in the data.
d.f.
Degrees of freedom.
Key Ideas
• The number of degrees of freedom (d.f.) forms part of the calculation to
standardize test statistics so they can be compared to reference
distributions (t-distribution, F-distribution, etc.).
• The concept of degrees of freedom lies behind the factoring of categorical
variables into n – 1 indicator or dummy variables when doing a regression
(to avoid multicollinearity).
ANOVA (Bruce, Gedeck & Bruce, 2020)
Suppose that, instead of an A/B test, we had a comparison of multiple
groups, say A-B-C-D, each with numeric data. The statistical procedure
that tests for a statistically significant difference among the groups is
called analysis of variance, or ANOVA.
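As a brief illustration, scipy's `f_oneway` performs the omnibus test. The four groups below are simulated, with group d deliberately shifted upward so the test should flag a difference:

```python
# Omnibus ANOVA across four groups (A-B-C-D) with scipy; all data simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
a = rng.normal(loc=100, scale=10, size=30)
b = rng.normal(loc=102, scale=10, size=30)
c = rng.normal(loc=101, scale=10, size=30)
d = rng.normal(loc=115, scale=10, size=30)   # deliberately shifted group

f_stat, p_value = stats.f_oneway(a, b, c, d)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```

A significant omnibus result says only that *some* group differs; pairwise comparisons would then be needed to find which one.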
Key Terms (Bruce, Gedeck & Bruce, 2020)
Pairwise comparison: A hypothesis test (e.g., of means) between two groups among
multiple groups.

Omnibus test: A single hypothesis test of the overall variance among multiple group
means.

Decomposition of variance: Separation of components contributing to an individual value
(e.g., from the overall average, from a treatment mean, and from a residual error).

F-statistic: A standardized statistic that measures the extent to which differences among
group means exceeds what might be expected in a chance model.

SS: “Sum of squares,” referring to deviations from some average value.


Chi-Square Test (Bruce, Gedeck & Bruce,
2020)
Web testing often goes beyond A/B testing and tests multiple treatments at
once. The chi-square test is used with count data to test how well it fits some
expected distribution. The most common use of the chi-square statistic in
statistical practice is with r×c contingency tables, to assess whether the null
hypothesis of independence among variables is reasonable (see also “Chi-
Square Distribution”).

Chi-square statistic: A measure of the extent to which some observed data
departs from expectation.

Expectation or expected: How we would expect the data to turn out under
some assumption, typically the null hypothesis.
Chi-Square Test: Statistical Theory (Bruce,
Gedeck & Bruce, 2020)
Asymptotic statistical theory shows that the distribution of the chi-
square statistic can be approximated by a chi-square distribution (see
“Chi-Square Distribution”). The appropriate standard chi-square
distribution is determined by the degrees of freedom (see “Degrees of
Freedom”). For a contingency table, the degrees of freedom are related
to the number of rows (r) and columns (c) as follows:

degrees of freedom = (r − 1) × (c − 1)
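A short illustration with an invented 2×3 table of click counts for three headlines; scipy's `chi2_contingency` reports the chi-square statistic, p-value, and degrees of freedom, which should match the formula above:

```python
# Chi-square test of independence on a hypothetical 2×3 contingency table.
import numpy as np
from scipy import stats

# rows: click / no-click; columns: headline A / B / C (invented counts)
observed = np.array([[14, 8, 12],
                     [986, 992, 988]])

chi2, p_value, dof, expected = stats.chi2_contingency(observed)

r, c = observed.shape
print(f"chi2 = {chi2:.2f}, df = {dof} (= ({r}-1)×({c}-1)), p = {p_value:.3f}")
```

Here r = 2 and c = 3, so the reference chi-square distribution has (2 − 1) × (3 − 1) = 2 degrees of freedom.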
Power (Bruce, Gedeck & Bruce, 2020)
If you run a web test, how do you decide how long it should run (i.e., how many
impressions per treatment are needed)? Despite what you may read in many
guides to web testing, there is no good general guidance—it depends, mainly, on
the frequency with which the desired goal is attained.

Effect size: The minimum size of the effect that you hope to be able to detect in a
statistical test, such as “a 20% improvement in click rates”.

Power: The probability of detecting a given effect size with a given sample size.

Significance level: The statistical significance level at which the test will be
conducted.
Sample Size (Bruce, Gedeck & Bruce, 2020)
The most common use of power calculations is to estimate how big a sample
you will need.

For example, suppose you are looking at click-through rates (clicks as a
percentage of exposures), and testing a new ad against an existing ad. How
many clicks do you need to accumulate in the study? If you are only
interested in results that show a huge difference (say a 50% difference), a
relatively small sample might do the trick. If, on the other hand, even a
minor difference would be of interest, then a much larger sample is needed.
A standard approach is to establish a policy that a new ad must do better
than an existing ad by some percentage, say 10%; otherwise, the existing ad
will remain in place. This goal, the “effect size,” then drives the sample size.
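One common normal-approximation formula for the required sample size per group can be sketched as follows. The baseline 1.0% click rate and the 1.1% / 1.5% target rates are hypothetical, and the formula is a rough approximation, not a substitute for a full power analysis:

```python
# Approximate sample size per group for comparing two proportions
# (normal approximation, two-sided test). All rates below are hypothetical.
from scipy.stats import norm

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate n per group to detect p1 vs p2 with a two-sided test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# A 10% relative lift on a 1% rate needs a very large sample per group...
n_small_effect = sample_size_two_proportions(0.010, 0.011)
# ...while a 50% relative lift needs far fewer impressions per treatment.
n_big_effect = sample_size_two_proportions(0.010, 0.015)
print(f"~{n_small_effect:,.0f} vs ~{n_big_effect:,.0f} per group")
```

This makes the text's point concrete: shrinking the effect size you care about inflates the required sample dramatically.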
KEY IDEAS (Bruce, Gedeck & Bruce, 2020)
• Finding out how big a sample size you need requires thinking ahead to the
statistical test you plan to conduct.

• You must specify the minimum size of the effect that you want to detect.

• You must also specify the required probability of detecting that effect size
(power).

• Finally, you must specify the significance level (alpha) at which the test will
be conducted.
KEY TERMS FOR SIMPLE LINEAR REGRESSION
(Bruce, Gedeck & Bruce, 2020)
Response: The variable we are trying to predict.
Synonyms - dependent variable, Y-variable, target, outcome

Independent variable: The variable used to predict the response.
Synonyms - X-variable, feature, attribute

Record: The vector of predictor and outcome values for a specific individual or case.
Synonyms - row, case, instance, example

Intercept: The intercept of the regression line—that is, the predicted value when X = 0.
Synonyms - b0, β0

Regression coefficient: The slope of the regression line.
Synonyms - slope, b1, β1, parameter estimates, weights

Fitted values: The estimates Ŷi obtained from the regression line.
Synonyms - predicted values

Residuals: The difference between the observed values and the fitted values.
Synonyms - errors

Least squares: The method of fitting a regression by minimizing the sum of squared residuals.
Synonyms - ordinary least squares
The Regression Equation (Bruce, Gedeck &
Bruce, 2020)
Simple linear regression estimates how much Y will change when X changes by a certain
amount. With the correlation coefficient, the variables X and Y are interchangeable. With
regression, we are trying to predict the Y variable from X using a linear relationship (i.e., a
line):

Y = b0 + b1X

We read this as “Y equals b1 times X, plus a constant b0.” The symbol b0 is known as the
intercept (or constant), and the symbol b1 as the slope for X. Both appear in R output as
coefficients, though in general use the term coefficient is often reserved for b1. The Y
variable is known as the response or dependent variable since it depends on X. The X
variable is known as the predictor or independent variable. The machine learning
community tends to use other terms, calling Y the target and X a feature vector.
Throughout this book, we will use the terms predictor and feature interchangeably.
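A minimal least-squares fit in Python, with six invented data points standing in for real observations:

```python
# Fitting Y = b0 + b1·X by least squares with numpy; data invented for
# illustration (roughly y ≈ 2x).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1])

# np.polyfit returns the highest-degree coefficient first: [b1, b0]
b1, b0 = np.polyfit(x, y, deg=1)

fitted = b0 + b1 * x        # fitted values (Y-hat)
residuals = y - fitted      # observed minus fitted

print(f"Y ≈ {b0:.3f} + {b1:.3f}·X")
```

With an intercept in the model, the least-squares residuals sum to zero, which is a quick sanity check on any fit.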
Multiple Linear Regression (Bruce, Gedeck &
Bruce, 2020)
When there are multiple predictors, the equation is simply extended to
accommodate them:

Y = b0 + b1X1 + b2X2 + ... + bpXp + e
Instead of a line, we now have a linear model—the relationship between
each coefficient and its variable (feature) is linear.

All of the other concepts in simple linear regression, such as fitting by least
squares and the definition of fitted values and residuals, extend to the
multiple linear regression setting. For example, the fitted values are given by:

Ŷi = b̂0 + b̂1X1,i + b̂2X2,i + ... + b̂pXp,i
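The multiple-regression fit can be sketched with numpy's least-squares solver. The data below are simulated from known coefficients (1.5, 2.0, −3.0) so the recovered estimates can be checked against the truth:

```python
# Multiple linear regression via numpy's least-squares solver; the data are
# simulated from known coefficients so the fit can be verified.
import numpy as np

rng = np.random.default_rng(seed=3)
n = 200
x1 = rng.uniform(0, 10, n)
x2 = rng.uniform(0, 5, n)
y = 1.5 + 2.0 * x1 - 3.0 * x2 + rng.normal(0, 0.5, n)  # known coefficients

# Design matrix with a leading column of ones for the intercept b0.
X = np.column_stack([np.ones(n), x1, x2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2 = coeffs

fitted = X @ coeffs          # fitted values Ŷ for every record
print(f"Y ≈ {b0:.2f} + {b1:.2f}·X1 + {b2:.2f}·X2")
```

The estimates land close to the generating values because the noise is small relative to the signal; with noisier data the same code would return less precise coefficients.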
Key Terms (Bruce, Gedeck & Bruce, 2020)
Root mean squared error: The square root of the average squared error of the
regression (this is the most widely used metric to compare regression models).
Synonyms - RMSE

Residual standard error: The same as the root mean squared error, but
adjusted for degrees of freedom.
Synonyms - RSE

R-squared: The proportion of variance explained by the model, from 0 to 1.
Synonyms - coefficient of determination, R2

t-statistic: The coefficient for a predictor, divided by the standard error
of the coefficient, giving a metric to compare the importance of variables in
the model. See “t-Tests”.

Weighted regression: Regression with the records having different weights.
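RMSE and R-squared are straightforward to compute by hand; the observed and fitted values below are invented for illustration:

```python
# Computing RMSE and R-squared directly from observed and fitted values;
# the numbers are invented for illustration.
import numpy as np

observed = np.array([3.0, 5.0, 7.0, 9.0, 11.0])
fitted = np.array([3.2, 4.8, 7.1, 8.9, 11.0])

residuals = observed - fitted
rmse = np.sqrt(np.mean(residuals ** 2))          # root mean squared error

ss_residual = np.sum(residuals ** 2)
ss_total = np.sum((observed - observed.mean()) ** 2)
r_squared = 1 - ss_residual / ss_total           # proportion of variance explained

print(f"RMSE = {rmse:.3f}, R-squared = {r_squared:.4f}")
```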
Prediction Using Regression (Bruce, Gedeck &
Bruce, 2020)
The primary purpose of regression in data science is prediction. This is useful to
keep in mind, since regression, being an old and established statistical method,
comes with baggage that is more relevant to its traditional role as a tool for
explanatory modeling than to prediction.

KEY TERMS FOR PREDICTION USING REGRESSION

Prediction interval: An uncertainty interval around an individual predicted value.

Extrapolation: Extension of a model beyond the range of the data used to fit it.
Interpreting the Regression Equation (Bruce,
Gedeck & Bruce, 2020)
In data science, the most important use of regression is to predict some
dependent (outcome) variable. In some cases, however, gaining insight
from the equation itself to understand the nature of the relationship
between the predictors and the outcome can be of value. This section
provides guidance on examining the regression equation and
interpreting it.
Key Terms (Bruce, Gedeck & Bruce, 2020)
Correlated variables: When the predictor variables are highly correlated, it is difficult to interpret
the individual coefficients.

Multicollinearity: When the predictor variables have perfect, or near-perfect, correlation, the
regression can be unstable or impossible to compute.
Synonyms - collinearity

Confounding variables: An important predictor that, when omitted, leads to spurious relationships
in a regression equation.

Main effects: The relationship between a predictor and the outcome variable, independent from
other variables.

Interactions:An interdependent relationship between two or more predictors and the response.
Regression Diagnostics (Bruce, Gedeck &
Bruce, 2020)
In explanatory modeling (i.e., in a research context), various steps, in
addition to the metrics mentioned previously (see “Assessing the
Model”), are taken to assess how well the model fits the data; most are
based on analysis of the residuals. These steps do not directly address
predictive accuracy, but they can provide useful insight in a predictive
setting.
KEY TERMS FOR REGRESSION DIAGNOSTICS
(Bruce, Gedeck & Bruce, 2020)
Standardized residuals: Residuals divided by the standard error of the residuals.

Outliers: Records (or outcome values) that are distant from the rest of the
data (or the predicted outcome).

Influential value: A value or record whose presence or absence makes a big
difference in the regression equation.

Leverage: The degree of influence that a single record has on a regression equation.
Synonyms - hat-value

Non-normal residuals: Non-normally distributed residuals can invalidate some
technical requirements of regression, but are usually not a concern in data science.

Heteroskedasticity: When some ranges of the outcome experience residuals with
higher variance (may indicate a predictor missing from the equation).

Partial residual plots: A diagnostic plot to illuminate the relationship between
the outcome variable and a single predictor.
Synonyms - added variables plot
Define Stage (Shaffie & Shahbazi, 2012)
Given these steps, the main deliverables
in this phase are:
• The identification of a product, process, or service that is in need of improvement
• The identification of a customer that is driving the need for improvement and defining their expectations
• A team charter with a problem statement, goal, financial benefits, scope, and required resources
• An outline of areas of risk
• The receipt of approval or sign-off from the project champion or executive sponsor
The basic requirements are:

1. A well-defined problem statement with an accurate depiction of the
frequency and magnitude of errors, issues, and defects
2. Consensus on the need for change, vision, and direction by the
project champion and key stakeholders

(Shaffie & Shahbazi, 2012)


What defines a great project is whether:
1. The opportunity has clear boundaries and measurable goals.
2. The opportunity is aligned with a business-critical issue or initiative.
3. The customer will feel the improvements.
Internal improvement opportunities can be identified by:
• Brainstorming with a cross-functional team
• Analyzing core business processes, either by mapping them or by examining their historical performance (if they have been measured in the past)
• Financial analysis of the business unit
• Review of repeated business, process, product, or service issues or challenges
External opportunities are driven by your customer(s)—auditors,
creditors, or consumers of your final product or service. Project ideas
from external sources can be identified by:
• Conducting surveys
• Analysis of existing customer feedback
• Direct dialogue with the customers

(Shaffie & Shahbazi, 2012)


Asking the following questions should help in
the weeding-out process:
1. What can you do to improve the situation?
2. How important is the issue to your customer?
3. Is the opportunity or error in need of improvement measurable?
4. Are data available or easily generated?
5. Can benefits be quantified?
6. Is the process stable, or at least controllable?
7. Is the scope of the problem narrow enough to finish the improvement in
four to six months?
8. Is there a sponsor or champion who is willing to provide or help you
acquire the resources?
(Shaffie & Shahbazi, 2012)
Critical to Quality

• A CTQ is any product, process, or service characteristic that satisfies a
key customer requirement and/or process requirement.

(Shaffie & Shahbazi, 2012)
A Robust Business Case (Shaffie & Shahbazi,
2012)
1. Business case. An explanation of why the project should be pursued.
2. Problem and goal statements. A description of the problem or
opportunity. The project’s objectives should be clear, concise, and presented
in measurable terms.
3. Project scope. A definition of the boundaries of the process, service, or
product that is in need of improvement. Included and excluded components
are well outlined.
4. Team roles. The required resources, expectations, and responsibilities are
laid out.
5. Financial benefits. The expected financial benefits (if any) are calculated
based on expected improvements.
SMART Criteria (Shaffie & Shahbazi, 2012)

S: Specific
M: Measurable
A: Attainable
R: Relevant
T: Time-bound
Problem Statement (Shaffie & Shahbazi,
2012)
The problem statement is an objective (quantifiable) description of the
“pain” experienced by internal and/or external customers as a result of
a poorly performing process or service. When writing the
problem statement, consider the following questions:
• What is wrong or not meeting the customer’s need or expectations?
• When and where do the problems occur?
• How big is the problem, and what is its impact?
Problem Statement (Shaffie & Shahbazi,
2012)
Here is an example of a poorly written problem statement:

“Our customers are angry with us, resulting in poor customer retention.”

Here is how it can be improved:

“In the second quarter (when), 15 percent of our customers have left the
bank for another institution (what). The current rate of attrition is up from 8
percent to the current level of 15 percent (magnitude). This negatively
affects our operating cash flow (impact or consequence).”
Project Scope (Shaffie & Shahbazi, 2012)
The three critical questions that are considered when developing the
scope are:
1. What process, service, or product will the team focus on?
2. Is there anything that is outside the scope?
3. Are there any constraints (process, IT, resources, and so on) that
must be considered?
Project Champion (Shaffie & Shahbazi, 2012)
The champion is the individual who has initiated the improvement. Either the
project idea has come from her or she sees the need for change. The champion
creates the sense of urgency for the improvement. She is also known as the barrier
buster. At every stage of project execution, the project leader may be faced with
resistance—team members may be reluctant to help, area managers may not
assign the required resources to the project, functional managers may reject
improvement ideas, and so on. The central role of the project champion is to work
within her sphere of influence to set the direction for the project and to support
the Green Belt or Black Belt in its execution. She is the liaison between the project
team and management, helping to create the need for change, setting the direction
for the project, assigning the required resources, allocating the required funding,
and removing any and all barriers to project execution. During the course of the
project, she reviews progress frequently. Furthermore, once the project has ended
and changes have been implemented, the project champion ensures that those
changes recommended by the team are not cast aside after a few months.
Project Leader (Shaffie & Shahbazi, 2012)
The project leader is typically a Black Belt and is allocated to the project
full-time. The Black Belt is the key planner for the project and is
ultimately responsible for the deliverables: he defines the direction for
project execution, determines the deliverables in each phase, creates
team assignments, follows progress, and acts as a link to management.
The project leader is responsible for maintaining project execution
momentum—any potential risks should be escalated to the Master
Black Belt or project champion. The project leader is also responsible
for keeping the champion and the key stakeholders updated on the
status of the project.
Key Stakeholders (Shaffie & Shahbazi, 2012)
Key stakeholders are individuals whose areas, processes, and teams will
be directly affected by the project. They are typically part of
management. It is critical to identify them during the Define phase,
because without their buy-in and involvement, the probability of long-
term success will be reduced. It is the responsibility of the project
leader to always keep the key stakeholders abreast of the project
status. Any signs of resistance from the key stakeholders should be
escalated to the project champion for quick resolution.
Teams (Shaffie & Shahbazi, 2012)
Core Cross-Functional Team
The members of the core team are expected to allocate 10 to 15 percent of
their time to supporting the project. These are functional area experts,
representing the processes, services, or products that are within the scope of
the project.
Supplementary Team
The members of the supplemental team act as support for the project on an
as-needed basis. The time commitment from these members is minimal.
Examples of supplementary team members may include HR, whose expertise
will be needed during the Control phase to help ensure sustainability,
Compliance to sign off on documentation or process changes, or IT to
support the extraction of data from the various databases.
Financial Benefits (Shaffie & Shahbazi, 2012)
There will be situations in which projects that do not provide significant
financial benefits will be selected for execution, for example, those
dealing with compliance or regulatory issues. However, for the most
part, Lean Six Sigma projects should have financial benefits associated
with them. If you are focused on reducing cycle time, this should yield a
reduction in touch time, and hence a reduction in labor. A reduction in
cycle time also reduces inventory, such as unprocessed applications.
The faster an account is opened, the faster revenue can be realized.
Calculating the expected financial benefits helps prioritize
improvement opportunities and creates motivation for the team.
Business Case Example (Shaffie & Shahbazi, 2012)
(Shaffie & Shahbazi, 2012)
Characteristics of a well-defined business case include:
• A clearly defined problem and goal statement
• Clearly understood defect and opportunity definitions
• No presupposed solution
• A need for improvement related to the customer’s requirements
• Alignment of the project with a business strategy
• A manageable project scope—it can yield results within four to six months
• Identifiable and measurable impact
• Adequate resources assigned to the project
• A data-driven project!
SIPOC (Shaffie & Shahbazi, 2012)
Since the objective is to define a high-level view of the process in its “as-is” state, we must
have the core cross-functional team involved in the development of a SIPOC.
1. As the first step, the team must agree on the start and end points of the process. The
business case can help guide this discussion.
2. Working backward, list the Customers. Identify each customer’s CTQ (accurate, timely,
simple, and so on) and the primary Output (loans, calls, x-rays, or something else) that the
customer receives from the process.
3. With the C and O of the SIPOC defined, using brainstorming techniques, the team should
outline the five to seven high-level Process steps that result in the outputs. Process steps
typically start with a verb.
4. Once the team has agreed on the process steps, the critical Inputs that affect the quality
of the process can be identified.
5. The last step is to list all the Suppliers that provide inputs to the process.
6. The SIPOC should then be validated by walking the actual process.
STEP 4: DEFINE AND EXECUTE
A CHANGE MANAGEMENT STRATEGY
The main benefits of a well-defined and well-executed change management strategy are
that it:
• Helps an organization start and successfully complete Lean Six Sigma projects with
shorter cycle time.
• Creates a shared vision of the goal and deliverables of the project.
• Ensures that you have the buy-in of key stakeholders prior to starting the project.
– Helps identify the key stakeholders.
– Outlines their required level of support if the project is to be a success.
• Allows for efficient implementation of solutions.
– Increases access to required resources and data.
• Ensures that the Control portion of the project is sustainable—that there is an easy
handoff.
(Shaffie & Shahbazi, 2012)
Characteristics (Shaffie & Shahbazi, 2012)
1. Identifying change leadership. Every project will need a champion who will
sponsor the change.
2. Developing a shared need. The project leader and champion have to clearly
articulate the need for change. It is best to leverage the business case. The need for
change can be driven by a threat or opportunity, or it can be instilled within the
organization and widely shared through data, demonstration, or demand.
3. Shaping a vision. The desired outcome of the project is clear, legitimate, widely
understood, and shared.
4. Mobilizing commitment. There is a strong commitment from key constituents to
invest in change. The team leader is to demand and receive management attention.
5. Institutionalizing the change and monitoring progress. Once change is started, it
endures, and best practices are transferred throughout the organization.
Key Stakeholder Analysis (Shaffie & Shahbazi, 2012)
Key Stakeholder (Shaffie & Shahbazi, 2012)
• Responsible for the final decision
• Likely to be affected, positively or negatively, by the outcomes you
want
• In a position to assist or block your achievement of the outcomes
• An expert or special resource that could substantially affect the
quality of your end product or service
• Able to have influence over other stakeholders
Important (DeCarlo, 2007)
♦ An organization can only afford to focus its improvement efforts on
processes that are critical to customer satisfaction or the viability of the
business.
♦ There are many ways to generate Lean Six Sigma project ideas, and many
different sources for project ideas.
♦ Two of the most important and useful ways to generate project ideas are
through a CT Flow Down and the Voice of the Customer (VOC).
♦ Before you actually choose a Lean Six Sigma project, you have to separate
your viable ideas from those that are not worth your time and effort.
♦ You can use a Prioritization Matrix to methodically determine which
projects are best to focus on right away.
DEFINE PHASE SUMMARY
The Define phase is one of the more challenging steps as it requires the
Green Belt or Black Belt to clearly outline the need for change. Uncovering
an improvement opportunity is not always a comfortable topic of discussion
for all members of leadership—it inherently brings to light issues that have
been overcompensated or ignored in the past. However, the project leader
can better facilitate discussion and remove emotion by using data in problem
and goal statements. The more precise and accurate the definition of the
opportunity, the better the chances that the project will be prioritized
accordingly. It should also be noted that it is typically in the Define phase
that a project is either accepted or rejected by the leadership team. A
rejection should not be viewed as a failure by the Green Belt or Black Belt;
rather, it is cause for celebration: thanks to their due diligence, the company
avoided investing time and money in an issue that is noncritical or blown
out of proportion.
(Voehl, Harrington & Charron, 2014)
“We don’t know what we don’t know. We can’t act on what we don’t
know. We won’t know until we search. We won’t search for what we
don’t question. We don’t question what we don’t measure.”
—Dr. Mikel Harry
I often say that when you can measure what you are speaking about
and express it in numbers, you know something about it; but when you
cannot measure it, when you cannot express it in numbers, your
knowledge is of a meagre and unsatisfactory kind.
—William Thomson, Lord Kelvin (1824–1907)
In God we trust. All
others must bring data.
—W. Edwards Deming
DMAIC (Voehl, Harrington & Charron, 2014)
Define: Select an appropriate project and define the problem, especially in
terms of customer-critical demands.
Measure: Assemble measurable data about process performance and
develop a quantitative problem statement.
Analyze: Analyze the causes of the problem and verify suspected root
cause(s).
Improve: Identify actions to reduce defects and variation caused by root
cause(s) and implement selected actions, while evaluating the measurable
improvement (if not evident, return to step 1, Define).
Control: Control the process to ensure continued, improved performance
and determine if improvements can be transferred elsewhere. Identify
lessons learned and next steps.
DMADV (Voehl, Harrington & Charron, 2014)
Define: Define design goals that are consistent with customer
demands.
Measure: Identify and measure product characteristics that are critical
to quality (CTQ).
Analyze: Analyze to develop and design alternatives, create a high-level
design, and evaluate design capability to select the best design.
Design: Complete design details, optimize the design, and plan for
design verification.
Verify: Verify the design, set up pilot runs, implement the production
process, and hand it over to the process owners.
Purpose of Measure Phase (Brue,2015)
In the Measure phase, the team focuses the improvement effort by
gathering information about the current situation. The other
goals of this phase are to:
Define one or more CTQ characteristics (dependent variables);
Map the process in detail;
Evaluate the measurement systems;
Assess the current level of process performance to establish a
baseline capability and the short- and long-term process sigma
capabilities; and
Quantify the problem.
The objectives of the Measure phase (Keller, 2011)
• Process definition at a detailed level to understand the decision points
and detailed functionality within the process.
• Metric definition to verify a reliable means of process estimation.
• Process baseline estimation to clarify the starting point of the project.
• Measurement system analysis to quantify the errors associated with
the metric.
Process (Keller, 2011)
A process consists of repeatable tasks carried out in a specific order.
Process personnel responsible for implementing the process on a daily
basis should be enlisted to develop the detailed process map.
Process (Levine, Melnyck & Gitlow, 2015)
Every process must have an owner; that is, an individual who is
responsible for the process (Gitlow and Levine, 2004). Process owners
can be identified because they can change the flowchart of a process
using only their signature. Process owners may have responsibilities
extending beyond their departments, called cross-functional
responsibilities; their positions must be high enough in the organization
to influence the resources necessary to take action on a cross-functional
process.
Process (Levine, Melnyck & Gitlow, 2015)
Boundaries must be established for processes. The process owner must
help you identify where the process starts and stops
(Gitlow and Levine, 2004).
These boundaries make it easier to establish process ownership and highlight
the process’s key interfaces with other (customer/vendor) processes.
Process interfaces frequently are the source of process problems, which
result from a failure to understand downstream requirements; they can
cause turf wars.
Process interfaces must be carefully managed to prevent communication
problems.
Construction of operational definitions for critical process characteristics that
are agreed upon by process owners on both sides of a process boundary will
go a long way toward eliminating process interface problems.
Process (Levine, Melnyck & Gitlow, 2015)
A process owner in the Purchasing Department of a health system could
translate the preceding organizational objective into the following subset of
objectives and metrics:
• Objective: Decrease the number of days from purchase request to
item/service delivery.
• Metric: Number of days from purchase request to item/service
delivery by delivery overall, and by type of item purchased, by purchase.
• Objective: Increase ease of filling out purchasing forms.
• Metric: Number of questions received about how to fill out forms by
form by month.
Process (Levine, Melnyck & Gitlow, 2015)
• Objective: Increase employee satisfaction with purchased material
• Metric: Number of employee complaints about purchased
material by month.
• Objective: Continuously train and develop Purchasing personnel
with respect to job requirements.
• Metric: Number of errors per purchase order by purchase order.
• Metric: Number of minutes to complete a purchase order by
purchase order.
Process Flow Chart (Levine, Melnyck & Gitlow,
2015)
A flowchart functions as a planning tool. Designers of processes are greatly
aided by flowcharts. They enable a visualization of the elements of new or
modified processes and their interactions.
• A flowchart removes unnecessary details and breaks down the system so
designers and others get a clear, unencumbered look at what they’re
creating.
• A flowchart defines roles. It demonstrates the functions of personnel,
workstations, and sub-processes in a system. It also shows the personnel,
operations, and locations involved in the process.
• Flowcharts can be used in the training of new and current employees.
• A flowchart helps you understand what data needs to be collected when
trying to measure and improve a process.
Process Flow Chart (Levine, Melnyck & Gitlow,
2015)
• A flowchart can also be used to demonstrate compliance with regulatory
agencies, e.g., JCAHO (Joint Commission on Accreditation of Healthcare
Organizations).
• A flowchart can be used to compare the current state of the process
(how the process is), the desired state of the process (how the process
should be with standardization), and the future state of the process
(how the process could be with improvement).
• And last, but not least, a flowchart helps to identify the weak points
in a process; that is, the points in the process that are causing problems
for the stakeholders of the process.
Analyze Flow Charts (Levine, Melnyck &
Gitlow, 2015)
1. Find the steps of the process that are weak (for example, parts of the
process that generate a high defect rate).
2. Improve the steps of the process that are within the process owner’s
control; that is, the steps the process owner can change without
higher-level approval.
3. Isolate the elements in the process that affect customers.
4. Find solutions that don’t require additional resources.
5. Prefer changes that don’t require resolving political issues.
Other questions that the process improver
can ask are: (Levine, Melnyck & Gitlow, 2015)
• Why are steps done? How else could they be done?
• Is each step necessary?
• Are they value added and necessary? Are they repetitive?
• Would a customer pay for this step specifically? Would they notice if
it’s gone?
• Is it necessary for regulatory compliance?
• Does the step cause waste or delay?
• Does the step create rework?
• Could the step be done in a more efficient and less costly way?
Other questions that the process improver
can ask are: (Levine, Melnyck & Gitlow, 2015)
• Are the right people doing the right thing?
• Could steps be automated?
• Does the process contain proper feedback loops?
• Are roles and responsibilities for the process clear and well
documented?
• Are there obvious bottlenecks, delays, waste or capacity issues that
can be identified and removed?
• What is the impact of the process on stakeholders?
BASIC MEASUREMENT CONCEPTS (Neuman,
Cavanagh & Pande, 2001)
If working with data and using measures is new to many of your team
members, be sure to review the following basic concepts:
1. Observe first, then measure.
2. Know the difference between discrete and continuous measures.
3. Measure for a reason.
4. Have a measurement process.
Measurement Concept #2: Continuous vs. Discrete
Measures (Neuman, Cavanagh & Pande, 2001)
♦ Continuous measures are only those things that can be measured on an
infinitely divisible continuum or scale. Examples: time (hours, minutes,
seconds), height (feet, inches, fractions of an inch), sound level (decibels),
temperature (degrees), electrical resistance (ohms), and money (dollars, yen,
euros, and fractions thereof).
♦ Discrete measures are those where you can sort items into distinct,
separate, non-overlapping categories. Examples: types of aircraft, categories
of different types of vehicles, types of credit cards. Discrete measures include
artificial scales like the ones on surveys, where people are asked to rate a
product or service on a scale of 1 to 5. Discrete measures are sometimes
called attribute measures because they count items or incidences that have a
particular attribute or characteristic that sets them apart from things with a
different attribute or characteristic.
Measurement Concept #3: Measure for a Reason
(Neuman, Cavanagh & Pande, 2001)
Ever notice how much useless data gets collected at work? It’s probably
because the computer has made it easy to collect tons of numbers,
however trivial. But don’t let your team get sucked into that quagmire.
Unless there’s a clear reason to collect data—a key variable you want to
track—don’t bother. There are basically two reasons for collecting data:
1. Measuring efficiency and/or effectiveness.
2. Discovering how variables (Xs or causes) upstream in the process
affect the outputs (Ys or effects) delivered to the process customer.
Measurement Concept #4: A Process for
Measurement (Neuman, Cavanagh & Pande, 2001)
Remember the old carpenter’s saying, “Measure twice; cut once”? It
should remind us of the importance of getting our measures right the
first time. There is nothing more tedious and frustrating than having to
collect data a second time because it wasn’t done right the first time.
Treating data collection as a process that can be defined, documented,
studied, and improved is the best way to make sure you only have to
“cut once.” A detailed process is described below.
TWO COMPONENTS OF MEASURE (Neuman,
Cavanagh & Pande, 2001)
The guidelines for data collection given above have been incorporated
into the two procedures that comprise the Measure stage of DMAIC:
A. Plan and measure performance against customer requirements.
B. Develop baseline defect measures and identify improvement
opportunities.
Population and Process Sampling on similar
items (Neuman, Cavanagh & Pande, 2001)
Sampling Concept #1: Bias
(Neuman, Cavanagh & Pande, 2001)
Bias is the difference between the data collected in a sample and the true
nature of the entire population or process flow. Bias that goes undetected
will influence your interpretation and conclusions about the problem and
process. Here are some examples of types of sampling bias:
♦ Convenience sampling: Collecting data simply because it’s easy to
collect. Example: collecting data every afternoon 20 minutes before closing
time because “things are quiet then,” thus ignoring data when things are
busy— which might be very different data.
♦ Judgment sampling: Making educated guesses about which items or
people are representative of your process. Example: surveying only those
customers who scored high on your last customer satisfaction survey.
Good Sampling (Neuman, Cavanagh & Pande,
2001)
♦ Systematic sampling: This is the method we’d recommend for most
business processes. By systematic sampling of a process we mean taking data
samples at certain intervals (every half-hour or every 20th item). A
systematic sample from a population would be to check every 10th item in
the database. But beware! Make sure your systematic sampling doesn’t
correspond to some hidden pattern that will bias the data. Example:
Sampling every 10th insurance claim might mean that you always get claims
reviewed by the same clerk on the same computer, while ignoring four other
clerks and their computers.
♦ Random sampling: By random we mean that every item in a population
or a process has an equal chance to be selected for counting. Selecting data
randomly has its own challenges, not least of which are unconscious biases
or hidden patterns in the data. Most random sampling is done by assigning
computer-generated random numbers to items being surveyed.
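The two recommended approaches above can be sketched in a few lines of Python. The population of 100 claims is hypothetical, and the fixed seed is only there to make the sketch repeatable.

```python
import random

# Hypothetical population: 100 insurance claims, numbered 0..99.
claims = list(range(100))

def systematic_sample(items, interval, start=0):
    """Take every `interval`-th item, beginning at `start`."""
    return items[start::interval]

def random_sample(items, n, seed=42):
    """Give every item an equal chance of selection."""
    rng = random.Random(seed)  # fixed seed only to keep the sketch repeatable
    return rng.sample(items, n)

every_10th = systematic_sample(claims, 10)  # claims 0, 10, 20, ..., 90
ten_random = random_sample(claims, 10)
```

Note the warning in the text: if claims cycle through five clerks in a fixed order, `every_10th` could keep hitting the same clerk, which is exactly the hidden-pattern bias systematic sampling must avoid.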
Good Sampling (Neuman, Cavanagh & Pande,
2001)
♦ Stratified sampling: Suppose your company had a customer base of
10,000 purchasers, and your job is to survey a sample to determine
customer satisfaction. Are you equally interested in what all 10,000
customers have to say? The answer is probably “no.” It’s likely more
important that you understand the needs and perceptions of your
biggest customers or most reliable purchasers than it is that you find
out what a onetime customer thinks. (The opposite would be true if
you were trying to expand market share; in this case, you might want to
understand how you could convert infrequent customers into regular
purchasers/users.)
Sampling Concept #2: Confidence Level or
Interval (Neuman, Cavanagh & Pande, 2001)
This is your level of confidence that the data you sample actually represents
the entire population or process under study, provided the sample was
gathered randomly. In the business world, a confidence level of 95% is
standard. It means that you have five chances out of 100 of drawing the
wrong conclusion from your data.
There’s a “Catch 22” to developing a good sampling plan: You already have to
know something about the data you’re collecting before you collect it. As a
result, your first measures won’t be as reliable as you’d like because they’re
based on educated “guesstimates.” The longer you take measures, however,
the better you’ll know your process and the better your sampling plan will
be.
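To show how the 95% confidence level drives sample size, here is the standard formula for estimating a proportion, sketched in Python. The z-value 1.96 corresponds to 95% confidence, and p = 0.5 is the conservative "guesstimate" used when nothing is yet known about the process.

```python
import math

def sample_size(z=1.96, p=0.5, margin=0.05):
    """Minimum sample size to estimate a proportion.

    z:      z-score for the confidence level (1.96 for 95%)
    p:      expected proportion; 0.5 is the worst case
    margin: acceptable margin of error (0.05 = plus/minus 5%)
    """
    return math.ceil(z**2 * p * (1 - p) / margin**2)

n = sample_size()                    # 385 samples for +/-5% at 95% confidence
n_tight = sample_size(margin=0.03)   # a tighter margin needs far more data
```

Notice how quickly the required sample grows as the margin of error shrinks: halving the margin roughly quadruples the sample size.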
Check Sheet (Neuman, Cavanagh & Pande,
2001)
♦ Keep it simple. If the form is cluttered, hard to read, or confusing, there’s a risk of
errors or nonconformance.
♦ Label it well. Make sure there is no question about where data should go on the form.
♦ Include space for date, time, and collector’s name. These obvious details are often
omitted, causing headaches later.
♦ Organize the data collection form and compiling sheet (the spreadsheet you’ll use to
compile all the data) consistently.
♦ Include key factors to stratify the data.
Common types of checksheets include …
♦ Defect or Cause Checksheet. Used to record types of defects or causes of
defects. Examples: reasons for field repair calls, types of operating log discrepancies,
causes of late shipments.
♦ Data Sheet. Captures readings, measures or counts. Examples: transmitter power level,
number of people in line, temperature readings.
Check Sheet (Neuman, Cavanagh & Pande,
2001)
♦ Frequency Plot Checksheet. Records a measure of an item along a scale
or continuum. Examples: gross income of loan applicants, cycle time for
shipped orders, weight of packages.
♦ Concentration Diagram Checksheet. Shows a picture of an object or
document being observed on which collectors mark where defects actually
occur. Examples: damage done to rental cars, noting errors on application
forms.
♦ Traveler Checksheet. Any checksheet that actually travels through the
process along with the product or service being produced. The check-sheet
lists the process steps down one column, then has additional columns for
documenting process data. Some examples of traveler checksheet uses are
capturing cycle time data for each step in an engineering change order,
noting time or number of people working on a part as it is assembled,
tracking rework on an insurance claim form.
Keep an Eye on (Neuman, Cavanagh & Pande,
2001)
♦ Accuracy: How precise is the measurement: hours, minutes,
seconds, millimeters, two decimal places?
♦ Repeatability: If the same person measures the same unit with the
same measuring device, will they repeat the same results every time
they do it? How much variation is there between measurements?
♦ Reproducibility: If two or more people or devices measure the
same thing, will they produce the same results? What is the variation?
♦ Stability: How much do accuracy, repeatability, and reproducibility
change over time? Do we get the same variation in measures that we
did a week ago? A month ago?
Measurement (Neuman, Cavanagh & Pande,
2001)
Collect the Data
Implement your plans. Remember that part of your plan includes the
“sample size,” that is, the number of data points you have to collect.
Your data collection should stop when you’ve reached the appropriate
sample size, unless there were problems with some of that data. Do
not continue to collect data unless there are plans to make it a
standard part of the process.
Monitor Accuracy and Refine Procedures as Appropriate
Throughout the data collection, be sure to monitor both the
procedures and devices (if any) used to collect the data.
Measurement (Neuman, Cavanagh & Pande,
2001)
Implement and Refine the Measurement Process, you should have data
in hand that: Meet your data collection priorities. Were sampled
according to your plan. Reflect accurate, repeatable, reproducible, and
reliable measurement practices.
The Key Definitions: Units, Defects, and Defect
Opportunities (Neuman, Cavanagh & Pande, 2001)
The Six Sigma team needs to understand a few key terms both to collect and analyze data used to
determine the capability of its process:
♦ Unit: An item being processed, or the final product or service being delivered either to internal
customers (other employees working for the same company as the team) or external customers
(the paying customers). Examples: a car, a mortgage loan, a computer platform, a medical diagnosis,
a hotel stay, or a credit card invoice.
♦ Defect: Any failure to meet a customer requirement or performance standard. Examples: a poor
paint job, a delay in closing a mortgage loan, the wrong prescription, a lost reservation, or a
statement error.
♦ Defect Opportunity: A chance that a product or service might fail to meet a customer
requirement or performance standard.
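These three definitions combine into the standard DPMO (defects per million opportunities) calculation and its associated sigma level. A minimal sketch in Python; the unit and defect counts below are invented for illustration, and the 1.5-sigma shift is the conventional adjustment used in published sigma tables.

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Short-term sigma level, with the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + shift

# Hypothetical: 500 loan applications, 10 defect opportunities each,
# 25 defects observed in total.
d = dpmo(defects=25, units=500, opportunities_per_unit=10)  # 5000 DPMO
s = sigma_level(d)  # roughly 4.1 sigma
```

As a sanity check, 3.4 DPMO comes out at 6.0 sigma, which is where the familiar "Six Sigma = 3.4 defects per million" figure comes from.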
Two guidelines prevent the use of defect
opportunities from becoming a nightmare:
♦ First, you need to focus on defects that are important to the customer.
Consider a bank that regularly makes two kinds of mistakes: (1) mailing out
monthly statements a day late, and (2) entering interest payments a day late.
Which of these defects do you think is really important to most customers?
So we link opportunities in most cases to a CTQ Tree (see p. 135).
♦ Second, defect opportunities reflect the number of places where
something in a process can go wrong, not all the ways it can go wrong. So,
for example, you would define “wrong address” as one opportunity for a
defect on a database record rather than describing all the ways in which that
address could be wrong (incorrect street number, street name, wrong ZIP
code, etc.). Padding the count of opportunities makes performance look
better than it is, because the same number of defects is spread over more
opportunities.
Tips for Defining Defect Opportunities
(Neuman, Cavanagh & Pande, 2001)
Here are a few more tips on using defect opportunities for your products and
services:
♦ Focus on “routine” defects: Defects that are extremely rare shouldn’t be
considered as opportunities.
♦ Group closely related defects into one opportunity category: This will simplify
the work of gathering data.
♦ Be consistent: As Six Sigma spreads throughout your company, you should
consider using standard definitions of defect opportunities across the board.
♦ Change definitions only when necessary: The team will use the number of
defect opportunities to calculate a baseline sigma measure at the beginning of the
project, and then compare that number to the improved sigma number near the
end of the project. So stick with the same defect opportunities throughout the
project.
Revisit Your Problem Statement (Neuman,
Cavanagh & Pande, 2001)
Even before completing a thorough analysis of your data, it’s likely your
team has learned more about the initial problem that sparked this
project. Refine the Problem Statement as appropriate, perhaps by
providing more specificity about what the problem is or how it impacts
the organization.
Create a Plan for Analyze (Neuman, Cavanagh &
Pande, 2001)
By the end of Measure, you know how much data you’ve collected, and probably
have a gut feeling for how easy or difficult it’s going to be to analyze that data.
Think ahead and create a plan for the Analyze work. As before, if you are a novice
team leader, you can keep the plan simple:
♦ Assign responsibilities within the team for completing the data analysis (often,
the initial work is completed by a subteam, who presents the results to the team
for discussion).
♦ Set or confirm a target date for completion of the data analysis.
♦ Have your team meetings scheduled.
♦ If most people on the team are new to data analysis, you may want to check
with your Coach/Black Belt or a Master Black Belt to arrange for extra support.
Update Story Board (Neuman, Cavanagh &
Pande, 2001)
The Measure section of your storyboard should display your data
collection plan, a few data collection forms, and a revised Problem
Statement (if you changed it). You might also consider including a
process diagram that highlights problem areas. There may be measures
of the sigma level of the process here, as well as rough estimates of the
cost of defects and their rework in the process.
Prepare for the Tollgate Review (Neuman,
Cavanagh & Pande, 2001)
♦ What did you do well as a team in the Measure review?
♦ What could have been done better?
♦ What did the reviewers (your customers!) say?
♦ Which of the support materials (slides, overheads, handouts,
flipcharts, etc.) worked and which didn’t?
♦ How did you do on time? If it was too long, what could you do this
time to make sure you keep it brief? If it was too short, do you need to
add in more detail? Speak more slowly?
Tips for the Storming Stage (Neuman,
Cavanagh & Pande, 2001)
Don’t Panic
Recognize that storming reflects a high-energy state.
Solve obvious problems and eliminate any process steps that clearly
don’t add value for customers.
Make sure everyone on the team has an assignment for every
meeting and in between meetings, too
Be sure to educate/train team members in what is expected of them.
Work in pairs
Analyze Phase
Normality Test (Cudney & Kestle, 2018)
Data Analysis (Neuman, Cavanagh & Pande,
2001)
1. Know what you need to know. As we all know, there are a lot of numbers
floating around out in the workplace, and it’s easy to get drowned. Revisit your
project charter and problem statement regularly to keep in mind what the team is
trying to accomplish.
2. Have a hypothesis. There are dozens if not hundreds of ways to analyze data,
and a Six Sigma team can waste a lot of time following blind alleys if they aren’t
careful. Now that you have all the evidence (à la Sherlock!), having a hypothesis can
help you decide how to analyze that data. For example:
Hypothesis: The rise in complaints at Chez Chic Restaurant is the result of
having newer, inexperienced waitstaff.
Analysis Approach: Divide the customer complaint data into two sets—data from
customers served by new staff and data from people served by experienced staff.
Look for systematic differences in the two sets.
Ask lots of questions (Neuman, Cavanagh &
Pande, 2001)
If you limit your investigation to one hypothesis or one question, you won’t ever know if you’ve
asked the right question. A better approach is to ask lots of questions about your data, and find out
through analysis which of those questions are important and which aren’t. Here are just a few
questions the restaurant team above might ask:
♦ Do customers served by new waitstaff complain more often than other customers? (frequency of
the problem)
♦ What do customers of new waitstaff complain about? How does that compare to complaints
received from customers of experienced wait-staff? (type of problem observed)
♦ If customers of new waitstaff complain more, does it mean they are less likely to return? (impact
of the problem)
Your team needs to have a deep understanding of the problem in order to make sound choices
about where to spend its time and where to implement solutions. Otherwise you can end up
wasting three months fixing a problem that occurs infrequently or that has no impact on customers.
(Neuman, Cavanagh & Pande, 2001)
• Do the defects clump up in categories?
• Do the problems appear more in one place than another?
• Are there times when the defects really are prevalent?
• Are there any things or variables that change as the problem or
defects change?
Introduction to the Power Tools (Neuman,
Cavanagh & Pande, 2001)
The following three tools will help the team pinpoint causes:
a. Pareto Charts: a special type of bar chart that helps a team focus on
the components of the problem that have the biggest impact. Used
with discrete or attribute data.
b. Run (Trend) Charts: a very important tool that helps a team look at
whether there are patterns over time in the problem. Used with
continuous data.
c. Histograms (Frequency Plots): Used with either continuous data or
counts of attributes (discrete data).
Pareto Charts (Neuman, Cavanagh & Pande,
2001)
Pareto charts are based on the “Pareto Principle” that 80% of the
effects we see are due to 20% of the causes. The split isn’t always
exactly 80 and 20 in real data, but the effect is often the same.
Two things to note about Pareto charts:
1. The vertical axis needs to be at least as tall as the total count
number. That’s the only way you can visually judge how much any given
type of defect contributes relative to the problem as a whole.
2. The categories of data (individual bars) are arranged in descending
frequency.
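Building the table behind a Pareto chart amounts to sorting defect categories in descending frequency and accumulating percentages. A short sketch; the defect categories and counts below are invented for illustration.

```python
from collections import Counter

# Hypothetical defect log (category names invented for illustration)
defects = (["late shipment"] * 42 + ["wrong item"] * 18
           + ["damaged"] * 7 + ["missing invoice"] * 3)

counts = Counter(defects).most_common()   # bars in descending frequency
total = sum(n for _, n in counts)

pareto, running = [], 0
for category, n in counts:
    running += n
    pareto.append((category, n, round(100 * running / total, 1)))
# The top two categories already cover about 86% of all defects,
# echoing the 80/20 principle.
```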
Run Charts (Neuman, Cavanagh & Pande,
2001)
Like this school team, all Six Sigma teams need to gain perspective on
what their process is doing now versus what it has been capable of
doing in the past. One way to do that is to create run charts, which are
also known as time plots or trend charts. Run charts are constructed
from a measurement that has been gathered over time (usually at
regular intervals, such as hourly, daily, weekly…) and then plotted in
time order.
To interpret a run chart, you have to understand something about
variation. As hard as it may be to believe, every cause of variation can
be put into one of two categories: special causes and common causes.
The difference is important because teams need to eliminate special
causes first—before they work on common causes.
Run Charts (Neuman, Cavanagh & Pande,
2001)
Run charts (time plots) are also popular with teams because, like
Pareto charts, they are easy to construct and easy to interpret, and can
be used to plot either continuous measures or count (discrete) data.
The tricky point can sometimes be gathering sufficient data.
Interpretation of a run chart will be more reliable if you can gather at
least 25 data points. However, if you are plotting monthly or yearly
data—like the school team described above—you may have to make do
with fewer points; just be sure to take your interpretations with a grain
of salt!
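One simple way to screen a run chart for non-random patterns is to count runs about the median: too few runs suggest a shift or trend, too many suggest alternation. A minimal sketch with invented weekly cycle-time data; the exact decision limits for "too few/too many" come from published run-chart tables and are not shown here.

```python
from statistics import median

def runs_about_median(data):
    """Count runs of consecutive points on one side of the median.

    Points exactly on the median are dropped, since they sit on
    neither side.
    """
    m = median(data)
    sides = [x > m for x in data if x != m]  # True = above, False = below
    return 1 + sum(1 for a, b in zip(sides, sides[1:]) if a != b)

trend = [1, 2, 3, 4, 10, 11, 12, 13]     # one shift in level: only 2 runs
sawtooth = [5, 9, 4, 10, 3, 11, 2, 12]   # alternating pattern: 8 runs
```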
Using Histograms to Understand the Process
(Neuman, Cavanagh & Pande, 2001)
• The histogram displays continuous
data, such as time, measures of
length or weight, dollar amounts,
and any other measures that can
be sub-divided into fractions. The
data is displayed on a chart on
which the horizontal axis is marked
off in increasing values (from left
to right), and the vertical axis shows
the frequency. You can use either
bars or individual symbols (such as
a stack of dots) to reflect the
counts (frequency) of each data
value.
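Under the hood, a histogram simply counts continuous values into equal-width bins along the horizontal axis. A minimal sketch with hypothetical order cycle times in days:

```python
def histogram_bins(data, bin_width):
    """Count continuous values into equal-width bins.

    Returns a dict mapping each bin's lower edge to its frequency,
    in increasing order of the edges.
    """
    bins = {}
    for x in data:
        edge = (x // bin_width) * bin_width   # lower edge of x's bin
        bins[edge] = bins.get(edge, 0) + 1
    return dict(sorted(bins.items()))

# Hypothetical order cycle times (days)
times = [2.1, 2.4, 3.0, 3.2, 3.3, 3.8, 4.1, 5.6]
freq = histogram_bins(times, bin_width=1.0)
```

Plotting the frequencies as bars over the sorted edges gives the histogram itself; the choice of `bin_width` controls how much detail of the distribution's shape is visible.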
The secret to uncovering the underlying causes of
problems? (Neuman, Cavanagh & Pande, 2001)
You have to dig deep beneath the surface symptoms of that
problem. By the time you finish the exploration phase of your analysis,
your team should have both knowledge about when, where, and how
the problem manifests itself, and lots of ideas about possible causes.
The trick at this stage is keeping focused on the problem definition and
organizing your detective work to make sure the causes you choose to
study address that problem.
The secret to uncovering the underlying causes of
problems? (Neuman, Cavanagh & Pande, 2001)
The two most common tools used at this point are the cause-and-effect
diagram and the relations diagram. They provide a critical link that will help your
team make sure you’ve isolated the underlying or root causes of a problem. Two
important notes about these tools:
♦ First, they help you think logically about potential causes of a problem; you will
still need to gather data to verify which are the real causes of a problem.
♦ Second, their effectiveness is directly related to the creativity and depth of the
thinking that goes into creating them. That’s why these tools are best used with
your team as a whole—you want many minds brainstorming ideas so you have a
broad and deep list of potential causes.
Cause and effect Analysis
Cause-and-effect analysis lets a group start with an “effect”—a problem or, in
some cases, a desired effect or result—and create a structured list of
possible causes for it.
Benefits of cause-and-effect diagrams include:
♦ It’s a great tool for gathering group ideas and input, being basically a
“structured brainstorming” method.
♦ By establishing categories of potential causes, it helps ensure a group
thinks of many possibilities, rather than focusing on a few typical areas (e.g.,
people, bad materials).
♦ Using a cause-and-effect diagram to identify some “prime suspect”
causes gives focus to help begin process and data analysis.
♦ They help get the Analyze phase started, or keep the thought processes
moving after an initial exploration of data and the process.
Cause and effect Analysis (Neuman, Cavanagh
& Pande, 2001)
Analyzing Complex Systems: The Relations
Diagram (Neuman, Cavanagh & Pande, 2001)
A relations diagram (sometimes called an interrelations diagram or digraph) may be a more appropriate cause analysis tool than a cause-and-effect diagram.
Interpreting a relations diagram is a matter of counting the number of
“in” and “out” arrows for each potential cause: those with the most
“out” arrows are underlying or potential root causes. In this banking
team’s example, two of the top boxes—low pay scale and lack of
opportunities for advancement—have the most “out” arrows, and
therefore warrant further investigation. (In contrast, “poor job
satisfaction” and “stressful work environment” have a lot of “in”
arrows—that means they are the effect of other underlying causes.)
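The arrow-counting logic can be sketched in a few lines. The edge list below is hypothetical, loosely modeled on the banking example; it is illustrative, not the team's actual data:

```python
from collections import Counter

# Hypothetical relations-diagram arrows: each tuple is (cause, effect),
# i.e. an arrow "out" of the cause and "in" to the effect.
arrows = [
    ("low pay scale", "poor job satisfaction"),
    ("low pay scale", "high staff turnover"),
    ("lack of advancement", "poor job satisfaction"),
    ("lack of advancement", "stressful work environment"),
    ("high staff turnover", "stressful work environment"),
    ("stressful work environment", "poor job satisfaction"),
]

out_counts = Counter(src for src, _ in arrows)  # "out" arrows per factor
in_counts = Counter(dst for _, dst in arrows)   # "in" arrows per factor

# Factors with the most "out" arrows are candidate root causes; factors
# with mostly "in" arrows are effects of other underlying causes.
ranked = sorted(out_counts.items(), key=lambda kv: kv[1], reverse=True)
for factor, outs in ranked:
    print(f"{factor}: out={outs}, in={in_counts.get(factor, 0)}")
```

With these invented arrows, "low pay scale" and "lack of advancement" top the ranking, mirroring the interpretation described above.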
Statistical Verification of Causes (Neuman,
Cavanagh & Pande, 2001)
There are two basic approaches to using statistics to determine cause-
and-effect relationships:

♦ Judging the degree to which a cause (X) and an output (Y) are
correlated. This can be an approximate assessment using the patterns
seen in a scatter plot (see below), or an actual calculation using
regression and correlation formulas (see the end of this chapter).

♦ Codifying or stratifying the data to expose patterns (or the lack thereof).
Scatter Diagram (Neuman, Cavanagh &
Pande, 2001)
Scatter diagrams provide another way for teams to test hypotheses about the
causes of problems. These diagrams use “paired data” to test the correlation
between X variables and Y outcomes. Paired data, as the name suggests, are data
of two variables gathered at the same time on each item being measured. The
“pairs” typically reflect a potential cause and its effect: like the complexity of a form
that needs completion (an X variable or potential cause) and the actual time it
takes to complete the form (a Y outcome or effect). The paired data are plotted
along the X and Y axes of a chart, and then analyzed for signs of correlation.
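The correlation behind a scatter diagram can also be computed directly as Pearson's r. The paired data below are invented form-complexity scores (X) and completion times (Y), used only to illustrate the calculation:

```python
import math

# Hypothetical paired data: form complexity score (X) and the minutes it
# takes to complete the form (Y), measured together on each item.
x = [2, 3, 5, 6, 8, 9, 11, 12]
y = [5, 7, 9, 11, 15, 16, 20, 22]

def pearson_r(xs, ys):
    """Pearson correlation coefficient: +1 strong positive, -1 strong negative."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

r = pearson_r(x, y)
print(f"r = {r:.3f}")  # close to +1: longer forms take longer to complete
```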
Process Mapping (Neuman, Cavanagh &
Pande, 2001)
During the Define stage of its project, your team likely created a high-level process
map called SIPOC. You may have even used it to drill down to a little more detail in
order to identify a place to start taking measures.

Putting together a detailed process map is always a lively and rewarding experience
for team members. They come to respect the amount of work other people do in
the process, and they also begin to realize just how much variation there is in the
methods that people use, particularly in service processes.

Another effective tool at this stage is a deployment flowchart, which adds a unique
element to a normal flowchart: who is responsible for which activities. Deployment
flowcharts are particularly helpful when working in a process that has a lot of
handoffs between individuals or groups
First-Level Analysis: Identifying Obvious Process
Problem (Neuman, Cavanagh & Pande, 2001)
♦ Disconnects: Steps in the process where there are breakdowns in communications
between shifts, between customers and suppliers, or manager and employees.
♦ Bottlenecks: Points in the process where volume of work often overwhelms capacity,
slowing the entire process downstream. (If work has to wait for someone to return from
vacation, you’re probably looking at a bottleneck.)
♦ Redundancies: Steps in the process that duplicate activities or results elsewhere in the
process; for example, the same information coming from two steps and going to the same
place.
♦ Rework loops: Points where units with missing parts or information have to be sent
back upstream or delayed at one step until the necessary work is done. Inspection steps
often trigger rework loops.
♦ Decision/Inspection points: Process steps where a variety of checks and appraisals are
made, creating delays and rework loops. In organizations plagued by high defect levels,
these points abound.
Advanced Analytical Tools (Neuman,
Cavanagh & Pande, 2001)
These kinds of questions confront Six Sigma teams when they need to
♦ Test that there is a meaningful difference between sets of data.
♦ Create a valid hypothesis about the cause of the problem.
♦ Validate or disprove various hypotheses about causes.
♦ Prove the level of correlation and causation to someone who insists on
numbers, not just graphs.
Advanced Analytical Tools (Neuman,
Cavanagh & Pande, 2001)
To answer these questions, statisticians have come up with standards
that apply to most Six Sigma projects: they have operationally defined
as “statistically significant” anything that has less than a 5% probability
of happening by chance alone. To test for this probability, statisticians
have devised a number of tests, including …
♦ chi-square tests
♦ t-tests
♦ analysis of variance (ANOVA)
♦ multivariate analysis
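As an illustration of the 5% standard, here is a hand-rolled chi-square test on a hypothetical 2x2 table of defect counts by site. The counts are invented; the 3.84 critical value (1 degree of freedom, 5% level) is the standard textbook figure:

```python
# Hypothetical counts: (defects, OK) at two sites.
# Chicago: 30 defects out of 200; Los Angeles: 15 defects out of 200.
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 table: row 1 = (a, b), row 2 = (c, d)."""
    n = a + b + c + d
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

chi2 = chi_square_2x2(30, 170, 15, 185)

# Critical value for 1 degree of freedom at the 5% level is 3.84: a larger
# statistic means the site difference is unlikely to be chance alone.
significant = chi2 > 3.84
print(f"chi-square = {chi2:.2f}, significant at 5%: {significant}")
```

Here the statistic exceeds 3.84, so the difference between sites would be called statistically significant under the 5% convention.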
Advanced Analytical Tools (Neuman,
Cavanagh & Pande, 2001)
• Another set of tools that Six Sigma teams might use to test causal
theories are correlation and regression analysis. These are tests to
show the numerical measure of correlation between X variables and Y
outputs. If the team has paired data, regression analysis can help
measure the degree to which different variables influence the
outcomes. For example, a Six Sigma team finds an apparent positive
correlation between the speed that phone orders are taken by
salespeople and number of defects in the orders taken. By calculating
the “correlation coefficient,” the team discovers that only about 25%
of the defects correlate to the speed with which the orders are taken.
This is a powerful clue, but the team will need to keep probing for
other causes.
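A minimal least-squares sketch shows how a team might quantify "degree of influence": r-squared is the share of variation in Y explained by X (a correlation coefficient of 0.5, for instance, explains only 25% of the variation). The speed and defect numbers below are invented for illustration:

```python
def linreg(xs, ys):
    """Simple least-squares fit; returns (slope, intercept, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((a - mx) ** 2 for a in xs)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    # r^2 = 1 - (residual variation / total variation in y)
    ss_tot = sum((b - my) ** 2 for b in ys)
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(xs, ys))
    return slope, intercept, 1 - ss_res / ss_tot

speed = [10, 12, 14, 16, 18, 20, 22, 24]  # hypothetical orders taken per hour
defects = [2, 1, 3, 2, 4, 3, 5, 3]        # hypothetical defects per shift

slope, intercept, r_squared = linreg(speed, defects)
print(f"slope={slope:.3f}, r^2={r_squared:.2f}")
```

An r-squared well below 1 is the numerical version of the clue in the vignette: speed matters, but the team must keep probing for other causes.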
GETTING READY FOR IMPROVE (Neuman,
Cavanagh & Pande, 2001)
By the end of the Analyze phase of DMAIC, your team should have confirmed
the root causes of problems with your processes, products, and services. And
now that you’ve pinpointed the cause, you stand a better chance of
implementing changes that will have a lasting effect. Before you move on,
however, there are a few last tasks to complete:
1. Document the verified causes.
2. Update your project storyboard.
3. Create a plan for Improve.
4. Prepare for the tollgate review by your Sponsor or Leadership Council.
5. Celebrate.
Celebrate (Neuman, Cavanagh & Pande,
2001)
As before, take time to celebrate the work and progress on your Six
Sigma projects. Be sure to point out particular challenges that the team
handled well in its Analyze work, for example:
♦ Innovative data analyses that might set a precedent for other teams
in your organization.
♦ Patience in working through process analyses.
♦ People maintaining their commitment, carrying through on
assignments.
Once these steps are complete, your team is ready for Improve.
The Six Sigma Way Team Fieldbook: An Implementation Guide for Process Improvement Teams (Neuman, Cavanagh & Pande, 2001)
Normal Data and Team Norms Guiding the Six
Sigma Team in the Analyze Stage (Neuman,
Cavanagh & Pande, 2001)
1. Draw attention to progress the team’s already made.
2. Fix “low-hanging fruit” and build some momentum.
3. Revisit and update team ground rules.
4. Pay more attention to how the team works together.
Draw attention to progress the team’s already
made. (Neuman, Cavanagh & Pande, 2001)
“When a tree falls in the woods and there’s no one there to hear it,
there isn’t any sound. I know. I was there in the woods when it fell.”

There are two kinds of progress the team should be making by the time
it gets to Analyze. The first has to do with progress on the Six Sigma
project and how to get the word out to others in the organization. The
other has to do with the progress the team has made internally,
evolving from a group of people into something resembling a team.

Project Progress: Update Your Storyboard


BUILD MOMENTUM BY FIXING “LOW-HANGING
FRUIT” (Neuman, Cavanagh & Pande, 2001)
To paraphrase an old saying, “Nothing motivates a team like success.” If
a Six Sigma team has to wait six months or longer before it sees some
improvements in the process it’s working on, there’s likely to be a drop
in motivation within the team. To prevent this decline, once data
begins to throw some light on the process, the team needs to start
fixing some of the obvious things in the process that need fixing, the
so-called “low-hanging fruit.” Where these improvements can be made,
teams gain a boost of energy to continue work on the “high-hanging
fruit” that will probably be harder to pick off.
REVISIT AND UPDATE TEAM GROUND RULES
(Neuman, Cavanagh & Pande, 2001)
The ground rules that the team needed in its early days of Defining a
problem and project usually need to be revised and updated in the
Analyze phase, if not earlier. For example, early rules on “respecting
everyone’s opinions” will need to be revised as the team comes to rely
more and more on the analysis of objective data. How can I respect the
opinion of someone on the team who refuses to use data, or analyze it,
and relies instead on bluff and bluster to get their way? The old ground
rules need to be discussed, amended, and added to as the team
matures.
PAY MORE ATTENTION TO HOW THE TEAM WORKS
TOGETHER (Neuman, Cavanagh & Pande, 2001)
The ultimate goal of the Six Sigma team is not simply to improve a
process and increase customer satisfaction, important though these
goals may be. Of even greater importance is the new way of doing
things in your organization. The Six Sigma way is to gather data, analyze
it and then learn from that analysis.
The Six Sigma team needs to learn about itself by studying its own
activities and processes. How the team operates and how its members
work with one another provide opportunities for learning that far too
many teams ignore in their desire to solve problems and “get the
project over.”
TROUBLESHOOTING AND PROBLEM PREVENTION
FOR ANALYZE (Neuman, Cavanagh & Pande, 2001)
Failure #1: The Team Bogs Down in the Paralysis of Analysis and Can’t Identify
Root Causes of Problems and Unwelcome Variation
Why this happens: The tools to analyze the data collected range from simple
comparative logic (“Does the data explain why we see this problem in Chicago and
not in Los Angeles?”) all the way to measures of statistical probability and
multivariate analysis. Some teams resist concluding that they have in fact
discovered the major sources of variation in their process, and keep looking for
“one more proof.” The team heaps up reams of statistical proof when it should be
moving on to improvement.
How to prevent it: At the end of the day, statistics only offer us probable proof of
the causal relationship between key variables and key outputs. These probabilities
must be balanced by the common sense and experience of the team. Teams should
only use enough statistical analysis to reach reasonable conclusions of cause, and
then manage the risks of improvement. To paraphrase an old saying, “If the only
tool you have is statistics, everything looks like a correlation coefficient.”
TROUBLESHOOTING AND PROBLEM PREVENTION
FOR ANALYZE (Neuman, Cavanagh & Pande, 2001)
Failure #2: Jumping to Conclusions About Causes Before All the Data Is In

Why this happens: Of all the challenges facing Six Sigma team members, reaching this
step without having already made up their minds about the cause of the problem/defect may be
one of the toughest! After all, team members were most likely chosen because they have some
relevant experience or knowledge about the process or problem—which means they’ve likely been
living with the “defect” for some time, and naturally have their own theories about what’s going on.
And in the long run, having well-informed ideas about causes is what’s going to lead to a permanent
solution.

How to prevent it: While you can’t really prevent people from making up their minds early in the
project, you can prevent their decisions from harming the project by insisting that the team have
data to back up its conclusions. Remind the team that they will be asked by their Sponsor and
others to show what led them to their conclusions. Make sure that you encourage open-minded
thinking during brainstorming discussions, especially when it comes to creating a cause-and-effect
diagram, for instance.
Improve Stage
Remember the following guidelines:
(Neuman, Cavanagh & Pande, 2001)
♦ Whatever the team selects as a solution should address the root causes
of the problem and the goal the team set for itself in the Project Charter.
♦ Although the team will brainstorm many possible solutions, one or two
will be better than the others; the team must decide which are the best
options and determine what it will take to make them work.
♦ The solutions must not cost so much or be so disruptive that the
expenses outweigh the benefits in the long run.
♦ The chosen solutions must be tested to prove their effectiveness before
they are completely implemented.

But even before the team reaches the point of implementing solutions to
make improvement, it must struggle with the poverty of imagination that
often follows the brilliance of analysis, as the following vignette shows.
STEPS TO WORKABLE, EFFECTIVE SOLUTIONS
(Neuman, Cavanagh & Pande, 2001)
There are five steps for reaching that goal:
1. Generate creative solution ideas.
2. Cook the raw ideas.
3. Select a solution.
4. Pilot test.
5. Implement full-scale.
Generate creative solution ideas. (Neuman,
Cavanagh & Pande, 2001)
1. Be clear about what your brainstorm is supposed to produce. The team needs to be focused on the target.
2. Set a quota of ideas. Left alone to brainstorm, most people come up with three or four ideas and quit. If
asked for 15 ideas, most people double the “average” output. Go for quantity!
3. Play off other people’s suggestions. Not easy to do this when you’re trying to be brilliant, but even Edison
got many good tips from other people. Listen carefully!
4. List ideas without comment, discussion, or criticism. Tearing up each idea as it gets written down is not only
time-consuming, it’s depressing. Rather than spending 10 minutes talking about the first idea to pop out, get
10 ideas and discuss them afterward. Having people write down 10 ideas on separate sticky notes before
slapping them on the wall will discourage premature discussion. Keep the storm moving!
5. Challenge assumptions and go a little crazy. Easier said than done, unless you can think like a three-year-old
and keep asking “why” about everything you’re working on. Why are tires underneath the car? Why can’t
they go on top of the car? That would save wear and tear on the treads, especially if they were filled with
helium! Get crazy before you get practical!
6. Brainstorm one day; check back the next day. The old notion of “let me sleep on it” makes a lot of sense
around brainstorming. Ideas just seem to get a little crazier and better if you come back to them the next day.
Identifying Process Changes (Neuman,
Cavanagh & Pande, 2001)
Simplification.
Straight-line processing.
Parallel Processing
Alternative Paths
Bottleneck Management
Front-loaded decision making
Standardized Options
Single point of contact or multiple contacts
STEP 2: COOK THE RAW IDEAS: SYNTHESIZING
SOLUTION IDEAS (Neuman, Cavanagh & Pande,
2001)
A. Refine the brainstormed list.
B. Identify what portions of the problem each of the individual solution ideas will address.
C. Use the information from B to generate “complete solution” ideas.
D. Document the “full solution” ideas.
Step 3: SELECT A SOLUTION (Neuman, Cavanagh & Pande, 2001)
You have several options at this point:
A. Perform a “minimum requirement” test.
B. Assess the amount of benefit (impact) for the effort required.
C. Do a formal analysis of pros/cons, costs and benefits.
STEP 4. PILOT TEST (Neuman, Cavanagh &
Pande, 2001)
Having reached a decision on a solution, the team is ready to
implement its choice, but has to remember the “five
Ps”: Planning, Piloting, Problem Prevention and Proactivity
IMPLEMENT FULL-SCALE (Neuman, Cavanagh
& Pande, 2001)
It’s a big mistake to get over-confident after a successful pilot.
Compared to full-scale implementation, the pilot is usually a much
more controlled situation, with fewer variables to manage and fewer
people involved. Other problems are almost sure to arise in the
conversion from test to final roll-out of a new process. Some of the
critical ingredients—all common sense but worth noting—
To Do Before Implementing (Neuman, Cavanagh & Pande, 2001)
Training: New approaches need to be taught (and learned), old habits
broken.
Documentation: References on how to do things, answers to
frequently asked questions, process maps, etc., are all important.
Troubleshooting: People need to be clear about whose responsibility it
is to deal with issues that arise.
Performance management: Watch for needs/opportunities to revise
job descriptions, incentives, performance review criteria.
Measurement: Document results.
Getting Ready for Control (Neuman,
Cavanagh & Pande, 2001)
By the end of the Improve phase of DMAIC, your team should have led the
full-scale implementation of a solution that was clearly linked to root causes
of the targeted problem. The issue in Control will be how to prevent
backsliding to the older methods. Before you move on, there are a few last
tasks to complete:
1. Finalize any process documentation.
2. Update your project storyboard.
3. Create a Plan for Control.
4. Prepare for the Tollgate Review by your Sponsor or Leadership Council.
5. Celebrate.
Finalize Process Documentation (Neuman,
Cavanagh & Pande, 2001)
In the course of your pilot tests and full-scale implementation, you
should have created new process maps and/or other documentation to
help people implement the new procedures.

As a final check, make sure that any refinements to the procedures


have been captured, and useful job aids to help with daily
implementation are in place.
2. Update Your Project Storyboard (Neuman,
Cavanagh & Pande, 2001)
Under Improve, teams often display the results of their efforts to
brainstorm solutions that will reduce or eliminate the causes of their
problem. They often include information about experiments and pilot
tests of solutions, along with analytical charts showing how they
reduced variation and the number of defects produced. Charts
designed to prevent problems (based on failure mode and error
analysis methods) can be shown here, too. Another sigma calculation
and an updated Pareto or histogram are good ways to show
improvements. A cost benefit analysis showing what the payoff of the
solutions might be rounds out the Improve section.
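The "sigma calculation" mentioned above is usually a conversion from defects per million opportunities (DPMO) to a sigma level. A minimal sketch, using the conventional 1.5-sigma shift; the defect counts below are the standard table values for 3 and 4 sigma, used here only as a sanity check, not project data:

```python
from statistics import NormalDist

def sigma_level(defects, opportunities, shift=1.5):
    """Convert a defect rate to a sigma level with the conventional 1.5-sigma shift."""
    p_ok = 1 - defects / opportunities  # long-term yield
    return NormalDist().inv_cdf(p_ok) + shift

# Standard reference points: 66,807 DPMO is 3 sigma; 6,210 DPMO is 4 sigma.
before = sigma_level(defects=66_807, opportunities=1_000_000)
after = sigma_level(defects=6_210, opportunities=1_000_000)
print(f"before: {before:.2f} sigma, after: {after:.2f} sigma")
```

A before/after pair like this, next to an updated Pareto or histogram, is a compact way to show improvement on the storyboard.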
Create a Plan for Control (Neuman, Cavanagh
& Pande, 2001)
Planning for Control is really planning for the long-term maintenance and
improvement of the process. The key aspect here is that you will probably
want to involve more people than just those on your team—because it’s not
just team members who will be responsible for making sure everything is
kept in place! Your plan should include:
• Time to identify likely ways in which the new procedures may fall apart.
• Time to develop measures to prevent that backsliding.
• Ways to involve everyone who works with the process.
• Time to develop ways to monitor the process performance (including the
quality of the product/service output).
• Methods for making sure that the hand-offs are complete and
responsibilities clear.
4. Prepare for Your Tollgate Review by Your Sponsor
or Leadership Council (Neuman, Cavanagh &
Pande, 2001)
Before you prepare for the Improve Tollgate Review, do a debrief with the
team on what happened with your Analyze tollgate:
• What improvements did you try out that were different from the Measure
review? How well did those work? Should you do the same thing this time,
or try something different?
• What comments or suggestions did your reviewers (your customers!) have?
• Which of the support materials (slides, overheads, handouts, flipcharts,
etc.) worked and which didn’t?
• How did you do on time? If it was too long, what could you do this time to
make sure you keep it brief? If it was too short, do you need to add in more
detail? Speak more slowly?
5. Celebrate (Neuman, Cavanagh & Pande,
2001)
By now, your team should have actual results to celebrate! Since a portion of
your Improve work most likely involved people not on the team, consider
holding an event (meeting, lunch, etc.) where all those involved are invited:
• Highlight data that shows improvements have been made.
• Thank everyone involved in the pilot test and full-scale implementation.
• Acknowledge team members for their continued commitment to the
project.
• Invite everyone to help the team maintain the improvements.
Once these steps are complete, your team is ready for Control.
Control
Control Phase
The Control phase focus is to maintain the transfer function equation, Y = f(X), in order to sustain the improvements in the Ys. The team must:
Evaluate and validate the solution.
Assess the capability of the process within, between, and over time with respect to the sources of variation causing the problem.
Establish control systems to ensure the solution works for the long term.
Document and monitor the process using the metrics defined earlier in the project.
Standardize procedures, also known as standard work.
Hand over the process to the process owners with training.
Calculate the financial gains and document the entire project, including plans going forward, in a final report.
Parts of Control Phase (Neuman, Cavanagh &
Pande, 2001)
1. Discipline.

2. Documenting the improvement.

3. Keeping score: establishing ongoing process measures.

4. Going the next step: building a process management plan.


Part 1: Discipline (Neuman, Cavanagh & Pande, 2001)
• Maintaining a stable and predictable process requires discipline at
both the personal and organizational level.
• Discipline starts with the very processes whereby companies select,
train, track, and especially evaluate and reward their own employees.
• If disciplined and proactive processes are not in place, control of
improved processes will be hit-and-miss, left to the individual
employee and subject to luck, not skill.
• Discipline at the personal level is difficult unless you and every other
person who works on the process understand
the reasons and benefits for monitoring, control, and improvement.
PART 2. DOCUMENTING THE IMPROVEMENT
(Neuman, Cavanagh & Pande, 2001)
Write clearly and use pictures—flowcharts, photographs, charts, and videotapes—whenever possible. Avoid jargon and technical
terminology, unless you can correctly assume that the person looking after the improvement is as fluent as you are with “insider”
language

When you’re unpacking the new computer, which one do you read, the two-inch thick manual or the 10-page “Highlights for
Operating Startup”? (Or maybe you don’t read either one!) Keep it brief!

Anticipate problems and warning signs that cropped up during the pilot phase. These are the document equivalent of those glass
panels that say “Break in Case of Fire and Remove Axe.”

If someone has to take the company shuttle over to the third floor of building 16 or spend an hour on the computer to dig out
required procedures, they probably won’t bother, and undesired variation will creep back into the process.

If the process is too complicated, people will stop updating what they’re doing, until what the documents say and what people
actually do become very different things. Some companies handle this problem by having a department specially devoted to
“documentation control,” but we recommend you keep control of documentation in the hands of people who actually manage the
process and know what needs to be updated or dropped.
PART 3. KEEPING SCORE: ESTABLISHING ONGOING
PROCESS MEASURES (Neuman, Cavanagh & Pande,
2001)
There are three obvious starting points where your team should look for measures:
a. Examine a SIPOC map of the improved process. As usual, customer
requirements are the starting point. Obviously the Six Sigma team must measure
process outcomes for conformance to customer requirements, defects, and
process variation.
b. Decide which upstream process measures are linked to the improvements,
measures that will predict problems downstream in the outputs. For example, if
“on time” is a key customer requirement and process measurement shows that two
of the key process steps are taking longer and longer, the team needs to find and
eliminate the cause of the trend before it causes the output to be late—a defect.
c. Look at critical input measures that help to predict the quality of process steps
and key outputs
PART 4. GOING THE NEXT STEP: BUILDING A
PROCESS MANAGEMENT PLAN (Neuman,
Cavanagh & Pande, 2001)
Process management plan can cover:
♦ Current process map. The manager of the improved process needs to see at a glance the flow of
activities and decisions in the process. A concise process map provides this visual aid.
♦ Action alarms. Develop a process response plan that clearly marks the points in the process
where measures can take the pulse of inputs, process operation, and outputs. With thresholds
indicating when the quality of any of these is degrading, the process response plan lets the process
manager know when to take action. For example, the failure to complete three customer orders on
time could signal the need to go to a contingency plan for deliveries.
♦ Emergency fixes. Once an action alarm goes off, it’s important to have emergency fixes or back-
up plans already spelled out so that employees don’t have to improvise them. Contrary to the old
saying, very few people work well under pressure.
♦ Plans for continuous improvement. By tracking the occurrence of problems in the process, the
process response plan also gives a basis for deciding on the need to have a Six Sigma team overhaul
weak parts of the process. Once enough process response plans are in place, and information from
them is collected and analyzed, managers will have an extensive pool of projects to be attacked by
future Six Sigma teams.
ENDING THE PROJECT (Neuman, Cavanagh &
Pande, 2001)
As tired as team members get of the project, sometimes it’s hard to
bring everything to a close. After all, by this time most teams have a
high level of camaraderie and are likely working very well as a unit. Still,
even good things must come to an end. In this case, that means:
• 1. Completing your storyboard.
• 2. Preparing for the final tollgate review.
• 3. Celebrating the end!
CONTROL DO’S (Neuman, Cavanagh & Pande,
2001)
Do
• Document the improved process. Keep the documents brief, clear,
and visual wherever possible.
• Balance your measures. Select key input and process measures to
predict and interpret output measures.
• Create channels of information back from the customer to you.
CONTROL DONT’S (Neuman, Cavanagh &
Pande, 2001)
• File the updated process documents away. If you do, people will
follow their own ideas, variation will increase, problems will occur …
you know the story.
• Expect the data you collect to confirm your assumptions. It’s easy to
accept data that confirms your pet theories. Be prepared—and open-
minded—when the data refute what you expected to see.
• Forget the process maps. They are worth thousands and thousands
of unread words in three-ring binders.
Failure Modes and Effects Analysis (FMEA)
(George, 2002)
Like several other tools described previously in this chapter, failure modes and effects analysis
(FMEA) is primarily a tool of focus. FMEA is used to prioritize risks to the project and document
recommended actions. Each potential type of failure of a product or process is assessed relative to
three criteria on a scale of 1 to 10:

• The likelihood that something will go wrong (1 = not likely; 10 = almost certain).

• The detectability of failure (1 = likely to detect; 10 = very unlikely to detect).

• The severity of a failure (1 = little impact; 10 = extreme impact, such as personal injury or high
financial loss).

The three scores for each potential failure are multiplied together to produce a combined rating
known as the Risk Priority Number (RPN): those with the highest RPNs provide the focus for
further process/redesign efforts.
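The RPN arithmetic can be sketched directly. The failure modes and 1-10 scores below are hypothetical:

```python
# Hypothetical FMEA rows: 1-10 scores for occurrence (likelihood),
# detection (1 = easy to detect, 10 = very hard), and severity.
failure_modes = [
    {"mode": "wrong part shipped", "occurrence": 4, "detection": 7, "severity": 8},
    {"mode": "label misprint", "occurrence": 6, "detection": 3, "severity": 4},
    {"mode": "late delivery", "occurrence": 7, "detection": 2, "severity": 5},
]

# RPN = occurrence x detection x severity.
for fm in failure_modes:
    fm["rpn"] = fm["occurrence"] * fm["detection"] * fm["severity"]

# Highest RPN first: these failure modes get the redesign focus.
ranked = sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True)
for fm in ranked:
    print(f'{fm["mode"]}: RPN={fm["rpn"]}')
```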
Gage Repeatability and Reproducibility (R&R)
(George, 2002)
In some ways, it might be argued that gage repeatability and
reproducibility (gage R&R) should appear first on everyone's tool list,
because it's of fundamental importance. Implicit in our discussion is
the assumption that the measurements being taken are accurate and
consistent. But this assumption is not always true. Gage R&R is the
method by which physical measurement processes are studied and
adjusted to improve their reliability. "Repeatability" means that
someone taking the same measurement on the same item with the
same instrument will get the same answer. "Reproducibility" means
that different people measuring the same item with the same
instrument will get the same answer.
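A toy decomposition illustrates the two terms: repeatability as within-operator spread, reproducibility as spread between operator averages. This is a simplification of a full gage R&R study, with invented measurements:

```python
from statistics import mean, pvariance

# Invented data: two operators each measure the same part three times
# with the same instrument.
measurements = {
    "operator_a": [10.1, 10.2, 10.1],
    "operator_b": [10.4, 10.5, 10.4],
}

# Repeatability: average spread when the same person repeats a measurement.
repeatability_var = mean(pvariance(vals) for vals in measurements.values())

# Reproducibility: spread between different operators' averages.
reproducibility_var = pvariance([mean(vals) for vals in measurements.values()])

print(f"repeatability variance={repeatability_var:.5f}, "
      f"reproducibility variance={reproducibility_var:.5f}")
```

In this toy data, the operators disagree with each other far more than they disagree with themselves, so reproducibility is the bigger measurement problem.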
Run Charts (George, 2002)
Run Charts. By definition, a process is something that is repeated
periodically over time. It stands to reason, therefore, that much of the data a
team collects will be produced over time as well—such as key process
measures taken each shift, number of defects produced per hour or per day,
total lead time each day, and so on.
There is a special subset of tools useful for displaying and analyzing data that
is time-ordered, the simplest of which is called a run chart

You can learn a lot simply by plotting data in time order, such as …
The general range of scatter (variation) in the points.
Whether the data points are generally stable around some mean or if there
are clear trends upward or downward.
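The simplest numeric reading of a run chart, the longest run of consecutive points on one side of the median, can be computed like this. The daily lead-time data are hypothetical:

```python
# Hypothetical daily lead times. A long run of consecutive points on one
# side of the median is a common run-chart signal of a process shift.
data = [12, 11, 13, 12, 14, 15, 15, 16, 16, 17, 17, 18]

s = sorted(data)
mid = len(s) // 2
median = s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

longest = run = 0
prev_side = 0
for point in data:
    side = (point > median) - (point < median)  # +1 above, -1 below, 0 on the line
    if side != 0 and side == prev_side:
        run += 1          # run continues on the same side
    elif side != 0:
        run = 1           # run restarts on the other side
        prev_side = side
    longest = max(longest, run)

print(f"median={median}, longest run on one side={longest}")
```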
Control Charts (George, 2002)
Control Charts. Control charts are the high-power version of run charts. The purpose of a control chart is to
help a team determine whether the variation seen in the data points is a normal part of the process (known as
"chance" or "common cause" variation) or if something different or noticeable is happening ("special cause" or
"assignable" variation). There are different improvement strategies depending on which type of variation is
present (common or special cause), so it is important for a team to know the difference. There are several
simple statistical rules used to analyze the patterns and trends on the control chart to determine whether
special cause variation is present.
The basic structure of a control chart is always the same. The charts show the following:
 Data points plotted in time order.
 A centerline that indicates the average.
 Control limits (lines drawn approximately 3 standard deviations from the average) that indicate the expected
amount of variation in the process.
What differs from chart to chart is the type of data plotted on the chart and the specific formulas used to
calculate the control limits. Being able to know what kind of data to collect and the best way to calculate
control limits is a skill that a black belt will develop only through special training or under the guidance of a
master black belt or other statistical expert.
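As one concrete instance of the limit formulas, an individuals (I) chart estimates sigma from the average moving range (sigma is roughly the mean moving range divided by 1.128, the d2 constant for subgroups of two). The data below are hypothetical:

```python
# Hypothetical individuals (I) chart: centerline at the mean, control
# limits at +/- 3 sigma, with sigma estimated from the average moving range.
data = [25, 27, 26, 30, 24, 28, 26, 29, 27, 25]

mean = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128  # d2 for n=2

ucl = mean + 3 * sigma_hat  # upper control limit
lcl = mean - 3 * sigma_hat  # lower control limit

# Points outside the limits suggest special-cause (assignable) variation;
# points inside reflect common-cause variation.
special_cause = [x for x in data if x > ucl or x < lcl]
print(f"CL={mean:.1f}, UCL={ucl:.2f}, LCL={lcl:.2f}, special cause: {special_cause}")
```

Here every point falls inside the limits, so a team would treat the variation as common cause and improve the process itself rather than chase individual points.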
Control Charts (George, 2002)
It takes time and effort to create a control chart, so the first and most important decision to make is
when to create one. When control charts are used as part of a DMAIC project, that decision should
be fairly clear: you want to monitor variation in characteristics of the process and/or its output that
are critical to quality in terms of your project goals. In other words, don't have black belts create
control charts just because they can! Pick and choose where to use these tools.
In and of itself, creating a control chart does you no good. You have to understand what the chart is
telling you and take appropriate action. We create control charts for one purpose: to help us
distinguish between two types of variation: common cause variation and special cause variation
(also known as assignable variation).
Common cause variation is inherent in the process; it is present all the time to a greater or lesser
extent.
Special cause variation is a change that occurs because of something different or unusual in the
process.
As mentioned above, the reason we need to tell the difference between special and common cause
variation is because there are different strategies for reacting to them.
DFSS (Morgan & Brenig-Jones, 2012)
Design is a funny word. Some people think Design means how it looks. But, of course, if you
dig deeper, it’s really how it works.
—Steve Jobs
Design for Six Sigma (DfSS) is a philosophy for designing new products, services and
processes often with high customer involvement from the outset, though that won’t
always be so – consider inventions by people such as Jobs and Dyson, for example.
When redesigning a process, we focus on the customer. When designing a new service or
product, there may not be a customer yet. In this case, we concentrate on the demands of
the (potential) marketplace.
Where the customer is involved, we mean both end user customers and internal business
stakeholders and users. Customer requirements and the resulting CTQs (Critical To Quality
– see Chapter 4) are established early on and the DMADV framework rigorously ensures
that these requirements are satisfied in the final product, service or process.
SIMPLIFIED FMEA (Brussee, 2012)
Manufacturing. Before implementing any new design, process, or change, do a simplified FMEA. An
FMEA converts qualitative concerns into specific actions. You need input on what possible negative
effects could occur.
Sales and marketing. A change in a sales or marketing strategy can affect other products or lead to
an aggressive response by a competitor.
A simplified FMEA is one way to make sure that all the possible ramifications are understood and
addressed.
Accounting and software development. The introduction of a new software package or a different
accounting procedure sometimes causes unexpected problems for those affected. A simplified
FMEA will reduce unforeseen problems and trauma.
Receivables. How receivables are handled can affect future sales. A simplified FMEA will help
everyone involved to understand the concerns of both customers and internal salespeople and to
identify approaches that minimize future sales risks while reducing overdue receivables.
Insurance. The balance between profits and servicing customers with insurance claims is dynamic. A
simplified FMEA helps keep people attuned to the risks associated with any actions under
consideration.
SIMPLIFIED FMEA (Brussee, 2012)
A simplified FMEA is a method for reviewing things that can go wrong even if a
proposed project, task, or modification is completed as expected. Often a project
generates so much support and enthusiasm that it lacks a healthy number of
skeptics, especially with regard to any effects that the project may have on things
that are not directly related to it. Everyone is working on the details of getting the
project going, and little effort is being spent on looking at the ramifications beyond
the specific task!

The simplified FMEA form is a way of taking a critical look at a project before it is
implemented; it often saves a lot of cost and embarrassment. In doing a simplified
FMEA, it is assumed that all inherent components of the direct project will be done
correctly. (They should have been covered in regular project reviews.) The
emphasis in a simplified FMEA is on identifying affected components or issues
downstream, or in tangentially related processes, where problems may arise
because of the project.
SIMPLIFIED FMEA (Brussee, 2012)

• The left side of the simplified FMEA form is a list of things that could
possibly go wrong, assuming that the project is completed as planned. The
first task of the meeting is to generate this list of concerns.
SIMPLIFIED FMEA (Brussee, 2012)
On this list could be unforeseen issues involving other parts of the process, safety issues,
environmental concerns, negative effects on existing similar products, or even employee problems.
These will be rated in importance as follows:

5 means that it is a safety or critical concern.

4 means that it is a very important concern.

3 means that it is a medium concern.

2 means that it is a minor concern.

1 means that whether it is an issue is a matter for discussion.


SIMPLIFIED FMEA (Brussee, 2012)
Across the top of the simplified FMEA is a list of solutions that are already in place to
address the concerns and additional solutions that have been identified during the
meeting. Below each solution and opposite the concern, each response item is to be rated
on how well it addresses the concern:

5 means that it addresses the concern completely.
4 means that it addresses the concern well.
3 means that it addresses the concern satisfactorily.
2 means that it addresses the concern somewhat.
1 means that it addresses the concern very little.
0 or a blank means that it does not affect the concern.

A negative number means that the solution actually makes the concern worse.
SIMPLIFIED FMEA (Brussee, 2012)
Enter this value in the upper half of the block beneath the solution
item and opposite the concern. After these ratings are complete,
multiply each rating by the concern value on the left. Enter this product
in the lower half of each box. Add all the values in the lower half of the
boxes in each column and enter the sum in the Totals row indicated
near the bottom of the form. These are then prioritized, with the
solution with the highest value being the number one consideration for
implementation.
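The column arithmetic described above can be sketched as follows. The concerns, weights, solutions, and ratings are invented examples; note the negative rating, which the form allows when a solution makes a concern worse.

```python
# Hedged sketch of the simplified FMEA scoring arithmetic.
# Concern weights (1-5) and per-solution ratings are invented examples.
concerns = {"operator safety": 5, "scrap increase": 3, "training effort": 2}

# ratings[solution][concern] = how well the solution addresses the concern
ratings = {
    "add guard rail": {"operator safety": 5, "scrap increase": 0,
                       "training effort": -1},
    "revise work instructions": {"operator safety": 2, "scrap increase": 3,
                                 "training effort": 4},
}

totals = {}
for solution, r in ratings.items():
    # lower-half cell value = rating x concern weight; total = column sum
    totals[solution] = sum(r[c] * w for c, w in concerns.items())

# The highest total is the first consideration for implementation.
for solution, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(solution, score)
```

In this invented example the work-instruction revision scores highest overall, even though the guard rail scores 25 on the safety concern alone, which is why safety-critical (weight 5) concerns should still be reviewed one at a time.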
The FMEA form has three column groups, corresponding to the ANALYSIS,
IMPROVEMENT, and CONTROL phases:

ANALYSIS: No. | Input variable | Potential failure modes associated with the
input variable | Effect failure mode would have on the customer if it
occurred | Potential causes of identified failure modes | Current process
controls | Severity | Occurrence | Detectability | RPN

IMPROVEMENT: Suggested actions | Assigned to | Date to be done by

CONTROL: Actual actions | Severity 2 | Occurrence 2 | Detectability 2 | RPN 2

(The body of the form is blank template rows, filled in during the FMEA
review.)
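The RPN columns on the form multiply the three risk ratings, and the "2" columns repeat the calculation after improvement actions to confirm that the risk dropped. The sketch below uses invented rating values; classic FMEAs rate each factor 1 to 10, though simplified versions may use narrower scales.

```python
# Hypothetical illustration of the Risk Priority Number (RPN) columns:
# RPN = Severity x Occurrence x Detectability, recomputed after
# improvement actions ("RPN 2") to confirm the risk was reduced.
def rpn(severity, occurrence, detectability):
    return severity * occurrence * detectability

before = rpn(severity=8, occurrence=5, detectability=6)  # initial analysis
after = rpn(severity=8, occurrence=2, detectability=3)   # post-action control

print(f"RPN={before}, RPN 2={after}")
```

Severity typically cannot be designed away, so improvement actions usually attack occurrence and detectability, as the invented numbers here suggest.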
Quality Function Deployment (Cohen &
Ficalora, 2009)
• QFD is a method for structured product or service planning and development that enables a
development team to specify clearly the customer’s wants and needs, and then evaluate each
proposed product or service capability systematically in terms of its impact on meeting those
needs.

• QFD is fundamentally a quality planning and management process to drive to the best possible
product and service solutions.

• A key benefit of QFD is that it helps product-introduction teams communicate to management
what they intend to do, and to show their strategy in the planned steps forward. Management
can then review these plans and allocate budget and other resources.

• QFD helps enable management to evaluate whether the product plans are worth the investment.
Working through the QFD process together provides the important benefit of alignment—within
the project team, and to management desires.
Quality Function Deployment (Cohen &
Ficalora, 2009)
Any place there are voices to be heard, analyzed, and driven into development activities, QFD can
be utilized. In general, there are four broad categories of voices in the Six Sigma community that
may be linked to development activities involving QFD:

1. Voice of the Customer (VOC): the voices of those who buy or receive the output of a process,
clearly an important voice in product and process development

2. Voice of the Business (VOB): the voices of funding and sponsoring managers for marketing and
development activities

3. Voice of the Employee (VOE): the voices of those employees who work in your company and/or
develop new products, services, and technologies

4. Voice of the Market (VOM): the voices of trendsetting lead-user or early-adopter market
segments, or market-defining volume purchasers
Quality Function Deployment (Brussee, 2012)
Quality Function Deployment (Brussee, 2012)
Here are some examples of how a QFD is used.

Manufacturing. Use the simplified QFD to get input from customers on their needs at the start of
every new design or before any change in process or equipment.
Sales and marketing. Before any new sales initiative, do a simplified QFD, inviting potential
customers, salespeople, advertisement suppliers, and other such groups to give input.
Accounting and software development. Before implementing a new program language or software
package, do a simplified QFD. A customer’s input is essential for a seamless introduction.
Receivables. Do a simplified QFD on whether your approach to collecting receivables is optimized.
In addition to those who are directly involved in collections, invite customers who are overdue on
receivables to participate. (You may have to give them some debt relief to get their cooperation.)
Insurance. Do a simplified QFD with customers to see what they look for when they pick an
insurance company or what it would take to make them switch. (Brussee, 2012)
Quality Function Deployment (Brussee, 2012)
QFD originally stood for quality function deployment. Years ago, when quality departments were
generally much larger than they are now, quality engineers were “deployed” to the customers to
rigorously probe a customer’s needs and then create a series of forms that made the transition from
those customer needs to a set of actions that the supplier could take. The simplified QFD attempts
to accomplish the same task in a condensed manner.

What is presented here is a simplified version of the QFDs that are likely to be described in many Six
Sigma classes. Some descriptions of these traditional QFDs and the rationale for the simplification
will be given later in this chapter. The simplified QFD is usually used in the Define or the Improve
step of the DMAIC process.

A simplified QFD is a Six Sigma tool that does not require any statistics. However, it is usually
necessary to do a simplified QFD to understand what actions are needed to address a problem or
implement a project. The specific actions that are identified in the QFD, or in any of the other
qualitative tools, are often what trigger the application of the statistically based Six Sigma tools.
Quality Function Deployment (Brussee, 2012)
Many product, process, and service issues are caused by not incorporating
inputs from customers and/or from suppliers of components and raw
materials early in a design or process change. Often the manufacturers’
decision makers just assume that they and the people from whom they
source the components and raw materials already know what the customers
want.

The customers in this case include everyone who will touch the product
while or after it is made. This includes employees in production, packaging,
shipping, and sales as well as the end users. They are all influenced by any
design or process change. The people who operate equipment, who do service
work, or who implement changes can be both customers and suppliers.
Quality Function Deployment (Brussee, 2012)
The most difficult (and important) step in doing any QFD is getting the
suppliers, operators, and customers together to prepare the required
QFD form(s). Every group that is affected by the project should be
represented. The desires of one group will sometimes lead to
limitations on others, and simultaneous discussions among the factions
will often identify options that were not previously considered, to
arrive at the best possible overall solution.
Quality Function Deployment (Brussee, 2012)
The simplified QFD form is a way of quantifying design options, always measuring these options
against customer needs. The first step in preparing the simplified QFD form is to make a list of the
customer needs. Then, assign a value from 1 to 5 to each need:

5 means that it is a critical or a safety need, a need that must be satisfied.

4 means that it is a very important need.

3 means that it is highly desirable.

2 means that it is a “nice to have.”

1 means that it is a “wanted if it’s easy to do.”


Quality Function Deployment (Brussee, 2012)
You can use a more elaborate rating system, but if you do, you will find
that you are spending too much time assigning numbers! The customer
needs and ratings are listed down the left side of the simplified QFD
form.
Across the top of the simplified QFD form are potential actions to
address the customer needs. Note that the customer needs are often
expressed qualitatively (easy to use, won’t rust, long life, and the like),
whereas the design action items listed will be more specific (tabulated
input screen, stainless steel, sealed roller bearings, and the like).
Quality Function Deployment (Brussee, 2012)
Under each design action item and opposite each customer need, you will determine a
value (1 to 5) that indicates how strongly that design item addresses the particular
customer need:

5 means that it addresses the customer need completely.
4 means that it addresses the customer need well.
3 means that it addresses the customer need somewhat.
2 means that it addresses the customer need a little.
1 means that it addresses the customer need very little.
0 or blank means that it does not affect the customer need.

A negative number means that it is detrimental to that customer need. (A negative number
is not that unusual, since a solution to one need sometimes interferes with another need!)
Quality Function Deployment (Brussee, 2012)
Put the rating in the upper half of the block beneath the design item and opposite
the need. Then, multiply the design rating times the value assigned to the
corresponding customer need. Enter this result in the lower half of the square
under the design action item rating. These values will have a possible high of 25.
Once all the design items have been rated against every customer need, add up the
values in the lower half of the boxes under each design item and enter the sums
into the Totals row at the bottom of the sheet. The solutions with the highest
values are usually the preferred design solutions to address the customer needs.
Upon reviewing these totals, someone may feel that something is awry and want to
go back and look again at some ratings or design solutions. This second (or third)
review is extremely valuable. Also, the customer 5 ratings should be discussed one
at a time to make sure that they are being addressed sufficiently.
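The cell-by-cell arithmetic above can be sketched as follows. The customer needs, weights, and design-item ratings are invented for illustration.

```python
# Sketch of the simplified QFD scoring arithmetic; needs, weights, and
# design-item ratings are invented examples.
needs = {"won't rust": 5, "easy to use": 4, "long life": 3}

# rating (1-5, 0, or negative) of how strongly each design item
# addresses each customer need
design_items = {
    "stainless steel":       {"won't rust": 5, "easy to use": 0,
                              "long life": 4},
    "sealed roller bearing": {"won't rust": 1, "easy to use": 2,
                              "long life": 5},
}

totals = {}
for item, ratings in design_items.items():
    # lower-half cell value = rating x need weight (possible high of 25)
    cells = {need: ratings[need] * weight for need, weight in needs.items()}
    totals[item] = sum(cells.values())

print(totals)
```

The totals row built this way is only a starting point for discussion; as the text notes, a second or third review of surprising ratings is where much of the value lies.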
Quality Function Deployment (Brussee, 2012)
In the simplified QFD form, near the bottom, design action items are grouped when only
one of the options can be done. The priorities within a group are only among the items in
that group.

In this case, the priorities showed that the supplier should cast a plastic license plate cover
with a built-in plastic lens. This precludes the need for a separate lens, which is why NA
(not applicable) is shown for both items in that grouping. The unit should be mounted
using plastic screws, with holes for all plates cast in. Gold or silver plating is an option that
can be applied to the rim of the plastic.

Note that some of the luxury items (like steel casting) weren’t picked because other factors
were deemed to be more important. Having the customers there for these discussions is
very critical, so that they don’t feel that the supplier is giving them something other than
what they really want. Often customers will start out with a wish list of items that is then
trimmed down to the critical few.
Quality Function Deployment (Brussee, 2012)
The form can be tweaked to make it more applicable to the product,
process, or service that it is addressing. What is important is getting
involvement from as many affected parties as possible and using the
simplified QFD form to drive the direction of the design.

The simplified QFD form should be completed for all new designs or for
process modifications. The time and cost involved in holding the
required meetings will be more than offset by making the correct
decisions up front, rather than having to make multiple changes later.
