IOTA Course Material v3.2


BENCHMARK SIX SIGMA

BLACK BELT PROGRAM


IOTA MODULE
METRICS & MEASURES AND THEIR USEFULNESS

© Benchmark Six Sigma. Version 3.2 All Rights Reserved


About Benchmark Six Sigma
Founded in 2001, Benchmark Six Sigma leads Business Problem Solving Workshops in India. Benchmark draws on a wide variety of management techniques that have evolved over centuries to help client organizations and professionals address challenges and chart extraordinary growth paths.

Benchmark workshops have evolved into industry-best offerings in two formats. In the public workshop format, we offer globally recognized programs with content and methodology that can be used in practically any domain. In the customized in-company workshop format, Benchmark is known for its ability to design and deliver highly valuable programs with demonstrated applicability. Our in-company workshops are offered with or without consulting support.



The four unique elements in BSBI (Benchmark System of Business Improvement)
➢ Champions (Top Management) collaborate in business improvement project selection.
➢ Integration of 17 competencies to create a methodology with the right toolkit.
➢ Proprietary engaging sequences that make the methodology work.
➢ Action Learning Workshops.

[Figure: four-step cycle – 1. Selection of business-impacting projects; 2. Creation of the custom methodology for your business; 3. Custom sequences for the methodology; 4. Action Learning Workshops till the end.]
BSS across the world

[Map: USA, Indonesia, Netherlands, Oman, Italy, Hungary, Germany, Madrid]



Benchmark Six Sigma* is Exemplar Global certified

[Timeline: RAB QSA (Jan 1989) → ASQ RABQSA (Jan 2005) → Exemplar Global (Oct 2013)]

RAB: Registrar Accreditation Board, USA
QSA: Quality Society of Australasia, Australia
ASQ: American Society for Quality, USA

RABQSA changed its name to Exemplar Global in 2013. It has offices in the USA, Australia, Korea, Brazil, Cyprus, Japan, Malaysia, New Zealand, Taiwan, Thailand, Turkey, China and Vietnam. Exemplar has associations with international bodies including ASQ, AACB, AAQG, ACHMM and IAAR. It has an association with QCI in India.



Our Clients – Part List



Press Release from RABQSA



Course Announcements
➢ 100% attendance is expected in all sessions of the training
➢ Please log in to the Zoom meeting links at least 10 minutes in advance and ensure that your audio is working fine
➢ Please keep yourself on mute at all times. You may use the chat window to ask questions
➢ Please avoid using laptops/mobiles for office work during the sessions, as such acts cause distraction
➢ It is advisable to attend the online sessions in an area that is free from distractions and noise. This will help you maximize the learning from the sessions
➢ Course Material comprises soft copies of the Course Books
➢ The Academy contains Minitab data files (for class exercises), case studies, some useful templates, reference material and pre-course reading material
➢ Minitab software will be used for all graphical and statistical exercises. Individual laptops are required for all days of the training course
➢ Trainer notes are not available in soft copy. Please take notes as required



Data Types

Attribute/Discrete
  Categorical: Colors, Gender, Region, Satisfaction Scale
  Count: No. of tickets, No. of defects, No. of clicks, No. of items sold

Continuous
  Dimensions, Profit, Sales Revenue, Handle Time, Lead Time, Turn Around Time, Weight, Temperature



Performance Data vs. Cause Data

Cause Data (x's) → Input → Process → Output → Performance Data (Y or y's)

Y = f(x)

Performance Data
• Descriptive
• Focus on results
• Helps establish a baseline
• Measures performance of a process
• Should be collected first

Cause Data
• Focuses on why the process performs the way it does
• Helps identify potential root causes
• Collect this type of data to explain performance problems



Performance Standard Matrix

➢ For the primary metric (project Y), we define how we measure the characteristic, how a defect is defined, and where the data is obtained from.

Item | Continuous Data Example | Discrete Data Example
Item | Car | Invoice
Metric | Length in feet from the left door to the right door handle | Number of errors per invoice
Opportunity | 1 / unit | 15 / invoice
Data Type | Continuous | Discrete – Defect
Target | 4.2 feet – as specified by R&D | Zero
Specifications | +/- 0.1 feet | Less than 1 error / invoice
Measurement Source | Laser Check | Visual Inspection
Measurement Frequency | Once/week | Once/month



Lean Six Sigma Black Belt Sections

Iota, Ι – Iota denotes a small difference: Metrics and Measurements; Measurement System Analysis
Zeta, Ζ – "Born last", it is the sixth letter: Process Stability; Process Capability
Epsilon, Ε – The maximum error that can occur: Hypothesis Testing; Hypotheses and Comparisons
Theta, Θ – Angular position, temperature: Experimentation; Design of Experiments
Omega, Ω – Omega is the symbol of last, best, or final: Big Y to small y to root cause; Review of DMAIC





IOTA – Overview

Introduction to MSA → Logical Validation → MSA Discrete Data → MSA Continuous Data → Baseline Performance Discrete Data → Baseline Performance Continuous Data → Probability Calculations



Measurement System Analysis (MSA)

What is the first question you ask when you are about to run an analysis?
Answer: "Can I trust my data?" If the answer to that question is "No", or even "I'm not sure", it's usually pointless to go any further: why would you spend valuable time interpreting data you can't rely on?
That's where measurement systems analysis, or MSA, comes in.

MSA is a collection of methods you can use to assess your ability to collect trustworthy, reliable data – the kind of data you want to analyze.



Measurement System for Y
Measurement system analysis is carried out to determine the reliability of the current measurement system, and assists us in ascertaining whether we can trust the collected data.

Measurement System | Type of data | MSA
All types | Continuous or Discrete | Logical validation
Operator/Appraiser only | Nominal-Binary (Discrete) [e.g. Pass/Fail, Go/No-Go] | Attribute agreement analysis (Fleiss' Kappa statistic)
Operator/Appraiser only | Ordinal (Discrete) [e.g. Rating 1 to 5] | Attribute agreement analysis (Kendall's coefficient)
Operator + Gauge | Measurement (Continuous) | Resolution, Accuracy, Gage R&R (Precision)



IOTA – Overview

Introduction to MSA → Logical Validation → MSA Discrete Data → MSA Continuous Data → Baseline Performance Discrete Data → Baseline Performance Continuous Data → Probability Calculations



Logical Validation

When the metric chosen is such that conducting an MSA is not possible or practical, then at the least a Logical Validation should be done to ensure the data is reliable and accurate.

Logical validation can be done by checking the boundary conditions – do the start and stop points accurately reflect what we expect for the project?

Copier Repair Time

Start Point
Is the time started when the copier is really broken?
What if the time is recorded at the end of the day?
Is the time recorded in the system only when the repair person is there to fix it?

Stop Point
Is the time stopped when the copier is really fixed?
What if the time is recorded at the end of the day?
What if the time is recorded even when the copier is not fully fixed, just because the repair person showed up?



Logical Validation Example

➢ A project is selected to reduce the inventory of finished goods to less than one day.
➢ Metric: Number of finished goods stored in a month for more than one day / Number of goods produced.
➢ The data is Discrete. The daily production and dispatch happen through the system, and the data is entered in SAP.
➢ In order to validate the data, daily dispatches and daily production in SAP can be verified against the physical movement and the actual production, respectively.



IOTA – Overview

Introduction to MSA → Logical Validation → MSA Discrete Data → MSA Continuous Data → Baseline Performance Discrete Data → Baseline Performance Continuous Data → Probability Calculations



MSA for Discrete Data
What is an Attribute Measurement System?
Attribute measurement systems are the class of measurement systems where the measurement
value is one of a finite number of categories. The most common of these is a go/no-go gage which
has only two possible results. Other attribute systems, for example visual standards, may result in
five to seven classifications, such as very good, good, fair, poor, very poor.

How can we assess the reliability of an Attribute Measurement system?


Attribute Agreement Analysis: Some measurement systems categorize items by their attributes
to separate “good” items from “bad” ones, sort samples into “blue,” “green,” and “cyan” groups,
and assign invoices to “engineering,” “production,” or “sales” departments. These types of
measurement systems are called attribute measurement systems because they determine or
measure one or more attributes of the item being inspected.
The question is, how repeatably and reliably can one of these systems determine the specific
attribute we are looking for? For example, how repeatably and reliably does your attribute
measurement system detect “bad” disk drives from among all the “good” ones being completed
in production? To quantify how well an attribute measurement system is working, Attribute
Agreement Analysis is performed.



Attribute Agreement Analysis- Procedure
1. Set aside 15 to 30 test samples of the item you're measuring. Make sure these samples represent the full range of variation being encountered, and that approximately equal numbers of each possible attribute category are included.

2. Create a "master" standard that designates each of the test samples into its true attribute category.

3. Select two or three typical inspectors and have them review the sample items just as they normally would in the measurement system, but in random order. Record their attribute assessment for each item.

4. Place the test samples in a new random order, and have the inspectors repeat their attribute assessments. (Don't reveal the new order to the inspectors!) Record the repeated measurements.

5. For each inspector, go through the test sample items and calculate the percentage of items where their first and second measurements agree. This percentage is the repeatability of that inspector.

6. Going through each of the sample items of the study, calculate the percentage of samples where all of the inspectors' attribute assessments agree for the first and second measurements. This percentage is the reproducibility of the measurement system.

7. You can also calculate the percentage of the time all the inspectors' attribute assessments agree with each other and with the "master" standard created in Step 2. This percentage is referred to as the Accuracy (effectiveness) of the measurement system; a minimal sketch of these calculations follows below.
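The percentages in Steps 5 to 7 are simple agreement counts. Below is a minimal Python sketch of the same counts on a small hypothetical data set (two appraisers, two trials, four items); Minitab's Attribute Agreement Analysis reports the equivalent figures:

```python
# Attribute Agreement Analysis percentages: a minimal sketch.
# Hypothetical data layout: ratings[appraiser] = [trial_1, trial_2].
ratings = {
    "Appraiser1": [["G", "B", "G", "G"], ["G", "B", "G", "B"]],
    "Appraiser2": [["G", "B", "G", "G"], ["G", "B", "G", "G"]],
}
standard = ["G", "B", "G", "G"]  # "master" categories from Step 2
n_items = len(standard)

# Step 5 - Repeatability: % of items where an appraiser's two trials agree
for name, (t1, t2) in ratings.items():
    within = sum(a == b for a, b in zip(t1, t2)) / n_items * 100
    print(f"{name} repeatability: {within:.1f}%")

# Step 6 - Reproducibility: % of items where ALL appraisers agree across ALL trials
all_trials = [t for trials in ratings.values() for t in trials]
between = sum(len({t[i] for t in all_trials}) == 1 for i in range(n_items)) / n_items * 100
print(f"Reproducibility: {between:.1f}%")

# Step 7 - Accuracy: % of items where everyone also matches the standard
acc = sum(all(t[i] == standard[i] for t in all_trials) for i in range(n_items)) / n_items * 100
print(f"Accuracy vs standard: {acc:.1f}%")
```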



Problem 1

➢ If the measured data is discrete and is the output of appraiser assessments, then use Attribute Agreement Analysis to determine the effectiveness of the gage.

➢ Let us illustrate with an example:

A call centre for a major credit card company handles incoming calls from customers with account inquiries. The Quality Assurance (QA) department conducts regular inspections of their operators by listening to recordings of the customer calls and classifying the responses as "Good" or "Bad". In order to ensure high consistency among the QA team members, an MSA study was conducted with 30 recordings. Check if the measurement system is acceptable.

< Call Quality>



Attribute Agreement Analysis Inputs
Path: Stat > Quality Tools > Attribute Agreement Analysis…



Attribute Agreement Analysis Inputs
• Ensure that the number of trials for each appraiser is added. Do not enter the number of samples.
• Do not enter the standard/expert here.
• If the data is Ordinal, then check this box.



Attribute Agreement Analysis Output- Graph
[Graph: Assessment Agreement, two panels – "Within Appraisers" and "Appraiser vs Standard" – with Percent (70–95, 95.0% CI) on the y-axis and Appraiser (1, 2) on the x-axis.]

The red line with crosses at the ends represents the 95% confidence interval of the percentage matched.

The blue dot represents the actual percentage matched when measurement by an individual appraiser is repeated. It is an indication of Repeatability. The figure on the right indicates the comparison between the appraiser and the external standard. It is an indication of Accuracy.



Attribute Agreement Analysis Output - Session

Appraiser Repeatability

Repeatability % for each appraiser

Kappa values should be greater than 0.9 for good agreement, and at least greater than 0.7 for acceptable agreement.

Ho: Kappa = 0
Ha: Kappa > 0

Since P is low, we conclude Kappa > 0.



Attribute Agreement Analysis Output - Session
Appraiser accuracy

Accuracy % for each appraiser

All the Kappa values are greater than 0.7 but less than 0.9. Hence, they are acceptable with caution.

P values indicate all Kappa values are significantly different from zero.



Attribute Agreement Analysis Output - Session
Team Reproducibility

Reproducibility % for the team

Comparison between appraisers. It is an indication of Reproducibility. Again, the Kappa values are cautiously acceptable.

Team accuracy

Team accuracy % for the team

Comparison of the entire measurement system (all appraisers) with the external standard. Again, the Kappa values are acceptable with caution.

P values indicate the Kappa values are significantly different from zero.



Attribute Agreement Analysis- Result Analysis

I. Within Appraisers: Shows how consistent the appraisers (students) are in their assessment. It tells the number of times each of them agrees with himself/herself when evaluating the same sample in replication.
II. Each Appraiser vs. Standard:
a. Shows how often each appraiser agreed with the standard (trainer). It gives both the number and the percentage.
b. Appears only when a known standard is given and there are only two levels (true/false, go/no-go). It gives the number and percentage of times each appraiser disagreed with the standard across trials.
III. Between Appraisers: Gives the percentage of times all appraisers agreed with each other. The between-appraisers statistics do not compare the appraisers' ratings to the standard. Although the appraisers' ratings may be consistent, they are not necessarily correct.
IV. All Appraisers vs. Standard: Determines the percentage of times all appraisers agreed with the standard.



Attribute Agreement Analysis-Result Analysis
Fleiss' Kappa* statistics
➢ Statistically, the appraisers' responses can be compared using Kappa values:
o If kappa = 1, then there is perfect agreement.
o If kappa = 0, then there is no agreement.
o The higher the value of kappa, the stronger the agreement.
o Negative values occur (rarely) when agreement is weaker than expected by chance.

Classification | Accept | Caution | Reject
Kappa Values for Nominal Data | > 0.9 | 0.7 < Kappa < 0.9 | < 0.7
Kendall's Coefficient for Ordinal Data | > 0.9 | 0.7 < Kendall < 0.9 | < 0.7

* Cohen's Kappa: only works when there are two raters.
Fleiss' Kappa: calculated as the ratio of the degree of agreement achieved above chance to the degree of agreement attainable above chance.
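For illustration, here is a minimal Python sketch of Fleiss' kappa computed directly from the definition above; the counts matrix is hypothetical (5 items rated by 3 raters into 2 categories):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an N x k matrix where counts[i][j] is the number
    of raters assigning item i to category j (equal raters per item)."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                # ratings per item
    p_j = counts.sum(axis=0) / counts.sum()  # overall category proportions
    P_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
    P_bar, Pe_bar = P_i.mean(), (p_j ** 2).sum()
    # agreement achieved above chance / agreement attainable above chance
    return (P_bar - Pe_bar) / (1 - Pe_bar)

table = [[3, 0], [2, 1], [3, 0], [0, 3], [1, 2]]  # hypothetical Good/Bad counts
print(round(fleiss_kappa(table), 3))  # 0.444 -> would be rejected (< 0.7)
```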



Recommendation after AAA

A few appraisers score less than 100% on "within appraiser"
• Train those with low scores

All appraisers score less than 100% on "within appraiser"
• Possibility of change needed in the measurement system

Poor "between appraiser" scores
• Retraining

Poor "appraiser vs standard" scores throughout
• Possibility of change needed in the measurement system



Practice Exercise 1

An online image marketplace performs acceptability checks on uploaded images via artificial intelligence. However, 30% of the images are not clearly classified by the AI and are passed on to a manual (visual) check done by appraisers. Demonstrate whether the visual inspection system is acceptable.
< Image Check >



Practice Exercise 2

A mobile phone assembly unit has observed an increase in the number of phones rejected at the final stage due to scratched screens. The production manager, Tom, cannot understand why this is happening, as each screen is inspected for scratches before assembly and immediately has a protective film placed over it. This inspection is referred to as the in-line inspection, and there has been no increase in the number of scratches detected at the in-line inspection. The inspection is done by a team of quality technicians. They do a visual check of every screen manufactured by placing it under an inspection lamp and looking for scratches. Each screen is then categorised as a Pass or a Fail, and the results are recorded by the technicians in a database. Tom discusses the issue with the technicians; however, they are adamant that there isn't a problem with the inspection, as it has always worked well in the past. Nevertheless, Tom feels it would be worthwhile to conduct an MSA so that inspection can be eliminated as a potential root cause.

< Mobile Screens >



Attribute Agreement Analysis (Ordinal data)

Kendall's coefficient
Kappa statistics represent absolute agreement between ratings, while Kendall's coefficients measure the associations between ratings. Therefore, kappa statistics treat all misclassifications equally, but Kendall's coefficients do not. For instance, Kendall's coefficients consider the consequences of misclassifying a perfect (rating = 5) object as bad (rating = 1) as more serious than misclassifying it as very good (rating = 4).
Kendall's coefficient values can range from −1 to 1. A positive value indicates positive association; a negative value indicates negative association. The higher the magnitude, the stronger the association.
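As a sketch of the idea, the snippet below computes Kendall's coefficient of concordance for a set of rankings. It omits the tie correction that packages such as Minitab apply, and the ranking data is hypothetical:

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's W for an m x n array: ranks[r][i] is the rank that
    rater r gives to item i (no tie correction)."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    R = ranks.sum(axis=0)            # rank totals per item
    S = ((R - R.mean()) ** 2).sum()  # spread of the rank totals
    return 12 * S / (m ** 2 * (n ** 3 - n))

# Three raters ranking four candidates: W close to 1 means strong agreement
print(round(kendalls_w([[1, 2, 3, 4], [1, 3, 2, 4], [2, 1, 3, 4]]), 3))  # 0.778
```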



Problem 2

A large corporation wants to conduct bulk interviews for an associate position. A team of interviewers – Simpson, Montgomery, Holmes, Duncan and Hayes – will carry out the initial screening interviews by telephone. Their task is to categorize the applicants as "Strongly Hire (2)", "Hire (1)", "Neutral/Stand by (0)", "Reject (-1)" and "Strongly Reject (-2)". Before moving forward, the HR manager wants to ensure that the team conducting the interviews is capable of a high level of agreement. The manager creates a study based on previous interviews that have been recorded. A panel of experts provides the "right answer" for each candidate.
Each interviewer listens to the recorded interviews and decides whether the candidates should be rated as -2, -1, 0, 1 or 2. Find whether there is a high level of agreement among the interviewers.

< Hiring >



Attribute Agreement Analysis Inputs
Path: Stat > Quality Tools > Attribute Agreement Analysis…



Attribute Agreement Analysis Inputs



Attribute Agreement Analysis Output

Reproducibility

The Kendall's coefficient is greater than 0.9 and hence acceptable. The team conducting the interviews is well calibrated and shows a high level of agreement.
The P-value indicates the Kendall's coefficient is significantly different from zero.



Attribute Agreement Analysis Output

Appraiser accuracy

The Kendall's coefficient is greater than 0.9 and hence acceptable.
The P-value indicates the Kendall's coefficient is significantly different from zero.



Practice Exercise 3

Delays and cost overruns are common in all projects, and construction projects are no exception. A questionnaire study was conducted to identify whether various consultants working on such projects were in agreement with regard to the significant factors contributing to delay and cost overruns. A questionnaire of 15 factors was carefully designed. It was directed to 12 consultants, who were asked to use a priority scale from 1 to 5 (1=very low, 2=low, 3=medium, 4=high, and 5=very high) to rank each factor. Are all the consultants in agreement? Use statistical reasoning before prioritizing the factors.
< Cost Overruns>



IOTA – Overview

Introduction to MSA → Logical Validation → MSA Discrete Data → MSA Continuous Data → Baseline Performance Discrete Data → Baseline Performance Continuous Data → Probability Calculations



MSA for Continuous Data

Measurement value = True Value + Measurement Error

MSA implies assessing the existing measurement system for the metric chosen for Project y.

Exercise – Each team should give at least two examples of measurement errors which are observed routinely.

No parameter can be measured accurately!



Measurement System Analysis

Observed Variation = True Variation + Measurement System Error

Observed variation is the sum of part variation and measurement system variation.
Measurement System Analysis
When data is collected, the observed process variation includes the following:

Observed Process Variation
  Actual Process Variation
  Measurement System Variation
    Variation due to Gauge: Resolution, Bias, Stability, Linearity, Repeatability
    Variation due to Operator: Reproducibility

(Bias, Stability and Linearity relate to Accuracy; Repeatability and Reproducibility relate to Precision.)


Measurement system error

Measurement system errors can be classified into two categories:
➢ Accuracy: the difference between the part's measured and actual value.
➢ Precision: the variation obtained from replicate measurements of the same or similar objects under specified conditions.



Precision vs. Accuracy

[Figure: two distributions shown against LSL, T and USL.]

Precise but not accurate: there is a problem with centering/accuracy – the solution is to shift the mean.
Accurate but not precise: there is a problem with spread/dispersion/precision – the solution is to squeeze the spread.



Accuracy- Calibration & Traceability
➢ Calibration is the process of adjusting the output of a measurement system to agree with the value of the applied standard, within a specified accuracy.
o We need to calibrate a thermometer periodically so that we can verify that it shows the accurate temperature.

➢ Traceability is the unbroken chain of comparisons to a national or international standard (NIST, PTB, BIS).
o Manufacture a scale – check with gages in the shop.
o Validate gages with master gages (external company).
o The company's master gages are validated against an established standard.

➢ We need to adequately control the integrity of standards and measurement devices.
o No one is allowed to touch the master gage for length.
o Ensure rust prevention, and guard against wear and tear, for mechanical gages.



Accuracy
The accuracy of a measurement system has three components:

➢ Bias: a measure of the inaccuracy in the measurement system; the


difference between the observed average measurement and a
master/standard value. In some organizations accuracy is used
interchangeably with bias.

➢ Linearity: a measure of how the size of the part affects the bias of
the measurement system; the difference in the observed bias
values through the expected range of measurements.

➢ Stability: a measure of how well the system performs over time;


the total variation obtained with a particular device, on the same
part, when measuring a single characteristic over time.



Stability vs. Linearity
[Figure: the dotted line shows the known value of the measurement; the bell-shaped curve shows the mean and variation in repeated values for the same measured item.]

Instability: inaccuracy over time – the observed distribution drifts between Time 4:00 and Time 6:00.
Non-linearity: inaccuracy over value – no bias at one point, then linear bias and non-linear bias as the true value changes across the range from LSL to USL.



Discrimination (Resolution)

The smallest division on the scale of the gage, or the smallest difference in reading that can be shown on the display of a measuring instrument.
➢ The smallest change in a quantity being measured that causes a perceptible change in the corresponding indication.
➢ The higher the resolution, the smaller the change it can record.



Discrimination (Resolution)

Ideally, the gage (instrument) should be able to measure smaller than one-tenth of the tolerance (or one-tenth of the range of variation existing in the process).

If one to two feet is the range of variation, the scale should have at least 10 gradations (divisions) within a foot.



Precision Elements

Precision, or measurement variation, has two components:
➢ Repeatability: variation observed if the same operator is asked to measure the same unit repeatedly with the same measuring equipment.
➢ Reproducibility: variation observed if two or more operators measure the same unit with the same measuring equipment.

"If you cannot express what you know as figures, that means you do not know that clearly. If you do not know it clearly, you cannot control it. If you cannot control it, luck will be the deciding factor of the outcome."
…… Mikel J. Harry



Summary: Types of Measurement Error

➢ Accuracy: the difference between the average of observed values and the standard.

➢ Precision: the variation observed between repeated observations of the same unit – either due to the operator or due to the equipment.

➢ Linearity: the consistency of the measurement system across its entire range.

➢ Stability: the variation which occurs with time. It is observed if the same operator measures the same unit with the same measuring equipment over an extended period.



Exercise

➢ Give examples of gages in real-life situations which have

o Poor accuracy

o Poor resolution

o Poor precision

o Poor stability

o Poor linearity



Criteria for Acceptance
Data | Classification | Indicator | Accept | Caution | Reject
Continuous | Accuracy | %Bias, %Linearity | <5% | 5-10% | >10%
Continuous | Precision | %Gage R&R | <10% | 10-30% | >30%
Continuous | Resolution | Number of Distinct Categories | >10 | 4-10 | <4
Discrete | Accuracy | %Appraisers vs. Standard | >90% | 70-90% | <70%
Discrete | Precision | %Within or Between Appraisers | >90% | 70-90% | <70%



MSA Checklist

Before conducting an MSA, it is suggested that the following checklist be completed:
➢ Determine the gage. Is there more than one gage for the same data? If yes, after the MSA, use only the most suitable one.
➢ Define the procedure of measurement.
➢ Are there any standards available? If yes, are they correct and updated? If yes, are they used thoroughly? If no, plan with the team to do the same.
➢ Define the design intent of the gage, or demand it from the supplier:
o Discrimination (Resolution/Granularity)
o Accuracy
o Precision



Gage R&R
Focuses on the Repeatability and Reproducibility of a measurement system, and its ability to produce measurements consistently.

❑ Repeatability
o "Within Operator Variation" – the variation in measurements obtained with the SAME MEASURING INSTRUMENT when used several times by ONE APPRAISER while measuring the identical characteristic on the SAME PART.
❑ Reproducibility
o "Between Operator Variation" – the variation in the average of measurements made by DIFFERENT APPRAISERS using the SAME MEASURING INSTRUMENT when measuring the identical characteristic on the SAME PART.



Gage Study for Continuous Data

➢ If the measured data is continuous, use the Gage R&R to determine the effectiveness of the gage.

Crossed Designs: used where each part can be subjected to repeated measurements. In this design, each part is inspected by each operator. Example – weight of a drug.

Nested Designs: used where the same part cannot be used for repeated measurements because the parts get destroyed on testing. In this design, each operator is given different parts from the same batch. Example – measuring the effect of a drug by dissolving it in a chemical.



Continuous Gage R&R Approach
1. Prepare MSA checklist
2. Select 3-5 parts that span the entire range of variation of the process. Parts/Samples should be
representative of the population.
3. Study should be done for all operators performing the measurement. Typically, 3-5 operators
are selected.
4. Each operator should take at least three measurements on each part to check for repeatability.
The study should be conducted blind so that operators don’t recall prior readings.
• Select neutral part numbers 1-2-3
• Randomize operator measurement of parts rather than measuring in sequence.
• Collect data separately with each operator to avoid interference
5. Enter data into Minitab and perform the analysis.
6. Carry out MSA by ANOVA method as it gives more information*
7. Analyze the result based on the criteria decided for acceptance
8. Conclusions should include analyses by all applicable tools and not only Gage R&R study
9. The measurement system must be corrected and baseline redefined before the data is used for
the project.
* There are other less important methods which are not used now



Fundamental Properties
Although each measurement system may be required to have different statistical properties,
there are certain fundamental properties that define a “good” measurement system. These
include:
➢ Adequate discrimination and sensitivity. The increments of measure should be small relative
to the process variation or specification limits for the purpose of measurement. The
commonly known Rule of Tens, or 10-to-1 Rule, states that instrument discrimination should
divide the tolerance (or process variation) into ten parts or more. This rule of thumb was
intended as a practical minimum starting point for gage selection.
➢ The measurement system ought to be in statistical control. This means that under repeatable
conditions, the variation in the measurement system is due to common causes only and not
due to special causes. This can be referred to as statistical stability and is best evaluated by
graphical methods.
➢ For product control, variability of the measurement system must be small compared to the
specification limits. Assess the measurement system to the feature tolerance.
➢ For process control, the variability of the measurement system ought to demonstrate
effective resolution and be small compared to manufacturing process variation. Assess the
measurement system to the 6-sigma process variation and/or Total Variation from the MSA
study.



Gage R&R Example 1 (Crossed)

A company assembles wire connectors. The final wire length is measured in mm manually by operators. Measurement with the same gage is carried out on three parts by three different operators. Find whether the system is acceptable.
Observe the data in Minitab and answer the following questions: < Gage R&R Crossed Before>

How many parts?
How many operators?
Each operator measures each part _________ times.
Each part is measured in total _________ times.
How many measurements in total by each operator?



Data in Pivot Table

How many parts?
How many operators?
Each operator measures each part _________ times.
Each part is measured in total _________ times.
How many measurements in total by each operator?



Gage R&R Inputs

Path: Stat > Quality Tools > Gage Study > Gage R&R Study (Crossed)



Gage R&R Inputs

The ANOVA method is more accurate as it accounts for Operator-Part interaction.



GR&R Outputs

[Gage R&R graphical output: six panels, labeled A to F.]



GR&R Outputs

[Gage R&R graphical output: six panels, labeled A to F.]

WHAT DO WE LOOK FOR IN THESE CHARTS:


A. Part-to-Part to be the biggest contributor (not as shown)
B. Want Chart to be In-Control and R values close to zero. An Out-of-Control Range Chart indicates Poor Repeatability.
C. Want 50% Out-Of-Control (Indicating the Operators can tell a good part from a bad part)
D. Want Small Spread around the Mean Points of each individual Part
E. Want Mean & Spread the same between Operators
F. Want lines to be on top of each other. When they are, this indicates Good Reproducibility.



Analysis of the Graphs
A. Shows the contribution of components to the total variation. The three colors are Blue (% with respect to tolerance), Green (% with respect to study variation) and Orange (% with respect to variance). For a good gage, we expect most of the contribution to the total variation to come from the parts, and very little (less than 30%) to come from the Gage R&R. If all three colors are there, interpret the blue bars. If the tolerance has not been specified, then there will not be any blue bars, in which case interpret the green bars.

B. The graph is useful for finding the stability of repetitive measurements by operators. The graph shows results for each operator inspecting each part. If each operator is consistent in measuring the parts, this graph should be close to zero and in control. Investigate causes, or re-measure, for points going beyond the control limits.



Analysis of the Graphs
C. Depicts the capability to detect different samples. The red control limits indicate gage variation (error). The graph shows the average part means. Since most of the variation should come from the parts, we expect the part means to vary more than the gage variation. This means that the operators can discern between different parts. For a good gage, the maximum number of points should go outside the control limits.

D. Sample-to-sample variation is shown. For a good gage, we expect to see consistent values for each part being measured by different operators. We do expect to see different means for different parts. The line joining the part means should not be horizontal.



Analysis of the Graphs
E. Shows the mean of measurements by operators. Each operator measures three different parts. So, on average, all the measurements for each operator should be consistent. For a good gage, we expect to see a horizontal line joining the means.

F. Shows how different each operator's measurements are for the same and different samples. Values should be the same for the same sample. For a good gage, all lines should be on top of each other. This chart can help identify whether there is an interaction between the part and the operator, whether there is an operator who is not consistent with the others, or whether there is a part that is hard to measure.



GR&R Output – Session Window

Ref

Provided for reference

Ho: There is NO interaction between parts & operator


Ha: There is interaction between parts & operator
P value is greater than Alpha (0.05), hence we accept Ho
This is GOOD

Ho: Parts have NO impact on measurement results


Ha: Parts have an impact on measurement results
P value is greater than Alpha (0.05), hence we accept Ho
This is NOT GOOD

Ho: Operators have NO impact on measurement results


Ha: Operators have an impact on measurement results
P value is greater than Alpha (0.05), hence we accept Ho
This is GOOD



GR&R Output – Session Window

1st: Look for this value. This value should be less than 10% for the gage to be acceptable.

2nd: Look for this value if the tolerance value is not available. This value should be less than 30% for the gage to be acceptable.

This value should be greater than 4 (better if greater than 10). This is the number of non-overlapping groups within your process data that your measurement system can discern. So, this gage is NOT ACCEPTABLE.



Calculations Behind the GR&R Numbers

➢ %Contribution = (σ²_GR&R / σ²_Total) × 100%
➢ %Study Variation = (σ_GR&R / σ_Total) × 100% = √(%Contribution)
➢ %Tolerance = (6* × σ_GR&R / Tolerance) × 100%
➢ SN Ratio = σ_Parts / σ_GR&R
➢ Number of Distinct Categories = Round(SN Ratio × 1.41)

Classification | Accept | Caution | Reject
% Contribution | <1% | 1-9% | >9%
% Study Variation or %Tolerance | <10% | 10-30% | >30%
No. of Distinct Categories | >10 | 4-10 | <4

* Usually, this value is chosen as 5.15 because 99% of a normal distribution is within 2.575σ on either side of the mean.
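These formulas are easy to check numerically. Here is a minimal Python sketch, taking hypothetical variance components as inputs (Minitab derives these from the ANOVA table):

```python
from math import sqrt

def grr_metrics(var_grr, var_parts, tolerance=None):
    """Gage R&R summary numbers from the variance components (a sketch)."""
    var_total = var_grr + var_parts
    pct_contribution = var_grr / var_total * 100      # ratio of variances
    pct_study_var = sqrt(var_grr / var_total) * 100   # ratio of sigmas = sqrt of the above
    ndc = round(sqrt(var_parts / var_grr) * 1.41)     # SN ratio x 1.41, rounded
    pct_tol = 6 * sqrt(var_grr) / tolerance * 100 if tolerance else None
    return pct_contribution, pct_study_var, ndc, pct_tol

# Hypothetical components: sigma^2(GR&R) = 0.04, sigma^2(Parts) = 1.0, tolerance = 6
print(grr_metrics(0.04, 1.0, tolerance=6.0))  # (~3.8%, ~19.6%, 7, 20.0%) -> Caution
```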



Post measurement system correction

➢ The measurement system was rejected, and an improvement exercise was taken up for the same.
➢ The MSA activity was then repeated.
➢ Find whether the system is acceptable post correction.

< Gage R&R Crossed After >



Gage R&R Example 2 (Crossed)

An engineer selects 10 parts that represent the expected range of the process variation. Three operators measure the 10 parts, three times per part, in a random order. Find whether the measurement system is acceptable.
Observe the data and answer the following questions: < GageAIAG.xls>

How many parts?
How many operators?
Each operator measures each part _________ times.
Each part is measured in total _________ times.
How many measurements in total by each operator?



Gage R&R Example 2
The gage is ACCEPTABLE, with GR&R less than 30%.
Operator B is NOT repeatable!
Operator C is NOT consistent with the others!



Gage R&R Example 2

Ref: Provided for reference

GOOD. Part variations are significant – which is what we expect.

NOT GOOD. Operators have an impact on the measurement.



Gage R&R Example 2

Cautiously
ACCEPTABLE

Cautiously
ACCEPTABLE

Cautiously
ACCEPTABLE



Practice Exercise 4

The manager of a fitness centre wants to offer body fat percentage measurement to clients. Because skinfold thickness is easily measured and correlates highly with total body fat, the manager decides to purchase a skinfold calliper and teach his fitness trainers to perform these measurements. The manager asks 10 clients who have varying amounts of body fat to participate in a measurement systems analysis. Three trainers measure the triceps skinfold thickness of the 10 clients, 2 times each, in random order. Find whether the trainers are consistent and accurate in their measurements. (Crossed)

< Fitness Gage >



Practice Exercise 5
A quality control manager in a hospital laboratory wants to assess
the equipment that is used to measure blood clotting time. The
manager selects blood samples that represent the expected
range of the variation. Three technicians measure five samples
twice, for a total of 30 measurements. Each sample is unique to
the technician, that is, no two technicians measured the same
sample. Demonstrate if the measurement system is acceptable.
(Nested)
< Blood Clot >



Common reasons for unacceptable MS
➢ The operational definitions are not clear and/or not being followed by everyone.
➢ Ineffective training with respect to measurement systems, operational definitions, etc.
➢ An issue with a particular part, measurement system, etc.
➢ Data not being recorded consistently, e.g. rounding up vs rounding down.
➢ Not everyone using the same version of the current standard, checklists, drop-down menus, reference documents, software, etc.
➢ Appraisers becoming bored, tired or rushed due to the monotony of the job.
➢ Operator/Appraiser not aware of the importance of the measurement and of following the operational definition.
➢ Inconvenience during measurements, e.g. straining of eyes, having to stretch or twist to read the measurements or note the results.
➢ Operators'/Appraisers' eyesight not OK.
➢ Poor lighting conditions.
➢ Sample preparation or loading issues.



IOTA – Overview

Introduction to MSA → Logical Validation → MSA Discrete Data → MSA Continuous Data → Baseline Performance Discrete Data → Baseline Performance Continuous Data → Probability Calculations



Baseline Performance – Discrete Data
In this step, our objective is to evaluate current performance of Y (output)
and baseline it for comparison post improvement. We may choose to
compute Sigma level or any of the following baseline metrics (for discrete
data)

➢ DPMO – Defects Per Million Opportunities

➢ DPU – Defects Per Unit

➢ Yield % or Defective %



DPMO method for Discrete Data
Situation 1: Sample Size, Defects and OFE known

Example: Phone calls are recorded at a call center and later evaluated. A sample of 100 calls is heard by the quality representatives and evaluated based on a Call Quality Checklist. There are 10 opportunities for error in each call. 150 defects were observed in the sample of 100 calls. What is the DPMO?

Inputs: Samples inspected = 100; Defects = 150; OFE = 10; TOFE = 10 × 100 = 1000
Outputs: DPO = 150/1000 = 0.15; DPMO = 150000; Zoverall = 1.04

OFE: Opportunities for error per unit = number of characteristics inspected per unit
TOFE: Total opportunities for error
DPO: Defects per Opportunity
DPMO: Defects per Million Opportunities
Zoverall: Sigma Level Overall
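A minimal Python sketch of the same arithmetic; it assumes, consistent with the numbers above, that Zoverall is the standard normal quantile corresponding to 1 − DPO:

```python
from scipy.stats import norm

units, defects, ofe = 100, 150, 10   # inputs from the call-centre example
dpo = defects / (units * ofe)        # 150/1000 = 0.15
dpmo = dpo * 1_000_000               # 150000
z_overall = norm.isf(dpo)            # upper-tail quantile, about 1.04
print(dpmo, round(z_overall, 2))
```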



DPMO method for Discrete Data
https://www.benchmarksixsigma.com/calculators/



DPU method for Discrete Data
Situation 2: Sample Size and Defects known; OFE is difficult to ascertain

Example: 200 shirts were inspected for puckered seams (considered as defects). One shirt can have more than one puckered seam. We observed 200 defects. What is the DPU?

Inputs: Units inspected = 200; Defects = 200
Outputs: DPU = 200/200 = 1; Zoverall = -0.34

DPU: Defects per Unit
Zoverall: Sigma Level Overall
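A minimal Python sketch reproducing the slide's numbers; it assumes the DPU is converted to a first-time yield via the Poisson zero-defect probability e^(−DPU), which matches the Zoverall shown above:

```python
from math import exp
from scipy.stats import norm

units, defects = 200, 200
dpu = defects / units        # 1.0
fty = exp(-dpu)              # Poisson P(0 defects per unit), about 0.368
z_overall = norm.ppf(fty)    # about -0.34, as on the slide
print(dpu, round(z_overall, 2))
```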



DPU method for Discrete Data
https://www.benchmarksixsigma.com/calculators/



Yield method for Discrete Data
Situation 3: Sample Size and Defectives known

Example: 1000 samples were inspected. 50 were found to be defective. What is the Yield?

Inputs: Units inspected = 1000; Defectives = 50; Non-defectives = 1000 − 50 = 950
Outputs: Yield = 950/1000 = 95%; Zoverall = 1.64
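A minimal Python sketch of the same calculation, assuming Zoverall is the standard normal quantile of the yield:

```python
from scipy.stats import norm

inspected, defectives = 1000, 50
yield_frac = (inspected - defectives) / inspected  # 0.95
z_overall = norm.ppf(yield_frac)                   # about 1.64
print(yield_frac, round(z_overall, 2))
```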



Yield method for Discrete Data
https://www.benchmarksixsigma.com/calculators/



Practice Exercises
Practice Exercise 6: Product catalogues displayed on the website of an e-commerce platform are being inspected to check whether the following attributes are correct: 1. Item model number; 2. Brand name; 3. Price; 4. Image; 5. Description. 200 SKUs were inspected, and 5 defects were observed. Use the appropriate metric to baseline the process.

Practice Exercise 7: 2000 lines of delivered code were inspected. 100 errors were found. Use the appropriate metric to baseline the process.

Practice Exercise 8: Loan files are being evaluated at a bank. A sample of 200 files was checked by the quality representatives and evaluated based on a Quality Checklist. There are 8 opportunities for error in each file. 50 defects were observed in the sample of 200 files. Use the appropriate metric to baseline the process.

Practice Exercise 9: 800 pressure valves were inspected. 40 were defective. Use the appropriate metric to baseline the process.
Yield Concepts
✓ Yield
✓ It is the first-time-right percentage.

✓ Kinds of Yield
✓ First Time Yield (FTY)
✓ Used to determine the quality level of an individual sub-process.

✓ Rolled Throughput Yield (RTY)
✓ Shows the quality of the entire process. Obtained by multiplying the individual FTYs of the sub-processes.



Sigma level for Yield
Consider the following process.

Filling Loan Application (Yield 100%) → Completeness of Application (Yield 92.5%) → Entering Application Data (Yield 93.0%) → Customer Credit Check (Yield 94.4%)

RTY = 1.00 × 0.925 × 0.930 × 0.944 = 0.8121 = 81.21%

Now compute the RTY for this process:

Fill out purchase requisition (85%) → Requisition entered into the computer (92%) → Submit requisition to purchase dept (95%) → Purchase order sent to supplier (95%) → Confirmation sent to requisition coordinator (99%)

RTY = ?

RTY: Rolled Throughput Yield
FTY: First Time Yield
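A minimal Python sketch that multiplies the FTYs of sequential sub-processes; the second call evaluates the requisition example left open above:

```python
from math import prod

loan_ftys = [1.00, 0.925, 0.930, 0.944]
print(f"Loan process RTY = {prod(loan_ftys):.4f}")        # 0.8121 -> 81.21%

requisition_ftys = [0.85, 0.92, 0.95, 0.95, 0.99]
print(f"Requisition RTY = {prod(requisition_ftys):.4f}")  # about 0.70
```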



Practice Exercise 10
Dough Prep (87%) → Apply Sauce (90%) → Add Cheese (96%) → Add Toppings (95%) → Bake (100%)
(FTY of each sub-process; the steps are sequential.)

You own a pizza shop. Lately you have observed a lot of wastage in various steps of the process, leading to high expense on raw materials. To resolve the issue, you have declared that employees will get a bonus at the end of the quarter only if the overall wastage of the process is less than 30%. The FTY data of each sub-process (at the end of the quarter) is given above. Will you dispense the bonus?



Practice Exercise 11

[Figure: a three-step sequential process – Step 1: FTY of 92.5% (waste 7.5%); Step 2: FTY of 93% (waste 7%); Step 3: FTY of 94.4% (waste 5.6%).]



Measure Current Performance - Discrete
List of baseline metrics for Discrete Data

Data | Method
Defects (Discrete), OFE known | DPMO – Defects Per Million Opportunities
Defects (Discrete), OFE difficult to ascertain | DPU – Defects Per Unit
Defectives (Discrete) | Yield (%) / Defective %



IOTA – Overview

Introduction to MSA → Logical Validation → MSA Discrete Data → MSA Continuous Data → Baseline Performance Discrete Data → Baseline Performance Continuous Data → Probability Calculations



Baseline Performance – Continuous Data
In this step, our objective is to evaluate current performance of Y
(output) and baseline it for comparison post improvement. We
may choose to compute one of the following

➢ Zoverall – Sigma Level Overall

➢ Capability Stats – (Will be covered in the Zeta Module)



What is Six Sigma performance?
Your process is performing at a sigma level of six if the difference between the mean and the specification limit is six times the standard deviation.

[Figure: normal curve with LSL and USL placed at ±6σ from the mean; axis marked from −6σ to +6σ.]

Question: How many standard deviations (σ) lie between the mean and the specification limits?
Answer: The same number as the sigma level of the process (the Z value).


Calculation of Sigma Level
The purity of chemical A must be at least 99%. The average purity for the last 30 batches is 99.72%, with a standard deviation of 0.28%. Calculate the overall Sigma Level for the purity.

Inputs: USL = (none); LSL = 99; Mean = 99.72; Standard Deviation = 0.28
Outputs: DPMO = 5063.995; Zoverall = 2.57

DPMO: Defects per Million Opportunities
Zoverall: Sigma Level Overall
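A minimal Python sketch reproducing these outputs; with only an LSL, Zoverall = (Mean − LSL) / StDev, and the DPMO is the left-tail area below the LSL:

```python
from scipy.stats import norm

lsl, mean, sd = 99.0, 99.72, 0.28
z_overall = (mean - lsl) / sd               # about 2.57
dpmo = norm.cdf(lsl, mean, sd) * 1_000_000  # about 5064
print(round(z_overall, 2), round(dpmo, 3))
```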



Sigma Level method for Continuous Data
https://www.benchmarksixsigma.com/calculators/



IOTA – Overview

Introduction to MSA → Logical Validation → MSA Discrete Data → MSA Continuous Data → Baseline Performance Discrete Data → Baseline Performance Continuous Data → Probability Calculations



Most Common Probability Distributions

Probability Distribution (PD)

Discrete PD:
• Binomial PD – Defective data
• Poisson PD – Defect data

Continuous PD:
• Normal PD – Continuous data
• Exponential PD – Life-related data
• Weibull PD – Mechanical characteristics



Probability Calculation for Continuous Data (Normal)
If the average TAT for a process is 2.5035 days and the standard deviation is 0.5203 days, what is the probability of the TAT being less than 2 days?
Path: Graph > Probability Distribution Plot…
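The same left-tail probability can be sketched in Python with scipy:

```python
from scipy.stats import norm

# P(TAT <= 2 days) for TAT ~ Normal(mean=2.5035, sd=0.5203)
print(round(norm.cdf(2, loc=2.5035, scale=0.5203), 4))  # about 0.1666
```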



Probability Calculation- Normal Distribution

Because we want the probability of the TAT being within 2 days, we select the left tail.

[Distribution Plot: Normal, Mean = 2.5035, StDev = 0.5203; shaded left-tail area up to X = 2 equals 0.1666.]

With the current process parameters, there is a 16.66 percent chance that the TAT will be equal to or less than 2 days.



Probability calculation- Binomial distribution
The supervisor of a call centre is monitoring unanswered calls. The client has expressed the expectation that unanswered calls should not be more than 5%. The current process is at 10%. What is the probability that the defective % for the next 100 calls will be at most 5%?
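A minimal Python sketch of the cumulative binomial probability:

```python
from scipy.stats import binom

# P(X <= 5 unanswered calls) for X ~ Binomial(n=100, p=0.10)
print(round(binom.cdf(5, n=100, p=0.10), 5))  # about 0.05758
```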



Probability calculation- Binomial distribution

[Distribution Plot: Binomial, n = 100, p = 0.1; shaded area for X ≤ 5 equals 0.05758.]

There is a 5.8% chance that the next 100 calls will have 5% or fewer unanswered calls.



Practice Exercises

Practice Exercise 12: A process has 2% defectives. Find the probability of more than one defective in a sample of 5 taken from the process.

Practice Exercise 13: Let us say the Indian cricket captain decides to always call "heads". When a coin is flipped, an outcome of tails is considered a defective. What is the probability of winning at least one toss in a series of 4 matches? (Hint: probability of the event: 0.5)

Practice Exercise 14: The percentage of defectives in a process is 1. A sample of 20 is taken for inspection. Find the probability that there will be at least three defectives.



Probability Calculation- Poisson Distribution
The quality team inspects transactions against a checklist. A fixed sample of 100 transactions is inspected every day, and the number of errors is recorded. The current average defects per unit (DPU) is 0.824. Calculate the probability of having no errors in a transaction.
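A minimal Python sketch of the Poisson probability of zero defects:

```python
from scipy.stats import poisson

# P(X = 0 errors) for X ~ Poisson(mean=0.824); equals exp(-0.824)
print(round(poisson.pmf(0, mu=0.824), 4))  # about 0.4387
```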



Probability Calculation- Poisson Distribution

[Distribution Plot: Poisson, Mean = 0.824; P(X = 0) = 0.4387.]

The chance of having a transaction with no defects is 43.9%.



Practice Exercises

Practice Exercise 15: The number of accidents per day on the highways of a state follows a Poisson distribution. If the mean is 2 (DPU), what is the probability that there will be zero accidents on any day?

Practice Exercise 16: Errors in an invoice follow the Poisson distribution with a mean of 0.05 (DPU). Find the probability that a selected invoice will have at most two errors.



Appendix 1 - Six Sigma

[Figure: normal curve with Lower Spec and Upper Spec limits; σ scale marked at 4, 5 and 6 on either side of the mean.]



Appendix 2- Sigma Level / DPMO Table



Appendix 3 - Central Limit Theorem (CLT)

Definition
Consider a sample of size n with mean x̄ and variance s², taken from a random population with mean μ and variance σ².

The central limit theorem states that, for large values of n, the distribution of the sample mean, or sum, will converge to a normal distribution even though the random variable x (the individual data points) may be non-normal.

"Whenever a large sample of chaotic elements are taken in hand and marshalled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along." – Sir Francis Galton
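A minimal Python sketch (hypothetical data) showing the theorem at work: sample means drawn from a clearly skewed population become more symmetric, hence closer to normal, as the sample size grows:

```python
import numpy as np

rng = np.random.default_rng(1)
population = rng.exponential(scale=2.0, size=100_000)  # clearly non-normal

for n in (2, 10, 50):
    means = rng.choice(population, size=(5_000, n)).mean(axis=1)
    # skewness of the sample-mean distribution shrinks toward 0 (normal) as n grows
    skew = ((means - means.mean()) ** 3).mean() / means.std() ** 3
    print(f"n={n:>2}: skewness of sample means = {skew:.2f}")
```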
Central Limit Theorem
Distribution of Sample mean as sample size is increased

This may be verified by considering sample data and increasing the sample size in steps.



CLT Examples

➢ Sales for one store of a chain may not be normal. But if we take the sales from several stores of the same chain, the total may be normally distributed.

➢ The height of a stack of parts may be normally distributed even if the heights of the individual parts are not normally distributed.

➢ The time you arrive at the office may be normally distributed even if all the traffic lights (on/off times) are not normally distributed.



Glossary (1/5)
A B cont… C Cont… D
•Accuracy •Binomial Distribution •Coefficient of Variation (CV) •Defectives
•Activity Based Accounting •Black Noise •Common Cause •Defects
•Agile •Blockchain •Confidence Interval •Descriptive Analytics
•Algorithm •Bootstrapping •Confidence Level •Design For Assembly (DFA)
•Alternative Hypothesis •Bottleneck •Constraint •Design Risk Analysis (DRA)
•Apples to Oranges •Box plot •Continuous Data •Design of Experiments (DOE)
•Ansoff Matrix •Brainstorming 1 •Continuous Flow •DFMEA
•ARMI •Brainstorming 2 •Control Chart 1 •DFSS
•Artificial Intelligence (AI) 1 •Bubble Plot •Control Chart 2 •Discrete Data
•Artificial Intelligence (AI) 2 •Bullwhip Effect •Control Chart 3 •Discrimination
•Attribute Agreement Analysis •Burn Down Chart •Control Limits •DMADV
•Attribute Data •Burn Up Chart •Control Plan 1 •DMAIC 1
•Automation •Business Analytics •Control Plan 2 •DMAIC 2
•Autonomation •Business Excellence •CONWIP •DMAIC 3
•Correction •DMAIC 4
B C •Corrective Action •DMAIC 5
•Balanced Scorecard 1 •Capability Analysis •Correlation-1 •DPMO 1
•Balanced Scorecard 2 •Capability Index •Correlation-2 •DPMO 2
•Baseline •Capable Process •COPQ, Cost of Poor Quality 1 •Drum Buffer Rope approach
•Bathtub Curve •Cause •COPQ, Cost of Poor Quality 2
•Bayes' Theorem •Causation •Crashing
•Bells and Whistles •Central Limit Theorem 1 •Creativity & Innovation 1
•Benford's Law •Central Limit Theorem 2 •Creativity & Innovation 2
•Ben Franklin Effect •Central Tendency •CRM
•Bessel's Correction •Chaku Chaku •Crowd Intelligence
•Bias •Change Acceleration Process •CTQ
•Biased Sampling Error •Checksheet •Customer
•Big Hairy Audacious Goal •CLOSED MITT •Customer Satisfaction (CSAT)
(B.H.A.G.) •Clusters •Cycle Time



Glossary (2/5)
E F Cont… I L
•80:20 Rule •Fraud •IDOV •Law of Large Numbers
•8D Problem Solving •Fraud Detection •Industry 4.0 •Lead Time
•Effective/Efficient •Fraud Prevention •Instruction Creep •Lean
•Efficiency •Internal Rate of Return (IRR) •Lean Six Sigma
•Efficiency Metrics G •Internet of Things (IoT) •Lessons Learned
•Eisenhower Matrix •Gage R&R 1 •Internet of Things Security •Linearity
•Elevator Pitch •Gage R&R 2 Risks •Little's Law
•Emergency Response Action 1 •Gage R&R 3 •Inter Quartile Range •Logical Sub-grouping
•Emergency Response Action 2 •Gemba •Inventory
•European Foundation for •Genchi Genbutsu •Important vs Urgent M
Quality Management (EFQM) •Green Belt •Ishikawa Diagram •Management By Walking
•Evolution of Six Sigma •Groupthink Around (MBWA)
•Excellence •Guide to Hypothesis Test J •Mean Improvement
•Exponential Function •Jidoka •Measurement System Analysis
H (MSA)
F •Hammurabi Code K •Median
•5S •Handoffs •Kaikaku •Metrics
•False Alert, Missed Alarm •Hansei •Kaizen 1 •Mistake Proofing 1
•Fast Tracking •Hawthorne Effect •Kaizen 2 •Mistake Proofing 2
•Fault Tree Analysis •Herd Immunity •Kakushin •Mistake Proofing 3
•Filter Bubble •Heuristic Methods •Kanban 1 •Mixtures
•Fishbone Diagram •Hidden Factory •Kanban 2 •Monte Carlo Simulation
•First Time Yield •Hypothesis Testing 1 •Kano Model 1
•FMEA 1 •Hypothesis Testing 2 •Kano Model 2
•FMEA 2 •Hypothesis Testing 3 •Kaplan Meier Estimator
•FMECA •Hypothesis Testing 4 •Kappa Value
•Force-Field Analysis •Kendall's Coefficient
•Forecasting •KNOT Technique
•Kurtosis



Glossary (3/5)
N P Cont… P Cont… R Cont…
•Nash Equilibrium •Paynter Chart •Project •Rework
•Net Present Value (NPV)-1 •Personal Excellence •Prototype •Risk Priority Number (RPN)
•Net Present Value (NPV)-2 •PFMEA 1 •Pull System 1 •Robotic Process Automation
•Net Present Value (NPV)-3 •PFMEA 2 •Pull System 2 (RPA) 1
•Net Promoter Score (NPS) •PICK Chart •Purple Cow •Robotic Process Automation
•Nominal Condition •Pilot •Push System 1 (RPA) 2
•Nominal Group Technique •Planning Poker •Push System 2 •Root Cause
(NGT) •Point of Use Inventory •P-value •Root Cause Analysis 1
•Noninferiority Testing •Poka-yoke •Root Cause Analysis 2
•Non Value Adding (NVA)-1 •Power of a test Q •Rolled Throughput Yield
•Non Value Adding (NVA)-2 •Pre-Control Chart •Quality Assurance •R-Squared
•Normal Distribution •Prediction Interval •Quality Control •R-Squared Adjusted
•Null Hypothesis •Predictive Analytics •Quality Costs •Run Chart
•Prescriptive Analytics •Quantile Regression
O •Precision •Quick Win
•Observation •Preventive Action
•One Factor At a Time (OFAT) •Price Elasticity of Demand R
•Operational Definition •Process •RACI
•Operational Excellence •Process Capability •Range
•Optimal Stopping •Process Cycle Efficiency •RATER Model
•Oscillations •Process Excellence •Rational Subgrouping-1
•Outlier •Process Input/Process Output •Rational Subgrouping-2
•Process Mapping 1 •Regression Analysis-1
P •Process Mapping 2 •Regression Analysis-2
•Pacemaker Process •Process Maturity •Regression Testing
•Pareto Chart 1 •Process Stability •Reporting Bias
•Pareto Chart 2 •Product Limit Estimator •Resolution
•Pascal's Triangle •Productivity •Return on Investment (ROI)



Glossary (4/5)
S S cont… T V cont…
•6 Big Losses •Simpson's Paradox •Takt Time 1 •Variation Reduction
•6S •Single Piece Flow •Takt Time 2 •Visual Control
•Sample •SIPOC •Test of Equivalence •VOB
•Sample Size-1 •Sisyphus Effect •Theory of Inventive Problem •VOC-1
•Sample Size-2 •Six Sigma Solving (TIPS) •VOC-2
•Sample Size-3 •Skewness •Time Series •VOC-3
•Sampling •SMED •TIMWOOD •VUCA
•Sampling Error •Spaghetti Diagram •Tollgate Reviews 1
•SCAMPER •Special Cause •Tollgate Reviews 2 W
•Scatter Plot •Special Process •Trends •Wastes-1
•Scope Creep •Special Requirement •Tribal Knowledge •Wastes-2
•Scrum •Specification Limit •TRIZ •Waste Elimination
•Scrum Poker •Sponsor •Turing Test •Web Analytics
•Scrumban •Spread •Type I Error •Weighted Pareto
•Secondary Metric 1 •Sprint •Type II Error •WIFM
•Secondary Metric 2 •Stability •Will Rogers Phenomenon
•Segmentation •Stable Process U •Wisdom of the Crowds
•Sensitivity Analysis •Stakeholder •Unbiased Sampling Error •Work-Out
•Service 4.0 •Stakeholder Engagement •Unusual Observation •WORMPIT
•SERVQUAL Model •Standard Deviation •Urgent
•Seven Wastes of Lean •Statistical Significance
•Severity Ranking •Stop Gap Arrangement V
•Shojinka •Stop The Line •Validation
•Sigma Level-1 •Stratified Sampling •Value Adding (VA)-1
•Sigma Level-2 •SWAG •Value Adding (VA)-2
•Swim Lane Mapping •Value Stream Map (VSM)
•Verification
•Variance



Glossary (5/5)
X
•X
•Xbar-R

Y
•Y
•Y=f(X)
•Yield-1
•Yield-2
•Yokoten

Z
•Zero Defect
•Zipf's Law

Link for the World's Best Business Excellence Dictionary:
https://www.benchmarksixsigma.com/forum/business-excellence-dictionary-glossary/

Want your name to be featured in this dictionary? Provide the best answer to our weekly questions (2 questions per week). Click here to check the current open question.



