IOTA Course Material v3.2
Benchmark workshops have evolved into industry-best offerings in two formats. In the public workshop format, we offer globally recognized programs whose content and methodology can be applied in practically any domain. In the customized in-company workshop format, Benchmark is known for its ability to design and deliver highly valuable programs with demonstrated applicability. Our in-company workshops are offered with or without consulting support.
[Figure: four-step graphic on the selection of methodology for your business-impacting projects.]
© Benchmark Six Sigma. Version 3.2 All Rights Reserved 4
BSS across the world: USA, Indonesia, Netherlands, Oman, Italy, Hungary, Germany, Spain (Madrid)
Attribute/Discrete vs. Continuous data
Attribute/Discrete – Categoric:
➢ Colors
➢ Gender
➢ Region
➢ Satisfaction Scale
Attribute/Discrete – Count:
➢ No. of tickets
➢ No. of defects
➢ No. of clicks
➢ No. of items sold
Continuous:
➢ Dimensions
➢ Profit
➢ Sales Revenue
➢ Handle Time
➢ Lead Time
➢ Turn Around Time
➢ Weight
➢ Temperature
Y = f(x)
Big Y to small y to root cause: Metrics and Measurements → Process Stability → Hypothesis Testing → Experimentation
MSA → Baseline Performance (Discrete Data) → Baseline Performance (Continuous Data) → Probability Calculations
What is the first question you ask when you are about to run an analysis?
Answer: "Can I trust my data?" If the answer to that question is "No", or even "I'm not sure", it is usually pointless to go any further: why would you spend valuable time interpreting data you cannot rely on?
That's where measurement systems analysis, or MSA, comes in.
➢ All data types (Continuous or Discrete): Logical validation
➢ Nominal-Binary (Discrete) data, operator/appraiser only [e.g. Pass/Fail, Go/No-Go]: Attribute agreement analysis (Fleiss Kappa statistic)
When the metric chosen is such that conducting an MSA is not possible or practical, then at the least a Logical Validation should be done to ensure the data is reliable and accurate.
➢ To validate the data, daily dispatches and daily production in SAP can be verified against the physical movement and the actual production, respectively.
2. Create a “master” standard that designates each of the test samples into its true attribute category.
3. Select two or three typical inspectors and have them review the sample items just as they normally would in
the measurement system, but in random order. Record their attribute assessment for each item.
4. Place the test samples in a new random order, and have the inspectors repeat their attribute assessments.
(Don’t reveal the new order to the inspectors!). Record the repeated measurements.
5. For each inspector, go through the test sample items and calculate the percentage of items where their first
and second measurements agree. This percentage is the repeatability of that inspector.
6. Going through each of the sample items of the study, calculate the percentage of times where all of the
inspectors’ attribute assessments agree for the first and second measurements for each sample. This
percentage is the reproducibility of the measurement system.
7. You can also calculate the percentage of the time all the inspectors’ attribute assessments agree with each other and with the “master” standard created in Step 2. This percentage is referred to as the Accuracy (effectiveness) of the measurement system.
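The steps above can be sketched in a few lines of Python. The data here is purely illustrative (a hypothetical study with three inspectors rating ten items twice as "P"/"F"), not the course's own dataset:

```python
# Step 2: "master" standard for each test sample (illustrative data)
master = ["P", "F", "P", "P", "F", "P", "F", "P", "P", "F"]

# Steps 3-4: each inspector's first and second trial ratings
trials = {
    "A": (["P", "F", "P", "P", "F", "P", "F", "P", "P", "F"],
          ["P", "F", "P", "P", "F", "P", "F", "P", "F", "F"]),
    "B": (["F", "F", "P", "P", "F", "P", "F", "P", "P", "F"],
          ["P", "F", "P", "P", "F", "P", "F", "P", "P", "F"]),
    "C": (["P", "F", "P", "P", "F", "P", "F", "P", "P", "F"],
          ["P", "F", "P", "P", "F", "P", "F", "P", "P", "F"]),
}

def repeatability(t1, t2):
    # Step 5: % of items where an inspector's two trials agree
    return 100 * sum(a == b for a, b in zip(t1, t2)) / len(t1)

def reproducibility(trials):
    # Step 6: % of items where ALL inspectors agree across both trials
    n = len(master)
    ok = sum(len({t[i] for pair in trials.values() for t in pair}) == 1
             for i in range(n))
    return 100 * ok / n

def accuracy(trials, master):
    # Step 7: % of items where all inspectors agree AND match the master
    ok = sum({t[i] for pair in trials.values() for t in pair} == {m}
             for i, m in enumerate(master))
    return 100 * ok / len(master)

print(repeatability(*trials["A"]))  # 90.0 (inspector A repeats 9 of 10)
print(reproducibility(trials))      # 80.0
print(accuracy(trials, master))     # 80.0
```

Statistical packages report these same quantities (plus kappa statistics) as Within Appraisers, Between Appraisers, and All Appraisers vs. Standard.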
A call centre for a major credit card company handles incoming calls from customers with account inquiries. The Quality Assurance (QA) department conducts regular inspections of its operators by listening to recordings of the customer calls and classifying the responses as “Good” or “Bad”. In order to ensure high consistency among the QA team members, an MSA study was conducted with 30 recordings. Check if the measurement system is acceptable.
(If the data is Ordinal, check the corresponding box.)
[Figure: two panels of percent matched (70–100%) for Appraisers 1 and 2.]
The blue dot represents the actual percentage matched when measurement by an individual appraiser is repeated; it is an indication of Repeatability. The figure on the right indicates the comparison between the appraiser and the external standard; it is an indication of Accuracy.
Appraiser Repeatability
Ho: Kappa = 0
Ha: Kappa > 0
Team accuracy
I. Within Appraisers: shows how consistent the appraisers (students) are in their assessment. It tells the number of times each of them agrees with him/herself when evaluating the same sample in replication.
II. Each Appraiser vs. Standard:
a. Shows how often each appraiser agreed with the standard (trainer). It gives both the number and the percentage.
b. Appears only when a known standard is given and there are only two levels (true/false, go/no-go). It gives the number and percentage of times each appraiser disagreed with the standard across trials.
III. Between Appraisers: gives the percentage of times all appraisers agreed with each other. The between-appraisers statistics do not compare the appraisers' ratings to the standard: although the appraisers' ratings may be consistent, they are not necessarily correct.
IV. All Appraisers vs. Standard: determines the percentage of times all appraisers agreed with the standard.
Classification Criteria
Kappa values (Nominal Data): Accept > 0.9; Caution 0.7 < Kappa < 0.9; Reject < 0.7
Kendall’s Coefficient (Ordinal Data): Accept > 0.9; Caution 0.7 < Kendall < 0.9; Reject < 0.7
Kendall's coefficient
Kappa statistics represent absolute agreement between ratings, while Kendall's coefficients measure the association between ratings. Kappa statistics therefore treat all misclassifications equally, but Kendall's coefficients do not. For instance, Kendall's coefficients consider the consequences of misclassifying a perfect object (rating = 5) as bad (rating = 1) more serious than misclassifying it as very good (rating = 4).
Kendall's coefficient values can range from −1 to 1. A
positive value indicates positive association. A negative value
indicates negative association. The higher the magnitude, the
stronger the association.
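The distinction can be shown with made-up ratings on a 1–5 scale. Two scenarios have the same agreement count, but in one the single miss is "near" (a 5 rated as 4) and in the other "far" (a 5 rated as 1). Kappa is identical in both, while a simple un-tied Kendall tau (tau-a, used here only for illustration) drops for the far miss:

```python
def cohen_kappa(x, y, cats=range(1, 6)):
    # Two-rater kappa: absolute agreement corrected for chance
    n = len(x)
    po = sum(a == b for a, b in zip(x, y)) / n
    pe = sum((x.count(c) / n) * (y.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

def kendall_tau_a(x, y):
    # (concordant pairs - discordant pairs) / total pairs
    n = len(x)
    c = d = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                c += 1
            elif s < 0:
                d += 1
    return (c - d) / (n * (n - 1) / 2)

truth = [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
near  = [1, 2, 3, 4, 5, 1, 2, 3, 4, 4]  # last item off by one category
far   = [1, 2, 3, 4, 5, 1, 2, 3, 4, 1]  # last item off by four categories

print(cohen_kappa(truth, near), cohen_kappa(truth, far))       # both 0.875
print(kendall_tau_a(truth, near) > kendall_tau_a(truth, far))  # True
```

This is why Kendall's coefficient is the preferred criterion for ordinal data in the classification table above.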
Reproducibility
Appraiser accuracy
Measurement value = True value + Measurement error
MSA implies assessing the existing measurement system for the metric chosen for the Project Y.
No parameter can be measured perfectly accurately!
Observed Variation = True Variation + Measurement System Error
The observed variation is the sum of the part variation and the measurement system variation.
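The relationship above can be checked numerically. Note that it is the variances that add, not the standard deviations (the numbers below are illustrative, not from the course):

```python
import math

# Illustrative values: true part-to-part spread and measurement spread
sd_part = 2.0  # standard deviation of part variation
sd_ms = 0.5    # standard deviation of measurement system error

# Variances add; standard deviations do not
var_observed = sd_part**2 + sd_ms**2
sd_observed = math.sqrt(var_observed)
print(round(sd_observed, 3))  # 2.062, only slightly above the true 2.0
```

A measurement system whose spread is small relative to the part variation inflates the observed variation only slightly, which is the basis of the Gage R&R acceptance criteria later in this section.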
Measurement System Analysis
When data is collected, the observed process variation includes the
following:
Accuracy vs. Precision (LSL – T – USL):
➢ When the process mean is off target, the solution is to shift the mean (an accuracy problem).
➢ When the process is precise about an off-target mean but the spread is too wide, the solution is to squeeze the spread (a precision problem).
➢ Linearity: a measure of how the size of the part affects the bias of
the measurement system; the difference in the observed bias
values through the expected range of measurements.
[Figure: observed vs. true values for the same measured item, showing bias and a non-linear relationship.]
Instability: inaccuracy over time (e.g. readings at Time 4:00 vs. Time 6:00 drift relative to LSL/USL).
Non-linearity: inaccuracy over the range of measured values.
“If you cannot express what you know as figures, that means you do not know it clearly. If you do not know it clearly, you cannot control it. If you cannot control it, luck will be the deciding factor of the outcome.”
……Mikel J. Harry
o Poor accuracy
o Poor resolution
o Poor precision
o Poor stability
o Poor linearity
❑ Repeatability
o “Within Operator Variation” – the variation in measurements obtained with the SAME MEASURING INSTRUMENT when used several times by ONE APPRAISER while measuring the identical characteristic on the SAME PART.
❑ Reproducibility
o “Between Operator Variation” – the variation in the averages of measurements made by DIFFERENT APPRAISERS using the SAME MEASURING INSTRUMENT when measuring the identical characteristic on the SAME PART.
Path: Stat > Quality Tools > Gage Study > Gage R&R Study (Crossed)
Classification Criteria
* Usually this value is chosen as 5.15 because 99% of a normal distribution lies within 2.575 standard deviations on either side of the mean.
➢ Yield % or Defective %
Example: Phone calls are recorded at a call center and later evaluated. A sample of 100 calls is heard by the quality representatives and evaluated based on a Call Quality Checklist. There are 10 opportunities for error in each call. 150 defects were observed in the sample of 100 calls. What is the DPMO?
OFE – Opportunities For Error per unit = number of characteristics inspected per unit
DPO – Defects Per Opportunity
DPMO – Defects Per Million Opportunities
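These definitions give the call-centre example above directly:

```python
# Call-centre example: 100 calls, 10 opportunities for error per call,
# 150 defects observed in the sample.
units = 100
ofe = 10       # opportunities for error per unit
defects = 150

dpo = defects / (units * ofe)                 # defects per opportunity
dpmo = defects * 1_000_000 / (units * ofe)    # defects per million opportunities
print(dpo, dpmo)  # 0.15 150000.0
```

So the process is running at 150,000 DPMO.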
Zoverall = −0.34
Defects = 200
Practice Exercise 7: 2000 lines of delivered code were inspected and 100 errors were found. Use the appropriate metric to baseline the process.
Practice Exercise 8: Loan files are being evaluated at a bank. A sample of 200 files was checked by the quality representatives and evaluated based on a Quality Checklist. There are 8 opportunities for error in each file. 50 defects were observed in the sample of 200 files. Use the appropriate metric to baseline the process.
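One way to work these exercises, using the DPU and DPMO definitions above (a sketch, with the metric choice based on whether OFE is known):

```python
# Exercise 7: OFE per line of code is difficult to ascertain, so use DPU.
lines_inspected = 2000
errors_found = 100
dpu = errors_found / lines_inspected
print(dpu)  # 0.05 defects per unit (per line)

# Exercise 8: OFE is known (8 per file), so DPMO applies.
files = 200
ofe = 8
defects = 50
dpmo = defects * 1_000_000 / (files * ofe)
print(dpmo)  # 31250.0
```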
✓ Kinds of Yield
✓ First Time Yield (FTY)
✓ Used to determine the quality level of an individual sub-process
Five sub-process steps: fill out requisition → submit requisition to purchase co-ordinator → requisition entered into the computer → purchase order sent to supplier → confirmation sent to requisition dept, with first-time yields of 92%, 95%, 85%, 95% and 99%.
RTY = 0.92 × 0.95 × 0.85 × 0.95 × 0.99 ≈ 0.699 (69.9%)
You own a Pizza shop. Lately you have observed a lot of wastage in various steps of the process, leading to high expense on raw materials. To resolve the issue you have declared that employees will get a bonus at the end of the quarter only if the overall wastage of the process is less than 30%. The data of the FTY of each sub-process (at the end of the quarter) is given above. Will you dispense the bonus?
FTY of 92.5% (waste 7.5%) and FTY of 94.4% (waste 5.6%)
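A sketch of the bonus decision, assuming the process consists of exactly the two sub-processes whose FTYs are listed (92.5% and 94.4%):

```python
# Assumed sub-process first-time yields from the figures above
ftys = [0.925, 0.944]

# Rolled throughput yield is the product of the sub-process FTYs
rty = 1.0
for f in ftys:
    rty *= f

wastage = 1 - rty
print(round(wastage, 4))  # 0.1268 -> 12.68% overall wastage
```

Since 12.68% is well below the 30% threshold, under this assumption the bonus would be paid.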
Data → Method
➢ Defects (Discrete), OFE known → DPMO (Defects Per Million Opportunities)
➢ Defects (Discrete), OFE difficult to ascertain → DPU (Defects Per Unit)
➢ Defectives (Discrete) → Yield % / Defective %
[Figure: normal curve with the axis marked from −6σ to +6σ around μ.]
Discrete PD
Probability Distribution (PD)
Distribution Plot: Normal, Mean = 2.5035, StDev = 0.5203
Because we want completion within 2 days, we select the left tail: with the current process parameters there is a 16.66% chance of completion within 2 days (P(X ≤ 2) = 0.1666).
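The left-tail probability above can be reproduced with Python's standard library:

```python
from statistics import NormalDist

# P(X <= 2) for X ~ Normal(mean=2.5035, sd=0.5203): the left tail at 2 days
p = NormalDist(mu=2.5035, sigma=0.5203).cdf(2)
print(round(p, 4))  # ≈ 0.1666
```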
Distribution Plot: Binomial, n = 100, p = 0.1
There is a 5.7% chance that the next 100 calls will have 5% or fewer defective calls (P(X ≤ 5) ≈ 5.7%).
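The 5.7% figure follows from summing the binomial formula over 0 to 5 defectives:

```python
from math import comb

# P(X <= 5) for X ~ Binomial(n=100, p=0.1): at most 5 defective calls
n, p = 100, 0.1
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(6))
print(round(prob, 4))  # ≈ 0.0576
```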
Distribution Plot: Poisson, Mean = 0.824
The chance of having a transaction with no defects is 43.9% (P(X = 0) = 0.4387).
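The 43.9% figure is simply e^(−0.824), the Poisson probability of zero events:

```python
from math import exp, factorial

# P(X = 0) for X ~ Poisson(mean=0.824): a transaction with no defects
mean = 0.824
p0 = exp(-mean) * mean**0 / factorial(0)  # reduces to exp(-0.824)
print(round(p0, 4))  # ≈ 0.4387
```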
Definition
Consider a sample of size “n” with mean x̄ and variance s², taken from a random population with mean µ and variance σ². The central limit theorem states that, for large values of n, the distribution of the sample mean (or sum) will converge to a normal distribution even though the random variable x (the individual data points) may be non-normal.
This may be verified by taking sample data and increasing the sample size in steps.
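A quick simulation illustrates this convergence. The population here is exponential (heavily skewed, clearly non-normal) with mean 1; the parameters are illustrative:

```python
import random
from statistics import mean, stdev

random.seed(42)  # fixed seed so the run is repeatable

# Draw many samples of size n from an exponential population (mean 1.0)
n, num_samples = 50, 2000
sample_means = [mean(random.expovariate(1.0) for _ in range(n))
                for _ in range(num_samples)]

# CLT prediction: sample means ~ Normal(mu, sigma/sqrt(n))
print(round(mean(sample_means), 2))   # close to the population mean 1.0
print(round(stdev(sample_means), 2))  # close to 1/sqrt(50) ≈ 0.14
```

Increasing n tightens the distribution of the sample means and makes its shape more symmetric, exactly as the theorem states.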
Y
•Y
•Y=f(X)
•Yield-1
•Yield-2
•Yokoten
Z
•Zero Defect
•Zipf's Law