Lean Six Sigma Measure Phase
Measure Phase
Welcome to Measure
Now that we have completed the Define Phase we are going to jump into the Measure Phase. This welcome will give you a brief look at the topics we are going to cover.
Welcome to Measure
Overview
• Welcome to Measure
• Process Discovery
• Six Sigma Statistics
• Measurement System Analysis
• Process Capability
• Wrap Up & Action Items
DMAIC Roadmap
(Roadmap graphic: the Champion/Process Owner determines the appropriate project focus; Define includes estimating the COPQ and establishing the team; the phases then proceed through Measure and onward to verifying the financial impact.)
Here is the overview of the DMAIC process. Within Measure we are going to start getting into details about
process performance, measurement systems and variable prioritization.
Welcome to Measure
Select the Vital Few X’s Causing Problems (X-Y Matrix, FMEA)
(Flowchart decision shown: Repeatable & Reproducible? Yes/No)
This provides a process look at putting “Measure” to work. By the time we complete this phase you will have a thorough understanding of the various Measure Phase concepts.
Measure Phase
Process Discovery
Overview
• Welcome to Measure
• Process Discovery
– Cause and Effect Diagrams
– Detailed Process Mapping
– FMEA
• Six Sigma Statistics
• Measurement System Analysis
• Process Capability
• Wrap Up & Action Items
The purpose of this module is highlighted above. We will review tools to help facilitate Process Discovery.
This will be a lengthy step as it requires a full characterization of your selected process.
On the next lesson page we will help you develop a visual and mental model that will give you leverage in finding the causes of any problem.
Process Discovery
(Fishbone diagram: the Y, or Problem Condition, is the effect; the X’s, or Causes, branch off under category headings such as Material, Measurement and Environment.)
You will need to use brainstorming techniques to identify all possible problems and their causes. Brainstorming techniques work because the knowledge and ideas of two or more persons are always greater than those of any one individual. Brainstorming will generate a large number of ideas or possibilities in a relatively short time. Brainstorming tools are meant for teams, but can be used at the individual level also. Brainstorming will be a primary input for other improvement and analytical tools that you will use.
You will learn two excellent brainstorming techniques: cause and effect diagrams and affinity diagrams. Cause and effect diagrams are also called Fishbone Diagrams because of their appearance, and sometimes Ishikawa diagrams after their inventor.
In a brainstorming session, ideas are expressed by the individuals in the session and written down without debate or challenge. The general steps of a brainstorming session are:
Process Discovery
A cause and effect diagram is a composition of lines and words representing a meaningful relationship between an effect, or condition, and its causes. To focus the effort and facilitate thought, the legs of the diagram are given categorical headings. Two common templates for the headings are for product-related and transactional-related efforts. Transactional is meant for processes where there is no traditional or physical product; rather, it is more like an administrative process. Transactional processes are characterized as processes dealing with forms, ideas, people, decisions and services. You would most likely use the product template for determining the cause of burnt pizza, and use the transactional template if you were trying to reduce order defects from the order-taking process. A third approach is to identify all categories as you best perceive them.
When performing a cause and effect diagram, keep drilling down, always asking why, until you find the root causes of the problem. Start with one category and stay with it until you have exhausted all possible inputs, then move to the next category. The next step is to rank each potential cause by its likelihood of being the root cause: rank the most likely as a 1, the second most likely as a 2, and so on. This may take some time; you may even have to create sub-sections like 2a, 2b, 2c, etc., then come back to reorder the sub-sections into the larger ranking. This is your first attempt at really finding the Y=f(X); remember the funnel? The top X’s have the potential to be the Critical X’s, those X’s which exert the most influence on the output Y.
Finally, you will need to determine if each cause is a Control or a Noise factor. This, as you know, is a requirement for the characterization of the process. Next we will explain the meaning and methods of using some of the common categories.
Process Discovery
The People category groups root causes related to people, staffing, and organizations.
Examples of questions to ask:
• Are people trained; do they have the right skills?
• Is there person-to-person variation?
• Are people over-worked?
The Method category groups root causes related to how the work is done, the way the process is actually conducted.
Examples of questions to ask:
• How is this performed?
• Are procedures correct?
• What might be unusual?
The Materials category groups root causes related to parts, supplies, forms or information needed to execute a process.
Process Discovery
The Equipment category groups root causes related to tools used in the process.
Examples of questions to ask:
• Have machines been serviced recently; what is the uptime?
• Have tools been properly maintained?
• Is there variation?
The Environment (a.k.a. Mother Nature) category groups root causes related to our work environment, market conditions, and regulatory issues.
Examples of questions to ask:
• Is the workplace safe and comfortable?
• Are outside regulations impacting the business?
• Does the company culture aid the process?
For each of the X’s identified in the Fishbone Diagram, classify them as follows:
– Controllable – C (Knowledge)
– Procedural – P (People, Systems)
– Noise – N (External or Uncontrollable)
WHICH X’S CAUSE DEFECTS?
The Cause and Effect Diagram is an organized way to approach brainstorming. This approach allows us to further organize ourselves by classifying the X’s into Controllable, Procedural or Noise types.
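As a small illustration only (the cause names, ranks and classifications below are hypothetical, not drawn from the course example), the ranking and classification steps just described can be sketched in Python:

```python
# Hypothetical candidate X's from a fishbone session:
# (cause, rank, classification), where rank 1 = most likely root cause and
# classification is "C" (Controllable), "P" (Procedural) or "N" (Noise).
candidate_xs = [
    ("oven temperature setting",  2, "C"),
    ("startup inspection skipped", 3, "P"),
    ("ambient humidity",           4, "N"),
    ("ingredient moisture",        1, "N"),
]

labels = {"C": "Controllable", "P": "Procedural", "N": "Noise"}

# Order the X's by likelihood of being the root cause; the top-ranked X's
# are the candidates for the Critical X's in Y = f(X).
for cause, rank, cls in sorted(candidate_xs, key=lambda x: x[1]):
    print(f"{rank}. {cause} ({labels[cls]})")
```

Even at this small scale the point holds: the list forces an explicit rank and an explicit C/P/N call for every X before any data is collected.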
Process Discovery
(Example Cause and Effect Diagram for Chemical Purity, with each input classified: Measurement capability (C), Adherence to procedure (P), Specifications (C), Startup inspection (P), Room humidity (N), Column capability (C), Handling (P), RM supply in market (N), Nozzle type (C), Purification method (P), Shipping methods (C), Temp controller (C), Data collection/feedback (P).)
This example of the Cause and Effect Diagram is of chemical purity. Notice how the input variables for each branch are classified as Controllable, Procedural and Noise.
Below is a Cause & Effect Diagram for surface flaws. The next few
slides will demonstrate how to create it in MINITAB™.
The Fishbone Diagram shown here for surface flaws was generated in MINITAB™. We will now
review the various steps for creating a Cause and Effect Diagram using the MINITAB™
statistical software package.
Process Discovery
Open the MINITAB™ Project “Measure Data Sets.mpj” and select the worksheet Surfaceflaws.mtw.
Take a few moments to study the worksheet. Notice the first 6 columns are the classic bones for a Fishbone. Each subsequent column is labeled for one of the X’s listed in one of the first six columns; these are the secondary bones.
After you have entered the labels, click on the first field under the “Causes” column to bring up the list of branches on the left-hand side. Next, double-click the first branch name on the left-hand side to move “C1 Man” underneath “Causes”.
Process Discovery
To continue identifying the secondary branches, select the “Sub…” button to the right of the “Label” column.
To adjust the Fishbone Diagram so the main cause titles are not rolled, grab the line with your mouse and move the entire bone.
Process Discovery
(Process map graphic: Start → Step A → Step B → Step C → Step D → Finish, with inspection.)
Process Mapping, also called flowcharting, is a technique to visualize the tasks, activities and steps necessary to produce a product or a service. The preferred method for describing a process is to identify it with a generic name, show the workflow with a Process Map and describe its purpose with an operational description.
Remember that a process is a blending of inputs to produce some desired output. The intent of each
task, activity and step is to add value, as perceived by the customer, to the product or service we are
producing. You cannot discover if this is the case until you have adequately mapped the process.
Individual maps developed by Process Members form the basis of Process Management. The
individual processes are linked together to see the total effort and flow for meeting business and
customer needs.
In order to improve or to correctly manage a process, you must be able to describe it in a way that can be easily understood; that is why the first activity of the Measure Phase is to adequately describe the process under investigation. Process Mapping is the most important and powerful tool you will use to improve the effectiveness and efficiency of a process.
Process Discovery
Process Mapping
Then there is the third view: “what it should be”. This is the result of process improvement activities. It is precisely what you will be doing to the key process you have selected during the weeks between classes. As a result of your project you will either have created the “what it should be” or will be well on your way to getting there. In order to find the “what it should be” process, you have to learn process mapping and literally “walk” the process via a team method to document how it works. This is a much easier task than you might suspect, as you will learn over the next several lessons.
Process Discovery
There may be several interpretations of some of the Process Mapping symbols; however, just about everyone uses these primary symbols to document processes. As you become more practiced you will find additional symbols useful, e.g. reports, data storage, etc. For now we will start with just these symbols.
Process Discovery
Level 1 – The Macro Process Map, sometimes called a Management level or viewpoint.
(Level 1 example: Customer Hungry → Calls for Order → Take Order → Make Pizza → Cook Pizza → Box Correct Pizza → Deliver Pizza → Customer Eats.)
(Level 2 example: Take Order from Cashier → Add Ingredients → Place in Oven → Observe Frequently → Check if Done? If no, keep observing, or Scrap and Start New Pizza; if yes, Remove from Oven → Pizza Correct? If no, Scrap; if yes, Tape Order on Box → Place in Box → Put on Delivery Rack.)
Before Process Mapping starts, you have to learn about the different levels of detail on a Process Map and the different types of Process Maps. Fortunately these have been well categorized and are easy to understand.
There are three different levels of Process Maps. You will need to use all three levels and you most
likely will use them in order from the macro map to the micro map. The macro map contains the
least level of detail, with increasing detail as you get to the micro map. You should think of and use
the level of Process Maps in a way similar to the way you would use road maps. For example, if
you want to find a country, you look at the world map. If you want to find a city in that country, you
look at the country map. If you want to find a street address in the city, you use a city map. This is
the general rule or approach for using Process Maps.
The Macro Process Map, what is called the Level 1 Map, shows the big picture; you will use this to orient yourself to the way a product or service is created. It will also help you to better see which major step of the process is most likely related to the problem you have, and it will put the various processes that you are associated with in the context of the larger whole. A Level 1 PFM, sometimes called the “management” level, is a high-level process map having the following characteristics:
Process Discovery
Probably not; you are going to need a Level 3 Map, called the Micro Process Map. It is also known as the improvement view of a process. There is, however, a lot of value in the Level 2 Map, because it is helping you to “see” and understand how work gets done, who does it, etc. It is a necessary stepping stone to arriving at improved performance.
Next we will introduce the four different types of Process Maps. You will want to use different types of Process Maps to better help see, understand and communicate the way processes behave.
(Pizza example drawn as a Value Stream Map: Take Order → Make Pizza → Cook Pizza → Box Correct Pizza → Deliver Pizza.)
While they all show how work gets done, they emphasize different aspects of process flow and provide you with alternative ways to understand the behavior of the process so you can do something about it. The Linear Flow Map is the most traditional and is usually where most start the mapping effort.
The value of the Swim Lane Map is that it shows you who or which department is responsible for the steps in a process. This can provide powerful insights into the way a process performs. A timeline can be added to show how long it takes each group to perform their work. Also, each time work moves across a swim lane, there is a “Supplier – Customer” interaction. This is usually where bottlenecks and queues form.
The Swim Lane Map adds another dimension of knowledge to the picture of the process: now you can see which department, area or person is responsible. You can use the various types of maps in the form of any of the three levels of a Process Map.
Process Discovery
(Linear Process Map for Door Manufacturing: Begin → Prep doors → Inspect → Pre-cleaning → Mark for drilling → Install into work jig → Drill holes → De-burr and smooth hole → Apply part number → Move to finishing → Light sanding → Apply stain and dry → Scratch repair → Final cleaning → Inspect → End, with Return for rework, Rework and Scrap loops at the inspection points.)
(Swim Lane Process Map for Capital Equipment, with lanes for the Business Unit, I.T., Finance, Corporate/Top Mgt, Procurement and the Supplier: Define needs → Prepare paperwork (CAAR & installation request) → Review & approve CAAR at each level → Acquire equipment → Supplier ships → Configure & install → Issue payment → Supplier paid. The timeline across the bottom reads 21, 6, 15, 5, 17, 7, 71 and 50 days.)
The SIPOC diagram facilitates your gathering of other pertinent data that is affecting the process in a systematic way. It will help you to better see and understand all of the influences affecting the behavior and performance of the process. The SIPOC diagram is especially useful after you have been able to construct either a Level 1 or Level 2 Map.
(Level 1 Process Map for Customer Order Process: Call for an Order → Answer Phone → Write Order → Confirm Order → Set Price → Address & Phone → Order to Cook. Data captured includes: name, phone number, time, day and date, volume, drink types & quantities, other products, delivery info, order transaction, correct price.)
You may also add a requirements section to both the supplier side and the customer side to capture the expectations for the inputs and the outputs of the process. Doing a SIPOC is a great building block to creating the Level 3 Micro Process Map. The two really complement each other and give you the power to make improvements to the process.
Process Discovery
A major use of this Process Map level is finding bottlenecks in the process. (The example map shows queue times of 2.65, 20.47, 16.9, 1.60 and 7.57 days between steps.) The Value Stream Map is a very powerful technique to understand the velocity of process transactions, queue levels and value added ratios in both manufacturing and non-manufacturing processes.
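Using the queue times shown on the example map, and purely hypothetical hands-on processing times per step (the 0.10–0.25 day values below are assumptions for illustration), a value added ratio could be estimated like this:

```python
# Queue (wait) times between steps, in days, from the example map.
queue_days = [2.65, 20.47, 16.9, 1.60, 7.57]

# Hands-on processing time per step, in days (hypothetical values).
process_days = [0.10, 0.25, 0.05, 0.15, 0.20]

# Total lead time is everything the work experiences: waiting plus processing.
total_lead_time = sum(queue_days) + sum(process_days)

# Value added ratio: fraction of the lead time spent actually working on the unit.
value_added_ratio = sum(process_days) / total_lead_time

print(f"Total lead time:   {total_lead_time:.2f} days")
print(f"Value added ratio: {value_added_ratio:.1%}")
```

With these numbers the work spends roughly 98% of its lead time waiting in queues, which is exactly the kind of bottleneck signal the Value Stream Map is designed to surface.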
Read the following background for the exercise: You have been concerned about your ability to arrive at work on time and also the amount of time it takes from the time your alarm goes off until you arrive at work. To help you better understand both the variation in arrival times and the total time, you decide to create a Level 1 Macro Process Map. For purposes of this exercise, the start is when your alarm goes off the first time and the end is when you arrive at your work station.
Task 1 – Mentally think about the various tasks and activities that you routinely do from the defined start to the end points of the exercise.
Task 2 – Using a pencil and paper, create a linear process map at the macro level, but with enough detail that you can see all the major steps of your process.
Task 3 – From the Linear Process Map, create a swim lane style Process Map.
For the lanes you may use the different phases of your process, such as the
wake up phase, getting prepared, driving, etc.
Process Discovery
Process Mapping
Process Mapping follows a general order, but sometimes you may find it necessary, even advisable, to deviate somewhat. However, you will find this a good path to follow as it has proven itself to generate significant results. On the lessons ahead we will always show you where you are in this sequence of tasks for Process Mapping:
Select the process → Determine the approach to map the process → Complete the Level 1 PFM worksheet → Create the Level 1 PFM → Define the scope for the Level 2 PFM → Create the Level 2 PFM → Perform SIPOC → Identify all X’s and Y’s → Identify customer requirements → Identify supplier requirements → Create a Level 3 PFM → Add performance data → Identify VA/NVA steps.
Before we begin our Process Mapping we will first start you off with how to determine the approach to mapping the process.
Basically there are two approaches: the individual and the team approach.
If you decide to do the individual approach, here are a few key factors: You must pretend that you are the product or service flowing through the process, and you are trying to “experience” all of the tasks that happen through the various steps.
You must start by talking to the manager of the area and/or the process owner. This is where you will develop the Level 1 Macro Process Map. While you are talking to him, you will need to receive permission to talk to the various members of the process in order to get the detailed information you need.
Process Discovery
Process Mapping
Process Mapping works best with a team approach. The logistics of performing the mapping are somewhat different, but overall it takes less time, the quality of the output is higher and you will have more “buy-in” to the results. Input should come from individuals familiar with all stages of the process.
Using the Team Approach:
1. Start with the Level 1 Macro Process Map.
2. Meet with process owner(s)/manager(s). Create a Level 1 Map and obtain approval to call a process mapping meeting with process members (see the team workshop instructions for details on running the meeting).
3. Bring key members of the process into the process flow workshop. If the process is large in scope, hold individual workshops for each subsection of the total process, starting with the beginning steps. Organize the meeting to use the “post-it note” approach to gather the individual tasks and activities, based on the macro map, that comprise the process.
4. Immediately assemble the information that has been provided into a Process Map.
5. Verify the PFM by discussing it with process owners and by observing the actual process from beginning to end.
Where appropriate the team should include line individuals, supervisors, design engineers, process
engineers, process technicians, maintenance, etc. The team process mapping workshop is where it
all comes together.
In summary, after adding to and agreeing on the Macro Process Map, the team process mapping approach is performed using multiple post-it notes, where each person writes one task per note and, when finished, places them onto a wall which contains a large-scale Macro Process Map.
This is a very fast way to get a lot of information, including how long it takes to do a particular task. Using the Value Stream Analysis techniques which you will study later, you will use this data to improve the process. We will now discuss the development of the various levels of Process Mapping.
Process Discovery
A Macro Process Map can be useful when reporting project status to management. A macro-map can show the scope of the project, so management can adjust their expectations accordingly. Remember, only major process steps are included. For example, a step listed as “Plating” in a manufacturing Macro Process Map might actually consist of many steps: pre-clean, anodic cleaning, cathodic activation, pre-plate, electro-deposition, reverse-plate, rinse and spin-dry, etc. The plating step in the macro-map will then be detailed in the Level 2 Process Map.
Exercise – Generate a Level 1 PFM
Process Discovery
If necessary, you may look at the example for the Pizza order entry process.
1. Identify a generic name for the process (e.g. customer order process).
2. Mentally “walk” through the major steps of the process and write them down (receive the order via phone call from the customer, calculate the price, create a build order and provide the order to the chef).
Process Discovery
The Level 2 Process Map is where you get into the details. If the efficiency or effectiveness of the process could be significantly improved by a broad summary analysis, the improvement would be done already. If you map the process at an actionable level, you can identify the source of inefficiencies and defects. But you need to be careful about mapping too little an area and missing your problem cause, or mapping too large an area in detail, thereby wasting your valuable time.
The rules for determining the Level 2 Process Map scope:
• From your Macro Process Map, select the area which represents your problem.
• Map this area at Level 2.
• Start and end at natural starting and stopping points for a process; in other words, you have the complete associated process.
(The pizza Level 2 map is shown again alongside for reference.)
Process Discovery
Building a SIPOC
The tool name prompts the team to consider the suppliers (the 'S' in SIPOC) of your process, the
inputs (the 'I') to the process, the process (the 'P') your team is improving, the outputs (the 'O') of
the process and the customers (the 'C') that receive the process outputs.
Requirements of the customers can be appended to the end of the SIPOC for further detail and
requirements are easily added for the suppliers as well.
The SIPOC tool is particularly useful in identifying:
• Who supplies inputs to the process?
• What are all of the inputs to the process we are aware of? (Later in the DMAIC methodology you will use other tools which will find still more inputs; remember Y=f(X), and if we are going to improve Y, we are going to have to find all the X’s.)
• What specifications are placed on the inputs?
• What are all of the outputs of the process?
• Who are the true customers of the process?
• What are the requirements of the customers?
You can actually begin with the Level 1 PFM that has 4 to 8 high-level steps, but a Level 2 PFM is of even more value. When creating a SIPOC with a process mapping team, the recommended method is again a wall exercise, similar to your other process mapping workshop. Create an area that will allow the team to place post-it note additions to 8.5 x 11 sheets with the letters S, I, P, O and C on them, with a copy of the Process Map below the sheet with the letter P on it.
Hold a process flow workshop with key members. (Note: If the process is large in scope, hold an individual workshop for each subsection of the total process, starting with the beginning steps.)
The preferred order of the steps is as follows:
1. Identify the outputs of this overall process.
2. Identify the customers who will receive the outputs of the process.
3. Identify the customers’ preliminary requirements.
4. Identify the inputs required for the process.
5. Identify suppliers of the required inputs that are necessary for the process to function.
6. Identify the preliminary requirements of the inputs for the process to function properly.
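The six steps above can be captured in a simple structure. Here is a minimal sketch using the pizza order process from this lesson; the individual entries are illustrative, not a complete SIPOC:

```python
# Minimal SIPOC record, filled in the preferred order described above.
# All list entries are illustrative examples, not an exhaustive SIPOC.
sipoc = {
    "Process":               "Customer order process",
    "Outputs":               ["Confirmed order", "Build order to chef"],      # step 1
    "Customers":             ["Hungry customer", "Chef"],                     # step 2
    "Customer requirements": ["Correct price", "Accurate order"],             # step 3
    "Inputs":                ["Phone call", "Menu prices"],                   # step 4
    "Suppliers":             ["Customer", "Pricing list owner"],              # step 5
    "Supplier requirements": ["Clear order details", "Current prices"],       # step 6
}

for section, items in sipoc.items():
    print(f"{section}: {items}")
```

A flat structure like this mirrors the wall exercise: one sheet per letter, with post-it notes accumulating under each.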
Process Discovery
The Excel spreadsheet is somewhat self-explanatory. You will use a similar form for identifying the supplier requirements. Start by writing in the process name, followed by the process operational definition. The operational definition is a short paragraph which states why the process exists, what it does and what its value proposition is. Always take sufficient time to write this such that anyone who reads it will be able to understand the process. Then list each of the outputs, the Y’s, and write in the name of the customer who receives each output, categorized as an internal or external customer.
Next are the requirements data. To specify and measure something, it must have a unit of measure, called a metric. As an example, the metric for the speed of your car is miles per hour; for your weight it is pounds; for time it is hours or minutes, and so on. You may know what the LSL and USL are, but you may not have a target value. A target is the value the customer prefers all the output to be centered at; essentially, the average of the distribution. Sometimes it is stated as “1 hour +/- 5 minutes”: one hour is the target, the LSL is 55 minutes and the USL is 65 minutes. A target may not be specified by the customer; if not, put in what the average would be. You will want to minimize the variation from this value.
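Following the “1 hour +/- 5 minutes” example above, a requirement can be checked against its LSL, target and USL like this (the measured delivery times below are made up for illustration):

```python
# Requirement from the example: target 60 minutes, LSL 55, USL 65.
lsl, target, usl = 55.0, 60.0, 65.0

# Hypothetical measured delivery times, in minutes.
measurements = [58.0, 61.5, 66.0, 54.0, 60.2]

# Flag each measurement against the specification limits and show its
# deviation from the target, the value you want to minimize variation around.
for value in measurements:
    status = "within spec" if lsl <= value <= usl else "OUT OF SPEC"
    print(f"{value:5.1f} min  (deviation from target {value - target:+5.1f})  {status}")
```

Note that being within spec is not the goal by itself; the customer prefers output centered on the target, so the deviation column matters even for conforming values.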
You will learn more about measurement, but for now you must know that if something is required, you
must have a way to measure it as specified in column 9. Column 10 is how often the measurement is
made and column 11 is the current value for the measurement data. Column 12 is for identifying if this is
a value or non value added activity; more on that later. And finally column 13 is for any comments you
want to make about the output.
You will come back to this form and rank the significance of the outputs in terms of importance to identify the CTQ’s.
Process Discovery
(The form’s columns include: Metric, LSL, Target, USL, Measurement System (How is it Measured), Frequency of Measurement, Performance Level Data, VA or NVA, and Comments.)
Later you will come back to this form and rank the importance of the inputs to the success of your
process and eventually you will have found the Critical X’s.
Process Discovery
It is important to distinguish which category an input falls into. You know through Y=f(X) that if it is a Critical X, by definition, you must control it. Also, if you believe that an input is or needs to be controlled, then you have automatically implied there are requirements placed on it and that it must be measured. You must always think and ask whether an input is or should be controlled, or if it is uncontrolled.
Read the following background for the exercise: You will use your selected key process for this exercise (if more than one person in the class is part of the same process, you may do it as a small group). You may not have all the pertinent detail to correctly identify all supplier requirements; that is OK, do the best you can. This will give you a starting template when you go back to do your workplace assignment. Use the process input identification and analysis form for this exercise.
Task 1 – Identify a generic name for the process.
Task 2 – Write an operational description for the process.
Task 3 – Complete the remainder of the form except the Value/Non-value added column.
Task 4 – Report out to the class when called upon.
Process Discovery
(The pizza Level 2 Process Map, from Take Order from Cashier through Put on Delivery Rack, is shown again here as the worked example for the forms below.)
(Output Identification and Analysis form: Process Output – Name (Y); Customer (Name), Internal or External; Requirements Data: Metric, LSL, Target, USL; Measurement Data: Measurement System (How is it Measured), Frequency of Measurement, Performance Level Data; Value Data: VA or NVA; General Data/Information: Comments.)
(Input Identification and Analysis form: Process Input – Name (X), Controlled (C) or Noise (N); Supplier (Name), Internal or External; Requirements Data: Metric, LSL, Target, USL; Measurement Data: Measurement System (How is it Measured), Frequency of Measurement, Performance Level Data; Value Data: VA or NVA; General Data/Information: Comments.)
You have a decision at this point: to continue with a complete characterization of the process you have documented at a Level 2 in order to fully build the process management system, or to narrow the effort by focusing on those steps that are contributing to the problem you want solved.
Usually just a few of the process steps are the root cause areas for any given higher-level process output problem. If your desire is the latter, there are some other Measure Phase actions and tools you will have to use to narrow the number of potential X’s and, subsequently, the number of process steps.
To narrow the scope so it is relevant to your problem, consider the following: Remember using the pizza restaurant as our example for selecting a key process? They were having a problem with overall delivery time and burnt pizzas. Which steps in this process would contribute to burnt pizzas, and how might a pizza which was burnt so badly it had to be scrapped and restarted affect delivery time? It would most likely be the steps between “place in oven” and “remove from oven”, but it might also include “add ingredients”, because certain ingredients may burn more quickly than others. This is how, based on the Problem Statement you have made, you would narrow the scope for doing a Level 3 PFM.
For your project, the priority will be to do your best to find the problematic steps associated with your
Problem Statement. We will teach you some new tools in a later lesson to aid you in doing this. You may
have to characterize a number of steps until you get more experience at narrowing the steps that cause
problems; this is to be expected. If you have the time you should characterize the whole process.
Each step you select as the causal steps in the process must be fully characterized, just as you have
previously done for the whole process. In essence you will do a “mini SIPOC” on each step of the
process as defined in the Level 2 Process Map. This can be done using a Level 3 Micro Process Map
and placing all the information on it or it can be consolidated onto an Excel spreadsheet format or a
combination of both. If all the data and information is put onto an actual Process Map, expect the map to
be rather large physically. Depending on the scope of the process, some people dedicate a wall space
for doing this; say a 12 to 14 foot long wall. An effective approach for this is to use a roll of industrial
Process Discovery
A Level 3 Process Map contains all of the process details needed to meet your objective: all of the flows,
set points, standard operating procedures (SOPs), inputs and outputs; their specifications and if they are
classified as being controllable or non-controllable (noise). The Level 3 PFM usually contains estimates of
defects per unit (DPU), yield and rolled throughput yield (RTY), and value/non-value add. If processing
cycle times and inventory levels (materials or work queues) are important, value stream parameters are
also included.
This can be a lot of detail to manage and appropriate tracking sheets are required. We have supplied
these sheets in a paper and Excel spreadsheet format for your use. The good news is the approach and
forms for the steps are essentially the same as the format for identifying supplier and customer
requirements at the process level. A spreadsheet is very convenient tool and the output from the
spreadsheet can then be fed directly into a C&E matrix and an FMEA (to be described later), also built
using spreadsheets.
You will find the work you have done up to this point in terms of a Level 1 and 2 Process Maps and the
SIPOC will be of use, both from knowledge of the process and actual data.
An important reminder of a previous lesson: You will recall when you were taught about project definition
it was stated that you should only try to solve the performance of one process output at any one time.
Because of the amount of detail you can get into for just one Y, trying to optimize more than one Y at a
time can become overwhelming. The good news is that by focusing on one Y in your initial project you
will have laid all the groundwork to focus on a second and a third Y for the process.
Process Inputs (X’s) and Outputs (Y’s)
You are now down at the process step improvement view of a process. Now you do exactly the same
thing as you did for the overall process: for each step you list all of the inputs and outputs and their
requirements, and add performance data as each step is characterized. This visualization shows many
of the inputs and outputs and their requirements. By using the process and process step input and
output sheets, you get a very detailed picture about how your process works.
[Slide example, pizza process: the “Take Order” step lists inputs such as Name (N/A), Address (within
10 miles), Phone (within area code), Time (11 AM to 1 AM), Day (5 x 52), Date (MM/DD,YY), Size of
Pizza (7”, 12”, 16”) and Toppings (12 meats, 2 veggies, 3 cheese), mostly classed as Noise (N) or N/C,
with the output Order (all fields complete). The “Make Pizza” step lists Controlled (C) inputs Order (all
fields complete), Ingredients (per spec sheets), Recipe (S.O.P. per Rev 7.0) and Amounts (as per
recipe chart 3-1, in oz.), with the output Raw Pizza (size, weight, ingredients correct).]
Now you have enough data
Identifying Waste
When we produce products or services, we can mark each action as value-added or non-value-added.
[Slide example: an order taker writes the order on a scratch pad, adds to the order, then rewrites it –
the rewrite is non-value-added (NVA).]
Hint: If an action starts with the two letters “re” there is a good chance that it’s a form of waste, i.e.
rework, replace, review, etc.
Some non-value activities cannot be removed; i.e., data collection is required to understand and plan
production activity levels, data must be collected to comply with governmental regulations, etc. (even
though the data have no effect on the actual product or service).
On the process flow diagram we place a red X through the steps or we write NVA or VA by each step.
Process Discovery
A Six Sigma Belt does not just discover which X’s are important in
a process (the vital few).
– The team considers all possible X’s that can contribute or
cause the problem observed.
– The team uses 3 primary sources of X identification:
• Process Mapping
• Fishbone Analysis
• Basic Data Analysis – Graphical and Statistical
– A List of X’s is established and compiled.
– The team then prioritizes which X’s it will explore first, and
eliminates the “obvious” low impact X’s from further
consideration.
This is an important tool for the many reasons we have already stated. Use it to your benefit and
leverage the team; this will help progress you through the methodology to accomplish your
ultimate project goal.
Process Discovery
This is the X-Y Diagram. You should have a copy of this template. If possible open it and get
familiar with it as we progress through this section.
Process Discovery
Li t X’s
List X’ from
f Fishbone
Fi hb Diagram
Di in
i horizontal
h i t l rows
Use your Fishbone Diagram as the source and type in the Inputs in this section, use common sense,
some of the info from the Fishbone may not justify going into the X-Y inputs.
Process Discovery
Example
This is the summary worksheet. If you click on the “Summary” tab you will see this output. Take some
time to review the worksheet.
X-Y Diagram Summary (Process: laminating; Date: 5/2/2006)
Output Variables (Y's)  Weight | Input Variables      Ranking  Rank %
broken                      10 | temperature              162  14.90%
unbonded area                9 | human handling           159  14.63%
smears                       8 | material properties      130  11.96%
thickness                    7 | washer                   126  11.59%
[The Input Matrix Results chart plots the input rankings as a Pareto, with inputs such as temperature,
time, clean room cleanliness, material properties and pressure along its axis.]
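The ranking arithmetic behind a worksheet like this can be sketched in Python. The output weights below are taken from the example above, but the relationship scores are invented for illustration (an X-Y matrix conventionally scores each input against each output on a 0/1/3/9 scale):

```python
# Sketch of the X-Y (cause-and-effect) matrix ranking. Output Y's carry
# importance weights; each input X is scored against each Y. The rank of
# an input is the sum of weight * score across all outputs.
# Weights are from the example worksheet; scores are illustrative only.
y_weights = {"broken": 10, "unbonded area": 9, "smears": 8, "thickness": 7}

# scores[x][y] = assumed strength of the relationship between X and Y
scores = {
    "temperature":    {"broken": 9, "unbonded area": 3, "smears": 3, "thickness": 3},
    "human handling": {"broken": 3, "unbonded area": 9, "smears": 9, "thickness": 0},
    "washer":         {"broken": 1, "unbonded area": 3, "smears": 9, "thickness": 3},
}

rank = {x: sum(y_weights[y] * s for y, s in ys.items()) for x, ys in scores.items()}
total = sum(rank.values())
for x in sorted(rank, key=rank.get, reverse=True):
    print(f"{x:15s} rank={rank[x]:4d}  {rank[x] / total:6.1%}")
```

The Rank % column in the worksheet is simply each input's rank divided by the total of all ranks.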
Process Discovery
Definition of FMEA
Failure Modes Effect Analysis (FMEA) is a structured approach to:
• Predict failures and prevent their occurrence in manufacturing and other functional areas which
generate defects.
• Identify the ways in which a process can fail to meet critical customer requirements (Y).
• Estimate the Severity, Occurrence and Detection (SOD) of defects.
• Evaluate the current control plan for preventing these failures from occurring and escaping to the
customer.
• Prioritize the actions that should be taken to improve and control the process using a Risk Priority
Number (RPN).
FMEA [usually pronounced as F-M-E-A (individual letters) or FEMA (as a word)] at this point is
developed with tribal knowledge by a cross-functional team. Later, using process data, the FMEA can
be updated and better estimates of detection and occurrence can be obtained. The FMEA is not a tool
to eliminate X’s but rather control the X’s. It is only a tool to identify potential X’s and prioritize the
order in which the X’s should be evaluated.
Give me an “F”, give me an “M”……
Process Discovery
History of FMEA
• First used in the 1960’s in the Aerospace industry during the Apollo missions
• In 1974 the Navy developed MIL-STD-1629 regarding the use of FMEA
• In the late 1970’s, automotive applications, driven by liability costs, began to incorporate FMEA
into the management of their processes
• Automotive Industry Action Group (AIAG) now maintains the FMEA standard for both Design and
Process FMEA’s
The “edge of your seat” info on the history of the FMEA! I’m sure you will all be sharing this with
everyone tonight at the dinner table!
Types of FMEA’s
• Design DFMEA: Performed early in the design phase to analyze product fail
modes before they are released to production. The purpose is to analyze how
fail modes affect the system and minimize them. The severity rating of a fail
mode MUST be carried into the Process PFMEA.
Process Discovery
Purpose of FMEA
FMEA’s: a means to manage… RISK!!!
FMEA’s help you manage risk by classifying your process inputs and monitoring their effects. This is
extremely important during the course of your project work.
The FMEA…
This is an FMEA. We have provided a template for you to use.
Process Discovery
FMEA Components…#
The second column is the Name of the Process Step. The FMEA should sequentially follow the steps
documented in your Process Map.
[FMEA template columns: #, Process Function (Step), Potential Failure Modes (process defects),
Potential Failure Effects (Y’s), SEV, Class, Potential Causes of Failure (X’s), OCC, Current Process
Controls, DET, RPN, Recommended Actions, Responsible Person & Target Date, Taken Action, and
the re-scored SEV, OCC, DET and RPN.]
[Slide example – process steps of a phone call entered in the second column: Phone, Dial Number,
Listen for Ring, Say Hello, Introduce Yourself, Etc.]
Process Discovery
This is simply the effect of realizing the potential failure mode on the overall process. It focuses on
the outputs of each step. This information is usually obtained from your Process Map.
Process Discovery
The fifth column highlighted here is the ranking that is developed based on the team’s knowledge of the
process in conjunction with the predetermined scale.
Severity is a financial measure of the impact to the business of a failure in the output.
Ranking Severity
The Automotive Industry Action Group, a consortium of the “Big Three” (Ford, GM and Chrysler),
developed these criteria. If you don’t like them, develop criteria that fit your organization; just make
sure they are standardized so everyone uses the same scale.
High (7): Minor disruption to the production line. The product may have to be sorted and a portion
(less than 100%) scrapped. Vehicle operable, but at a reduced level of performance. Customers will
be dissatisfied.
Moderate (6): Minor disruption to the production line. A portion (less than 100%) may have to be
scrapped (no sorting). Vehicle/item operable, but some comfort/convenience item(s) inoperable.
Customers will experience discomfort.
Low (5): Minor disruption to the production line. 100% of product may have to be re-worked.
Vehicle/item operable, but some comfort/convenience item(s) operable at a reduced level of
performance. Customers will experience some dissatisfaction.
Very Low (4): Minor disruption to the production line. The product may have to be sorted and a
portion (less than 100%) re-worked. Fit/finish/squeak/rattle item does not conform. Most customers
will notice the defect.
Minor (3): Minor disruption to the production line. A portion (less than 100%) of the product may have
to be re-worked online but out-of-station. Fit/finish/squeak/rattle item does not conform. Average
customers will notice the defect.
Very Minor (2): Minor disruption to the production line. A portion (less than 100%) of the product may
have to be re-worked online but in-station. Fit/finish/squeak/rattle item does not conform.
Discriminating customers will notice the defect.
None (1): No effect.
* Potential Failure Mode and Effects Analysis (FMEA), Reference Manual, 2002. Pgs 29-45. Chrysler
Corporation, Ford Motor Company, General Motors Corporation.
Process Discovery
You will need to define your own criteria… and be consistent throughout your FMEA.
The actual definitions of the severity are not so important as the fact that the team remains
consistent in its use of the definitions. Below is a sample of transactional severities.
Critical Business Unit-wide (10): May endanger company’s ability to do business. Failure mode
affects process operation and/or involves noncompliance with government regulation.
Critical Loss - Customer Specific (9): May endanger relationship with customer. Failure mode affects
product delivered and/or customer relationship due to process failure and/or noncompliance with
government regulation.
High (7): Major disruption to process/production down situation. Results in near 100% rework or an
inability to process. Customer very dissatisfied.
Moderate (5): Moderate disruption to process. Results in some rework or an inability to process.
Process is operable, but some workarounds are required. Customers experience dissatisfaction.
Low (3): Minor disruption to process. Process can be completed with workarounds or rework at the
back end. Results in reduced level of performance. Defect is noticed and commented upon by
customers.
Minor (2): Minor disruption to process. Process can be completed with workarounds or rework at the
back end. Results in reduced level of performance. Defect noticed internally, but not externally.
None (1): No effect.
Shown here is an example for severity guidelines developed for a financial services company.
Process Discovery
Controllable – A factor that can be dialed into a specific setting/value. For example, Temperature or
Flow.
Procedures – A standardized set of activities leading to readiness of a step. For example, Safety
Compliance, “Lock-Out Tag-Out.”
Noise – A factor that cannot be dialed in to a specific setting/value. For example, rain in a mine.
Recall the classifications of Procedural, Controllable and Noise developed when constructing your
Process Map and Fishbone Diagram? Use those classifications from the Fishbone in the “Class”
column, highlighted here, in the FMEA.
Potential Causes of Failure (X’s)
Potential Causes of the Failure refers to how the failure could occur.
This information should be obtained from the Fishbone Diagram.
The column “Potential Causes of the Failure”, highlighted here, refers to how the failure could
occur.
Process Discovery
Occurrence refers to how frequently the specified failure cause is likely to occur.
This information should be obtained from Capability Studies or
Historical Defect Data - in conjunction with the predetermined scale.
Ranking Occurrence
Potential Failure Mode and Effects Analysis (FMEA), Reference Manual, 2002. Pg. 35. Chrysler Corporation, Ford
Motor Company, General Motors Corporation.
The Automotive Industry Action Group, a consortium of the “Big Three”: Ford, GM and Chrysler
developed these Occurrence rankings.
Process Discovery
Current Process Controls refers to the three types of controls that are in place to prevent a failure
with the X’s. The 3 types of controls are:
• SPC (Statistical Process Control)
• Poka-Yoke (Mistake Proofing)
• Detection after Failure
The column “Current Process Controls” highlighted here refers to the three types of controls that are
in place to prevent failures.
FMEA Components…Detection (DET)
The “Detection” highlighted here is an assessment of the probability that the proposed type of
control will detect a subsequent failure mode.
Process Discovery
Ranking Detection
Potential Failure Mode and Effects Analysis (FMEA), AIAG Reference Manual, 2002. Pg. 35. Chrysler Corporation,
Ford Motor Company, General Motors Corporation.
The Automotive Industry Action Group, a consortium of the “Big Three”: Ford, GM and Chrysler
developed these Detection criteria.
RPN = (SEV)*(OCC)*(DET)
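The RPN calculation and the prioritization it drives can be sketched in a few lines of Python; the failure modes and their scores below are hypothetical, not from the template:

```python
# Risk Priority Number: RPN = Severity * Occurrence * Detection.
# The team works the highest-RPN failure modes first.
# The rows below are hypothetical examples for a phone-call process.
failure_modes = [
    # (failure mode,        SEV, OCC, DET)
    ("wrong number dialed",   3,   4,   2),
    ("no dial tone",          7,   2,   5),
    ("call dropped",          8,   3,   6),
]

ranked = sorted(
    ((name, sev * occ * det) for name, sev, occ, det in failure_modes),
    key=lambda t: t[1],
    reverse=True,
)
for name, rpn in ranked:
    print(f"RPN {rpn:4d}  {name}")
```

After improvement actions are taken, SEV, OCC and DET are re-scored and the RPN recomputed, which is part of what makes the FMEA a living document.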
Process Discovery
FMEA Components…Actions
Responsible Person & Date refers to the name of the group or person
responsible for completing the activity and when they will complete it.
Taken Action refers to the action and effective date after it has been
completed.
The columns highlighted here are a type of post-FMEA record. Remember to update the FMEA
throughout your project; it is what we call a “Living Document” because it changes as the project
progresses.
Process Discovery
FMEA Exercise
OK Team, let’s get that FMEA!
Process Discovery
Create an FMEA
Notes
Measure Phase
Six Sigma Statistics
Now we will continue in the Measure Phase with “Six Sigma Statistics”.
Overview
Welcome to Measure
Process Discovery
Six Sigma Statistics
– Basic Statistics
– Descriptive Statistics
– Normal Distribution
– Assessing Normality
– Special Cause / Common Cause
– Graphing Techniques
Measurement System Analysis
Process Capability
Wrap Up & Action Items
In this module you will learn how your processes speak to you in the form of data. If you are to
understand the behaviors of your processes, then you must learn to communicate with the process in
the language of data.
The field of statistics provides the tools and techniques to act on data, to turn data into information
and knowledge which you will then use to make decisions and to manage your processes.
The statistical tools and methods that you will need to understand and optimize your processes are
not difficult. Use of Excel spreadsheets or specific statistical analytical software has made this a
relatively easy task.
In this module you will learn basic, yet powerful analytical approaches and tools to increase your
ability to solve problems and manage process behavior.
Relax….it won’t
be that bad!
Having an understanding of Basic Statistics can be quite valuable to an individual. Statistics however,
like anything, can be taken to the extreme.
Data is like crude oil that comes out of the ground. Crude oil is not of much use on its own. However,
if the crude oil is refined, many useful products result, such as medicines, fuel, food products,
lubricants, etc. In a similar sense, statistics can refine data into usable “products” that aid in decision
making and help us see and understand what is happening.
Statistics is broadly used by just about everyone today; sometimes we just don’t realize it. Things as
simple as using graphs to better understand something are a form of statistics, as are the many
opinion and political polls used today. With easy-to-use software tools reducing the difficulty and
time needed to do statistical analyses, knowledge of statistics is becoming a common capability
amongst people.
An understanding of Basic Statistics is also one of the differentiating features of Six Sigma and it
would not be possible without the use of computers and programs like MINITAB™. It has been
observed that the laptop is one of the primary reasons that Six Sigma has become both popular
and effective.
[Notation cheat sheet: Greek symbols, e.g. σ, denote population parameters such as the standard
deviation of population data, computed over each and every individual value.]
Use this as a cheat sheet, don’t bother memorizing all of this. Actually most of the notation in Greek is
for population data.
Population: All the items that have the “property of interest” under study.
[Diagram: several samples drawn from a single population.]
A population parameter is a numerical value that summarizes the data for an entire population, a
sample has a corresponding numerical value called a statistic.
The population is a collection of all the individual data of interest. It must be defined carefully, such
as all the trades completed in 2001. If for some reason there are unique subsets of trades it may
be appropriate to define those as a unique population, such as, “all sub custodial market trades
completed in 2001”
2001 or “emerging
emerging market trades”
trades .
Sampling frames are complete lists and should be identical to a population with every element
listed only once. It sounds very similar to population… and it is. The difference is how it is used. A
sampling frame, such as the list of registered voters, could be used to represent the population of
adult general public. Maybe there are reasons why this wouldn’t be a good sampling frame.
Perhaps a sampling frame of licensed drivers would be a better frame to represent the general
public.
It is important to recognize the difference between a sample and a population because we typically
are dealing with a sample of what the potential population could be in order to make an inference.
The formulas for describing samples and populations are slightly different. In most cases we will be
dealing with the formulas for samples.
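The difference between the sample and population formulas can be seen directly; Python's standard library has both forms (the data values below are illustrative):

```python
import statistics

# The sample formula divides by n - 1; the population formula divides
# by N. Python's statistics module provides both. Data is illustrative.
data = [4.98, 4.99, 5.00, 5.01, 5.02]

s = statistics.stdev(data)       # sample standard deviation (n - 1)
sigma = statistics.pstdev(data)  # population standard deviation (N)
print(f"sample s = {s:.5f}, population sigma = {sigma:.5f}")
```

For the same data the sample estimate is always slightly larger, because dividing by n - 1 compensates for estimating the Mean from the sample itself.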
Types of Data
The nature of data is important to understand. Based on the type of data you will have the option
to utilize different analyses.
Data, or numbers, are usually abundant and available to virtually everyone in the organization.
Using data to measure, analyze, improve and control processes forms the foundation of the Six
Sigma methodology. Data turned into information, then transformed into knowledge, lowers the
risks of decisions. Your goal is to make more decisions based on data versus the typical practices
of “I think”, “I feel” and “In my opinion”.
One of your first steps in refining data into information is to recognize what type of data you are
using. There are two primary types of data: Attribute and Variable Data.
Attribute Data is also called qualitative data. Attribute Data is the lowest level of data. It is purely
binary in nature. Good or bad, yes or no type data. No analysis can be performed on Attribute
Data. Attribute Data must be converted to a form of Variable Data called Discrete Data in order to
be counted or be useful.
Discrete Data is information that can be categorized into a classification. Discrete Data is based
on counts. It is typically things counted in whole numbers. Discrete Data is data that can't be
broken down into a smaller unit to add additional meaning. Only a finite number of values is
possible and the values cannot be subdivided meaningfully. For example, there is no such thing
as half of a defect or half of a system lockup.
Continuous Data is information that can be measured on a continuum or scale. Continuous Data,
also called quantitative data can have almost any numeric value and can be meaningfully
subdivided into finer and finer increments, depending upon the precision of the measurement
system. Decimal sub-divisions are meaningful with Continuous Data. As opposed to Attribute
Data like good or bad, off or on, etc., Continuous Data can be recorded at many different points
(length, size, width, time, temperature, cost, etc.). For example 2.543 inches is a meaningful
number, whereas 2.543 defects does not make sense.
Later in the course we will study many different statistical tests but it is first important to
understand what kind of data you have.
Discrete Variables
Shown here are additional Discrete Variables. Can you think of others within your business?
Continuous Variables
Continuous Variable | Possible Values
The length of prison time served for individuals convicted of first degree murder | All the real
numbers between a and b, where a is the smallest amount of time served and b is the largest.
The household income for households with incomes less than or equal to $30,000 | All the real
numbers between a and $30,000, where a is the smallest household income in the population.
The blood glucose reading for those individuals having glucose readings equal to or greater than
200 | All real numbers between 200 and b, where b is the largest glucose reading in all such
individuals.
Shown here are additional Continuous Variables. Can you think of others within your business?
• Understanding the nature of data and how to represent it can affect the
types of statistical tests possible.
• Interval Scale – data can be arranged in some order and for which
differences in data values are meaningful. The data can be arranged in
an ordering scheme and differences can be interpreted.
• Ratio Scale – data that can be ranked and for which all arithmetic
operations including division can be performed. (division by zero is of
course excluded) Ratio level data has an absolute zero and a value of
zero indicates a complete absence of the characteristic of interest.
Shown here are the four types of scales. It is important to understand these scales as they will dictate
the type of statistical analysis that can be performed on your data.
Nominal Scale
Listed are some examples of Nominal Data. The only analysis is whether they are different or not.
Qualitative Variable | Possible nominal level data values for the variable
Blood Types | A, B, AB, O
State of Residence | Alabama, …, Wyoming
Ordinal Scale
Interval Scale
Interval Variable | Possible Scores
Ratio Scale
Continuous Data provides us more opportunity for statistical analyses. Attribute Data can often be
converted to Continuous by converting it to a rate.
• Continuous Data is always more desirable.
• In many cases Attribute Data can be converted to Continuous.
• Which is more useful?
– 15 scratches or Total scratch length of 9.25”
– 22 foreign materials or 2.5 fm/square inch
– 200 defects or 25 defects/hour
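The count-to-rate conversion in the last bullet is just the count divided by its exposure; a minimal sketch (the observation window is an assumption):

```python
# Converting Attribute (count) data to Continuous data by expressing it
# as a rate. The 8-hour observation window is an assumed value chosen
# so that 200 defects works out to 25 defects/hour, as in the example.
defects = 200           # raw count (attribute-like)
hours_observed = 8.0    # exposure over which the count was taken

defect_rate = defects / hours_observed  # defects per hour (continuous)
print(f"{defect_rate:.1f} defects/hour")
```

The same pattern gives scratches per unit length or foreign materials per square inch: divide the count by the amount of "opportunity" it was observed over.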
Descriptive Statistics
Measures of Location
Although the symbols used for a sample and a population are different, there is no mathematical
difference between the Mean of a sample and the Mean of a population.
[Histogram of 200 data points, with the MINITAB™ output below.]
Descriptive Statistics: Data
Variable  N    N*  Mean    SE Mean   StDev   Minimum  Q1      Median  Q3      Maximum
Data      200  0   4.9999  0.000712  0.0101  4.9700   4.9900  5.0000  5.0100  5.0200
Data
The physical center of a data set is the Median and it is unaffected by large data values. This is why
people use the Median when discussing the average salary for an American worker: a few very large
salaries would distort the Mean, so the Median gives a more representative average number.
Median is:
• The mid-point, or 50th percentile, of a distribution of data.
• Arrange the data from low to high, or high to low.
– It is the single middle value in the ordered list if there is an odd number of observations.
– It is the average of the two middle values in the ordered list if there is an even number of
observations.
[Histogram (with Normal Curve) of Data: Mean 5.000, StDev 0.01007, N = 200.]
Descriptive Statistics: Data
Variable  N    N*  Mean    SE Mean   StDev   Minimum  Q1      Median  Q3      Maximum
Data      200  0   4.9999  0.000712  0.0101  4.9700   4.9900  5.0000  5.0100  5.0200
Trimmed Mean is a:
Compromise between the mean and median.
• The trimmed mean is calculated by eliminating a specified percentage
of the smallest and largest observations from the data set and then
calculating the average of the remaining observations
• Useful for data with potential extreme values
values.
The trimmed Mean (highlighted in the MINITAB™ output) is less susceptible to the effects of extreme
scores.
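The behavior of the Median and a simple trimmed Mean in the presence of an extreme value can be demonstrated directly (the data values, including the outlier, are illustrative):

```python
import statistics

# Median and a simple trimmed mean for a small data set containing one
# extreme value. All values are illustrative; 9.90 is the outlier.
data = [4.98, 4.99, 5.00, 5.00, 5.01, 5.02, 9.90]

median = statistics.median(data)  # middle value: robust to the outlier
mean = statistics.mean(data)      # pulled upward by 9.90

# Trimmed mean: drop the smallest and largest observation, then average
# the rest (a crude trim; real tools trim a specified percentage).
trimmed = statistics.mean(sorted(data)[1:-1])
print(median, round(mean, 3), round(trimmed, 3))
```

The Mean is dragged well above the bulk of the data by the single outlier, while the Median and trimmed Mean stay near the center, which is exactly why they are preferred for data with potential extreme values.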
Mode is:
The most frequently occurring value in a distribution of data.
Mode = 5
[Histogram (with Normal Curve) of Data: Mean 5.000, StDev 0.01007, N = 200.]
It is possible to have multiple Modes; when this happens it’s called a Bi-Modal Distribution. Here we
only have one Mode = 5.
Range is the:
Difference between the largest observation and the smallest
observation in the data set.
• A small range would indicate a small amount of variability and a large
range a large amount of variability.
A range is typically used for small data sets; it is completely efficient in estimating variation only for
a sample of 2. As your data set grows, the Standard Deviation is a more appropriate measure of
variation.
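Both measures are easy to compute side by side; a sketch with illustrative data:

```python
import statistics

# Range = max - min: fine for a very small sample, but as n grows the
# standard deviation is the better variability measure. Data is
# illustrative.
data = [4.97, 4.99, 5.00, 5.01, 5.02]

value_range = max(data) - min(data)
s = statistics.stdev(data)  # sample standard deviation
print(round(value_range, 2), round(s, 4))
```

Unlike the range, the standard deviation uses every observation, so it keeps improving as an estimate of variation when more data is collected.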
Sample: s = √( Σ(xᵢ − x̄)² / (n − 1) )
Population: σ = √( Σ(xᵢ − μ)² / N )
The Standard Deviation for a sample and population can be equated with short and long-term
variation.
Usually a sample is taken over a short period of time making it free from the types of variation
that can accumulate over time so be aware.
Variance is the:
Average squared deviation of each individual data point from the
mean.
Sample: s² = Σ(xᵢ − x̄)² / (n − 1)    Population: σ² = Σ(xᵢ − μ)² / N
The Variance is the square of the Standard Deviation. It is common in statistical tests where it is
necessary to add up sources of variation to estimate the total. Standard Deviations cannot be
added, variances can.
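The additivity point can be checked numerically with two simulated independent sources of variation (the sigmas below are illustrative):

```python
import random
import statistics

# Standard deviations do not add; variances of independent sources do.
# Two independent normal noise sources with illustrative sigmas 3 and 4.
random.seed(1)
a = [random.gauss(0, 3) for _ in range(100_000)]  # variance ~ 9
b = [random.gauss(0, 4) for _ in range(100_000)]  # variance ~ 16

total = [x + y for x, y in zip(a, b)]

# Combined variance comes out close to 9 + 16 = 25, so the combined
# standard deviation is about 5 (not 3 + 4 = 7).
print(round(statistics.pvariance(total), 1))
```

This is why statistical tests that partition total variation into sources always work in variances and only convert back to standard deviations at the end.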
Normal Distribution
We can begin to discuss the Normal Curve and its properties once we understand the basic
concepts of central tendency and dispersion.
As we begin to assess our distributions, know that sometimes it’s actually more difficult to determine
what is affecting a process if it is Normally Distributed. When we have a Non-normal Distribution,
there are usually special or more obvious causes of variation that become readily apparent upon
process investigation.
Normal Distribution
By normalizing the Normal Distribution we convert the raw scores into standard Z-scores with a
Mean of 0 and Standard Deviation of 1; this practice allows us to use the Z-table.
The area under the curve between any 2 points represents the
proportion of the distribution between those points.
The area between the Mean and any other point depends upon the Standard Deviation.
Convert any raw score to a Z-score using the formula:
Z = (x − μ) / σ
The area under the curve between any two points represents the proportion of the distribution. The
concept of determining the proportion between 2 points under the standard Normal curve is a critical
component to estimating Process Capability and will be covered in detail in that module.
Empirical Rule
No matter what the shape of your distribution is, as you travel 3 Standard
Deviations from the Mean, the probability of occurrence beyond that point
begins to converge to a very low number.
The Anderson-Darling test yields a statistical assessment (called a goodness-of-fit test) of Normality,
and the MINITAB™ version of the Normal probability test produces a graph to visually demonstrate
just how good that fit is.
The shape of any normal curve can be calculated based on the normal probability density function.
Tests for Normality basically compare the shape of the calculated curve to the actual distribution of
your data points.
For the purposes of this training, we will focus on 2 ways in MINITAB™ to assess Normality:
– The Anderson-Darling test
– Normal probability test
Goodness-of-Fit
[Plot: Cumulative Percent versus Raw Data Scale, comparing the Actual Data to that Expected for a
Normal Distribution; the largest departure shown is about 20%.]
The Anderson-Darling Goodness-of-Fit test assesses the magnitude of the departures of the actual
data from the expected normal distribution using an Observed minus Expected formula.
Anderson-Darling assesses how closely the actual frequency at a given value corresponds to the
theoretical frequency for a Normal Distribution with the same Mean and Standard Deviation.
[Figure: Probability Plot of Amount (Normal). Mean 84.69, StDev 7.913, N 70, AD 0.265, P-Value 0.684. Percent is plotted on a Normal probability scale against Amount, 60 to 110.]
The graph shows the probability density of your data plotted against the expected density of a
Normal curve. Notice that the y-axis (probability) does not increase linearly. Normal data will lie on a
straight line (the red line) in this analysis. The graph shows you which values tend to deviate from
the Normal curve.
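The same plot can be sketched in Python: scipy's probplot fits the straight reference line and reports how tightly the ordered data hug it (the data here are simulated stand-ins for the Amount column):

```python
import numpy as np
from scipy.stats import probplot

# Simulated stand-in for the "Amount" column.
rng = np.random.default_rng(7)
amount = rng.normal(loc=84.7, scale=7.9, size=70)

# probplot returns the ordered data vs. theoretical quantiles plus a
# least-squares fit of the straight reference line.
(osm, osr), (slope, intercept, r) = probplot(amount, dist='norm')
# r near 1 means the points track the straight line, i.e. the data are
# consistent with a Normal distribution.
```

The correlation r is the numeric counterpart of "the points lie on the red line".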
Descriptive Statistics
Anderson-Darling Caveat

[Figure: MINITAB™ Graphical Summary of a large sample. Median 50.006, 3rd Quartile 53.218, Maximum 62.823. 95% Confidence Intervals: Mean 49.596 to 50.466, Median 49.663 to 50.500, StDev 4.662 to 5.278.]
In this case, both the Histogram and the Normality Plot look very “normal”. However,
because the sample size is so large, the Anderson-Darling test is very sensitive and any
slight deviation from normal will cause the p-value to be very low. Again, the topic of
sensitivity will be covered in greater detail in the Analyze Phase.
For now, just assume that if N > 100 and the data look normal, then they probably are.
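This sensitivity is easy to demonstrate. The sketch below (simulated data, not from the course worksheets) draws a bell-shaped but slightly heavy-tailed sample; at N = 50 a normality test is typically comfortable with it, while at N = 5000 the same small departure gets flagged:

```python
import numpy as np
from scipy.stats import normaltest

rng = np.random.default_rng(3)
# Student's t with 10 degrees of freedom: bell-shaped and "normal looking"
# in a histogram, but with slightly heavy tails.
small = rng.standard_t(df=10, size=50)
large = rng.standard_t(df=10, size=5000)

_, p_small = normaltest(small)
_, p_large = normaltest(large)
# With N = 5000 the test has enough power to detect the slight tail
# departure, so p_large is tiny even though the data "look" normal.
```

The lesson matches the caveat above: a low p-value on a very large sample may reflect test power, not a practically non-normal distribution.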
Answers:
1) Is Distribution A Normal? Answer > No
2) Is Distribution B Normal? Answer > No
Introduction to Graphing

Passive data collection means don’t mess with the process! We are gathering data and looking for patterns in a graphical tool. If the data collection is questionable, so is the graph we create from it. For now, utilize the data available; we will learn a tool called Measurement System Analysis later in this phase.

The purpose of Graphing is to:
• Identify potential relationships between variables.
• Identify risk in meeting the critical needs of the Customer, Business and People.
• Provide insight into the nature of the X’s which may or may not control Y.
• Show the results of passive data collection.

In this section we will cover…
1. Box Plots
2. Scatter Plots
3. Dot Plots
4. Time Series Plots
5. Histograms
Data Sources

Data demographics will come out of the basic Measure Phase tools such as Process Maps, X-Y Diagrams, FMEAs and Fishbones. Put your focus on the top X’s from the X-Y Diagram to focus your activities.

Data sources are suggested by many of the tools that have been covered so far:
– Process Map
– X-Y Matrix
– Fishbone Diagrams
– FMEA

Examples are:
1. Time: Shift, Day of the week, Week of the month, Season of the year
3. Operator: Training, Experience, Skill, Adherence to procedures
Graphical Concepts

The Histogram

A Histogram is a basic graphing tool that displays the relative frequency, or the number of times a measured item falls within a certain cell size. A Histogram displays data that have been summarized into intervals. It can be used to assess the symmetry or skewness of the data. The values for the measurements are plotted along the horizontal axis, and the Histogram reveals the distribution of the data by showing how many observations fall in each interval.
Histogram Caveat

As you can see in the MINITAB™ file, the columns used to generate the Histograms above only have 20 data points. All the Histograms below were generated using random samples of the data from the worksheet “Graphing Data.mtw”. It is easy to create samples for a Histogram simply by using the menu path “…Data> Sample from columns…”.

[Figure: Histogram of H1_20, H2_20, H3_20, H4_20. Four Histograms of 20-point random samples plotted on a common scale from 98 to 102.]
Variation on a Histogram

The Histogram shown here looks to be very Normal.

Using the worksheet “Graphing Data.mtw”, create a simple Histogram for the data column called Granular.

[Figure: Histogram of Granular. Frequency plotted against Granular values from 44 to 56.]
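Under the hood, a Histogram is just interval counts. A small sketch (with invented stand-in values for the Granular column) shows the summarization the chart draws:

```python
import numpy as np

# Hypothetical stand-in for the "Granular" column: values clustered near 50.
granular = np.array([44.2, 46.1, 47.5, 48.3, 48.9, 49.4, 49.8, 50.1,
                     50.3, 50.6, 51.0, 51.7, 52.4, 53.0, 54.2, 55.8])

# Summarize the data into 6 equal-width intervals across 44 to 56.
counts, edges = np.histogram(granular, bins=6, range=(44, 56))
# counts[i] is how many measurements fall in [edges[i], edges[i+1]);
# the bars of the Histogram are exactly these counts.
```

The bin width is a choice; too few bins hide structure and too many bins show noise, which is the same caveat as the 20-point Histograms above.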
Dot Plot

Using the worksheet “Graphing Data.mtw”, create a Dot Plot. The Dot Plot can be a useful alternative to the Histogram, especially if you want to see individual values or you want to brush the data. The Histogram for the Granular distribution obscures the granularity, whereas the Dot Plot reveals it. Also, Dot Plots allow the user to brush data points; the Histogram does not.

[Figure: Dotplot of Granular.]

If in fact there are special causes (Uncontrollable Noise or Procedural non-compliance) then they should be addressed separately and then excluded from this analysis.

Take a few minutes and create other Dot Plots using the columns in this data set.
Box Plot

A Box Plot (sometimes called a Whisker Plot) is made up of a box representing the central mass of the variation and thin lines, called whiskers, extending out on either side representing the thinning tails of the distribution.

Box Plots summarize data about the shape, dispersion and center of the data and also help spot outliers. Box Plots require that one of the variables, X or Y, be categorical or discrete and the other be continuous. A minimum of 10 observations should be included in generating the Box Plot.

[Figure: anatomy of a Box Plot. Maximum Value; Q2: Median (50th Percentile); Q1: 25th Percentile; Lower Whisker; Lower Limit: Q1 - 1.5(Q3-Q1).]

The lower whisker represents the first 25% of the data in the Histogram (the light grey area). The second and third quartiles form the box, which represents fifty percent of the data, and finally the whisker on the right represents the fourth quartile. The line drawn through the box represents the Median of the data. Extreme values, or outliers, are represented by asterisks. A value is considered an outlier if it is outside of the box (greater than Q3 or less than Q1) by more than 1.5 times (Q3-Q1).

You can use the Box Plot to assess the symmetry of the data: If the data are fairly symmetric,
the Median line will be roughly in the middle of the box and the whiskers will be similar in length.
If the data are skewed, the Median may not fall in the middle of the box and one whisker will
likely be noticeably longer than the other.
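The quartile and outlier arithmetic behind a Box Plot can be sketched directly (the data set and its planted outlier below are invented for illustration):

```python
import numpy as np

def box_plot_stats(data):
    """Quartiles and the 1.5*IQR outlier fences used to draw a Box Plot."""
    q1, q2, q3 = np.percentile(data, [25, 50, 75])
    iqr = q3 - q1                      # interquartile range: the box
    lower_fence = q1 - 1.5 * iqr       # values below this are outliers
    upper_fence = q3 + 1.5 * iqr       # values above this are outliers
    outliers = [x for x in data if x < lower_fence or x > upper_fence]
    return q1, q2, q3, outliers

data = [12, 13, 13, 14, 15, 15, 16, 17, 18, 42]   # 42 is a planted outlier
q1, median, q3, outliers = box_plot_stats(data)    # outliers -> [42]
```

Any point beyond the fences would be drawn as an asterisk; the whiskers themselves extend only to the most extreme points inside the fences.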
[Figure: Box Plots of cholesterol levels, roughly 100 to 300, measured at 2-Day, 4-Day and 14-Day intervals.]

Using the MINITAB™ worksheet “Graphing Data.mtw”: open the MINITAB™ Project “Measure Data Sets.mpj” and select the worksheet “Graphing Data.mtw”. The Individual Value Plot shows the individual data points that are represented in the Box Plot.
Create the plot by following the menu path “Graph> Individual Value Plot> Multiple Y’s, Simple…”.

[Figure: Individual Value Plot of data (roughly 5.0 to 12.5) for Brian, Greg and Shree.]
If the output is pass/fail, it must be plotted on the y axis. Use the data shown to create the transposed Box Plot. The reason we do this is for consistency and accuracy. The output Y is Pass/Fail; the Box Plot shows the spread of Hydrogen Content that created the results.

[Figure: transposed Box Plot of Hydrogen Content (215.0 to 232.5) by Pass/Fail category.]
[Figure: side-by-side Individual Value Plots of the Weibull, Normal and Bi Modal columns, data roughly 0 to 25.]
Jitter Example

By using the Jitter function we will spread the data apart, making it easier to see how many data points there are.

Once your graph is created, click once on any of the data points (that action should select all the data points). Then go to MINITAB™ menu path: Editor> Edit Individual Symbols… Jitter… Increase the jitter in the x-direction to .075, click OK, then click anywhere on the graph except on the data points to see the results of the change.

[Figure: Individual Value Plot of Weibull, Normal, Bi Modal after jittering.]
Time Series Plot

Using the MINITAB™ worksheet “Graphing Data.mtw”: Time Series Plots allow you to examine data over time. Depending on the shape and frequency of patterns in the plot, several X’s can be found as critical or eliminated.

A Time Series is created by following the MINITAB™ menu path “Graph> Time Series Plot> Simple...”. It plots each data point as it is gathered over time. Some interesting occurrences can be revealed.

[Figure: Time Series Plot of Time 1; values roughly 597 to 602 plotted against Index 1 to 100.]
MINITAB™ allows you to add a smoothing technique called Lowess (shown here on the column Time 3).

[Figure: Time Series Plot of Time 3 with Lowess smoothing; values roughly 596 to 605 plotted against Index 1 to 100.]
Measure Phase
Measurement System Analysis

Now we will continue in the Measure Phase with “Measurement System Analysis”.

Overview

Measurement System Analysis is one of those non-negotiable items! MSA is applicable in 98% of projects and it alone can have a massive effect on the success of your project and improvements within the company. In other words, LEARN IT & DO IT. It is very important.

Welcome to Measure
Process Discovery
Six Sigma Statistics
Measurement System Analysis
– Basics of MSA
– Variables MSA
– Attribute MSA
Process Capability
Wrap Up & Action Items
Introduction to MSA
So far we have learned that the heart and soul of Six Sigma is
that it is a data-driven methodology.
– How do you know that the data you have used is accurate and
precise?
– How do you know if a measurement is repeatable and
reproducible?
In order to improve your processes, it is necessary to collect data on the "critical to" characteristics.
When there is variation in this data, it can be attributed either to the characteristic that is being
measured or to the way that measurements are being taken; the latter is known as measurement error.
When there is a large measurement error, it affects the data and may lead to inaccurate decision-
making.
Measurement error is defined as the effect of all sources of measurement variability that cause an
observed value (measured value) to deviate from the true value.
There are several types of measurement error which affect the location and the spread of the
distribution. Accuracy, linearity and stability affect location (the average). Measurement accuracy
describes the difference between the observed average and the true average based on a master
reference value for the measurements. A linearity problem describes a change in accuracy
through the expected operating range of the measuring instrument. A stability problem suggests
that there is a lack of consistency in the measurement over time. Precision is the variability in the
measured value and is quantified like all variation by using the standard deviation of the
distribution of measurements. For estimating accuracy and precision, multiple measurements of
one single characteristic must be taken.
The primary contributors to measurement system error are repeatability and reproducibility.
Repeatability is the variation in measurements obtained by one individual measuring the same
characteristic on the same item with the same measuring instrument. Reproducibility refers to
the variation in the average of measurements of an identical characteristic taken by different
individuals using the same instrument.
Given that Reproducibility and Repeatability are important types of error, they are the object of a
specific study called a Gage Repeatability & Reproducibility study (Gage R&R). This study can be
performed on either attribute-based or variable-based measurement systems. It enables an
evaluation of the consistency in measurements among individuals after having at least two
individuals measure several parts at random on a few trials. If there are inconsistencies, then the
measurement system must be improved.
Measurement Purpose

Measurement is a process within itself. In order to measure something you must go through a series of tasks and activities in sequence. Usually there is some form of set-up, there is an instrument that makes the measurement, there is a way of recording the value and it may be done by multiple people. Even when you are making a judgment call about something, there is some form of setup. You become the instrument, and the result of a decision is recorded some way, even if it is verbal or it is a set of actions that you take.

In order to be worth collecting, measurements must provide value - that is, they must provide us with information and ultimately, knowledge.

The question…

What do I need to know?

…must be answered before we begin to consider issues of measurements, metrics, statistics, or data collection systems.

Too often, organizations build complex data collection and information management systems without truly understanding how the data collected and metrics calculated actually benefit the organization.

The types and sophistication of measurement vary almost infinitely. It is becoming increasingly popular and cost effective to have computerized measurement systems. The quality of measurements also varies significantly - with those taken by computer tending to be the best. In some cases the quality of measurement is so bad that you would be just as well off to guess at what the outcome should be. You will be primarily concerned with the accuracy, precision and reproducibility of measurements to determine the usability of the data.
Purpose

The purpose of conducting an MSA is to mathematically partition sources of variation within the measurement system itself. This allows us to create an action plan to reduce the biggest contributors of measurement error.

The purpose of MSA is to assess the error due to measurement systems. The error can be partitioned into specific sources:
– Precision
• Repeatability - within an operator or piece of equipment
• Reproducibility - operator to operator or attribute gage to attribute gage
– Accuracy
• Stability - accuracy over time
• Linearity - accuracy throughout the measurement range
• Resolution
• Bias - offset from true value
– Constant Bias
– Variable Bias - typically seen with electronic equipment; the amount of Bias changes with setting levels
Measurement systems, like all things, generate some amount of variation in the results/data they output. In measuring, we are primarily concerned with 3 characteristics:

1. How accurate is the measurement? For a repeated measurement, where is the average compared to some known standard? Think of the target as the measurement system; the known standard is the bulls eye in the center of the target. In the first example you can see the “measurements” are very dispersed; there is a lot of variability, as indicated by the Histogram curve at the bottom. But on average, the “measurements” are on target. When the average is on target, we say the measurement is accurate. However, in this example they are not very precise.

[Figure: two target diagrams. Accurate but not precise - on average, the shots are in the center of the target, but there is a lot of variability. Precise but not accurate - the average is not on the center, but the variability is small.]
3. The third characteristic is how reproducible is the measurement from one individual to another? What is the accuracy and precision from person to person? Here you would expect each person that performs the measurement to be able to reproduce the same amount of accuracy and precision as that of another person performing the same measurement.
Ultimately, we make decisions based on data collected from measurement systems. If the
measurement system does not generate accurate or precise enough data, we will make the decisions
that generate errors, waste and cost. When solving a problem or optimizing a process, we must know
how good our data are and the only way to do this is to perform a Measurement System Analysis.
MSA Uses

MSA can be used to:
The measurement system always has some amount of variation and that variation is additive to
the actual amount of true variation that exists in what we are measuring. The only exception is
when the discrimination of the measurement system is so poor that it virtually sees everything the
same.
This means that you may actually be producing a better product or service than you think you are,
providing that the measurement system is accurate; meaning it does not have a bias, linearity or
stability problem. It may also mean that your customer may be making the wrong interpretations
about your product or service.
The components of variation are statistically additive. The primary contributors to measurement
system error are Repeatability and Reproducibility. Repeatability is the variation in measurements
obtained by one individual measuring the same characteristic on the same item with the same
measuring instrument. Reproducibility refers to the variation in the average of measurements of an
identical characteristic taken by different individuals using the same instrument.
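The additivity is in the variances, not the Standard Deviations. A quick simulation (with invented scales for the part-to-part and measurement distributions) confirms that the observed variance is approximately the true part variation plus the measurement system variation:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

true_values = rng.normal(loc=10.0, scale=2.0, size=n)  # real part-to-part variation
meas_error = rng.normal(loc=0.0, scale=0.5, size=n)    # measurement system noise
observed = true_values + meas_error                    # what the gage reports

# For independent sources, variances add:
# var(observed) ~= var(true) + var(measurement)
var_obs = observed.var()
var_sum = true_values.var() + meas_error.var()
```

This is why a noisy gage inflates the spread you see: the process may be better than the data suggest, exactly as the paragraph above describes.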
Why MSA?

Why is MSA so important? MSA is what allows us to trust the data generated from our processes. When you charter a project you are taking on a significant burden which will require Statistical Analysis. What happens if you have a great project, with lots of data from measurement systems that produce data with no integrity?

Measurement System Analysis is important to:
• Study the % of variation in our process that is caused by our measurement system.
• Compare measurements between operators.
• Compare measurements between two (or more) measurement devices.
• Provide criteria to accept new measurement systems (consider new equipment).
• Evaluate a suspect gage.
• Evaluate a gage before and after repair.
• Determine true process variation.
• Evaluate effectiveness of training program.
Appropriate Measures

Sufficient means that measures are available to be measured regularly; if not, it would take too long to gather data. Relevant means that they will help to understand and isolate the problems. Representative measures mean that we can detect variation across shifts and people. Contextual means they are gathered along with other relevant information that would help to explain sources of variation.

Appropriate Measures are:
• Sufficient - available to be measured regularly
• Relevant - help to understand/isolate the problems
• Representative - of the process across shifts and people
• Contextual - collected with other relevant information that might explain process variability
Poor Measures

It is very common while working projects to discover that the current measurement systems are poor. Have you ever come across a situation where the data from your customer or supplier doesn’t match yours? It happens often. It is likely a problem with one of the measurement systems. We have worked MSA projects across critical measurement points in various companies; it is not uncommon for more than 80% of the measurements to fail in one way or another.

Poor Measures can result from:
• Poor or non-existent operational definitions
• Difficult measures
• Poor sampling
• Lack of understanding of the definitions
• Inaccurate, insufficient or non-calibrated measurement devices

Measurement Error compromises decisions that affect:
– Customers
– Producers
– Suppliers

MSA is a Show Stopper!!!
Components of Variation

Precision: Repeatability, Reproducibility
Accuracy: Stability, Bias, Linearity
All measurement systems have error. If you don’t know how much of the
variation you observe is contributed by your measurement system, you
cannot make confident decisions.
We are going to strive to have the measured variation be as close as possible to the true variation.
In any case we want the variation from the measurement system to be a small as possible. We are
now going to investigate the various components of variation of measurements.
Precision

The spread of the data is measured by Precision. This tells us how well a measure can be repeated and reproduced.

A precise metric is one that returns the same value of a given attribute every time an estimate is made. Precise data are independent of who estimates them or when the estimate is made.
Repeatability

Measurements will be different…expect it! If measurements are always exactly the same, this is a flag; sometimes it is because the gauge does not have the proper resolution, meaning the scale doesn’t go down far enough to get any variation in the measurement. For example, would you use a football field to measure the gap in a spark plug?

Repeatability is the variation in measurements obtained with one measurement instrument used several times by one appraiser while measuring the identical characteristic on the same part.

For example:
– Manufacturing: One person measures the purity of multiple samples of the same vial and gets different purity measures.
– Transactional: One person evaluates a contract multiple times (over a period of time) and makes different determinations of errors.
Reproducibility

Reproducibility will be present when it is possible to have more than one operator or more than one instrument measure the same part.

Reproducibility is the variation in the average of the measurements made by different appraisers using the same measuring instrument when measuring the identical characteristic on the same part.

For example:
– Manufacturing: Different people perform purity tests on samples from the same vial and get different results.
– Transactional: Different people evaluate the same contract and make different determinations.
1. Pair up with an associate.
2. One person will say start and stop to indicate how long they think 10 seconds lasts. Do this 6 times.
3. The other person will have a watch with a second hand to actually measure the duration of the estimate. Record the value where your partner can’t see it.
4. Switch tasks with your partner and do it 6 times also.
5. Record all estimates; what do you notice?
Accuracy

Accuracy and the average are related. Recall in the Basic Statistics module we talked about the Mean and the variance of a distribution. Think of it this way…if the Measurement System is the distribution, then accuracy is the Mean and the precision is the variance.

Accuracy is the difference between the observed average of the measurement and a reference value.
– When a metric or measurement system consistently over or under estimates the value of an attribute, it is said to be “inaccurate”.

Accuracy can be assessed in several ways:
– Measurement of a known standard
– Comparison with another known measurement method
– Prediction of a theoretical value

What happens if we don’t have standards, comparisons or theories? Warning: do not assume your metrology reference is gospel.

However, before you invest a lot of time analyzing the data, you must ensure the data has integrity.
– The analysis should include a comparison with known reference points.
– For the example of product returns, the transaction details should add up to the same number that appears on financial reports, such as the income statement.

[Figure: target diagrams illustrating ACCURATE, PRECISE and BOTH; the gap between the True Average and the Measurement average defines Accuracy.]
Bias

Bias is a component of Accuracy. Constant Bias is when the measurement is off by a constant value. A scale is a perfect example: if the scale reads 3 lbs when there is no weight on it, then there is a 3 lb Bias. Make sense?
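Using the scale example, Bias is simply the observed average minus the reference value (the readings below are invented for illustration):

```python
def constant_bias(measurements, reference):
    """Bias = observed average minus the true (reference) value."""
    return sum(measurements) / len(measurements) - reference

# A scale reads these values for a certified 10.0 lb reference weight:
readings = [13.1, 12.9, 13.0, 13.2, 12.8]
bias = constant_bias(readings, reference=10.0)  # roughly the 3 lb offset
```

Because every reading is shifted by about the same amount, subtracting the bias (or re-zeroing the scale) corrects the measurements.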
Stability

Linearity

[Figure: plot of Bias (y) against Reference Value (x), fitted with the line y = a + b.x, where y is the Bias, x is the Reference Value, a is the Intercept and b is the Slope.]

Linearity just evaluates if any Bias is consistent throughout the measurement range of the instrument. Many times Linearity indicates a need to replace or perform maintenance on measurement equipment.
Types of MSA’s

Variable Data is always preferred over Attribute because it gives us more to work with. Now we are going to review Variable MSA testing.

MSA’s fall into two categories:
– Attribute
– Variable

Attribute examples:
– Pass/Fail
– Go/No Go
– Document Preparation
– Surface imperfections
– Customer Service Response

Variable examples:
– Continuous scale
– Discrete scale
– Critical dimensions
– Pull strength
– Warp
Variable MSA’s

MSA’s use a random effects model, meaning that the levels for the variance components are not fixed or assigned; they are assumed to be random.

MINITAB™ calculates a column of variance components (VarComp) which are used to calculate % Gage R&R using the ANOVA Method.

Estimates for a Gage R&R study are obtained by calculating the variance components for each term and for error. Repeatability, Operator and Operator*Part components are summed to obtain a total variability due to the measuring system. We use variance components to assess the variation contributed by each source of measurement error relative to the total variation.
Contribution of variation to the total variation of the study:

% Contribution, based on variance components, is calculated by dividing each value in VarComp by the Total Variation, then multiplying the result by 100.

Use % Study Var when you are interested in comparing the measurement system variation to the total variation. % Study Var is calculated by dividing each value in Study Var by Total Variation and multiplying by 100. Study Var is calculated as 5.15 times the Standard Deviation for each source. (5.15 is used because when data are normally distributed, 99% of the data fall within 5.15 Standard Deviations.)

When the process tolerance is entered in the system, MINITAB™ calculates % Tolerance, which compares measurement system variation to the customer specification. This allows us to determine the proportion of the process tolerance that is used by the variation in the measurement system.
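The arithmetic behind these columns can be sketched with hypothetical variance components (the numbers below are invented for illustration, not MINITAB™ output):

```python
import math

# Hypothetical variance components (VarComp), as a Gage R&R table might list:
varcomp = {
    "Repeatability": 0.0040,
    "Reproducibility": 0.0020,
    "Part-to-Part": 0.0940,
}
total_var = sum(varcomp.values())  # Total Variation (as a variance)

# %Contribution: each variance component over the total variance, times 100.
pct_contribution = {k: 100 * v / total_var for k, v in varcomp.items()}

# Study Var: 5.15 Standard Deviations for each source; %Study Var works on
# Standard Deviations, not variances, so the percentages do NOT sum to 100.
study_var = {k: 5.15 * math.sqrt(v) for k, v in varcomp.items()}
total_study_var = 5.15 * math.sqrt(total_var)
pct_study_var = {k: 100 * sv / total_study_var for k, sv in study_var.items()}
```

Note the difference: %Contribution values sum to 100 because variances add; %Study Var values do not, because Standard Deviations are not additive.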
Recommended: 5 or more Categories

AIAG Standards for Gage Acceptance
[Figure: MINITAB™ Gage R&R graphical output, Components of Variation panel showing bars for Gage R&R, Repeat, Reprod and Part-to-Part.]

This panel breaks down the variation in the measurement system into specific sources. Each cluster of bars represents a source of variation. By default, each cluster will have two bars, corresponding to %Contribution and %StudyVar. If you add a tolerance and/or historical sigma, bars for %Tolerance and/or %Process are added.

In a good measurement system, the largest component of variation is Part-to-Part variation. If instead you have large amounts of variation attributed to Gage R&R, then corrective action is needed.
[Figure: MINITAB™ Gage R&R graphical output, R Chart by Operator and Xbar Chart by Operator panels.]

MINITAB™ provides an R Chart and Xbar Chart by Operator.

The R Chart consists of the following:
– The plotted points are the difference between the largest and smallest measurements on each part for each operator. If the measurements are the same, then the range = 0.
– The Center Line is the grand average for the process.
– The Control Limits represent the amount of variation expected for the subgroup ranges. These limits are calculated using the variation within subgroups.

If any of the points on the graph go above the upper Control Limit (UCL), then that operator is having problems consistently measuring parts. The Upper Control Limit value takes into account the number of measurements by an operator on a part and the variability between parts. If the operators are measuring consistently, then these ranges should be small relative to the data and the points should stay in control.

The Xbar Chart should ideally show lack-of-control. Lack-of-control exists when many points are above the Upper Control Limit and/or below the Lower Control Limit. In this case there are only a few points out of control, which indicates the measurement system is inadequate.
The Operator*Part Interaction chart shows the average measurements taken by each operator on each part in the study, arranged by part. Each line connects the averages for a single operator.
Ideally, the lines will follow the same pattern and the part averages will vary enough that differences between parts are clear.
Pattern                                          Means…
Lines are virtually identical                    Operators are measuring the parts the same.
One line is consistently higher or lower         That operator is measuring parts consistently
than the others                                  higher or lower than the others.
Lines are not parallel or they cross             The operator’s ability to measure a part
                                                 depends on which part is being measured.
Practical Conclusions
For this example, the measuring system contributes a great deal to the overall variation, as confirmed by both the Gage R&R table and graphs.
The variation due to the measurement system, as a percent of study variation, is causing 92.21% of the variation seen in the process.
By AIAG Standards this gage should not be used. By all standards, the data being produced by this gage is not valid for analysis.
Design Types

Crossed Design
• A crossed design is used only in non-destructive testing and assumes that all the parts can be measured multiple times by either operators or multiple machines.
  – Gives the ability to separate part-to-part variation from measurement system variation.
  – Assesses repeatability and reproducibility.
  – Assesses the interaction between the operator and the part.

Nested Design
• A nested design is used for destructive testing (we will learn about this in MBB training) and also situations where it is not possible to have all operators or machines measure all the parts multiple times.
  – Destructive testing assumes that all the parts within a single batch are identical enough to claim they are the same.
  – Nested designs are used to test measurement systems where it is not possible (or desirable) to send operators with parts to different locations.
  – Do not include all possible combinations of factors.
  – Uses a slightly different mathematical model than the crossed design.

Crossed Designs are the workhorse of MSA. They are the most commonly used design in industries where it is possible to measure something more than once. Chemical and biological systems can use Crossed Designs as well, as long as you can assume that the samples used come from a homogeneous solution and there is no reason they can be different.
Nested Designs must be used for destructive testing. In a Nested Design, each part is measured by only one operator. This is due to the fact that after destructive testing, the measured characteristic is different after the measurement process than it was at the beginning. Crash testing is an example of destructive testing.
If you need to use destructive testing, you must be able to assume that all parts within a single batch are identical enough to claim that they are the same part. If you are unable to make that assumption then part-to-part variation within a batch will mask the measurement system variation.
If you can make that assumption, then choosing between a Crossed or Nested Gage R&R Study for destructive testing depends on how your measurement process is set up. If all operators measure parts from each batch, then use Gage R&R Study (Crossed). If each batch is only measured by a single operator, then you must use Gage R&R Study (Nested). In fact, whenever operators measure unique parts, you have a Nested Design. Your Master Black Belt can assist you with the set-up of your design.
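The structural difference between the two designs can be sketched as data layouts. The operator labels and part counts below are hypothetical, chosen only to illustrate that crossed parts are shared while nested parts are unique to one operator:

```python
from itertools import product

operators = ["A", "B", "C"]

# Crossed: every operator measures every one of 10 shared parts, 2 trials each.
shared_parts = list(range(1, 11))
crossed = [(op, part, trial)
           for op, part in product(operators, shared_parts)
           for trial in (1, 2)]

# Nested: each operator measures their own unique batch of parts, so parts
# are "nested" within operators (the destructive-testing situation).
nested = [(op, f"{op}-{part}", trial)
          for op in operators
          for part in range(1, 11)
          for trial in (1, 2)]

# In the crossed layout part 1 appears under all three operators;
# in the nested layout each part identifier belongs to exactly one operator.
print(len(crossed), len(nested))
```

Both layouts contain the same number of measurements; what changes is whether part-to-part variation can be separated from operator effects, which is why the two designs need different mathematical models.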
Gage R&R Study
– Is a set of trials conducted to assess the repeatability and reproducibility of the measurement system.
– Multiple people measure the same characteristic of the same set of multiple units multiple times (a crossed study).
– Example: 10 units are measured by 3 people. These units are then randomized and a second measure on each unit is taken.

A Blind Study is extremely desirable.
– Best scenario: operator does not know the measurement is a part of a test.
– At minimum: operators should not know which of the test parts they are currently measuring.

A Gage R&R, like any study, requires careful planning. The common way of doing an Attribute Gage R&R consists of having at least two people measure 20 parts at random, twice each. This will enable you to determine how consistently these people evaluate a set of samples against a known standard. If there is no consistency among the people, then the measurement system must be improved, either by defining a measurement method, training, etc. You use an Excel spreadsheet template to record your study and then to perform the calculations for the result of the study.
The next few slides show how to create a data collection table in MINITAB™. You can use Excel as well.
Here is the completed table. The trial column will not be used for the analysis and can actually be deleted.
Variables:
– Part
– Operator
– Response
Gage R & R
Graphical Output
Looking at the “Components of Variation” chart, the Part-to-Part Variation needs to be larger than the Gage Variation.
If in the “Components of Variation” chart the “Gage R&R” bars are larger than the “Part-to-Part” bars, then all your measurement variation is in the measuring tool, i.e. “maybe the gage needs to be replaced”. The same concept applies to the “Response by Operator” chart. If there is extreme variation within operators, then the training of the operators is suspect.
Session Window
This output tells us that the part to part variation exceeds the allowable tolerance. This gage is
acceptable.
Signal Averaging
Suppose the Standard Deviation for one part measured by one person
many times is 9.5.
Here we have a problem with Repeatability, not Reproducibility so we calculate what the Standard
Deviation should be in order to meet our desire of a 15% gage.
We are assuming that 15% will be acceptable for the short term until an appropriate fix can be
implemented. The 9.5 represents our estimate for Standard Deviation of population of Repeatability.
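The arithmetic behind signal averaging: the Standard Deviation of the average of n independent repeat measurements is the single-measurement Standard Deviation divided by √n, so the number of repeats needed to reach a target Repeatability can be solved for directly. The 9.5 is the estimate from the study; the target value below is a hypothetical stand-in for whatever a 15% gage works out to in your own study:

```python
import math

def repeats_needed(sd_single: float, sd_target: float) -> int:
    """Signal averaging: SD of the mean of n repeats = sd_single / sqrt(n).
    Return the smallest n that achieves sd_target."""
    return math.ceil((sd_single / sd_target) ** 2)

sd_single = 9.5   # repeatability SD for one part, one person (from the study)
sd_target = 2.4   # hypothetical target SD for a 15% gage -- substitute your own

print(repeats_needed(sd_single, sd_target))  # repeats to average per reading
```

The square in the formula is why signal averaging gets expensive quickly: halving the Standard Deviation costs four times the measurements.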
Attribute MSA
An Attribute MSA is similar in many ways to the continuous MSA, including the purposes. Do you have any visual inspections in your processes? In your experience how effective have they been?
When a Continuous MSA is not possible an Attribute MSA can be performed to evaluate the quality of the data being reported from the process.
Why not? Does everyone know what an “F” (defect) looks like? Was the lighting good in the room? Was it quiet so you could concentrate? Was the writing clear? Was 60 seconds long enough?
This is the nature of visual inspections! How many places in your process do you have visual inspection? How good do you expect them to be?
SCORING REPORT
DATE: 5/10/2006   NAME: Joe Smith   PRODUCT: My Gadget   BUSINESS: Unit 1
Attribute Legend (used in computations): 1 = pass, 2 = fail

          Known      Operator #1      Operator #2      Operator #3      All operators   All operators
Sample #  Attribute  Try #1  Try #2   Try #1  Try #2   Try #1  Try #2   agree within    agree with
                                                                        and between     standard (Y/N)
                                                                        each (Y/N)
1         pass       pass    pass     pass    pass     fail    fail     N               N
2         pass       pass    pass     pass    pass     fail    fail     N               N
3         fail       fail    fail     fail    pass     fail    fail     N               N
4         fail       fail    fail     fail    fail     fail    fail     Y               Y
5         fail       fail    fail     pass    fail     fail    fail     N               N
6         pass       pass    pass     pass    pass     pass    pass     Y               Y
7         pass       fail    fail     fail    fail     fail    fail     Y               N
8         pass       pass    pass     pass    pass     pass    pass     Y               Y
9         fail       pass    pass     pass    pass     pass    pass     Y               N
10        fail       pass    pass     fail    fail     fail    fail     N               N
11        pass       pass    pass     pass    pass     pass    pass     Y               Y
12        pass       pass    pass     pass    pass     pass    pass     Y               Y
In order to conduct an Attribute Gage R&R first select a set of samples. These samples should be a mix of clearly Good/Pass, clearly Bad/Fail and Marginal so we can test an operator’s ability across different types of attributes.
For each sample an attribute, or true status of the part, should be documented by an expert or team of experts; these people have to be different from the operators who will do the study. Each operator should assign a Pass or Fail to each part on two or three separate occasions.
The requirements for any sort of confidence with Attribute Data are big. Start with 50 samples; that should give you enough data. If you use fewer, realistically things will just get worse.
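The Y/N agreement columns in the scoring report above can be computed mechanically. A minimal sketch using the twelve samples from the report, with each row holding the known attribute followed by the two tries for each of the three operators:

```python
# Each row: known attribute, then (try1, try2) for operators 1..3.
rows = [
    ("pass", ("pass","pass"), ("pass","pass"), ("fail","fail")),
    ("pass", ("pass","pass"), ("pass","pass"), ("fail","fail")),
    ("fail", ("fail","fail"), ("fail","pass"), ("fail","fail")),
    ("fail", ("fail","fail"), ("fail","fail"), ("fail","fail")),
    ("fail", ("fail","fail"), ("pass","fail"), ("fail","fail")),
    ("pass", ("pass","pass"), ("pass","pass"), ("pass","pass")),
    ("pass", ("fail","fail"), ("fail","fail"), ("fail","fail")),
    ("pass", ("pass","pass"), ("pass","pass"), ("pass","pass")),
    ("fail", ("pass","pass"), ("pass","pass"), ("pass","pass")),
    ("fail", ("pass","pass"), ("fail","fail"), ("fail","fail")),
    ("pass", ("pass","pass"), ("pass","pass"), ("pass","pass")),
    ("pass", ("pass","pass"), ("pass","pass"), ("pass","pass")),
]

agree_all = 0       # every trial of every operator gives the same call
agree_standard = 0  # ...and that shared call also matches the known attribute

for known, *ops in rows:
    calls = {c for trial in ops for c in trial}
    if len(calls) == 1:
        agree_all += 1
        if calls == {known}:
            agree_standard += 1

print(agree_all, agree_standard)  # 7 of 12 agree; 5 of 12 also match standard
```

Samples 7 and 9 show why both columns matter: the operators agree perfectly with each other but are consistently wrong against the standard, a reproducible but inaccurate inspection.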
Repeatability and Reproducibility

The green triangle represents the actual score of the appraiser. The range between the red squares is the Confidence Interval, which is a function of the operator’s score and the size of the sample they have inspected.
Statistical Report
M&M Exercise
• Pick 50 M&Ms out of a package.
• Enter results into either the Excel template or MINITAB™ and draw conclusions.
• The instructor will represent the customer for the attribute score.
To complete this study you will need a bag of M&Ms containing 50 or more “pieces” and the Attribute Value, meaning the “True” value, for each piece. In addition to being the facilitator of this study you will also serve as the customer, so you will have the say as to whether a piece is actually a Pass or Fail piece. Determine this before the inspectors review the pieces. You will need to construct a sheet as shown here to keep track of the “pieces” or “parts”, in our case M&Ms; it is important to be well organized during these activities. Then the inspectors will individually judge each piece based on the customer specification of a bright and shiny M&M with a nice M.
Notes
Measure Phase
Process Capability
Overview
Continuous Capability
Concept of Stability
Attribute Capability
Wrap Up & Action Items
Process Capability:
This is the Definition of Process Capability. We will now begin to learn how to assess it.
Process Capability
Capability Analysis
Capability Analysis provides you with a quantitative assessment of your process’s ability to meet the requirements placed upon it. The X’s (Inputs) feed the Process Function, Y = f(X), to produce the Y’s (Outputs); the variation seen in the verified data for the outputs Y1…Yn is the “Voice of the Process”. Analysis is traditionally performed on the outputs (CTQ’s) of a process, and output that falls outside the requirements produces Defects.
You will learn in the lesson how the output variation width of a given process output compares with the specification width established for that output. This ratio, the output variation width divided by the specification width, is what is known as capability.
Since the specification is an essential part of this assessment, a rigorous understanding of the validity of the specification is vitally important; it also has to be accurate. This is why it is important to perform a RUMBA type analysis on process inputs and outputs.
Process Capability
If the process variation is larger than the difference between the upper spec limit minus the lower spec limit, our product or service output will always produce defects; it will not be capable of meeting the customer or process output requirements. The goal is a process that is capable and on target: the Average sits on the Target midway between LSL and USL, which you reach by centering the process and reducing its spread.
As you have learned, variation exists in everything. There will always be variability in every process
output. You can’t eliminate it completely, but you can minimize it and control it. You can tolerate
variability if the variability is relatively small compared to the requirements and the process
demonstrates long-term stability, in other words the variability is predictable and the process
performance is on target meaning the average value is near the middle value of the requirements.
The output from a process is either: capable or not capable, centered or not centered. The degree of
capability and/or centering determines the number of defects generated. If the process is not
capable, you must find a way to reduce the variation.
And if it is not centered, it is obvious that you must find a way to shift the performance. But what do you do if it is both incapable and not centered? It depends, but most of the time you must minimize and get control of the variation first; this is because high variation creates high uncertainty, and you can’t be sure if your efforts to move the average are valid or not. Of course, if it is just a simple adjustment to shift the average to where you want it, you would do that before addressing the variation.
Problem Solving Options – Shift the Mean
This involves finding the variables that will shift the process over to the target. This is usually the easiest option.
Our effort in a Six Sigma project that is examining a process performing at a level less than desired is to Shift the Mean of performance such that all outputs are within an acceptable range.
Process Capability
Move the specification limits – obviously this implies making them wider, not narrower. Customers usually do not go for this option, but if they do…it’s the easiest!
Process Capability
Capability Studies
Steps to Capability:
#1 Verify Customer Requirements
#2 Validate Specification Limits
#3 Collect Sample Data
#4 Determine Data Type (LT or ST)
#5 Check data for normality
#6 Calculate Z-Score, PPM, Yield, Capability (Cp, Cpk, Pp, Ppk)
#7
Process Capability
Questions to consider:
• What is the source of the specifications?
  – Customer requirements (VOC)
  – Business requirements (target, benchmark)
  – Compliance requirements (regulations)
  – Design requirements (blueprint, system)
• Are they current? Likely to change?
• Are they understood and agreed upon?
  – Operational definitions
  – Deployed to the work force
Specifications must be verified before completing the Capability Analysis. It doesn’t mean that you will be able to change them, but on occasion some internal specifications have been made much tighter than the customer wants.
Data Collection
Capability Studies should include “all” observations (100% sampling) for a specified period.
You must know whether the data collected from the process is short-term or long-term.
Fill Q
Each lot is sampled as it leaves the manufacturing facility on its way to the warehouse. The results
are represented by the graphic where you see the performance data on a lot by lot basis for the
amount of fill based on the samples that were taken. Each lot has its own variability and average as
shown. The variability actually looks reasonable and we notice that the average from lot to lot is
varying as well.
What the customer eventually experiences in the amount of fluid in each bottle is the value across the
full variability of all the lots. It can now be seen and stated that the long-term variability will always be
greater than the short-term variability.
Process Capability
Baseline Performance
As an example, imagine you reported the process performance Baseline was based on distribution 3
in the graphic, you would mislead yourself and others that the process had excellent on target
performance. If you used distribution 2, you would be led to believe that the average performance was
near the USL and that most of the output of the process was above the spec limit. To resolve these
potential problems, it is important to always use long-term data to report the Baseline.
How do you know if the data you have is short or long-term? Here are some guidelines. A somewhat
technical interpretation of long-term data is that the process has had the opportunity to experience
most of the sources of variation that can impact it. Remembering the outputs are a function of the
inputs, what we are saying is that most of the combinations of the inputs, each with their full range of variation, have been experienced by the process. You may use these situations as guidelines.
Long-term data is a “video” of process performance and is characterized by these types of conditions:
Many shifts Many batches
Many employees Many services and lines
Many suppliers
Long-term variation is larger than short-term variation because of : material differences, fluctuations in
temperature and humidity, different people performing the work, multiple suppliers providing
materials, equipment wear, etc.
As a general rule, short-term data consist of 20 to 30 data points over a relatively short period of time
and long-term data consist of 100 to 200 data points over an extended period of time. Do not be
Process Capability
While we have used a manufacturing example to explain all this, it is exactly the same for a service or administrative type of process. In these types of processes, there are still different people, different shifts, different workloads, differences in the way inputs come into the process, different software, computers, temperatures, etc. The same exact concepts and rules apply.
You should now appreciate why, when we report process performance, we need to know what the data is representative of. Using such data we will now demonstrate how to calculate process capability and then we will show how it is used.
Components of Variation
In general, one or more months of data are probably more long-term than short-term; two weeks or less is probably more like short-term data.
Process Capability
Stability
A Stable Process is consistent over time. Time Series Plots and Control Charts are the typical graphs used to determine stability.
Stability is established by plotting data in a Time Series Plot or in a Control Chart. If the data used in the Control Chart goes out of control, the data is not stable.
At this point in the Measure Phase there is no reason to assume the process is stable. Performing a capability study at this point effectively draws a line in the sand. If however, the process is stable, short-term data provides a more reliable estimate of true process capability.
Looking at the Time Series Plot shown on this slide, where would you look to determine the entitlement of this process?
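One quick way to draw that line in the sand is to flag points outside ±3 standard deviations of the data collected so far. This is a simplified sketch on hypothetical data, not a proper control chart: an individuals chart would derive sigma from the average moving range, precisely because a large excursion inflates the overall standard deviation and can mask itself.

```python
from statistics import mean, stdev

def out_of_control(data):
    """Return indices of points outside mean +/- 3 sigma.
    Simplified: sigma here is the overall sample SD, not a moving-range estimate."""
    m, s = mean(data), stdev(data)
    return [i for i, x in enumerate(data) if abs(x - m) > 3 * s]

# Hypothetical process data with one obvious excursion at index 9.
data = [50, 52, 49, 51, 50, 48, 51, 50, 49, 90, 51, 50]
print(out_of_control(data))
```

Any flagged index is a reason to investigate Special Causes before trusting a short-term capability number.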
Process Capability
Measures of Capability
Mathematically Cpk and Ppk are the same, and Cp and Pp are the same. The only difference is the source of the data: Short-term and Long-term, respectively.
– Cp and Pp
  • What is Possible if your process is perfectly Centered
  • The Best your process can be
  • Process Potential (Entitlement)
Capability Formulas
Note: Consider the “K” value the penalty for being off center. LSL – Lower specification limit; USL – Upper specification limit.
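The standard formulas, Cp = (USL − LSL)/6σ and Cpk = min(USL − x̄, x̄ − LSL)/3σ, use the short-term (within) sigma; substituting the long-term (overall) sigma gives Pp and Ppk. A sketch plugging in Supplier 1's values from the MINITAB™ example that follows:

```python
def cp(usl, lsl, sigma):
    """Potential capability: spec width over 6 sigma (centering ignored)."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Actual capability: distance from mean to nearest spec over 3 sigma.
    The gap between Cp and Cpk is the 'K' penalty for being off center."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Supplier 1 values from the capability study in this section.
usl, lsl = 602, 598
mean, sd_within, sd_overall = 599.115, 0.559239, 0.604106

print(round(cp(usl, lsl, sd_within), 2))          # Cp  (short-term)
print(round(cpk(usl, lsl, mean, sd_within), 2))   # Cpk (short-term)
print(round(cp(usl, lsl, sd_overall), 2))         # Pp  (long-term)
print(round(cpk(usl, lsl, mean, sd_overall), 2))  # Ppk (long-term)
```

The four results reproduce the Cp 1.19, Cpk 0.66, Pp 1.10 and Ppk 0.62 shown in the session window.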
Process Capability
MINITAB™ Example
At this point in time we are only attempting to get a Baseline number that we can compare to at the end of problem solving. We are not using it to predict quality; we want to get a snapshot. DO NOT try and make your process STABLE BEFORE working on it! Your process is a project because there is something wrong with it, so go figure it out; don’t bother playing around with stability.
Create a Capability Analysis for both suppliers; assume long-term data. Note the subgroup size for this example is 5. LSL = 598, USL = 602.

Process Capability of Supplier 1
  LSL 598, USL 602, Sample Mean 599.115, Sample N 100
  StDev(Within) 0.559239, StDev(Overall) 0.604106
  Potential (Within) Capability: Cp 1.19, CPL 0.66, CPU 1.72, Cpk 0.66
  Overall Capability: Pp 1.10, PPL 0.62, PPU 1.59, Ppk 0.62
  Observed Performance: PPM < LSL 30000.00, PPM > USL 0.00, PPM Total 30000.00
  Exp. Within Performance: PPM < LSL 23088.05, PPM > USL 0.12, PPM Total 23088.18
  Exp. Overall Performance: PPM < LSL 32467.79, PPM > USL 0.90, PPM Total 32468.68

599.548 is the process Mean which falls short of the target (600) for Supplier 1, and the left tail of the distribution falls outside the lower specification limit. From a practical standpoint what does this mean? You will have camshafts that do not meet the lower specification of 598 mm.
Next we look at the Cp index. This tells us if we will produce units within specification.
600.06 is the process Mean for Supplier 2 and is very close to the target, although both tails of the distribution fall outside of the specification limits. The Cpk index is very similar to Supplier 1, but this infers that we need to work on reducing variation.

Process Capability of Supplier 2
  LSL 598, USL 602, Sample Mean 600.061, Sample N 100
  StDev(Within) 1.00606, StDev(Overall) 1.14898
  Potential (Within) Capability: Cp 0.66, CPL 0.68, CPU 0.64, Cpk 0.64
  Overall Capability: Pp 0.58, PPL 0.60, PPU 0.56, Ppk 0.56
  Observed Performance: PPM < LSL 40000.00, PPM > USL 60000.00, PPM Total 100000.00
  Exp. Within Performance: PPM < LSL 20251.30, PPM > USL 26969.82, PPM Total 47221.11
  Exp. Overall Performance: PPM < LSL 36425.88, PPM > USL 45746.17, PPM Total 82172.05

When making a comparison between Supplier 1 and 2 relative to Cpk vs Ppk, we see that Supplier 2’s process is more prone to shifting over time. That could be a risk to be concerned about.
Again, compare the PPM levels. What does this tell us? Hint: look at PPM < LSL.
So what do we do? In looking only at the means you may claim that Supplier 2 is the best. Although Supplier 1 has greater potential as depicted by the Cp measure, and it will likely be easier to move their Mean than deal with the variation issues of Supplier 2. Therefore we will work with Supplier 1.
Process Capability
MINITAB™ Example (cont.)
Option 1: Enter subgroup size = total number of samples. Option 2: Go to options, turn off Within subgroup analysis.
The default of MINITAB™ assumes long-term data. Many times you will have short-term data, be
sure to adjust MINITAB™ based on Option 1 or 2 as shown here to ensure you get a proper
analysis.
For option 1 you will enter the subgroup size as the total number of data points you have in your
short-term study.
For option 2, you will turn off the within subgroup analysis found inside the Options selection.
Process Capability
(Session window output: Sample N 150, StDev(Within) 5.40199, StDev(Overall) 20.93958, Z.USL 2.74, Cpk 0.91, Overall Capability 0.93. Normality test: Mean 50.19, StDev 20.90, N 150, AD 11.238, P-Value <0.005, so the data are not Normal.)
Here in the Measure Phase stick with observed performance unless your data are Normal. There are ways to deal with Non-normal Data for predictive capability, but we’ll look at that once you have removed some of the Special Causes from the process. Remember, here in the Measure Phase we get a snapshot of what we’re dealing with; at this point don’t worry about predictability, we’ll eventually get there.
Capability Steps
When we follow the steps in performing a capability study on Attribute Data we hit a wall at step 6. Attribute Data is not considered Normal, so we will use a different mathematical method to estimate capability.
We can follow the steps for calculating capability for Continuous Data until we reach the question about data Normality:
Select Output for Improvement
#1 Verify Customer Requirements
#2 Validate Specification Limits
#3 Collect Sample Data
#4 Determine Data Type (LT or ST)
#5 Check data for Normality
#6 Calculate Z-Score, PPM, Yield, Capability (Cp, Cpk, Pp, Ppk)
#7
Process Capability
#2 Validate Specification Limits
#3 Collect Sample Data
#4 Calculate DPU
#5 Find Z-Score
#6 Convert Z-Score to Cp & Cpk
#7
Z Scores
The Z Score effectively transforms the actual data into standard normal
units. By referring to a standard Z table you can estimate the area under
the N ormal curve.
– Given an average of 50 with a Standard Deviation of 3 what is
the proportion beyond the upper spec limit of 54?
Process Capability
Z Table
In our case we have
to lookup the
proportion for the Z
score of 1.33. This
means that
approximately 9.1%
of our data falls
beyond the upper
spec limit of 54. If
we are interested in
determining parts
per million defective
we would simply
multiply the
proportion .09176 by
one million. In this
case there are
91,760 parts per
million defective.
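The table lookup above can be reproduced directly: Z = (USL − x̄)/σ = (54 − 50)/3 ≈ 1.33, and the upper tail of the standard Normal gives the proportion beyond the spec. A sketch using only the standard library:

```python
from statistics import NormalDist

mean, sd, usl = 50, 3, 54

z = (usl - mean) / sd           # distance to USL in standard Normal units
tail = 1 - NormalDist().cdf(z)  # proportion beyond the upper spec
ppm = tail * 1_000_000          # parts per million defective

print(round(z, 2))     # 1.33
print(round(tail, 4))  # ~0.0912; the table's 0.09176 uses z rounded to 1.33
```

The tiny difference from the table value comes from rounding Z to two decimals before the lookup; either way, roughly 9.1% of output falls beyond the upper spec.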
Attribute Capability
From the sigma conversion table: a 5 sigma process corresponds to roughly 232.7 defects per million and a 6 sigma process to 3.4 defects per million.
A stable process can shift and drift by as much as 1.5 Standard Deviations. Want the theory behind the 1.5? Google it! It doesn’t matter.
Process Capability
A total of 20,000 calls came in during the month but 2,500 of them
“ dropped” before they were answered (the caller hung up).
Process Capability
“Cpk” is an index (a simple number) which measures how close a process is running to its specification limits, relative to the natural variability of the process.
1. Calculate DPU
2. Look up DPU value on the Z-Table
3. Find Z Score
4. Convert Z Score to Cpk, Ppk

Example: Look up Z_LT = 1.11. Convert Z_LT to Z_ST = 1.11 + 1.5 = 2.61.

A Cpk of at least 1.33 is desired and is about 4 sigma+, with a yield of 99.3790%.
If you just want to know how much variation the process exhibits, a Ppk measurement is fine. Remember Cpk represents the short-term capability of the process and Ppk represents the long-term capability of the process.
With the 1.5 shift, the above Ppk process capability will be worse than the Cpk short-term capability.
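Applying the four steps to the dropped-call example earlier in this section (20,000 calls, 2,500 dropped) might look like the sketch below. Treating each dropped call as one defect per unit is an assumption of the sketch:

```python
from statistics import NormalDist

calls, dropped = 20_000, 2_500

dpu = dropped / calls                  # 1. Calculate DPU
yield_lt = 1 - dpu                     # proportion of good units
z_lt = NormalDist().inv_cdf(yield_lt)  # 2-3. Z-Table lookup -> long-term Z
z_st = z_lt + 1.5                      # 4a. apply the 1.5 sigma shift
cpk = z_st / 3                         # 4b. convert Z score to Cpk

print(round(dpu, 3), round(z_lt, 2), round(cpk, 2))  # 0.125 1.15 0.88
```

So 12.5% dropped calls corresponds to a long-term Z of about 1.15 and, after the shift, a Cpk well below the desired 1.33.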
Process Capability
Notes
Measure Phase
Wrap Up and Action Items
The Measure Phase is now complete. Get ready to apply it. This module will help you create a
plan to implement the Measure Phase for your project.
• Being rigorous, disciplined
Listed below are the M ea sure Delivera bles that each candidate
should present in a Power Point presentation to their mentor and project
champion.
Look for the potential roadblocks and plan to address them before they
become problems:
– Team members do not have the time to collect data.
– Data presented is the best guess by functional managers.
– Process participants do not participate in the creation of the X-Y
Matrix, FMEA and Process Map.
It won’t all be smooth sailing…
You will run into roadblocks throughout your project. Listed here are some common ones that Belts
have to deal with in the Measure Phase.
DMAIC Roadmap
Process Owner
Champion/
Estimate COPQ
Establish Team
Measure
Measure Phase
The way that you apply the Six Sigma problem-solving methods to a project determines your path: Detailed Process Mapping, then Identify All Process X’s Causing Problems (Fishbone, Process Map), then ask whether the measurement system is Repeatable & Reproducible.
WHAT   WHO   WHEN   WHY   WHY NOT   HOW
Identify the complexity of the process
Focus on the problem solving process
Define Characteristics of Data
Validate Financial Benefits
Balance and Focus Resources
Over the last decade of deploying Six Sigma it has been found that the parallel application of the
tools and techniques in a real project yields the maximum success for the rapid transfer of
knowledge. For maximum benefit you should apply what has been learned in the Measure Phase
to a Six Sigma project. Use this checklist to assist.
Notes
Measure Phase
Quiz
Now we will see what you have retained from the Measure Phase of the course. Please answer
these questions to the best of your ability without referencing the text. The answers are in the
Appendix. Please check your answers against the answers provided and review the sections in
the Measure Phase where your retention of the knowledge is less than you desire.
1. When looking at precision, the primary desire is to confirm the process measurement system has low Repeatability and ____________________. (fill in the blank)
2. The difference in Bias values across the process range are known as _______________________. (fill in the blank)
3. There are many reasons why Basic Statistics are important to a Black Belt. The following
items are good reasons for using Basic Statistics except which one?
A. Makes inferences about the future
B. Foundation for assessing process capability
C. Data collection for streamed orientation
D. Provide a numerical description of the data especially if it´s Normally Distributed
5. A Black Belt was entering data into MINITABTM. The data being entered is the name of
the countries that his company supplies product to. This is an example of:
A. Nominal Scale Data
B. Ration Scale Data
C. Continuous Data
D. Ordinal Scale Data
6. The most frequently occurring number in a distribution set is 7. The 7 is the sample´s?
A. Mean
B. Median
C. Mode
D. Standard Deviation
7. A fundamental rule is that Standard Deviations cannot be summed but variances can be summed.
True False
8. The main difference between Special Cause and Common Cause is? (check all that
apply)
A. Sample size impacts if Common Cause variation is found or not.
B. Special Causes are often the focus of BB projects
C. Special Causes are found in short term Process Capability
D. Common Cause variation is larger than Special Cause variation.
9. The Fishbone is a tool to generate ideas about possible causes for defects.
True False
10. The X-Y Diagram is a tool used to identify/collate potential X´s and assess their relative
impact on multiple Y´s.
True False
11. The X-Y Diagram serves an important function to a Black Belt. From the list below select the item that best describes the importance of the X-Y Diagram.
A. To eliminate the obvious high impact independent variables
B. To help prioritize the independent variables
C. To help prioritize the dependent variables
D. To help with project scope
12. The term FMEA is an abbreviation for Failure Measures Effect Analysis.
True False
13. The FMEA tool is an important tool for a Black Belt. From the list below select the items
that describe the importance of constructing a FMEA. (check all that apply)
A. Predict failure risks and minimize their occurrence
B. Quantifies the severity, occurrence and detection of defects
C. Highlights the non-value added portions of a process
D. Identify ways how a process leads to a failure to meet customer requirements
15. After performing a MSA study if an error occurs, the error can be categorized into which
two specific categories?
A. Precision
B. Detailed
C. Accuracy
D. Random
E. Desirability
16. The following are some good examples of what Black Belt projects should measure:
(check all that apply)
A. Primary and Secondary Metrics
B. Vital few X´s in the process
C. Before and after process changes
D. All outputs of the process steps
17. The reason for performing a MSA on your system is to confirm minimal variation or
inaccuracy with your measurement systems and reduce the sources for the excessive
variation or inaccuracy.
True False
18. Accuracy can be assessed in several ways. From the list below select the least correct
accuracy assessment.
A. Measurement of a known standard
B. Comparison to another recently calibrated instrument with a proven accuracy
C. Comparison with another proven measurement technique
D. Comparison with a proven precise instrument
19. A Crossed Design Gage R&R is best used for destructive testing.
True False