Lean Six Sigma Mentor Guide
Focus
This guide uses the DMAIC roadmap in discussing Lean Six Sigma tools. The 14 questions managers need to ask their people will step you through the DMAIC process. The emphasis is on proper use of, and common mistakes with, Lean Six Sigma tools, and on completing projects. SPC XL from Air Academy Associates is used for the computer-generated graphs.
1. Which value stream are you supporting and who is the recipient of the value, i.e., who is the customer? Who is the value stream owner and who are the players or team members? How well does the team work together?
2. Within the value stream, which process or processes have the highest priority for improvement? Show me the data that led to this conclusion.
3. How is the process performed? How does the value flow? What activity is value added and what is non-value added?
4. What are the process performance measures, i.e., how will we gauge if a process is improving? Why did we choose those? How accurate and precise is the measurement system? Show me the data.
5. What are the customer-driven requirements or specifications for all of the performance measures? Are the process performance measures in control and how capable is the process? Show me the data.
6. What are the improvement goals for the value stream or process performance measures?
7. What kinds of waste and cost of poor quality exist in the value stream or process and what is the financial and/or customer impact? Show me the data.
8. What are all the sources of variability in the value stream or process and which of those do we control? How do we control them and what is our method of documenting and maintaining this control? Show me the data.
9. Are any sources of waste or variability supplier-dependent? If so, what are they, who are the suppliers, and how are we working together to eliminate waste and variability? Show me the data.
10. What are the key input variables that affect the average and standard deviation of the measures of performance? How do you know this? Show me the data.
11. What are the relationships between the measures of performance and the key input variables? Do any of the key input variables interact? How do you know for sure? Show me the data.
12. What settings or values for the key input variables will optimize the measures of performance? How do you know for sure? Show me the data.
13. For the optimal settings of the key input variables, what kind of variability still exists in the performance measures? How do you know? Have we implemented a process flow and control system to sustain the gains and continuously improve the process? Show me the data.
14. How much improvement has the value stream or process shown in the past six months? How much time and/or money have our efforts saved the company? Show me the data.
Define: Input Process Output (IPO) Diagram, Failure Mode and Effects Analysis (FMEA), Pareto
Measure: Process Flow (PF) Diagram, Histogram, Cause and Effect (CE) Diagram, Run Chart, Measurement System Analysis (MSA), Process Capability (Cpk)
Analyze: Scatter Plot, Control Chart, Hypothesis Test
Improve: Design of Experiments (DOE)
Control: Standard Operating Procedures (SOPs)
- IPO Diagram not completed or sufficient
- COPQ not completely realized
- Champion support lacking
- Team not well organized, represented, or trained
- Prioritization is lacking
Following the DMAIC roadmap is the key; if this is done, there is a much higher likelihood of success. Understanding the proper use and interpretation of the Lean Six Sigma tools is a must. A project timeline helps to move the project along to completion.
IPO Diagram
Inputs: sources of variation. Process: a description of the process. Outputs: measures of performance.
Using this tool will allow all involved to have a common picture of the process.
Process: a blending of Inputs to achieve the desired Outputs.
Inputs and Outputs should be in units of measure. Outputs should measure the process performance to be: Better, Faster, Lower Cost.
Often this is done in a subjective statement, i.e., reducing downtime of pumps. Instead, Pump Operation might be a better name for the process. Often the process listed is too large in scope; if this is the case, one can narrow the process or consider the overall scope to be a macro view. Later the process can be narrowed by making smaller input IPOs cascading into the larger, macro version. See example on next page.
[Example, macro-level IPO: Oil and Gas Production, CPI Corporate Level. Outputs: ROCE (%), Economic Capital Expenditure ($MM/mo), OEB ($MM/mo), Depreciation ($MM/mo), Inventory ($MM)]
Note: This is an overall, macro view; many of the input factors could be incorporated in their own IPO diagrams feeding into the overall IPO shown here.
O: list the outputs of the process. These are the measures of performance, often referred to as:
- KPI: Key Performance Indicators
- CTC: Critical to Customer
- CTQ: Critical to Quality
These measures should be used to track the process as Better, Faster, Lower Cost, as well as Safe and Environmentally Sound. All outputs should list the units of measurement.
- Units of measure not listed. Example: Quality does not explain HOW this will be measured; a better performance indicator might be % rejects.
- Measures of quality should be normalized if possible. Instead of # of rejects, % rejects normalizes the data. This is important if the area of opportunity changes: if production increases, improvement can still be seen in the % reject data.
- Use goals for the performance measures as arrows on the far right side. An up arrow indicates you want that metric to increase. Stay away from writing these goals on the output line. Example: instead of "Production rate (units/day) 1200", write simply "Production rate (units/day)" and, off to the right, put an up arrow indicating you want that metric to increase.
- All outputs should be agreed upon by the process improvement team and management. They should be aligned with key business strategies, drive behavior, and help to assess accountability and responsibility. They should be captured on a process scorecard as well.
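A quick sketch of the normalization point above: converting a raw reject count into % rejects keeps the metric comparable when production volume changes. The production numbers here are made up for illustration.

```python
# Minimal sketch: normalize reject counts into % rejects so the metric
# stays comparable when the area of opportunity (production volume) changes.
daily = [
    {"units_produced": 1000, "rejects": 20},
    {"units_produced": 1500, "rejects": 24},  # more rejects, but a lower rate
]

for day in daily:
    pct = 100.0 * day["rejects"] / day["units_produced"]
    day["pct_rejects"] = round(pct, 2)

print([d["pct_rejects"] for d in daily])  # [2.0, 1.6]
```

Day 2 has more rejects in absolute terms, yet the normalized rate shows the process actually improved.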
I: list the input variables of the process. These are referred to as sources of variation. As a memory-jogging tool, the 6 Ms can be used to form categories: Manpower, Machine, Method, Materials, Measurement, and Mother Nature (Environment).
Known input categories can be used instead of, or in conjunction with, the 6 Ms. Components of the main Inputs can be added, i.e., Manpower could have branches such as skill, training, and morale.
Cascading IPO
Some processes are inputs to a downstream process. Some refer to this as SIPOC (Supplier, Input, Process, Output, Customer), as shown below.
[Example cascading IPO: Water Flooding Process. Process blocks: Produce Oil, Collect Fluid, Remove Oil, Remove Solids, Maintain Pressure. Streams between blocks include sludge oil, produced water, skimmed oil/water, oil-free water, solid- and oil-free water, and clean water to wells; inputs to the oil-removal step include chemical treatment, skimmer setting, and gas blanket.]
If the project's desire is producing clean water for injection, upstream processes will need to be addressed to improve the final listed process, injecting clean water.
[Example CE (fishbone) diagram: procurement Cycle Time (Days/Req). Main branches include Procedures (e.g., no standard bid package for services, no standard lead time, unclear roles and responsibilities, late request for quotation, incomplete flowchart, inspection procedures, terms of payment, customs problems), End User (e.g., user changes spec, unclear spec, poor technical evaluation, bypassing the approval authority, urgent/emergency needs, uncommon goods requests, not understanding the procedure), and Vendor Qualification (e.g., poor handling, unrealistic offers, unprofessional vendors, partial delivery, no stock, no back-up from principal, improper delivery schedule).]
Pareto Charts
Used to separate the vital few from the trivial many, answering which key inputs affect the performance measures. Using this tool will help to prioritize what we are to improve in the process. The charts can be constructed from data in tabulated or raw form. This tool can be used to determine root cause by forming multiple Pareto charts on various failure mechanisms (see examples). Pareto charts should be constructed on both a financial and a frequency basis.
Data Examples
Tabulated form:
Reasons for Pump Failures (Seals, Gaskets, Electrical, Lube Oil):
  Frequency: 45, 33, 23, 12
  Cost: $34,567, $24,975, $37,432, $12,643
Raw form:
Gaskets, Seals, Electrical, Seals, Lube Oil, Seals, Electrical, Seals, Lube Oil, Gaskets, Seals, Gaskets, Seals, Electrical, Lube Oil, Seals, Electrical, Seals, Seals, Electrical, Electrical, Seals, Lube Oil, Lube Oil, Electrical, Seals, Seals, Seals, Lube Oil, Seals, Electrical, Electrical
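Tallying raw failure logs into Pareto order can be sketched as below. The failure names are illustrative (one extra category, Bearings, is added to show the lumping of trivial categories into "Other" mentioned later).

```python
from collections import Counter

# Sketch: build Pareto-ordered frequencies from raw failure records.
raw = ["Seals", "Seals", "Electrical", "Lube Oil", "Seals", "Gaskets",
       "Electrical", "Seals", "Lube Oil", "Seals", "Electrical", "Bearings"]

counts = Counter(raw).most_common()   # sorted, vital few first
total = sum(c for _, c in counts)

# A running cumulative % identifies the "vital few" categories.
cum = 0
for reason, c in counts:
    cum += c
    print(f"{reason:12s} {c:3d}  {100 * cum / total:5.1f}%")
```

The same tally, weighted by repair cost instead of count, gives the financial-basis Pareto the text recommends constructing alongside the frequency one.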
[Pareto chart: Cost of Failure by reason, y-axis $0–$40,000]
[Pareto chart: # Observations by reason, y-axis 0–45]
[Two second-level Pareto charts: # Observations (0–30) for sub-causes of the top failure reasons]
Root Cause
One more Pareto shows root cause:
[Pareto chart: Reasons for Installation Failure, # Observations 0–25]
- By constructing Pareto charts by both frequency and cost, one can make the best decision on what to work on first; the highest-frequency failure is not necessarily the most costly one.
- A team should not always work on the highest failure reason; sometimes the easiest one to affect might be the one to improve.
- The team might not have enough data to make Pareto charts. At that point, measures should be put in place to capture data in the future. In capturing the data, reliability of the data should be a focus. Some have used Access and other databases to capture this data, with pull-down menus so people choose from a list of failure types; train them on how to distinguish failure types as well.
- If many categories of failure types exist, reduce the number of categories on the Pareto chart, capturing the very small ones in an "Other" category as the last column of data.
Used to help prioritize where to make process improvements. A detailed FMEA generates numerical data to point to problematic areas; however, this tool is difficult to construct. A basic FMEA is easy to construct but does not give a numeric value. This basic tool can point to root cause if performed correctly (see next slide).
Steps Involved:
1. Form a team of people closely associated with the process.
2. Ask all involved the question: "What can cause this process to go wrong, fail, or otherwise?"
3. Brainstorm a list of answers to the above question. Refrain from commenting on the answers from the group! Just document the answers on a flip chart.
4. Clarify the list: ask if anyone needs more information to understand the answers listed. If so, ask the author of the answer to clarify further. Ask the group to combine answers if possible.
5. Every team member votes for the most important items based on preset criteria such as frequency of occurrence, cost, etc. A good method to determine the number of votes everyone receives is to take the total number of items on the list and divide it by 3 (the N/3 technique). Members place one vote on each item they select (weighted votes should not be used).
6. As a general rule of thumb, circle the top 3-5 items.
7. Perform the 5 Whys to help determine root cause for these problems. Continue to ask "why does this happen" until you can go no further; that answer is typically the root cause.
8. This information is used to prioritize where to work first to improve a process, which data collection items are needed, and to help identify the most important noise variables on a cause and effect diagram, etc.
5 Whys Example
CPI EXAMPLE: Finding Root Cause Using the 5 Whys Duri Well Location Clean-Up Project
1. Why are the locations getting dirty in the first place? Because the operators cannot keep the stuffing boxes from leaking.
2. Why can't the operators keep the stuffing boxes from leaking? Because the packing seals are failing too frequently.
3. Why are the packing seals failing? Because the polish rod is wearing out the seals prematurely.
4. Why is the polish rod wearing out the seals? Because the polish rod is bent.
5. Why is the polish rod bent? Because the transportation trailer is too short and the polish rods are not properly supported. <= Root Cause
Standard FMEA
Failure Mode and Effects Analysis (FMEA):
A procedure used to identify and assess risks associated with product or process failure modes.
[Example FMEA worksheet with columns: Failure Mode, Failure Effects, SEV, Causes, OCC, Controls, DET, RPN, Actions, Plans, and revised post-action scores (pS, pO, pD, rpn). Sample entries include Failure Mode "Oil Spill" and "Motor Failure" (effects: Well Down, No Production); e.g., Motor Failure with cause "broken" is scored SEV 5 × OCC 2 × DET 3 = RPN 30, with "go see" checks as controls and PM as the action plan.]
This risk analysis tool can be used to allocate resources to address problem areas. FMEA looks at the Severity, the Occurrence, and the likelihood that a problem will go undetected (Detection). Risk can be reduced by lowering one or all of these factors. These charts can be generated in SPC XL: Quality Tools, FMEA.
Standard form:
- Look at the Risk Priority Number (RPN) column for high numbers; work on improving those failure reasons.
- Of the highest-priority failure reasons, you need not improve every category (Severity, Occurrence, and Avoiding Detection); typically only one area will require improvement to reduce the RPN.
- Make a plan to reduce the RPN; control it with Standard Operating Procedures (SOPs).
- Brainstorm the list of failure reasons with people who have process knowledge. Determine priority based on frequency of failure and cost impact.
- Perform the 5 Whys to point to the root cause of failure. Work to improve high-priority areas.
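The RPN arithmetic behind the worksheet can be sketched as below. The motor-failure scores echo the example worksheet (SEV 5, OCC 2, DET 3 → RPN 30); the other modes and scores are illustrative.

```python
# Sketch: RPN = Severity x Occurrence x Detection, then rank worst-first.
# Scores are on the usual 1-10 scales; only the motor-failure row comes
# from the example worksheet, the rest are made up.
failure_modes = [
    ("Motor failure", 5, 2, 3),   # (mode, SEV, OCC, DET)
    ("Oil spill",     8, 2, 2),
    ("Seal leak",     4, 6, 4),
]

ranked = sorted(((sev * occ * det, mode)
                 for mode, sev, occ, det in failure_modes), reverse=True)
for rpn, mode in ranked:
    print(rpn, mode)   # highest RPN first -> work on these
```

Note how a modest-severity mode ("Seal leak") can still top the list when it occurs often and escapes detection, which is why only one factor usually needs improving to pull the RPN down.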
- Form a cross-functional team of people with process knowledge.
- Decide the start and stop of the process.
- Agree upon the detail of process steps (micro vs. macro).
- Use Post-its and have team members list steps.
- Place steps in proper order.
- Go look at the process to confirm accuracy. Make changes to the process flow if needed.
- To make the process flow Lean, mark the steps by type of step (see next page).
[Example process flow chart: an aerosol can filling line of ~44 numbered steps, each marked by type (Operation, Transport, Delay, or Storage). Steps include: pallet to storage, transport to cleaner, cleaner, push to turntable, delay on turntable, transport to fill #1, fill #1, fill #2, fill #3, fill CO2, valve placement, transport to crimper, crimping, transport to weighing, check weigher, transport to turntable, waterbath, actuator, inspect actuator, put cap on, clean cans, transport to packing, seal box, pallet.]
[Example flow chart: oil spill reporting. Elements include: Spill Discovered → Investigate, with yes/no decision points; prepare KKP 2 within 2 hrs; update report to C SHE (cc SHE DRI) every 3 days; submit F 059 (approved by TM Prod and TM HES DRI) to C HES within 5 days; submit F 059 to TM Prod. & HES DRI (MNGR if > 5 BBL, VP if > 100 BBL); keep original file, keep copy and update database; review, finalize, and send KKP 3 (incl. monthly oil spill and preliminary report) to EPT JKT.]
- Note non-value added steps and remove as many as possible.
- Look for bottlenecks and problem areas; mark them appropriately.
- Are process steps out of sync? Change them where needed.
- Is there enough detail? If not, add steps or detail to the steps listed.
- Do all on the team agree with the process flow? Does it match the actual process? Make changes where needed.
Value added steps are those that the customer sees as adding value to the product. A good way to determine value added: would the customer pay for this step? Non-value added steps usually fall into categories such as transport, delay, storage, and inspection.
Remove as many non-value added steps as possible in the process. Process flow charting points to the non-value added steps.
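Once the steps are marked by type, totaling time by category shows how little of the cycle is value-added. A minimal sketch, with step names and minutes made up (and the assumption, per the text, that only operations count as value-added):

```python
# Sketch: total cycle time by step type from a marked process flow.
steps = [
    ("Transport to cleaner", "transport", 4),
    ("Clean cans",           "operation", 2),
    ("Delay on turntable",   "delay",     9),
    ("Fill",                 "operation", 3),
    ("Storage at pallet",    "storage",   30),
]

value_added = {"operation"}          # assumption: only operations add value
total = sum(minutes for _, _, minutes in steps)
va = sum(minutes for _, kind, minutes in steps if kind in value_added)
print(f"value-added: {va}/{total} min = {100 * va / total:.0f}%")
```

Percentages like this make the case for attacking transport, delay, and storage steps first.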
5S
- Sort, Set in Order, Shine, Standardize, and Sustain.
- A basic housekeeping tool; can reduce clutter and time spent looking for things.
- After 5S is completed, regular audits are needed to assure compliance. A reward system is beneficial to sustain gains.
- Often, extra tool sets and other expenses might be needed to achieve success; these items should have a cost-benefit analysis done to justify purchase.
- Best practices should be shared.
Visual Controls
Working along with 5S, visual controls will help to improve communication in the workplace and reduce time spent looking for things. Safety issues can also be addressed with visual controls: marking of danger areas, fire equipment, caution areas, etc. Coloring tools to indicate size and shape might help to reduce mistakes in using the wrong items. Inventory areas can be improved with visual controls as well.
Commonly called a Fishbone diagram. Used to capture the sources of variation in the process. Should be constructed in a cross-functional team setting, and should have at least 20-25 bones on the fish; this helps assure capturing most of the sources of variability in the process. Variables should be labeled C for constant, N for noise, and X for experimental. These graphs can easily be constructed in the SPC XL software by first filling in the PF/CE/CNX/SOP template (listed under Problem ID Tools), then using SPC XL to construct the diagram. Branches of the variables can also be added. The head of the fish should be the performance variable(s).
Located under Problem ID Tools, PF/CE/CNX/SOP Template. Here is the blank template:
CNX Template
[Blank template: C/N/X columns with rows Variable 1 through Variable 12 under each of six categories: Measurement, Method, Machine, Manpower, Materials, Environment]
You can change the category names if you want. Fill in the template, then choose Problem ID Tools, Create PF/CE/CNX/SOP Diagram (see next page).
The example shows no labels on the bones since the template was empty. After your graph is constructed, you need to fill in the output performance metric.
[Blank CE diagram: bones for Measurement, Method, Machine, Manpower, Materials, and Environment feeding into the Output]
Bones on the fish could use the 6 Ms or other categories, such as step 1, step 2, and so on.
[Example CE diagram with 6 M bones: Machines, Manpower, Measurement, Methods, Materials, Mother Nature]
Note: variables are labeled C = Constant, N = Noise, X = Experimental. Goal: change noise variables to constants, when economically possible, through the use of SOPs.
[Example CE diagram for unit PU3. Bones include Machine (equipment condition (N); PM/PdM tools (X); equipment that identifies potential problems (X); PPE (X); equipment that reduces/prevents unnecessary downtime (X)), Mother Nature (lightning, rain, and animals (N); environment (N): dust, dirt accumulation, presence of moisture), Method (PM/PdM program (X): schedule, asset data and condition, manufacturer recommendations, eliminating the defect; systems that alert to potential problems (X); MP2 optimization and development (X)), Material (literature that identifies parts (X); original specifications and drawings to ensure compatibility (X); immediate shipment from manufacturing locations (X)), and Measurement (potential failure parameters (X)).]
Areas are circled to indicate where the work will focus; arrows indicate what the team wanted each metric to do.
These should be constructed in a team setting; no one person can know all the variables in the process. Decide how the team is to label the variables: some mark C (constant) for all variables that are constant at the time of the graph's construction, some list C for all variables they WANT to hold constant, and some mark N-to-C for variables they plan to make constant through the use of SOPs. The choice is the team's, but it should be understood and consistently applied. Remove subjectivity from the variables, i.e., do not write "poor condition", simply write "condition". When brainstorming the list of variables, do not critique the list, just get it on the chart; if the variable IS important, it will surface later. Some IPO diagrams list several outputs; if the variables for each output are different, a cause and effect diagram should be constructed for each. If a variable on the CE diagram has many variables associated with it, it might need its own CE diagram. After the CE diagram is completed, root cause analysis should be performed to determine which variables affect the performance measures. Then the team should mark the CE diagram to indicate the variables they plan to improve. Not all noise variables should be controlled.
Histogram
This graph is used to see the distribution of the data and key statistical information such as the mean and standard deviation. From this information, one can gather much insight into the performance of the process. Before constructing a histogram, one should question the reliability of the data used; see the section on measurement error and MSA. The next slides show many of the common distributions and the conclusions one can draw from them.
[Histogram: Average of 4 Dice Rolled, classes from 1.5 to 5.75, # Observations 0-40]
This is an example of a normal distribution plotting the average of four dice rolled. With a normal distribution, using the mean and standard deviation, one can use the 68/95/99 rule to assess response probability.
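The dice example above can be simulated directly: averaging four dice per roll yields an approximately normal distribution, so roughly 68% of the averages fall within one standard deviation of the mean.

```python
import random
import statistics

# Sketch: averages of 4 dice approximate a normal distribution,
# so the 68/95/99 rule applies.
random.seed(1)
averages = [sum(random.randint(1, 6) for _ in range(4)) / 4
            for _ in range(10_000)]

mu = statistics.mean(averages)
sd = statistics.stdev(averages)
within_1sd = sum(mu - sd <= x <= mu + sd for x in averages) / len(averages)
print(round(mu, 2), round(within_1sd, 2))  # mean near 3.5, ~68% within 1 sigma
```

The same check with 2 and 3 standard deviations recovers the 95% and 99% figures of the rule.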
[Histogram: BOPD, exponentially shaped, classes from ~2 to ~613, # Observations 0-60]
With an exponential distribution, one might want to use the more conservative number, the median, instead of the average to assess the COPQ. To run a Cpk from this data, you could transform the data by taking its log, or simply place the specifications on this chart and physically count the product not in spec.
[Histogram: Die Number, classes 1-6, uniform, # Observations 0-80]
This is an example of a uniform distribution. Each of these classes of data has an equal chance of occurring in this process.
This data set on water densities indicates a bimodal distribution. Typically, bimodal means two different things are going on. In this case, further investigation points to old and new data: the process is changing, so the old data has a different distribution than the new data. Knowing this, the team increased sampling so that all process decisions were based on new data.
[Histogram: classes 0-5, parabolic (U-shaped), # Observations 0-40]
This is an example of a parabolic distribution. On controversial topics, survey results often look like this, indicating people have an opinion on one side or the other; very few people are in the middle.
Histogram Concerns
There should be enough data to show how the process is performing; a minimum of 25 data points is adequate. A histogram should be run on all data sets before statements are made about the process. Reliability of the data should always be in question. If the distribution is not normal, a Cpk analysis will not reliably determine accurate dpm and other quality measures. A histogram looks at all the data without regard to time; a run chart is needed to look at trends over time. For the most part, accept the software defaults when constructing a chart; changing the settings might make the chart misleading.
Run Charts
Used to track data over time. A good tool to see how performance variables are responding over time, and a very good tool to motivate teams in making process improvements and sustaining results. These graphs can be enhanced to show many aspects of process improvements and goals.
[Run chart: Distance in Inches (50-130) vs. Shot Number (1-39)]
This example shows Statapult data before and after PF/CE/CNX/SOPs. One can clearly see where the process improved, at shot number 21. Many items can be added to this simple chart to help motivate team members and show people what the process is doing, stretch goals, and much more. See next slide.
[Run chart: scrap %, y-axis 0.00%-1.00%, weekly from 1/2/2003 to 3/27/2003]
This example shows the process over time, tracking scrap %. The red line is the current process mean and the green is the stretch goal. The arrow in the upper right corner shows the direction we want the graph to go. The lines and arrows were drawn in. This is a great graph to post in the operations area of a plant to give everyone a picture of how the process is performing.
[Run chart: Cost/Job $ (0-70,000) by month, Jan_98-Mar_02. Annotations: mean = $50,220 before "Reduce FDA # of Jobs" (done prior to the Six Sigma effort); mean = $34,316, then $28,464, marking a significant shift in the mean after Six Sigma.]
Information has been added to this chart. As in the previous slide, constructing a run chart will only produce a simple graph of the data. From there, a person can use the software drawing tools to add 1) means of the process before and after process changes, 2) what was done to the process, 3) performance goals, 4) economic results, etc.
Baseline data is not always available; the performance measures should be captured as soon as possible. Often, "before" and "after Six Sigma" are marked on the graph by a drawn-in vertical line; the "after" line should be placed at the time SOPs and other action items are in place. What should be done with outliers, since they will shift the means before and after improvements? Discussion with the process Champion may be the best way to handle such data; in short, take a conservative approach to assessing improvements and financial gains. These graphs should be easy to read: avoid bar graphs, as they fill in the area where text detailing improvements can be written. Draw a trend arrow in the upper right corner so all involved in the process know the direction the graph SHOULD go. Continue to use the run chart after the project is completed; this will help sustain gains.
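Computing the before/after means to annotate on the chart can be sketched as below. The cost values and change point are illustrative, not the actual project data.

```python
import statistics

# Sketch: before/after means for a run chart annotation.
# `change_at` is the index where SOPs took effect (illustrative data).
costs = [51, 49, 52, 50, 48, 35, 34, 33, 36, 34]   # $k per job
change_at = 5

before = statistics.mean(costs[:change_at])
after = statistics.mean(costs[change_at:])
print(f"mean before = {before:.1f}k, after = {after:.1f}k")
```

The two means become the horizontal lines drawn on the chart; a conservative treatment of outliers, as discussed above, would be applied before this calculation.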
This tool is used to track the performance metrics of the process vs. customer specifications. Assure the data is reliable, accounting for measurement error (see MSA). Confirm the specifications with the customer, and ask for the foundation for the specifications, such as economics, operational needs, etc. Determine that the distribution of the data is normal by constructing a histogram; if the distribution is not normal, the measures of quality derived from the Cpk graph will not be accurate.
[Histogram and Cpk Analysis of the same data, with specification limits drawn at LSL = 21.5 and USL = 28.5]
Mean = 24.752, StdDev = 2.4711, USL = 28.5, LSL = 21.5, Sigma Level = 1.3159, Sigma Capability = 1.4164, Cpk = .4386, Cp = .4721, DPM = 158,759, N = 100
Measures of Quality
Cp, Cpk, Sigma level, Sigma capability, and dpm. Cp and Sigma Capability will not be generated with a one-sided specification. Other information regarding the process will also be shown: the mean and standard deviation.
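The capability arithmetic behind these measures can be sketched using the statistics from the earlier Cpk example (mean 24.752, standard deviation 2.4711, specs 21.5-28.5); assuming normality, it reproduces that slide's Cp, Cpk, and DPM values.

```python
import math

# Sketch of the capability arithmetic, assuming normally distributed data.
mean, sd = 24.752, 2.4711
lsl, usl = 21.5, 28.5

cp = (usl - lsl) / (6 * sd)                      # potential capability
cpk = min(usl - mean, mean - lsl) / (3 * sd)     # actual, penalizes off-center

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Defects per million: normal tail areas outside the spec limits.
dpm = (norm_cdf((lsl - mean) / sd) + (1 - norm_cdf((usl - mean) / sd))) * 1e6
print(round(cp, 4), round(cpk, 4), round(dpm))  # ~0.4721, ~0.4387, ~158,760 dpm
```

Because Cp ignores centering while Cpk does not, the gap between them (0.472 vs. 0.439) is exactly the "process not centered" signal discussed later.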
If the Cpk is 2 or more, the process is very good and might not need improvement. If the Cp and the Cpk are not the same, the process is not centered. Centering the process performance between the specs involves adjusting the mean of the process; this is typically easier to do than reducing process variance and will get the team a quick win. Ask the customer to reconsider the specifications; they might be in a position to relax them. Reduce the variation in the process; this is done by PF/CE/CNX/SOPs.
Cpk Concerns
Do not use x-bar data for this tool. Using x-bars will reduce the standard deviation of the distribution and might falsely indicate a capable process; instead, use the raw data from which the x-bars were calculated. Work to make the process stable before assessing capability. Stability can be assessed with control charts and achieved by PF/CE/CNX/SOPs. An adequate amount of data is needed to assess capability accurately.
[Cpk Analysis: an off-center distribution relative to spec limits, x-axis roughly 4.98-5.48]
Steps to improve the process: 1. Center the process. 2. Ask the customer if they can relax the specifications. 3. Reduce the process variance via PF/CE/CNX/SOPs.
[Cpk Analysis: the same distribution after centering between the spec limits, x-axis roughly 4.98-5.48]
By centering the process, dpm was reduced from 365,114 to 37,612. Further improvement could be achieved through variance reduction.
MSA Planning
What are the performance metrics and how are they measured? In an MSA, both repeatability and reproducibility are determined. To conduct an MSA looking for differences among operators, the parts measured, the SOPs, and the measurement materials should be held constant; only the operators change. 70% of the process variance should be represented in the parts measured. As a general rule of thumb, measure at least 10 parts twice to meet resolution requirements. Assure the parts are marked blindly so the operators are not aware of which part they are measuring.
Set up an MSA
First, look at a histogram of the performance metrics. In the example on the next page, there is a huge spread in the data. Upon further investigation, there are three processes going on within this data: a first, second, and third stage of treatment.
[Histogram: trimodal data, classes from 34.7 to 236.7, # Observations 0-30]
There are three distributions in this data. To determine the measurement error, three separate MSAs should be conducted; by doing this, you can see if the measurement is reliable in each range of data. You should have samples representing 70% of the process variance for each of the three data sets in order to conduct each MSA. If all the data were put into one large MSA, the total variance compared to the measurement error might be skewed on the low side.
Conduct an MSA
If a known standard is available, place it in the reference column on the template. If specifications are known, use them when building the template. Develop SOPs to conduct the MSA and make sure all involved know and follow them. Do not throw out data. Make sure all data is entered correctly BEFORE analyzing.
The preferred method to analyze the data is ANOVA; however, if you have only one measurement per part, you cannot use this method. ANOVA analysis will generate a part-to-operator interaction and is less sensitive to outliers. First, is the measurement error (PTOL) 10% or less? If not, which is higher, repeatability or reproducibility? This will point to process improvements: repeatability problems point to inadequate SOPs, while reproducibility problems point to some operators performing fine while others are not. What are the differences? How could the SOPs be changed to have all operators perform best? If customer specifications were used, was the PTOL less than 10%? If the PTOL < 10% but the PTOT > 10%, the measurement is still OK; this points to a capable process that will not suffer from misclassification.
A completed Template
Customer Specifications are listed
MSA Data Template
Date: 4/22/2003   Part Type:   Description:   USL: 5.5   LSL: 3.5
(For attribute data, enter A for Accept and R for Reject)

Part #   Op 1 Rep 1   Op 1 Rep 2   Op 2 Rep 1   Op 2 Rep 2   Op 3 Rep 1   Op 3 Rep 2
1        5.0          5.1          5.1          5.3          5.0          5.3
2        4.9          5.1          4.9          5.2          4.9          4.7
3        4.5          4.3          4.6          4.9          4.7          5.0
4        3.8          4.0          4.0          4.3          4.2          3.9
5        4.9          5.2          4.9          5.2          4.8          4.9
6        3.9          3.8          4.0          3.8          3.9          4.2
7        5.5          5.7          5.6          5.9          5.4          5.6
8        5.0          5.3          4.9          5.3          5.0          5.1
9        4.5          4.6          4.3          4.6          4.7          4.4
10       3.5          3.7          3.8          3.4          3.4          3.6
Look over the data, making sure it is entered correctly. Look for decimal placement and such. After it is checked, the data can be analyzed. No reference was used in this example.
Both P/TOL and P/TOT are too high, well over 10% error
Bias
With repeatability as the main issue, the overall SOPs should be reviewed. There is no bias analysis due to the lack of a reference or standard in this analysis.
Measurement
Here you see that operator #1 had a much different reading than operators 2 and 3.
This graph shows the average measurement of each part by each operator. The best result for this graph would be three completely overlapping lines, indicating all operators made the same readings on average.
With this graph you can see the difference between the Sigma Total and the Sigma Product. When there is a gap at the top of the curves, that illustrates the degree of measurement error. If the measurement system were good, this graph would show one curve superimposed over the other. The red lines are the spec limits.
Misclassification
dpm Potentially Misclassified = 461,226.805
This graph takes the distribution of the measurement error and places it over the spec area. This example shows that, with the large measurement error, over 46.1% of the product could be misclassified as being in spec.
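The misclassification idea can be illustrated with a quick Monte Carlo. The process and measurement sigmas below are assumed purely for illustration; they are not the values behind the dpm figure above.

```python
# Illustrative Monte Carlo of measurement-driven misclassification.
# SIGMA_PROD and SIGMA_MEAS are ASSUMED values, chosen only to show how a
# large measurement error inflates the potentially-misclassified dpm.
import random

random.seed(1)
LSL, USL = 3.5, 5.5
MU, SIGMA_PROD, SIGMA_MEAS = 4.5, 0.7, 0.6

n = 100_000
mismatch = 0
for _ in range(n):
    true_val = random.gauss(MU, SIGMA_PROD)            # true part value
    measured = true_val + random.gauss(0, SIGMA_MEAS)  # reading with error
    in_spec_true = LSL <= true_val <= USL
    in_spec_meas = LSL <= measured <= USL
    if in_spec_true != in_spec_meas:                   # classification flips
        mismatch += 1

dpm = mismatch / n * 1_000_000
print(f"Potentially misclassified: {dpm:,.0f} dpm")
```

Shrinking SIGMA_MEAS in this sketch drives the dpm toward zero, which is the point of improving the measurement system before trusting the data.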
[Bar chart of variance components: Part-to-Part, Repeatability, Reproducibility]
The repeatability is much higher than reproducibility, indicating repeatability is the problem.
The Part-to-Part bar is the variability in the process itself. This is supposed to be the largest bar.
X Bar Chart
MSA- Xbar Chart
[Xbar chart of part averages by operator, parts 1-10]
In this graph the control chart limits are established from +/- 3 standard deviations of the measurement error. The smaller the limits, the better the measurement process is. The rule of thumb is to see at least 50% of the measurements outside the control chart limits.
Range Chart
MSA- Range Chart
[Range chart of measurement ranges by part for each operator]
The range chart shows the range of measurements for each operator. The control chart limits are based on +/- 3 standard deviations of the overall range. The lower the range the better, so operators with data near 0 would be the best.
MSA Conclusions
Data should not be used to make process decisions until the measurement error is known. If the measurement error is < 10%, the measurement system is fine. If the error is > 10%, the system should be improved. Spend as much time as necessary to plan and conduct the MSA. The better the experimental discipline, the more accurate the results will be.
Hypothesis Testing
This tool is used to determine if a process has changed: in mean, in standard deviation, both, or neither. The tests compare pairs of data sets to see if they are significantly different from one another. The rule of thumb for significance is a p-value < 0.05, which would indicate less than a 5% chance of falsely stating the data sets are different.
Sample Data
Statapult launching data: highlight each column or row, and run a T-test and/or F-test.
Group 1 Before 130 133 136 138 134 119 116 110 57 104 113 112 117 110 110 115 124 121 142 144 146
After 121.5 122 119 119.5 119 118.5 120 116.5 117 120 118 118.5 119 121 116 118 119 121
T-test is NOT significant; the P-value > 0.05. Conclusion: the SOPs on launching the Statapult did not affect the mean of the process.
The results below represent the p-values from a 2 sample F-test. This means the probability of falsely concluding the alternative hypothesis is the value shown (where the alternate hypothesis is that the variances are NOT equal). Another way of interpreting this result is that you can have (1-pvalue)*100% confidence that the variances are not equal.
F-test IS significant; the P-value < 0.05. Conclusion: the SOPs on launching the Statapult do affect the standard deviation of the process.
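The two conclusions above can be reproduced from the raw Statapult data. This sketch computes Welch's t statistic and the F ratio with Python's standard statistics module; SPC XL additionally converts these statistics to p-values, which this minimal version does not.

```python
# Compute the test statistics behind the T-test and F-test conclusions.
import statistics

before = [130, 133, 136, 138, 134, 119, 116, 110, 57, 104, 113,
          112, 117, 110, 110, 115, 124, 121, 142, 144, 146]
after = [121.5, 122, 119, 119.5, 119, 118.5, 120, 116.5, 117,
         120, 118, 118.5, 119, 121, 116, 118, 119, 121]

var_b, var_a = statistics.variance(before), statistics.variance(after)

# Welch's t statistic: a small value -> the mean did not shift
t_stat = (statistics.mean(before) - statistics.mean(after)) / (
    var_b / len(before) + var_a / len(after)) ** 0.5

# F statistic: variance ratio far above 1 -> the spread clearly shrank
f_stat = var_b / var_a
print(f"t = {t_stat:.2f}, F = {f_stat:.1f}")
```

The tiny t statistic and the enormous F ratio tell the same story as the SPC XL p-values: no mean shift, but a large reduction in variability.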
The T-test and F-test results show only significance, not the process means and standard deviations. The means and standard deviations should be calculated, then cut and pasted into a document along with the hypothesis test results, explaining what the team was trying to prove. The mean and standard deviation can be obtained by running Summary Stats on the data (see next page). The results summary might look like the following example.
Summary Stats
Here is the data summary before and after PF/CE/CNX/SOPs on launching the Statapults.
                              Before     After
Count                         21         18
Mean                          120.52     119.08
Median                        119        119
Mode                          110        119
Max                           146        122
Min                           57         116
Range                         89         6
Std Dev (Pop)                 18.87      1.62
Std Dev (Sample)              19.34      1.66
Variance (Pop)                356.25     2.62
Variance (Sample)             374.06     2.77
Skewness                      -1.62      -0.06
Kurtosis                      4.97       -0.43
95% Conf. Interval for Mean
  Upper Limit                 129.33     119.91
  Lower Limit                 111.72     118.26
99% Conf. Interval for Mean
  Upper Limit                 132.53     120.22
  Lower Limit                 108.52     117.95
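The confidence intervals in the summary can be checked by hand. A minimal sketch for the "before" 95% interval, with the t critical value for 20 degrees of freedom hardcoded rather than looked up from a distribution function:

```python
# Recompute the 95% confidence interval for the "before" mean.
import statistics

before = [130, 133, 136, 138, 134, 119, 116, 110, 57, 104, 113,
          112, 117, 110, 110, 115, 124, 121, 142, 144, 146]

n = len(before)
mean = statistics.mean(before)
s = statistics.stdev(before)      # sample std dev, 19.34
t_crit = 2.086                    # t(0.975, df = 20)
half_width = t_crit * s / n ** 0.5
print(f"95% CI: {mean - half_width:.2f} to {mean + half_width:.2f}")
# -> 111.72 to 129.33, matching the summary table
```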
If the lots are significantly different, it could be concluded that machine settings affect fill weight.
t-test = looking for a difference in means
Lot Number        7090     7100
Mean              5.2668   5.2075
Machine Setting   5.25     5.2
The results below represent the p-values from a 2 sample t-test. This means that the probability of falsely concluding the alternative hypothesis is the value shown (where the alternate hypothesis is that the means are not equal). Another way of interpreting this result is that you can have (1pvalue)*100% confidence that the means are not equal.
Note: To determine optimal settings, regression analysis might be needed as well as confirmation of results
F-test = looking for a difference in standard deviation
Lot Number          7090     7100
Standard Deviation  0.0364   0.0644
Machine Setting     5.25     5.2
The results below represent the p-values from a 2 sample F-test. This means the probability of falsely concluding the alternative hypothesis is the value shown (where the alternate hypothesis is that the variances are NOT equal). Another way of interpreting this result is that you can have (1-pvalue)*100% confidence that the variances are not equal.
Note: To determine optimal settings, regression analysis might be needed as well as confirmation of results
Sometimes the process did change, but not enough "after" data is there to see a p-value < 0.05. Gather more data, test again, and see if there is a significant shift. Do not use Xbar data for this analysis; use the raw data it took to make the Xbars. This applies to any form of averaged data. The T-test and F-test are for continuous data only. The next slide explains how to do hypothesis testing with attribute data sets.
Test of Proportions
Used to perform hypothesis testing on attribute data sets. More data is typically needed to make good decisions with this tool. Make a process change and look for process results, e.g., change speed and monitor failures. Use SPC XL, Analysis Tools, Test of Proportions.
Results
Test of Proportions
User defined parameters:
  Number Defective Group #1 (x1) = 14
  Size of Sample #1 (n1) = 54
  Number Defective Group #2 (x2) = 14
  Size of Sample #2 (n2) = 77
Measure the defects and the total number in each sample set; type in the sample size first, then the defects. The proportion of defects for each sample is shown.
SPC XL is Copyright (C) 1999 Digital Computations, Inc. and Air Academy Associates, LLC. All Rights Reserved. Unauthorized duplication prohibited by law.
The P-value is shown here. For a significant shift in the proportion of defects, the P-value should be < 0.05. The results show the proportion of group #1 is 0.25926 and the proportion of group #2 is 0.18182. Most would say there IS a significant difference, but the P-value is not close to 0.05. On the next page, the sample size was tripled for each data set, keeping the proportions the same. See results next page.
Results Continued
Test of Proportions
User defined parameters:
  Number Defective Group #1 (x1) = 42
  Size of Sample #1 (n1) = 162
  Number Defective Group #2 (x2) = 42
  Size of Sample #2 (n2) = 231
These are the same proportions as before; the sample sizes are tripled.
With the sample size tripled, the P-value almost meets the criterion of being significantly different (P-value < 0.05). The takeaway is that with attribute data, more data is often needed to prove significance.
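The effect of sample size on the p-value can be reproduced with a standard pooled two-proportion z-test. This is a sketch of the usual normal approximation; SPC XL's exact computation may differ slightly.

```python
# Two-proportion z-test: same proportions, different sample sizes.
import math

def prop_test(x1, n1, x2, n2):
    """Two-sided p-value for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))   # normal-approximation p-value

p_small = prop_test(14, 54, 14, 77)     # original sample sizes
p_large = prop_test(42, 162, 42, 231)   # same proportions, tripled n
print(f"p (n=54/77) = {p_small:.3f}, p (n=162/231) = {p_large:.3f}")
```

With the small samples the p-value is nowhere near 0.05; with the tripled samples it drops close to the threshold, exactly the pattern described above.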
Xbar R: plotting averages of data subsets and the range within those subsets
Xbar S: plotting averages of data subsets and the standard deviation of those subsets
IMR (individuals moving range): plotting one data set instead of an average of a subgroup; the range is established by the difference of one point from the last point plotted
For continuous data sets, it is best to sample in subsets, such as 4 random samples daily. The spreadsheet to capture this might look like this:
Date        Sample 1   Sample 2   Sample 3   Sample 4   Xbar
1/1/2003    59.92      48.90      53.88      46.48      52.29
1/2/2003    56.88      44.19      55.49      57.39      53.49
1/3/2003    46.70      42.21      48.08      39.31      44.07
1/4/2003    43.36      45.81      43.11      47.08      44.84
1/5/2003    48.77      43.77      40.84      42.90      44.07
1/6/2003    41.88      47.13      43.73      42.50      43.81
1/7/2003    52.82      47.07      51.72      51.21      50.71
1/8/2003    46.13      46.87      43.53      48.64      46.29
1/9/2003    50.34      43.42      48.56      42.03      46.09
1/10/2003   50.40      42.22      52.18      41.93      46.68
1/11/2003   53.52      41.51      45.62      53.81      48.62
1/12/2003   50.68      50.27      46.06      54.71      50.43
1/13/2003   52.16      51.95      57.29      56.63      54.51
1/14/2003   48.37      61.60      50.95      51.72      53.16
1/15/2003   56.05      51.03      44.48      48.23      49.95
1/16/2003   59.15      55.08      56.99      53.54      56.19
1/17/2003   44.53      59.57      48.52      48.70      50.33
1/18/2003   53.15      49.86      48.17      49.40      50.14
1/19/2003   49.89      53.25      52.31      54.64      52.52
1/20/2003   55.94      44.27      48.51      47.63      49.09
1/21/2003   51.79      47.87      47.65      51.06      49.60
1/22/2003   50.66      55.73      55.95      50.13      53.11
1/23/2003   40.43      48.95      52.56      48.14      47.52
1/24/2003   48.23      54.15      46.58      54.91      50.97
Here is a histogram using the raw data from the previous example
[Histogram - Raw Data: normal distribution, Mean = 49.52, Std Dev = 5.0168, KS Test p-value = .4941]
The standard deviation of this process is 5, and the distribution is normal. Does this mean the process is stable? We can only determine that by constructing a control chart and looking at the data over time.
Histograms Continued
Here is a histogram of the XBar data using the averages of the subgroups
[Histogram - XBar Data: normal distribution, Mean = 49.52, Std Dev = 3.5057, KS Test p-value = .6292]
Note the standard deviation of this distribution is 3.5. This is much lower than the raw data due to the central limit theorem. The means are the same, but the standard deviation of the Xbars is less. This is the reason NOT to use Xbars for Cpk analysis; it gives a false reading of the process standard deviation.
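The central limit theorem effect described above can be demonstrated with simulated data: averaging subgroups of 4 cuts the standard deviation of the plotted points roughly in half (by 1/sqrt(4)), which is why Xbar data understates the raw process spread in a Cpk calculation.

```python
# Demonstrate why Xbar data has a smaller standard deviation than raw data.
import random
import statistics

random.seed(0)
raw = [random.gauss(50, 5) for _ in range(4000)]           # raw readings
xbars = [statistics.mean(raw[i:i + 4])                     # subgroups of 4
         for i in range(0, len(raw), 4)]

sd_raw = statistics.stdev(raw)
sd_xbar = statistics.stdev(xbars)
print(f"raw sd = {sd_raw:.2f}, Xbar sd = {sd_xbar:.2f}")   # ratio near 0.5
```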
The SPC XL Control Chart Wizard can be used to help determine which control chart to use and to build a data collection template. You can cut and paste data from an existing spreadsheet into this template. You will not need to paste in Xbar information, just the raw data. After the data is either in the SPC XL template or on your own spreadsheet, go to the control chart menu and choose the chart you want; for the previous spreadsheet, XBar R is the choice. Highlight the data if using your own spreadsheet, or just click if using the SPC XL template, and the chart will appear.
XBar R Chart
Here are the charts, generated in a stacked format, one on top of the other.
[Stacked Xbar and R charts, 1/1/2003 through 1/24/2003. R chart limits: UCL = 19.545, CEN = 8.565, LCL = 0.0]
The top chart is the Xbar chart; the bottom one is the range chart.
XBar Chart
Red points indicate out-of-control symptoms. To find out which symptom they are, pull down the menu called Out of Control located directly above the chart. Choose a symptom and the chart will be reconstructed, showing points in red for only that symptom. Never print out only one symptom to show others, as they will assume only that symptom exists. This feature is for you to look at symptoms one by one.
[Xbar chart with out-of-control points shown in red]
The goal of the XBar chart would be to have the mean of the process match the target from the customer with no out of control symptoms. Small control chart limits indicate low process variability and better process performance vs. customer specifications. These charts look at BETWEEN group variability.
Range Chart
The range chart plots the range of the sample sets. A range chart's control limits are +/- 3 standard deviations of the overall range. The closer to zero the points are, the less variability in the sample. This chart has no RED points, indicating the range is stable in this process. This graph looks at WITHIN group variability.
[R chart, 1/1/2003 through 1/24/2003: UCL = 19.545, CEN = 8.565, LCL = 0.0]
Overall, a good range chart is one indicating points near zero with small control limits.
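The R chart limits above (UCL = 19.545, CEN = 8.565) follow from the standard control chart constants for subgroups of 4, where UCL = D4 x Rbar and LCL = D3 x Rbar:

```python
# Reproduce the R chart control limits from the average range.
R_BAR = 8.565          # center line (average range) from the chart
D3, D4 = 0.0, 2.282    # standard R chart constants for subgroup size n = 4

lcl = D3 * R_BAR
ucl = D4 * R_BAR
print(f"LCL = {lcl}, CEN = {R_BAR}, UCL = {ucl:.3f}")   # UCL = 19.545
```

The D3/D4 constants are the conventional tabulated equivalents of the +/- 3 standard deviation limits described above; for subgroups of 4, D3 = 0, which is why the lower limit sits at zero.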