AS13006 Appendix D - Guidance Materials
GUIDANCE MATERIALS
Appendix D 2018-OCT-09
INTRODUCTION
The following guidance supports AS13006. Within AS13006 this guidance is referenced from appendix D. Many of the
graphics in this guidance are produced using Minitab software – a recognized statistical software application.
TABLE OF CONTENTS
1. BENEFITS OF STATISTICAL PROCESS CONTROL (SPC) ................................................................................... 3
1.1. Background ............................................................................................................................................................ 3
1.2. Benefits .................................................................................................................................................................. 3
1.3. Resistance to SPC ................................................................................................................................................. 3
2. PROCESS CONTROL METHODS ............................................................................................................................ 6
2.1. Error/Mistake Proofing ........................................................................................................................................... 6
2.2. Control Charts for Variable Data ............................................................................................................................ 7
2.3. Run Charts with Non-Statistical Limits ................................................................................................................. 10
2.4. Pre-Control Charts ............................................................................................................................................... 12
2.5. Life/Usage Control................................................................................................................................................ 15
2.6. Control Charts for Attribute Data .......................................................................................................................... 16
2.7. Visual Process Check & Checklist ....................................................................................................................... 22
2.8. First Piece Check ................................................................................................................................................. 23
2.9. Test Piece Evaluation .......................................................................................................................................... 24
3. PROCESS CAPABILITY INDICES .......................................................................................................................... 25
3.1. Fundamentals for Variable data ........................................................................................................................... 25
3.2. Process Stability in Practice ................................................................................................................................. 29
3.3. Process Capability for Attribute Data ................................................................................................................... 32
4. GUIDANCE FOR NON-NORMAL DATA ................................................................................................................. 35
4.1. Using Control Charts with Non-Normal Data ....................................................................................................... 39
4.2. Capability Analysis for Non-Normal data ............................................................................................................. 42
5. COMMON SOURCES OF VARIATION ................................................................................................................... 45
6. SCENARIOS REQUIRING SPECIFIC ANALYSIS METHODS ............................................................................... 46
1. BENEFITS OF STATISTICAL PROCESS CONTROL (SPC)
1.1. Background
The overall objective of Statistical Process Control (SPC) is to operate processes economically with minimum disruption due to stoppages and non-conformances. Two related ideas are central:
• The use of statistical tools – and others – within a closed loop system to manage process variation.
• The state of statistical control, in which a process behaves in a random and predictable way within its natural range.
It is hard to see how a state of statistical control can be achieved without the use of process control techniques.
Processes have a tendency to behave in an unstable manner unless they are managed into a state of control; and to be
effective this management needs to be early (in cycle / point of process) as opposed to after the event (e.g., final
inspection).
Statistical Process Control techniques are not new, having first been used in the early 1920s. Some other techniques, such as mistake proofing, go back much further.
Industry uses process control extensively to control quality. The benefits are easy to see; total cost of quality is reduced
and the process can be depended on to consistently deliver conforming product.
b. To steer the process to behave in the desired way, often towards a specific target.
With correct process control, end of line inspection moves from being an exercise of sorting good and bad product to one
of routine validation of goodness ‘as expected’.
For SPC to be most effective it needs to operate within an inherently stable environment. The relevant Foundational
Activities (refer to AS13006) should be in place and managed, to underpin the control strategy. Without these
fundamentals in place SPC will fail.
1.2. Benefits
• Reduced costs due to scrap, screening, rework, repair, downtime, and material outages.
• The ability to maintain a process to a target value where deviation from the target results in some loss (typically in
performance) – a concept known as Taguchi’s Loss Function
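Taguchi's Loss Function treats loss as growing quadratically with deviation from target, rather than being zero inside the tolerance and fixed outside it. A minimal sketch; the target, tolerance, and cost constant k below are illustrative assumptions, not values from this guidance:

```python
def taguchi_loss(y, target, k):
    """Quadratic loss L(y) = k * (y - target)^2.

    Loss is zero only exactly on target and grows with the square of
    the deviation, even for values still inside the tolerance band.
    k is a cost constant (currency units per unit-deviation squared).
    """
    return k * (y - target) ** 2

# Illustrative values (assumed, not from AS13006): target 10.0 mm,
# k chosen so a part at the tolerance edge (10.05 mm) costs 4.00.
k = 4.00 / 0.05 ** 2
print(taguchi_loss(10.00, 10.0, k))  # on target: loss is 0.0
print(taguchi_loss(10.02, 10.0, k))  # in tolerance, but loss > 0
```

The key contrast with a conventional pass/fail view is that a conforming part slightly off target still carries some loss.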
1.3. Resistance to SPC

Resistance to implementing SPC techniques is not uncommon. Common objections, and the responses to them, are discussed below.
Over reliance on ‘end of line’ inspection leads to quality becoming an exercise of ‘sorting good product from bad’. It is not possible to reach a level of 100% conformance through inspection alone; all that can be done is to react to non-conforming product and investigate. This approach drives a culture of firefighting and results in higher product non-conformance than would be the case had ‘point of process’ statistical control been in place. SPC benefits both the supplier and the customer.
Management of variation is not exclusive to high volume production. Most manufacturing problems have variation at their source, and most low volume operations have a high consequence of failure, whether that be the cost or the time to replace or rework defective items. A rigorous process control strategy covering inputs, parameters, and setup standards is vital to maintain conformance. These items can be controlled before the operation is performed, using statistical or non-statistical techniques, to prevent non-conformance rather than managing it after the event.
Complex products tend to have large numbers of characteristics. One may argue against running SPC on all of these
characteristics. Strategies can be employed that enable proper selection of ‘controlling’ characteristics (input or output)
that give indication of the health of a process. These characteristics are included in the control strategy. Variation studies
can be performed on feature groups collectively to reduce the burden of analysis (see 6.1 - Assessing Control and
Capability of Multiple Variable Features).
NOTE: On some products, sources of variation exist that affect the variation between features ‘within part’. For
example, groups of features in large components affected by distortion and material stress relief during
processing can display characteristics of ‘out of round’. This type of behavior can be better understood using
‘Between/Within’ charting strategies. This type of behavior is typically difficult to detect using traditional
inspection output such as CMM reports or single feature by feature analysis. (see 6.1 - Assessing Control
and Capability of Multiple Variable Features).
For high product mix situations, it is often useful to focus on characteristics that are common to the process rather than
measure and monitor separate products by different mechanisms. Short run or part family approaches may be used in
which the deviation from target is monitored (see 6.2 - Assessing Control and Capability of Variable Data by Process or
Part Family)
SPC analysis allows the manufacturer to see if differences between products are evident, thereby prioritizing
improvement.
There are many pitfalls in SPC deployment and criticism of it is often based on historic issues and past experience of
poor deployment. Causes of issues in deployment of SPC can be due to:
• Poor engagement of those recording and monitoring the data.
• Failure to do anything useful with the data (e.g., failure to investigate and correct special causes).
• Failure to develop an adequate control strategy (e.g., SPC not being ‘closed loop’ and timely).
• SPC done in isolation, with inadequate attention given to the ‘fundamentals’.
• Failing to develop the SPC approach as experience grows.
It is true that confidence in the accuracy of control limits and capability indices is higher as more data is gathered, but
to wait for an arbitrary number of points before review may result in a missed opportunity for improvement. This is not
to say that process tampering (making unnecessary adjustments), is to be encouraged, but obvious issues may be
seen with relatively few data points, e.g., a process that is running significantly off target may be corrected without
initial need for control limits, but once on target control limits can be used to recognize when corrections are
necessary, thus keeping the process stable. Initial assessment may be as simple as using a run chart or Pre-control
chart in the early stages of production.
SPC can be used to monitor rate, frequency, proportion, and count for attribute type characteristics and defects. The
benefit of monitoring these attributes through control charts is that change in the rate, frequency or incidence of the
attributes can trigger positive (and prescriptive) action rather than relying on subjective ‘gut feel’ decisions or no action
at all.
Examples include:
• Rate of rare event type defects (similar to mean time between failure for machinery)
Knowledge is also an enabler to success. The following publications contain additional information (technical and non-
technical) relating to the application of statistical methods for quality improvement and control:
• Advanced Product Quality Planning (APQP), Automotive Industry Action Group (AIAG), ISBN 1605341371
• Statistical Process Control (SPC), Automotive Industry Action Group (AIAG), ISBN 1605341088
• Understanding Variation – The Key to Managing Chaos, Donald J. Wheeler, SPC Press, ISBN 0-945320-53-1
• “Mistake Proofing for Operators: The ZQC System”, by Productivity Press, ISBN 1-56327-127-3
The following sections expand on the Process Control Methods in AS13006 – Table 1 – Process Control Methods
2.1. Error/Mistake Proofing
Error proofing is the use of an automatic device or method that either makes an error impossible or makes its occurrence immediately apparent. Error proofing should be chosen when the process is at risk of human error. The process risk analysis (PFMEA) should identify where human error is likely (occurrence), where it has a high impact (severity), or where it may not be easily detected (detection). Safety related risks often require mistake proofed solutions.
Error proofing devices can take four forms. The hierarchy of these is:
1. Elimination – design the product or process hardware in such a way that an error is not possible.
NOTE: Error proofing methods are not industry specific. Some industrial sectors have a particularly well developed
mistake-proofing culture often extending into product as well as process design. The automotive industry is very
well known for its use of error proofing both from the manufacturing processes to the operation of the final
product.
Examples:
• Guide Pins used to assure a one-way fit of a tool, fixture or part to prevent incorrect orientation.
• An alarm used to alert an operator that a machine cycle has been attempted with a misaligned tool. The
operator can take action to correct the problem.
• Counters can be used to help an operator track the correct number of components needed in an assembly.
• A checklist used to assure all key steps are completed by the operator to prevent missing something that
could cause an escape and/or defect. This approach is also described further in 2.7 - Visual Process Check &
Checklist.
• Use of machine probing as either a control during manufacturing to check a size before final cut or as a signal
after final cut to detect an anomaly or identify that an adjustment may be needed.
• Use of a Stopper Gate (physical barrier) affixed to a Fan Compressor assembly fixture to ensure an oil fill
tube is installed in the correct port when there are multiple ports to choose from.
• Asymmetrical design of a nameplate that assures it is installed in only one possible orientation preventing
backwards or upside down installation.
• A left/right two button hand operated system with foot switch operation to ensure hands are free prior to
cycling a forging press.
• Automated weighing of a part or batch to ensure the part is completely processed, or that the batch is complete and present, before moving to the next operation.
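The automated weighing example can be expressed in software terms as a simple pass/fail interlock. The sketch below is illustrative; the part weight, count, and tolerance are assumed values, not from this guidance:

```python
def batch_complete(measured_weight_g, part_weight_g, parts_expected, tol_g=0.5):
    """Error-proofing interlock: release a batch to the next operation
    only if its total weight matches the expected part count.

    Returns True when the measured weight is within tol_g of the
    expected total, i.e. no parts are missing or extra.
    """
    expected = part_weight_g * parts_expected
    return abs(measured_weight_g - expected) <= tol_g

# Illustrative: 25 parts at 12.0 g each -> expect 300 g total
print(batch_complete(300.2, 12.0, 25))  # True: batch present and complete
print(batch_complete(288.1, 12.0, 25))  # False: one part missing
```

The interlock makes the error (a missing part) immediately apparent before the batch moves on, which is the defining property of an error proofing device.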
To ensure error proofing devices are robust, it is good practice to check that the failure of the device does not cause a
problem (test to see what happens if the device fails to detect the error). Depending on the result (and the criticality of
failure), revisit the design and maintenance requirements of the device and improve it.
This document does not contain ITAR or EAR technical data.
Copyright © 2018 AESQ Strategy Group, a Program of SAE ITC. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted,
in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of SAE ITC.
If it is not possible to have an automated error proofing device, some of the other methods included in this standard may
offer some level of protection.
For further reading on the subject of Error/Mistake-Proofing the following may be referred to:
• “Mistake-Proofing for Operators: The ZQC System”, by Productivity Press, ISBN 1-56327-127-3
2.2. Control Charts for Variable Data
This section outlines four recognized control charts for variable data and provides guidance as to when they may be used.
The list is not exhaustive. There are many more types of control charts not covered here that may be used for specific
situations.
Figure 2.2-1 and Table 2.2-1 outline the basis for variable control chart selection.
Xbar and R / Xbar and S
Monitoring and control of characteristics on products being produced at a volume where typically a sample (subgroup) will be taken periodically to maintain quality.
Example: From a high volume process, five parts per hour are sampled from the line and measured. The average and range are plotted to understand if the process has changed (due to moving off target or through an increase in variation).
Can also be used for multiple similar products, where it can be used to plot ‘deviation from target’, thus avoiding the need for multiple charts.
The Xbar chart displays the average of the subgroup. The R or S chart displays the variation within the subgroup (either the Range or the Standard Deviation).
NOTE: The variation within the subgroups is assumed to be representative of the overall variation (no between batch effects expected). When this assumption is not met the process may appear out of control when in fact it is not. Consult an experienced practitioner if this appears to be the case.

I-MR (Individuals and Moving Range)
Monitoring and control of characteristics on individual products being produced from continuous processes at a rate where subgrouping of data is not feasible.
NOTE: The variation from item to item is assumed to be representative of the overall process variation (no batching effects or systemic drifts/wear expected). When this assumption is not met the process may appear out of control when in fact it is stable. Consult a process control specialist if this appears to be the case.

I-MR-R/S and Xbar-MR-R/S
Characteristics where the variation within the subgroup is not representative of the overall variation between subgroups; usually the case when monitoring processes with ‘batching’ effects, or when multiple characteristics (a group of identical features) within a part are studied and the assumptions for an Xbar/R or S chart are not met.
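For Xbar and R charts, the center lines and control limits are conventionally computed from the subgroup averages and ranges using tabulated constants (A2, D3, D4 for the subgroup size). A minimal sketch for subgroups of size five; the measurement data are illustrative:

```python
# Standard control chart constants for subgroup size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Compute Xbar and R chart center lines and 3-sigma limits.

    subgroups: list of equal-sized samples (here, size 5).
    Returns (xbar_bar, xbar_lcl, xbar_ucl, r_bar, r_lcl, r_ucl).
    """
    xbars = [sum(s) / len(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    xbar_bar = sum(xbars) / len(xbars)   # grand average (center line)
    r_bar = sum(ranges) / len(ranges)    # average range (center line)
    return (xbar_bar,
            xbar_bar - A2 * r_bar, xbar_bar + A2 * r_bar,
            r_bar, D3 * r_bar, D4 * r_bar)

# Illustrative subgroups of five measurements each
data = [[9.98, 10.01, 10.00, 10.02, 9.99],
        [10.00, 10.03, 9.97, 10.01, 10.00],
        [9.99, 10.00, 10.02, 9.98, 10.01]]
print(xbar_r_limits(data))
```

In practice far more than three subgroups would be used before trusting the limits; the point of the sketch is only the structure of the calculation.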
There are eight industry standard tests for statistical control, used to determine whether the process data contain evidence of special causes of variation.
A process can be judged to be in statistical control (i.e., only common causes of variation present) when there is an
absence of the patterns shown in Figure 2.2-3. An example of a stable process is shown in Figure 2.2-2. It should be
noted when seeking to improve a process that the more tests used, the more signals will be detected. It may be worth
using a selected few when starting out using SPC.
For process control purposes manufacturers often select the most appropriate tests for the process being operated, taking
into account the actions that would be needed when they occur. Tests most frequently used by operators are Test 1 and 5
(Figure 2.2-3) however software applications make the use of all tests relatively simple.
[Figure 2.2-2: I-MR chart of a stable process. The Individual Value panel plots each observation against its center line (Xbar) and control limits (UCL, LCL); the Moving Range panel plots the moving range against its center line (MRbar) with LCL = 0.]
2.3. Run Charts with Non-Statistical Limits

This section outlines the control of characteristics subject to systematic (predictable) drift, where operation of traditional statistical control limits provides little benefit when compared to the characteristic’s ‘loss function’ and the cost and other implications of adjustment or reset.
Some processes have characteristics that naturally drift in a certain direction as the process runs. When viewed on commonly used control charts, these processes tend to violate the tests for out-of-control conditions long before the drift becomes a meaningful issue.
Processes where this behavior may exist naturally include chemical etching (concentration changes), investment casting slurry control (through evaporation) and, in some cases, machining cutting tools (if they exhibit significant wear/drift with use).
An approach to managing this variation is to set limits on a time series chart. The limits are set so that drift is detected early enough to avoid problems, but not so early that adjustment becomes uneconomic. This type of control is generally only useful when operated at the process rather than at an end of line inspection.
With appropriately set limits this method can be used effectively to control quality even using simpler measurement
systems than downstream measurement equipment such as a CMM.
NOTE: Some processes have recommended standards that use such controls. For example, ARP4992 “Periodic Testing
for Processing Solutions” provides recommended guidelines for establishing a test plan for solutions used in
processing of metals such as electro-polish, anodizing, and conversion coatings and can be applied to other
similar processes.
2. If the variable is an input or process variable, study, and quantify its relationship to the process outputs.
3. Establish the optimal process limits to be applied. In most cases this should be done using process data, to ensure the limits are not so wide that they allow a non-conformance.
4. Establish the adjustment to be made when the limit is reached. For example, this may be to adjust towards a
lower limit, or an optimal setting, or in the case of a cutting tool, replace it. This reaction will be documented in the
Control Plan and process instructions.
6. If the process limit is reached, adjust/set the process (see step four). Confirm the adjustment has had the desired
effect. If so continue. If not take action to understand why.
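The steps above can be sketched as a simple monitoring loop. The drift rate, process limit, and set-point below are illustrative assumptions, not values from this guidance:

```python
def monitor_with_reset(readings, process_limit, set_point):
    """Apply a non-statistical process limit to a drifting variable.

    Whenever a reading reaches the process limit the process is
    'reset' to the set-point (step 6); otherwise it runs on.
    Returns the readings after any resets, plus the reset indices.
    """
    adjusted, resets, offset = [], [], 0.0
    for i, raw in enumerate(readings):
        value = raw - offset
        if value >= process_limit:
            # Reset: shift the process back to its set-point
            offset += value - set_point
            value = set_point
            resets.append(i)
        adjusted.append(value)
    return adjusted, resets

# Illustrative upward drift of 0.005 per part starting at 10.000,
# with a process limit of 10.040 and a set-point of 10.000
drift = [10.000 + 0.005 * i for i in range(15)]
values, resets = monitor_with_reset(drift, 10.040, 10.000)
print(resets)  # [8] -> the process was reset at the 9th part
```

Step 6's confirmation check corresponds to verifying that the readings after a reset sit back near the set-point rather than continuing to climb.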
Figure 2.3-1 demonstrates how a chart of this type may be used. The process drifts upwards so a lower limit is not
discussed within this example (for simplicity). It may however be wise to have one to mitigate other risks.
[Figure 2.3-1: Run chart of a diameter drifting upward. USL = 10.065, Process Limit = 10.040, LSL = 9.935. When the process limit is reached the process is reset to the ‘process set-point’.]
Process improvements can be made using the data from the run chart, for example in the following ways:
• Use process data and related process output to determine tighter reaction limits
• Incorporation of automatic adjustments to the process to tighten the adjustment interval. This will decrease the
spread between the limits.
• Make changes to the process or tools that decrease the rate of change of the process variable being controlled.
• Optimize the initial location of the process to increase the time between adjustments.
Features controlled in the way described should typically have a relatively flat ‘loss function’ when compared to the cost of reset or adjustment. The design authority should be consulted where the implications of process drift are not understood.
Process Capability
Processes with systematic drift and infrequent ‘large adjustments’ may produce distorted capability analysis. There are
two reasons for this.
1. The within subgroup range is typically small relative to the overall variation, resulting in Cp metrics being overly
optimistic and not representative of the spread of the process.
2. The distribution of the data may not fit a distribution well enough to make accurate capability predictions. Both Cp and Pp derived capability may be inaccurate, and alternative methods (e.g., non-normal methods such as the Johnson Transformation or Box-Cox Transformation; see 4. Guidance for Non-Normal Data) may be required. If these methods do not help then the process performance may need to be characterized by other means.
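The first effect can be illustrated numerically: a steadily drifting process has small point-to-point moving ranges (so the within estimate of sigma is small) while its overall spread is large, inflating Cp relative to Pp. A sketch with illustrative data and specification limits:

```python
import statistics

def within_sigma_from_mr(data):
    """Short-term sigma estimated from the average moving range
    (MRbar / d2, with d2 = 1.128 for a moving range of span 2)."""
    mrs = [abs(b - a) for a, b in zip(data, data[1:])]
    return (sum(mrs) / len(mrs)) / 1.128

# Illustrative drifting process: small step-to-step change,
# large overall spread
data = [10.00 + 0.01 * i for i in range(30)]
sigma_within = within_sigma_from_mr(data)
sigma_overall = statistics.stdev(data)

usl, lsl = 10.40, 9.60  # illustrative specification limits
cp = (usl - lsl) / (6 * sigma_within)   # optimistic for a drifting process
pp = (usl - lsl) / (6 * sigma_overall)  # reflects the full spread
print(cp, pp)  # Cp comes out far larger than Pp here
```

The gap between the two indices is itself a useful diagnostic: a Cp much larger than Pp suggests drift or other between-subgroup variation is present.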
2.4. Pre-Control Charts

Background
The use of Pre-Control dates back to the 1950s, when it was developed by an employee of the Rath and Strong consultancy group. The merits of its use are often debated, with some favoring and some opposing it; there are valid arguments on both sides that should be considered.
Pre-Control is a method for monitoring and controlling the process within specification limits. It may be particularly useful
when applied to process (outputs or parameters) that have a tendency to drift but for which the process is not overly
sensitive to small changes. For example, a measurement taken on a ground feature where the grinding wheel wears over
time.
Pre-Control may also be useful where it is important to maintain a capable process centered or ‘on target’, when detection
of process ‘special causes’ are less important.
Pre-Control uses a chart that monitors items by classifying the measurements into colored zones (Red, Yellow, or Green).
Decisions are made whether to adjust or stop the process based on where in these zones the measurements lie.
The advantages of Pre-Control are its simplicity and that it drives behavior towards on-target thinking.
NOTE: It is commonplace for the bands to be set as follows (see Figure 2.4-1):
• Green – the central 50% of the tolerance band (or 50% tolerance around a specific target)
• Yellow – the remaining tolerance either side of the green zone, inside the specification limits
• Red – outside the specification limits
Where the tolerance is unilateral the chart will have a single green, yellow, and red zone (see Figure 2.4-2).
Method
Following setup, a qualification phase runs according to a predefined ruleset to ensure the process is ‘on target’. Typically,
qualification is passed after five consecutive units are produced in the Green zone.
1. Classical Pre-Control: Rules based around sampling two consecutive items periodically from a production run:
3. Modified Pre-Control:
• A standard control chart with colored zones applied as described for Classical Pre-Control (but applied to control limits, not tolerances).
NOTE: If analyzing the capability of a process that uses Pre-Control methods, a statistical control chart should be
constructed to ensure the process is stable prior to analysis of capability and communication of capability indices
such as Cp/Cpk.
Despite the concern about the effect of an unstable process on capability, a measure of goodness such as an extended period in the Green zone on a Pre-Control chart may serve as satisfactory evidence of capability to meet customer requirements, if the customer permits this. This is more likely for minor characteristics than for KCs or special characteristics such as those categorized as Major or Critical.
For further reading on the subject of Pre-Control refer to Implementing Six Sigma (2nd Edition), Breyfogle, 2003, ISBN 0-471-26572-1.
Pre-Control Example:
An aerospace manufacturer produces a Fuel Air Bracket (see Figure 2.4-3) with a key feature having an engineering
tolerance of 0.386 +/- 0.005 inches. The central 50% of the total tolerance (+/- 0.0025 inches) defines the green zone.
Set-Up Procedure
Following successful setup the process operator runs five parts and records the dimensions of the features being
controlled. If all five parts fall within the green zone on the Pre-Control chart (UPC = 0.3885 inches and LPC = 0.3835
inches) the setup is judged to be targeted properly and sample measurements are taken at a frequency of 20% (check
every 5th part). This measurement frequency is for the purpose of maintaining process control, and does not relate to
product inspection frequency.
The 10th piece comes up for inspection. It has a measured value of 0.387 inches. This is within the Pre-Control (UPC and LPC) limits, and the operator continues with production. The next piece to be inspected is the 15th. Its measurement is 0.3854 inches, well within the Pre-Control limits, so the operator continues. The 20th part measures 0.3892 inches. This value is outside the UPC limit. The reaction plan referenced in the Control Plan determines that the operator now measures the next part produced, in this case the 21st. This part also measures outside the UPC limit. The operator stops the process and investigates according to the prescribed reaction plan.
Pre-Control Rule 1: If the measured value is within the green zone (Pre-Control limits UPC and LPC) the operator may
continue to check every 5th part (apply a 20% monitoring frequency).
Pre-Control Rule 2: When two consecutive measured values fall outside the same Pre-Control limit (UPC or LPC), the operator should react, making an appropriate process adjustment. The reaction plan referenced in the Control Plan (refer to AS13004) should describe the actions required.
Pre-Control Rule 3: When one measurement violates one Pre-Control limit and the following part violates the opposite
Pre-Control limit, the variability may have increased. The operator should investigate the cause engaging support if
needed (e.g., Quality/Manufacturing Engineer). The reaction plan referenced in the Control Plan (refer to AS13004)
should describe the actions required.
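The rules above can be sketched as a simple zone classifier. The limits below match the bracket example (UPC = 0.3885, LPC = 0.3835, specification 0.381 to 0.391 inches); the helper function names are illustrative:

```python
UPC, LPC = 0.3885, 0.3835   # Pre-Control (green zone) limits
USL, LSL = 0.3910, 0.3810   # specification limits

def zone(x):
    """Classify a measurement into a Pre-Control zone."""
    if LPC <= x <= UPC:
        return "green"
    if LSL <= x <= USL:
        return "yellow"
    return "red"

def react(prev, current):
    """Apply Rules 2 and 3 to two consecutive readings.

    Returns 'adjust' (same limit violated twice, Rule 2),
    'investigate' (opposite limits violated, Rule 3 - variation
    may have increased), or 'continue'.
    """
    if zone(prev) == "green" or zone(current) == "green":
        return "continue"
    if (prev > UPC) == (current > UPC):
        return "adjust"        # Rule 2: same side twice
    return "investigate"       # Rule 3: opposite limits

print(zone(0.3870))            # green -> continue at 20% sampling
print(react(0.3892, 0.3890))   # adjust (both beyond UPC)
print(react(0.3892, 0.3830))   # investigate (opposite limits)
```

The classifier makes the operator decision explicit: green means continue sampling, while two consecutive out-of-green readings trigger the documented reaction plan.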
2.5. Life/Usage Control

Processes may have factors that are dynamic in nature and change through use or over time. Such processes may
require control methods that prevent the process (or its factors) reaching a condition that will adversely affect the product
of the process. Such controls can be placed on, e.g. chemicals, wearable items such as cutting tools, and other
consumables.
The control criteria for life/usage controls may be defined in many ways. Control is often not simply a question of ‘how
old’. Examples of control criteria are: number of parts processed, total running time, number of cycles, once opened use
by date, weight of parts processed, and surface area processed.
Examples:
• A cutting tool has a maximum operating time. The tool life is recorded on a machine readable chip. The machine
program includes code that checks the life of the tool prior to use. When cutting tips are replaced and the tool is
set a pre-setting operation resets the readable chip to zero.
• A peening operation has media that is controlled based on the total equipment running time. A timer is installed
on the equipment to indicate how close the process is to a media change. In addition to this method of control, the
process also has assessment for media quality and uses test pieces to qualify the process for correct operation.
• The concentration of a chemical etch bath is routinely maintained with an auto-dosing system. However once a
month the entire system is emptied, cleaned out, and refilled. To keep the planning of this control simple this is
done at a defined time regardless of use – for example the morning of the first Monday in every month.
A life/usage limit may also incorporate a check and reset. For example a wearable item may be tested after a number of
cycles and found to have not reached a point where change is required. The tool may be returned for use for a defined
number of cycles. It should be noted that this does not imply the tool will be run to the point of failure.
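The cutting tool example can be sketched as a usage counter that is checked before each use and reset at pre-setting. The 5000-cycle limit below is an illustrative assumption, not a value from this guidance:

```python
class ToolLife:
    """Track usage of a wearable item against a life limit.

    Mirrors the cutting-tool example: the machine checks remaining
    life before use, and a pre-setting operation resets the counter.
    """
    LIMIT_CYCLES = 5000  # illustrative life limit

    def __init__(self):
        self.cycles_used = 0

    def ok_to_use(self, cycles_needed):
        """Check life prior to use (as the machine program does)."""
        return self.cycles_used + cycles_needed <= self.LIMIT_CYCLES

    def record_use(self, cycles):
        self.cycles_used += cycles

    def reset_after_presetting(self):
        """Tips replaced and tool set: reset the counter to zero."""
        self.cycles_used = 0

tool = ToolLife()
tool.record_use(4800)
print(tool.ok_to_use(150))   # True: within the life limit
print(tool.ok_to_use(300))   # False: would exceed the limit
```

The check-before-use structure is what prevents the tool being run to the point of failure, in line with the note above.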
The life/usage limits should ideally be determined to maximize process quality. Statistical studies and experiments will allow the life to be optimized for other factors such as cost. These studies may be performed on test pieces and scaled to the production process. The life/usage limits should be validated, usually at process qualification.
NOTE: These guidelines and examples do not replace specific process standards or customer requirements that may
exist to govern the life/usage controls.
2.6. Control Charts for Attribute Data

Attributes are characteristics or conditions that are classified as present or not present, or counted. A number of charts may be used depending on the attribute being studied.
NOTE: Process control via attributes is less effective than variable methods. Some checking methods may provide attribute data despite being variable in nature. An example is hole size, which may be checked by variable methods or by attribute methods (e.g., a plug gauge). If an attribute method is selected based on its speed and simplicity, it should be on the basis that the process is proven capable, because an attribute go/no-go gauge will not give the early warning of emerging issues that a variable gauge does. A robust control strategy for hole size may be to use a variable tool measurement device such as a presetter to assure the quality of the tool, and an attribute-style plug gauge as a quick conformance check, with a periodic sample taken from production for variable measurement.
Figure 2.6-1 and Table 2.6-1 outline the basis for attribute control chart selection.
Scenario: A process that observes discrete values, such as pass/fail, go/no-go, present/absent, or conforming/non-conforming. For example, a circuit card could consist of a number of solder joints that either conform or do not conform to a set standard.

When to use:
Appropriate: When it is important to control the number or percentage of defects over a given time period, lot to lot, or unit to unit (such as measuring improvement over time), when go/no-go gauges are employed, or when visual inspections are used.
Not appropriate: Attribute charts cannot be used for establishing process control or process capability in the same way as variables data, due to the scale not being continuous. Measures of performance and stability can be undertaken with a view to directing improvement activities, but true process control needs to be done through process variables, inputs, and foundational activities. Not appropriate for rare events.

Control type (which chart) and examples:
P-chart – Plot the percent defective, classifying product as good or bad, with changing or constant subgroup size. Example: Plot the monthly percent defective rate of a critical supplier; plot the On Time Delivery performance of a critical supplier.
NP-chart – Plot the number defective, classifying parts as good or bad, with constant subgroup size. Example: A machining cell produces fuel control valves in standard lot sizes of 50. Final Inspection performs a 100% inspection of the product and plots the number of valves that are determined to be nonconforming.
C-chart – Plot the count of defects where the same area of opportunity (constant subgroup size) exists. Example: An aerospace manufacturer produces one type of heat exchanger for a customer. After vacuum braze a leak check is performed. A c-chart is used to plot the number of leaks requiring weld repair.
U-chart – Plot Defects Per Unit (DPU) based on counts and a varying or constant area of opportunity (changing or constant subgroup size); the defects may come from multiple categories. Example: An aerospace manufacturer operating Production Part Approval Process (PPAP) tracks the DPU on a monthly basis for all the inspected PPAP packages. An accompanying Pareto Diagram suggests the categories driving the DPU rate are poor PFMEAs, part marking errors, and poorly written Control Plans. Projects are established to address these issues in order to reduce the overall DPU rate shown on the u-chart.
P Chart Example:

[Figure 2.6-2: P Chart of Defective(%) – P̄ = 0.022, UCL = 0.08423, LCL = 0; samples of N = 50]
Example: the non-conformities from a series of batches of 50 parts are monitored by the manufacturer on a P-Chart
(Figure 2.6-2). The manufacturer observes an overall defective rate of 2.2%. The manufacturer concludes from the control
chart that – despite the variability from batch to batch - the rate of defectives is statistically stable over time.
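Assuming the usual binomial model behind a P chart, the limits quoted for Figure 2.6-2 can be reproduced in a few
lines. A minimal sketch in Python (p̄ = 0.022 and N = 50 come from the example; the function name is illustrative):

```python
import math

def p_chart_limits(p_bar: float, n: int) -> tuple[float, float]:
    """Control limits for a P chart: p_bar +/- 3 binomial standard errors.
    A negative lower limit is clamped to 0 (a proportion cannot be negative)."""
    se = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl = p_bar + 3 * se
    lcl = max(0.0, p_bar - 3 * se)
    return ucl, lcl

# Values taken from the Figure 2.6-2 example: p_bar = 0.022, N = 50.
ucl, lcl = p_chart_limits(0.022, 50)
print(round(ucl, 5), lcl)  # 0.08423 0.0
```

Note that the raw lower limit (0.022 − 0.062) is negative, which is why the plotted chart shows LCL = 0.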
[Figure 2.6-3: P Chart of % Yield – P̄ = 0.8873, UCL = 0.9429, LCL = 0.8317; several points are flagged '1' (beyond
the control limits). Tests are performed with unequal sample sizes, so the control limits vary from sample to sample.]
C Chart Example:

[Figure 2.6-4: C Chart – C̄ = 5.83, UCL = 13.08, LCL = 0; several points are flagged '2' (a run on the same side of
the center line)]
Example: A manufacturer produces a similar quantity of product each day. The number of defects noted from a visual
inspection area is plotted on a C Chart (Figure 2.6-4) in order to understand the process performance and behaviour over
time. In this case the supplier notes a run of improved performance between days 12 and 22, and an increase in defects
on day 30. In reaction to the defect rate on day 30 the manufacturer launches a problem solving activity.
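The C chart limits in Figure 2.6-4 follow from the Poisson model, where the standard deviation is √c̄. A minimal
sketch (c̄ = 5.83 from the example; a small rounding difference against the plotted UCL of 13.08 is expected because
the chart was built from the unrounded centre line):

```python
import math

def c_chart_limits(c_bar: float) -> tuple[float, float]:
    """Control limits for a C chart: c_bar +/- 3*sqrt(c_bar) (Poisson model).
    A negative lower limit is clamped to 0."""
    ucl = c_bar + 3 * math.sqrt(c_bar)
    lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))
    return ucl, lcl

# Centre line from the Figure 2.6-4 example.
ucl, lcl = c_chart_limits(5.83)
print(round(ucl, 2), lcl)  # 13.07 0.0
```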
NOTE: The use of nP charts and U charts is not illustrated in this document. For explanation and examples of their
use, refer to Implementing Six Sigma (Breyfogle, 2003, ISBN 0-471-26572-1).
The tests for special causes of variation for attribute control charts are as follows:
• A point outside a control limit
• A run of eight or more points on the same side of the center line
It is considered good practice to use a Pareto chart to support attribute methods to allow further prioritization and insight
on the defects/defectives within the attributes plotted.
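Building that Pareto view is a simple sort-and-accumulate. A sketch with hypothetical defect categories and counts:

```python
def pareto(counts: dict[str, int]) -> list[tuple[str, int, float]]:
    """Return categories sorted by descending count with cumulative percent."""
    total = sum(counts.values())
    out, running = [], 0
    for name, count in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        running += count
        out.append((name, count, 100.0 * running / total))
    return out

# Hypothetical defect tallies from a visual inspection area.
rows = pareto({"scratch": 40, "dent": 30, "porosity": 20, "other": 10})
for name, count, cum_pct in rows:
    print(f"{name:10s} {count:3d} {cum_pct:6.1f}%")
```

The cumulative percent column makes the 'vital few' categories obvious at a glance.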
In some scenarios, attribute data may be monitored quite adequately using variables control charts. For example the
Right First Time measure of a manufacturing operation, whilst based on an attribute (good/bad), may be expressed as a
ratio and plotted on a simple Individuals control chart. In many cases an Individuals chart is simpler to construct and
interpret than an attributes chart. The sample size used should also be considered, as large samples may result in
tighter control limits.
For rare/infrequent events, attribute control charts can give less definitive results. The absence of events/defects/failures
for example will have an adverse effect on the control limits and averages. In these cases a time between failures may be
a more useful measure to track. Mean Time Between Failure (MTBF) is a commonly used measure of equipment
reliability for example.
[Figure 2.6-5: C Chart of machine tool failures over a 100 day period – C̄ = 0.1, UCL = 1.049, LCL = 0]
Example – A manufacturer plots the failures of a machine tool, counting how many failures were experienced over a 100
day period (Figure 2.6-5). The chart is not very informative.
[Figure 2.6-6: Individuals chart of days between failures – X̄ = 7.7, UCL = 16.27, LCL = -0.87]
Example: The manufacturer plots the time between failures for the data on an Individuals chart (Figure 2.6-6). The chart is
much more informative. The average days between failures of 7.7 days and the control limits can help guide the
manufacturer on equipment reliability and maintenance activity planning.
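Individuals chart limits are conventionally X̄ ± 2.66·MR̄ (2.66 = 3/d2, with d2 = 1.128 for moving ranges of two). A
sketch that reproduces the Figure 2.6-6 limits, where the mean moving range of 3.22 days is an assumption back-computed
for illustration (it is not stated on the figure):

```python
def individuals_limits(x_bar: float, mr_bar: float) -> tuple[float, float]:
    """I chart limits: X-bar +/- 2.66 * MR-bar (2.66 = 3 / d2, d2 = 1.128 for n = 2)."""
    return x_bar + 2.66 * mr_bar, x_bar - 2.66 * mr_bar

# X-bar = 7.7 days from Figure 2.6-6; MR-bar = 3.22 is an illustrative assumption
# chosen so the plotted limits (16.27 and -0.87) can be seen to arise.
ucl, lcl = individuals_limits(7.7, 3.22)
print(round(ucl, 2), round(lcl, 2))  # 16.27 -0.87
```

A negative lower limit is meaningful here only as a chart boundary; elapsed time itself cannot be negative.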
A visual process check provides positive confirmation of goodness either prior to allowing a process to run, or during its
operation.
The process checks need to become part of routine operation. The personnel conducting the check will ideally understand
the importance of the check and also understand the reaction if the check fails against the criteria. In many cases the
check will confirm that a particular step of the sequence has been done correctly.
The checks may be conducted by a single person, however on important items or high consequence failure items the
method may use two persons who jointly confirm that the correct condition is achieved. An example of this approach is the
standard pre-flight checks that are undertaken by pilot and co-pilot when preparing for a flight. One pilot calls out the
check, the other performs the check and confirms as correct, and then the first records the check on a checklist before
proceeding.
To increase robustness, a “double scrutiny”, and/or “buddy check” may involve two personnel to positively confirm an
action or result of a check; or the check may be performed by someone independent of the operation.
A single person check may have some inherent risks of error. A preferred approach is automation or error proofing
devices, (see 2.1 – Error/Mistake Proofing). Prior to finalizing the check it is advisable to confirm the PFMEA risk level –
as the method of control relates to the detection score in the PFMEA (refer to AS13004).
| Check item number | Check item                                      | Result of check (Pass/Fail) | Reaction (if Fail)                                   | Sign off (initial and date) |
| 1                 | Health/Safety check                             |                             | Stop and isolate equipment. Contact cell leader      |                             |
| 2                 | Work instructions are latest version            |                             | Contact Manufacturing Engineer – obtain instructions |                             |
| 3                 | Machine asset care checks complete and correct  |                             | Raise issue with cell leader                         |                             |
| 4                 | Gages in calibration                            |                             | Contact Quality engineer                             |                             |
| 8                 | Etc.                                            |                             |                                                      |                             |
The objective of a first piece check is to validate the set-up and quality of a process prior to the full production run.
Alongside other controls it serves to verify and confirm the integrity of the production system (man, machine, fixture, tool,
NC program, etc.) at a point in time, and hence to avoid economic damage of non-conformance (through timely action to
ensure process conformance).
A prerequisite to a first piece check should be confirmation that all other foundational control requirements are met
(e.g., calibration, machine tool diagnostics, tooling within prescribed life limits, acceptable parameter settings,
consumables level, etc.), typically approved through positive confirmation (see 2.7).
As a general rule, all manufacturing processes can be subject to first piece inspection.
First-piece checking/inspection may be independent from the production method in a number of ways:
• Inspection by an operator other than the person having performed the operation (two person rule); thus avoiding risks
due to bias and other human factors
• Inspection using another inspection tool or inspection method (where possible); thus avoiding/highlighting
measurement discrepancies
If independent inspection is to be used the method should be at least as good as the production method, free from bias
and have adequate resolution to make the decisions valid. Tighter limits may apply to first piece checks and this should be
considered when evaluating such measurement equipment.
In order that the process is correctly judged sufficiently good to continue, additional criteria may be applied. Such
criteria should have a rational and/or scientific basis for their application, for instance a process capability study
or designed experiments.
Example 1: a machined dimension with a known adequate level of capability may be deemed acceptable at the first piece
check if the measurement falls within the central 50% of the process tolerance; a measurement close to the normal
limits of operation may result in adjustment and further measurement to bring the process on target.
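The acceptance zone in Example 1 can be expressed as a simple check. A sketch with a hypothetical tolerance; the
function name and the 50% default fraction are illustrative:

```python
def within_center_zone(x: float, lsl: float, usl: float, fraction: float = 0.5) -> bool:
    """True if x lies within the central `fraction` of the tolerance band,
    e.g. fraction=0.5 keeps the middle 50% of (usl - lsl) around the nominal."""
    nominal = (lsl + usl) / 2
    half_zone = fraction * (usl - lsl) / 2
    return nominal - half_zone <= x <= nominal + half_zone

# Hypothetical dimension with tolerance 10.0 +/- 0.2 (band 9.8 to 10.2).
print(within_center_zone(10.05, 9.8, 10.2))  # True: inside the middle 50% (9.9 to 10.1)
print(within_center_zone(10.15, 9.8, 10.2))  # False: conforming, but outside the zone
```

A part at 10.15 is in tolerance yet fails the first-piece criterion, prompting adjustment before the production run.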
Example 2: a process with a tendency towards upward drift may have a zone in the lower region of the specification band
that provides a standard for process acceptance of the first item. Continued conformity as the process drifts naturally
through use is provided by a tool life/usage control. The zone has been determined through a previous tool wear study. If
the measurement is outside this zone, the operator refers to a process guidance document (referenced in the Control
Plan) to determine appropriate action (e.g., tool replacement, or adjustment to the tool life/usage standard).
A first piece check strategy may extend to multiple parts, depending on process risk and behavior. For example a very
large batch of parts, a rapidly cycling process, or high cost parts may require inspection of the first five parts
(Pre-Control may be beneficial – see 2.4).
It is good practice to require formal record keeping for approval of first piece checks (e.g., a signature, and/or
countersignature/ inspection report).
NOTE: The method should be used in conjunction with other methods to make the control strategy robust to variations
that may occur as production continues.
Some characteristics and properties that are created or changed through processing may not be directly measurable
other than through destructive or damaging testing. Use of test pieces processed alongside the product may help to
determine the result of the process and also its stability. These test pieces are tested following processing to validate the
products of the process and/or confirm the effectiveness of the other process controls.
Such processes should be highly controlled through process parameter controls and monitoring and may be categorized
as ‘fixed processes’ or ‘special processes’ often with regulatory control requirements.
A test piece/coupon should be to a defined standard (thus minimizing the variation in the test material itself).
In some instances a test piece may be operated within a first piece check to qualify the process setup prior to the full
production run (see 2.8).
Examples of processes that use representative test pieces include the following:
• A forging that has extra material outside the finished part envelope that will be removed for testing
Once a result has been obtained from a test piece the result can be analyzed with a variety of process control tools such
as control charts (variable and attribute) and run charts.
Acceptance of process results by the use of test specimens or coupons is typically approved and agreed to by the
customer.
NOTE: There may be regulatory, customer, product specifications, and other requirements that address the extent to
which test piece evaluation, or requirements are permissible and established as part of process qualification.
Equivalence between test piece and physical product should be understood.
Process Capability is the ability of a process/product to consistently meet a specification or customer requirement.
Various indices are computed to assess the Process Capability of a given product characteristic.
The definition and calculation of these is often misunderstood and thus misinterpreted. The methods described within this
section are based on recognized industry methods. Software tools such as Minitab calculate capability in line with these
methods and additionally cater for some specific scenarios that exist such as batch processing where information may be
sought about the capability both within and between batches of production.
At the heart of capability for variable data, is the need to manage process variation and location to align with customer
specification to ensure that requirements can be continually met.
Variability of the process is calculated through statistical methods; these methods aim to anticipate the total process
variation rather than just the range seen in the data collected for the capability study. A process spread of 6 standard
deviations is used to represent this spread. This 6 standard deviation range theoretically covers 99.73% of the area under
a normal distribution curve. Data is assumed to be normally distributed (symmetrical, bell shaped).
Many processes have a tendency – even naturally – to periodic drift or shift. Therefore borderline capability is not
desirable for either supplier or customer. A capability of 1.33 is often seen as a minimum to assure continued
conformance while allowing for minor process drift. However depending on the process, a higher level of capability may
be required. Products with large numbers of characteristics that cannot be controlled independently may require some
additional margin for small drifts that may occur through production.
For any capability calculation to be reliable, it is important that the process be in a state of statistical control thus behaving
in a predictable manner - otherwise any perceived goodness may be short-lived. It is possible for a process with a ‘good’
capability index to be producing non-conforming product if a state of control is not reached. Process stability is therefore a
prerequisite to capability calculation.
Cp and Pp indices are simply a ratio of specification width to process variation, thus calculating the 'potential of
the process if centered'. The indices increase as variation is reduced. A Cp or Pp of exactly 1.0 indicates that 6
standard deviations of process variation exactly match the width of the specification. Such a process, if centralized
within the specification, would be intolerant to even minor drift over time – not an ideal situation.
[Figure 3.1-1: a centered distribution whose 6σ spread is narrower than the specification width]
The process shown in Figure 3.1-1 has a Cp or Pp>1. The process is less variable than allowed by the specification.
Cp and Pp use different methods for estimating process variability. Cp uses the ranges of the data within subgroups
(or the moving ranges between individual values) to estimate the process variation. A statistical constant d2 is used
to adjust for the subgroup size. This method estimates the standard deviation of the process rather than calculating
it by the more involved 'root sum of squares' method (which is used to calculate Pp).
The average range over d2 method generates the estimate denoted by sigma hat:

    σ̂ = R̄ / d2        (Eq. 1)

The root sum of squares method generates the standard deviation denoted by s:

    s = √( Σ(xi − x̄)² / (n − 1) ),  summed over i = 1 to n        (Eq. 2)

These are incorporated into the formulae as follows:

    Cp = (USL − LSL) / (6σ̂)        (Eq. 3)

    Pp = (USL − LSL) / (6s)        (Eq. 4)
For a stable continuous process behaving in a random manner, Cp, and Pp calculations can be expected to deliver similar
values.
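The difference between the two estimates can be seen by computing both on the same data. A sketch applying Eq. 1
through Eq. 4, using the standard d2 constants and a small made-up data set:

```python
import math

# Standard d2 constants for subgroup sizes 2 through 5 (from SPC tables).
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def cp_pp(subgroups: list[list[float]], lsl: float, usl: float) -> tuple[float, float]:
    """Cp uses sigma-hat = R-bar / d2 (within-subgroup ranges, Eq. 1 and Eq. 3);
    Pp uses the overall sample standard deviation s (Eq. 2 and Eq. 4)."""
    n = len(subgroups[0])
    r_bar = sum(max(g) - min(g) for g in subgroups) / len(subgroups)
    sigma_hat = r_bar / D2[n]
    data = [x for g in subgroups for x in g]
    mean = sum(data) / len(data)
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (len(data) - 1))
    return (usl - lsl) / (6 * sigma_hat), (usl - lsl) / (6 * s)

# Illustrative data: three subgroups of three, wide hypothetical tolerance.
cp, pp = cp_pp([[1, 2, 3], [2, 3, 4], [1, 3, 5]], lsl=-10, usl=10)
print(round(cp, 2), round(pp, 2))  # 2.12 2.52
```

For a stable process the two values converge; here the deliberate spread between subgroups keeps them apart.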
In order to estimate the likely performance - against a specification - of the process Cpk and Ppk indices are used. These
indices are similar ratios to Cp and Pp but additionally take into account the process location.
Cpl & Cpu, and Ppl & Ppu measure capability against each of the specification limits. The ‘l’ and ‘u’ indices will be equal
only if the process is centered. The Cpk or Ppk is the smaller of the upper and lower values.
The ‘l’ and ‘u’ indices can be used to determine how the process is located relative to specifications, however a visual
assessment of the capability histogram is usually preferred to understand this situation.
    Cpl = (x̄ − LSL) / (3σ̂)        (Eq. 5)
    Cpu = (USL − x̄) / (3σ̂)        (Eq. 6)
    Ppl = (x̄ − LSL) / (3s)         (Eq. 8)
    Ppu = (USL − x̄) / (3s)         (Eq. 9)

[Figure 3.1-2: a normal distribution located off-center between LSL and USL, with 3σ shown either side of the mean]
The process shown in Figure 3.1-2 has a Cp of approximately 1.0, but because it sits too close to the upper
specification limit (with the tail of the distribution outside it) the Cpk is <1. If the process average is outside
the specification, the Cpk will be negative.
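Equations 5 through 9 reduce to taking the smaller of the two one-sided ratios. A sketch (supplying σ̂ gives Cpk,
supplying s gives Ppk):

```python
def cpk(mean: float, sigma: float, lsl: float, usl: float) -> float:
    """Cpk (or Ppk, depending on which sigma estimate is supplied) is the
    smaller of the lower and upper one-sided capability ratios."""
    cpl = (mean - lsl) / (3 * sigma)
    cpu = (usl - mean) / (3 * sigma)
    return min(cpl, cpu)

# Illustrative values: symmetric tolerance of +/-4 around 0, sigma = 1.
print(round(cpk(0.0, 1.0, -4, 4), 3))  # 1.333  (centered: Cpk equals Cp)
print(round(cpk(3.0, 1.0, -4, 4), 3))  # 0.333  (mean close to USL)
print(round(cpk(5.0, 1.0, -4, 4), 3))  # -0.333 (mean beyond USL: negative)
```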
NOTE: It will not be possible to calculate Cp or Pp indices for processes with unilateral (single sided) tolerances as the
tolerance width cannot be defined. However Cpk and Ppk can be calculated from the Cpl/Ppl or Cpu/Ppu
(whichever can be calculated).
For the Cpk and Ppk calculations in this section, the process is assumed normally distributed. If the data are
non-normal (skewed, for example) alternative methods can be used (see Section 4 – Guidance for Non-Normal Data).
NOTE: The descriptions in this section are fundamentals. Some additional methods for specific situations are described
in Section 6 – Scenarios requiring specific analysis methods.
Some characteristics may benefit from being ‘targeted’ to a particular nominal value. These are usually characteristics that
influence performance of the product, that have a loss associated with deviation from target even within the specification.
These characteristics may have additional requirements communicated by the customer. For these types of
characteristics it is important to examine the location of the process relative to this target. It should be noted that due to
the calculation methods, high Cpk/Ppk indices do not necessarily imply the process is on target as their calculations use
the distance of the process mean to the specification limits. The nominal location is not considered in the calculation.
A target based process capability index (Cpm) may be used in these situations. Cpm is not covered in this standard but is
described in statistical texts and provided in statistical software applications.
Whilst a state of perfect statistical control is desirable, it is uncommon for manufacturing processes to maintain complete
statistical control over long periods. Failure of tests for special causes can occur despite the process being reasonably
stable. The important thing is that the capability metric does allow reliable prediction of future performance.
Therefore some process capability analysis on processes which contain minor out-of-control points may be necessary on
occasions where the out-of-control conditions are not of practical significance (i.e., the departure is not large
enough to matter in a practical sense).
Examples include rare instances of out of control conditions and instances where control limits are broken by negligible
amounts.
Processes with points well beyond the control limits (such as beyond 4 sigma), should not be considered stable for
capability calculations.
To mitigate the effect an out-of-control process can have on the capability calculation it is recommended to calculate
Ppk, since it includes all sources of variation and is thus a more reliable statistic than Cpk.
In situations such as these, the advice of a process control specialist should be obtained.
Process and product attribute data differs from variable data in that measurement is not done on a continuous scale (as is
usually the case for geometric requirements). For attribute data the use of indices such as Cpk or Ppk do not make sense.
However the stability of the process can be demonstrated by control charts specific to attribute data, and the performance
against standard can be quantified in a number of ways.
It is important that the process is stable (in a state of statistical control). This is done using the appropriate attribute control
chart.
It is also important when measuring performance that the rate or proportion of defects has reached a level where it has
stabilized and is accurate. This is done by plotting the cumulative defective proportion. As more data is collected the
cumulative proportion should stabilize (flatten out) indicating enough data has been collected for the capability
assessment to be reliable.
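The flattening of the cumulative proportion can be checked numerically as well as visually. A sketch; the subgroup
counts, the 5-point window, and the 5% relative band are illustrative assumptions:

```python
def cumulative_proportions(defectives: list[int], sizes: list[int]) -> list[float]:
    """Running proportion defective after each successive subgroup."""
    out, d_total, n_total = [], 0, 0
    for d, n in zip(defectives, sizes):
        d_total += d
        n_total += n
        out.append(d_total / n_total)
    return out

def has_stabilized(props: list[float], window: int = 5, tol: float = 0.05) -> bool:
    """True if the last `window` cumulative proportions vary by less than
    `tol` relative to the final estimate (i.e., the curve has flattened)."""
    tail = props[-window:]
    return (max(tail) - min(tail)) <= tol * props[-1]

# Hypothetical defective counts from ten subgroups of 100 parts each.
props = cumulative_proportions([3, 2, 2, 2, 2, 2, 2, 2, 2, 2], [100] * 10)
print(round(props[-1], 3), has_stabilized(props))  # 0.021 True
```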
The specific type of capability analysis will depend on the nature of the data:
• When examining performance, where the measure is proportion defective, the data is expected to follow a
binomial distribution. A binomial capability study is appropriate using a P or nP chart to assess the capability.
• If the data is measuring the number of defects per item (or group of items) the data is expected to follow a
Poisson distribution. A Poisson capability study is appropriate using a C or U chart to assess the capability.
[Figure 3.2-1: Binomial Process Capability Report – P chart (P̄ = 0.01190, LCL = 0), cumulative %defective plot, and
histogram of observed vs expected defectives; summary statistics: PPM defective = 11905 (CI 7719 to 17524),
Process Z = 2.2602 (CI 2.1078 to 2.4220)]
Figure 3.2-1 shows a Binomial Capability Study of the proportion defective from 35 batches of parts. The proportion
defective is stable and is running at 1.19%.
[Figure 3.2-2: Poisson Capability Report – U chart (Ū = 1.743, LCL = 0), cumulative DPU plot, and histogram of
observed vs expected defects]
Figure 3.2-2 shows a Poisson Capability Study. The capability is expressed as Defects Per Unit (DPU). The capability is
1.74 DPU (represented by ῡ on the U chart shown).
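DPU is total defects over total units, and U chart limits vary with the area of opportunity. A sketch with
hypothetical counts chosen to mirror the ū = 1.743 of the example:

```python
import math

def u_chart_limits(u_bar: float, n: int) -> tuple[float, float]:
    """U chart limits for a subgroup of n units: u_bar +/- 3*sqrt(u_bar/n),
    lower limit clamped at 0 (Poisson model)."""
    se = math.sqrt(u_bar / n)
    return u_bar + 3 * se, max(0.0, u_bar - 3 * se)

# Hypothetical: 61 defects found across 35 single-unit packages -> DPU ~ 1.743,
# mirroring the capability quoted for Figure 3.2-2.
dpu = 61 / 35
ucl, lcl = u_chart_limits(dpu, n=1)
print(round(dpu, 3), round(ucl, 2), lcl)  # 1.743 5.7 0.0
```

With larger subgroups the limits tighten, since the standard error shrinks with √n.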
NOTE: For the analysis to be effective there are some underlying assumptions with regard to the distribution of defectives
and defects within the sample. These assumptions are covered in various SPC texts. In summary of these, the
user should ensure defectives are random and independent (not occurring in clusters). And defects or defectives
within a sample subgroup are not so infrequent as to make the analysis of stability meaningless. If this cannot be
The capability methods discussed for variables data (in section 3.1) are based on an assumption that the underlying data
distribution is Normal (i.e., follows a Normal Distribution Curve).
Where data fails to meet this assumption, capability indices can lead to wrong conclusions. For example a predicted
defect rate may be inaccurate, or a Cpk/Ppk level may be judged to be adequate when it shouldn’t be. In extreme cases a
seemingly adequate Cpk may produce a large proportion of defects. Figure 4-1 shows data following a non-normal
distribution. There is a discrepancy between the observed level of non-conformance (4%) and that expected based on an
analysis that assumes normality (0.63%). This may lead to incorrect estimates for cost of poor quality, factory flow,
capacity and lead time, and related planning.
[Figure 4-1: capability analysis of data with a lower boundary (LB) and USL – Ppk = 0.83 (PPU 0.83), Cpk = 0.90
(CPU 0.90); Pp, Cp, PPL, CPL, and Cpm are not calculable with a lower boundary; performance % > USL: observed 4.00,
expected overall 0.63, expected within 0.33]
Common causes of non-normal data include:
• A natural skew caused by a boundary condition that cannot be exceeded (e.g., flatness, roundness, runout)
• Data calculated from two (or more) components of variation (e.g., the true position of a hole derived from x and y
  coordinates)
• Human factors (e.g., purposely stopping at a maximum limit when machining down to a size)
• Biasing the sample of data (e.g., selectively removing parts of certain dimensions)
Figures 4-2 & 4-3 show the effect on capability analysis caused by a bi-modal process, in this case one with
oscillation between data points. There is a discrepancy between the expected overall and within-subgroup capability;
here the total non-conformance estimate may be overly pessimistic. It should be noted that this obvious pattern of
behaviour should ideally be recognized in the analysis of the process stability.
[Figures 4-2 & 4-3: I-MR chart and capability analysis of a bi-modal (oscillating) process – X̄ = 0.058, MR̄ = 0.293;
Pp = 1.00, Ppk = 0.89, Cp = 0.64, Cpk = 0.57; % total out of specification: observed 0.00, expected overall 0.43,
expected within 6.07]
Figures 4-4 & 4-5 show the effect on a capability analysis due to step changes in the process. This should ideally be
recognised during a stability assessment. Note that the Cpk index is a misleading 1.4 despite the process generating
defects. Because the process is out of statistical control, the value of Cpk is not reliable.
[Figures 4-4 & 4-5: I-MR chart and capability analysis of a process with step changes – X̄ = -0.0203, MR̄ = 0.0752,
with multiple points failing the stability tests; Pp = 0.57, Ppk = 0.53, Cp = 1.50, Cpk = 1.40; % total out of
specification: observed 3.33, expected overall 8.79, expected within 0.00]
Scenario: The process is out of statistical control with no pattern to the data or to the signals of special cause.
Guidance: Capability analysis cannot adequately describe or predict future process behaviour. Conduct improvement
activity, problem solving, and standardization, and use process control charts to confirm stability has been achieved
before undertaking capability analysis. Consider containment to protect the customer.

Scenario: The process has a natural skew that can be explained either by a natural boundary or by the type of
characteristic being measured.
Guidance: Explore alternative methods for capability analysis: identify an alternative distribution that is the
closest fit to the data and conduct a non-normal capability analysis based on that distribution (see 4.2.1).

Scenario: The process has a 'batching effect' and exhibits variation due to within-batch variability plus a step
change due to variation from batch to batch.
Guidance: Confirm the cause of this behaviour and confirm it is a natural and unavoidable consequence. If the batch
averages are stable (when viewed on an I-MR control chart) a Between/Within capability analysis may be possible
(discussed in 6.1). This type of analysis considers both sources of variation to make a more accurate prediction of
conformity level. Data should be taken from a number of batches to ensure the process location is of adequate
precision.

Scenario: Bimodal data due to oscillation or due to differences in tooling, machines, etc.
Guidance: Understand the cause of the bimodal process behaviour and attempt to limit it (e.g., two machines may be
aligned differently – calibration may rectify this). In the event that eliminating the source is not possible, the
data may be analyzed by population group to assess capability (such as each machine analyzed separately).
Assessing a process distribution is easily performed using computer software applications. The examples used within this
standard are created with a software application called Minitab.
A normal probability plot (for example Figure 4-6) helps assess whether the data is a good fit to the selected
distribution. The data is plotted against a line of best fit and its confidence interval, and an assessment made. If
the data deviates significantly from the interval then the population is judged to be non-normal. If nearly all the
data lies within the confidence interval, a capability analysis using the selected distribution would be appropriate.
In addition to the visual assessment, statistical software applications include statistical tests such as the
Anderson-Darling test, which assesses normality and generates statistics such as a p-value. The p-value, in this case,
is the probability of obtaining a result at least as extreme as the sample if the distribution is actually normal. It
is commonplace to reject normality if the p-value is less than 0.05 (this threshold allows a 5% chance of rejecting
normality when the data are in fact normal – an error known as alpha risk).
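The Anderson-Darling statistic itself can be computed without specialist software. A self-contained sketch for the
normality case, using the common small-sample adjustment and the frequently quoted critical value of roughly 0.752 at
alpha = 0.05 (both from standard statistical tables):

```python
import math
import random

def phi(z: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def anderson_darling_normal(data: list[float]) -> float:
    """Adjusted Anderson-Darling statistic for normality with mean and standard
    deviation estimated from the sample. Values above ~0.752 reject normality
    at roughly the 5% level."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    z = sorted((x - mean) / s for x in data)
    a2 = -n - sum(
        (2 * i + 1) * (math.log(phi(z[i])) + math.log(1 - phi(z[n - 1 - i])))
        for i in range(n)
    ) / n
    # Small-sample adjustment for estimated parameters.
    return a2 * (1 + 0.75 / n + 2.25 / n**2)

random.seed(1)
normal_like = [random.gauss(0, 1) for _ in range(100)]   # symmetric, bell shaped
skewed = [random.expovariate(1) for _ in range(100)]     # heavily right-skewed
print(anderson_darling_normal(normal_like) < anderson_darling_normal(skewed))  # True
```

The skewed sample produces a much larger statistic, which is the numerical counterpart of the probability plot
deviating from its confidence interval.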
Figures 4-6 to Figure 4-11 show some possible outcomes of this analysis.
[Figure 4-6: normal probability plot of the measurement data with confidence interval; Figure 4-7: histogram of the
same data]
A process following an approximately normal distribution is shown in Figure 4-7. When plotted on a probability plot
(Figure 4-6), most data points fall within the confidence interval. A p-value of 0.228 indicates support for normality
since it is greater than 0.05.
[Figures 4-8 & 4-9: normal probability plot and histogram of skewed flatness data]
[Figures 4-10 & 4-11: normal probability plot and histogram of bimodal measurement data]
A bimodal set of data is shown in Figure 4-11. When plotted on a normal probability plot (Figure 4-10) the data
deviates completely from the line of best-fit and two clusters are clearly visible. The user would conclude that the
data is non-normal. A capability analysis with the assumption of a normal distribution should not be performed. This
type of behaviour should be visible through simpler histogram or control chart analysis, and an approach may be
decided upon without the need for further and more complex distribution identification or data transformation.
Control charts such as I-MR are reasonably robust to slight deviations from normality. However in some cases control
charts based on non-normal data distributions can lead to limits that do not accurately represent the natural variation of
the process. This results typically in control charts with ‘false signals’ of special causes.
The distribution of averages is known to tend towards normality as the sample size increases (known as the central
limit theorem). If the process is such that items can be subgrouped and averages plotted, then this may be adequate to
avoid the use of more complex methods. An X-Bar and R chart may be used.
Figure 4-12 shows the effect using a uniform distribution. The distribution of averages becomes normally distributed
(and less variable) as the sample size increases.
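The effect shown in Figure 4-12 is easy to reproduce: sample a uniform distribution, average in subgroups, and the
spread of the averages shrinks by roughly √n. A sketch:

```python
import random
import statistics

random.seed(42)
individuals = [random.uniform(0, 1) for _ in range(2000)]

# Average in subgroups of 5, as an X-bar chart would.
n = 5
subgroup_means = [
    statistics.fmean(individuals[i:i + n]) for i in range(0, len(individuals), n)
]

sd_ind = statistics.pstdev(individuals)      # ~0.289 for uniform(0, 1)
sd_mean = statistics.pstdev(subgroup_means)  # ~sd_ind / sqrt(5)
print(round(sd_ind, 3), round(sd_mean, 3))
```

A histogram of `subgroup_means` would also show the flat uniform shape pulled towards a bell curve.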
In certain cases applying a mathematical transformation to each data value (e.g., x²) may result in the distribution
changing shape. If the resulting distribution is approximately normal a regular control chart may be used to assess the
stability of the process.
However, for process monitoring these transformed values may not make sense to the operator, and if the operator is
plotting the chart manually, introducing any calculation into the process adds complexity. It would be most desirable
for the operator to continue plotting the actual measured values.
NOTE: Alternatively in some situations a simpler method such as Pre-Control may be useful.
The user should consider the benefits of using transformations against the potential for confusion brought about by
complexity. Alternative methods may be used if more practical. For more information Statistical Process Control (SPC) -
AIAG ISBN 1605341088 may be referred to.
Figure 4-13 shows a non-normal (heavily skewed) process plotted on a regular I-MR control chart. In this example the
lower control limit is below zero and the upper control limit does not take into account the skewed distribution. This
chart would trigger some inappropriate reactions to 'special cause' signals.
[Figure 4-13: I-MR chart of the skewed data – individuals: X̄ = 0.0577, UCL = 0.2151, LCL = -0.0996; moving range:
MR̄ = 0.0592, UCL = 0.1933, LCL = 0, with several points flagged '1']
[Figure 4-14: I-MR chart of the transformed data – individuals: X̄ = 0.2117, LB = 0; moving range: MR̄ = 0.1309,
UCL = 0.4276, LCL = 0]
Figure 4-15 shows a control chart using the original measured values. The control limits are derived from the limits
based on the transformed data from Figure 4-14. The upper control limit (UCL) is calculated by reversing the
transformation – in this case squaring the limit from the transformed chart (UCL = 0.5598² = 0.3134). The operator may
now continue to plot the measured values against this limit and react appropriately to special causes.
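For a monotonic transform the limits map back directly, so the reverse calculation is one line. A sketch for the
square-root transform used in this example:

```python
def back_transform_sqrt(limit_transformed: float) -> float:
    """Reverse a square-root transform: a limit on sqrt(x) maps back by
    squaring. A monotonic transform preserves ordering, so limits map directly."""
    return limit_transformed ** 2

# UCL on the transformed (square-root) chart from the Figure 4-14 analysis.
print(round(back_transform_sqrt(0.5598), 4))  # 0.3134
```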
[Figure 4-15: chart of the original flatness values with back-transformed limits – X̄ = 0.0577, UCL = 0.3134, LCL = 0]
A number of methods exist for analysis of non-normal data for capability purposes. Two are described in brief for
awareness. For further information a specialist should be consulted.
When these analyses are performed it should be declared in any reports to ensure analysis transparency.
This method involves using probability plots for a range of possible distributions and finding the distribution with the best
fit. The capability is then calculated using this distribution. Probability plots for other distributions are interpreted in
essentially the same way as those used for assessing normality.
In the example shown in Figure 4-16 the data are not normal but appear to fit three of the other available distributions. In
this example the user continues with a Weibull distribution capability analysis, shown in Figure 4-17. Either of the other
two distributions shown on the chart appears a suitable alternative, as the data align with their confidence intervals. The
user may seek further guidance on distribution selection. Some distributions are known to fit certain scenarios well, as
described in Implementing Six Sigma – Breyfogle 2003, ISBN 0-471-26572-1.
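Distribution fitting of this kind can also be reproduced with general statistical libraries. A hedged sketch using scipy (the data here are simulated for illustration, so the fitted parameters and PPM figure will not match the Minitab output discussed above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# simulated skewed "flatness" data, standing in for the document's dataset
flatness = stats.weibull_min.rvs(0.97, scale=0.057, size=100, random_state=rng)

# fit a Weibull distribution with the location fixed at the lower bound of 0
shape, loc, scale = stats.weibull_min.fit(flatness, floc=0)

# expected fraction beyond the upper specification limit, expressed as PPM
usl = 0.3
ppm_over_usl = stats.weibull_min.sf(usl, shape, loc, scale) * 1e6
print(f"shape={shape:.3f} scale={scale:.4f} PPM>USL={ppm_over_usl:.0f}")
```

The survival function (`sf`) of the fitted distribution gives the expected out-of-specification fraction, which is the quantity reported as "Exp. Overall Performance" in the Minitab output.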
Figure 4-16 – PROBABILITY PLOTS OF THE FLATNESS DATA AGAINST CANDIDATE DISTRIBUTIONS, EACH WITH 95% CI (Exponential: AD = 0.378, p = 0.682; Weibull: AD = 0.305, p > 0.250; Gamma: AD = 0.286, p > 0.250)
Figure 4-17 – WEIBULL CAPABILITY ANALYSIS OF THE FLATNESS DATA
Process data: LB = 0, USL = 0.3, sample mean = 0.0577, N = 100, shape = 0.9709, scale = 0.0570
Overall capability: PPU = Ppk = 0.73
Expected overall performance: PPM > USL = 6650.45 (observed PPM = 0)
If a transformation can be found that fits a normal distribution then methods based on the normal distribution as described
in Section 3 can be used.
In the example shown in Figure 4-18 software has been used to perform a Box-Cox transformation on the data. The
transformation raises the data values to the power of lambda, where λ = 0.26. The transformed data have been checked
against a probability plot to ensure they are approximately normal, and then a capability analysis has been performed.
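A Box-Cox transformation can be performed outside dedicated SPC software as well. A sketch using `scipy.stats.boxcox` on simulated skewed data (illustrative only; scipy estimates lambda by maximum likelihood, and a Shapiro-Wilk test stands in here for the probability plot check):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
flatness = rng.gamma(1.0, 0.06, size=100)  # illustrative strictly positive data

# Box-Cox requires strictly positive data; lambda is estimated automatically
transformed, lam = stats.boxcox(flatness)

# check the transformed data for approximate normality (Shapiro-Wilk here;
# Minitab uses an Anderson-Darling test on a probability plot)
stat, p_value = stats.shapiro(transformed)
print(f"lambda={lam:.2f} Shapiro p={p_value:.3f}")
```

If the p-value exceeds the chosen alpha threshold, the normal-distribution methods of Section 3 can be applied to the transformed values.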
After Box-Cox transformation (λ = 0.26)
Figure 4-18 – PROBABILITY PLOT OF ORIGINAL DATA (LEFT) AND TRANSFORMED DATA (RIGHT)
Process data: LB = 0, USL = 0.3, sample mean = 0.0577, N = 100, StDev(Overall) = 0.0570, StDev(Within) = 0.0524
After transformation: LB* = 0, USL* = 0.7314, sample mean* = 0.4286, StDev(Overall)* = 0.1300, StDev(Within)* = 0.1341
Overall capability: PPU = Ppk = 0.78; potential (within) capability: CPU = Cpk = 0.75
Performance (% > USL): observed 0.00, expected overall 0.99, expected within 1.20
Figure 4-19 – CAPABILITY ANALYSIS OF TRANSFORMED DATA. THE CAPABILITY IS NOT IDEAL.
Zero values can present problems when conducting certain transformations and alternative distribution analyses (Weibull,
for example). In this case a 'data shift' may be performed. A method known as 'McAdam's Zero Shift' involves adjusting
all zero values upwards by 20% of the data resolution (i.e., if the measurement resolution is 0.0001” substitute all zero
values with 0.00002”). The analysis should record that this adjustment has been performed.
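The zero-shift adjustment is trivial to script. A minimal sketch in plain Python (the function name is ours for illustration, not a standard API):

```python
def mcadams_zero_shift(values, resolution):
    """Replace exact zeros with 20% of the measurement resolution so that
    transformations and distribution fits (e.g., Weibull) can be performed.
    The use of this adjustment should be recorded with the analysis."""
    shift = 0.2 * resolution
    return [shift if v == 0 else v for v in values]

data = [0.0003, 0.0, 0.0001, 0.0, 0.0007]
# with a resolution of 0.0001", zeros become 0.00002", as in the example above
print(mcadams_zero_shift(data, resolution=0.0001))
```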
Figure 5-1 describes some of the common causes of variation for manufacturing processes. It is not an exhaustive list but
may offer some areas of focus when considering a control strategy.
The following section discusses a few situations that are perceived as a challenge in certain manufacturing environments.
It does not provide an exhaustive list but offers some ideas that may also be transferrable. The examples attempt to
improve the effectiveness or efficiency of the analysis.
When dealing with products with groups of characteristics (e.g., patterns of holes), assessing control and capability of
each characteristic separately is both time consuming and risks missing important aspects of control of the
process/product.
The following methods can be used in these situations. They could be modified according to the specific situation. Table
6-1 summarizes the methods and their uses.
Scenario: multiple identical features are to be controlled (for example a large pattern of holes on a casing).
Appropriate: this method may be employed when it can be shown that there is a logical rationale for the features to be
grouped.
Not appropriate: when the features are grouped from a definition perspective but not from a manufacturing one, for
example features produced at different operations.
Method: when the variation within the group of features on a part is roughly similar to the variation between parts, an
Xbar-R chart may be used. This chart plots the average of the characteristics on the Xbar chart, and the range within the
feature group on the R chart. An example is discussed in 6.1.1.
When the variation within the group of features on a part is less than the variation between parts, an Xbar-R chart will
lead to false signals on the Xbar chart. In this case an I-MR-R/S chart (also known as a 3-way control chart) can be used.
This chart plots the average of the characteristics on the Xbar chart, the moving range between the averages on the MR
chart, and the range within on the R chart. An example is discussed in 6.1.2.
NOTE: In many cases the Xbar-R chart will not work in practice because the assumption that the variation within and
between subgroups is similar is not met, and a stable process may appear out of control. Because the variation of
averages decreases as sample size increases, the Xbar-R chart becomes less useful as the number of characteristics
becomes high. In these cases a 3-way control chart is useful.
Interpretation: the Xbar chart shows the trend of averages and special causes relating to them. Signals on this chart
should be investigated from the perspective of "a source of variation between averages contributing to the total variation",
for example a setup related issue or machine alignment.
The R chart plots the variation within the groups. Signals on this chart should be investigated from the perspective of "a
cause affecting the variation within the group", for example distortion on a large casing; a misalignment of a pattern of
holes where the position of the holes is being monitored (this misalignment causing a systematic pattern such as a 'sine
wave' to be introduced); a single outlying characteristic; or a shift in characteristics mid-way through the production cycle
(for example caused by a tool damaged mid-cycle).
The MR chart shows the trend between parts and signals unusually large fluctuations between parts, as well as emerging
trends caused by increasing (or decreasing) overall variation. Signals on this chart do not necessarily result in signals on
the Xbar or R charts, but may do in certain circumstances.
Table 6-1 – CONTROLLING MULTIPLE VARIABLES USING AVERAGE AND RANGE CHARTS
SCENARIO 1 – THE VARIATION WITHIN THE GROUP IS REPRESENTATIVE OF THE OVERALL VARIATION
A product with 20 identical characteristics is analyzed (Figure 6-1) and found to have a level of variation within each part
that represents the overall variation fairly well (i.e., the process location does not appear to shift significantly between
parts). In this case the control limits on an Xbar-R chart (Figure 6-2) with subgroup size set to 20 (i.e., the number of
identical features in the group) provide a good approximation of the natural process variation.
Figure 6-1 – INDIVIDUAL VALUES FOR THE 20 IDENTICAL CHARACTERISTICS, PLOTTED BY PART AGAINST THE SPECIFICATION LIMITS
(Xbar chart: mean = -0.007, LCL = -0.732; R chart: UCL = 6.398, mean R = 4.036, LCL = 1.674)
Figure 6-2 – XBAR-R CHART PRODUCED FROM DATA FROM Figure 6-1.
A second product with 20 identical characteristics is analyzed (Figure 6-3) and found to have a level of variation within
each part which does not represent the overall variation well (i.e., the process location appears to shift between parts by
a greater amount than the change from feature to feature within the part). In this case the control limits on an Xbar-R
chart (Figure 6-4) become too narrow to represent the natural variation between parts, and all the points on the Xbar
chart fall outside the limits. This is because the limits on the Xbar chart are derived from the range within the subgroup (in
this case set at 20 to demonstrate the effect). This chart will be of no use in practice.
An I-MR chart is also of little use (Figure 6-5) because its limits are typically based on short term 'point to point' variation.
A three-way control chart (Figure 6-6) is more useful. The R chart allows the user to examine the variation within the part;
the MR chart shows the state of control between part averages, allowing the user to detect any unusual shifts; and the
Xbar chart allows the user to see when the process goes outside its normal range, or drifts over time.
Figure 6-3 – INDIVIDUAL VALUES FOR THE SECOND PRODUCT'S 20 CHARACTERISTICS, PLOTTED BY PART AGAINST THE SPECIFICATION LIMITS
In Figure 6-3 the variation within the group of features can be seen to be less than the variation from part to part. This is
natural behaviour in this context, as setup variation is not present within a part but causes variation from part to part.
Using control charts that calculate their control limits based on variation within the group can lead to incorrect limits, as
shown in Figures 6-4 and 6-5.
Figure 6-4 – XBAR-R CHART OF THE DATA FROM FIGURE 6-3 (Xbar chart: UCL = 0.905, mean = 0.564, LCL = 0.223, with many out-of-control points; R chart: UCL = 3.009, mean R = 1.898, LCL = 0.787)
In Figure 6-4 the data are plotted on an Xbar-R chart. The resulting limits are much narrower than is appropriate. The
process is varying normally (but with different levels of ‘within’ and ‘between’ variation). In this situation an Xbar-R chart is
not useful.
Figure 6-5 – I-MR CHART OF THE SAME DATA (I chart: mean = 0.564, LCL = -1.092, with many false signals; MR chart: UCL = 2.034, mean MR = 0.623, LCL = 0)
In Figure 6-5 an I-MR chart illustrates the issue of using such a chart when the ‘within’ and ‘between’ variation is different.
The chart is giving many false signals due to the limits not being representative of the natural process variation.
Figure 6-6 – THREE-WAY (I-MR-R/S) CHART OF THE SAME DATA (I chart: mean = 0.564, LCL = -3.705; MR chart: UCL = 5.244, mean MR = 1.605, LCL = 0; S chart: UCL = 0.7347, mean S = 0.4932, LCL = 0.2516)
In Figure 6-6 where the pattern of 20 holes is plotted on a 3-way control chart, the process can be seen to be stable. The
average values show only random behaviour on the I chart, as does the moving range chart (between parts) and the S
chart (within parts).
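The statistics behind the three-way chart can be sketched numerically. A simplified sketch (Python/numpy), assuming the data are arranged as a parts-by-features array; the constants are the standard tabulated values and the simulated data are illustrative:

```python
import numpy as np

def three_way_limits(data):
    """Sketch of I-MR-R/S ("3-way") chart statistics for an array of shape
    (parts, features). The I-chart limits are based on the moving range of
    the part averages, so between-part variation sets their width."""
    xbar = data.mean(axis=1)              # one average per part (I chart)
    mr = np.abs(np.diff(xbar))            # moving range between part averages
    sigma_between = mr.mean() / 1.128     # d2 = 1.128 for moving ranges of 2
    i_limits = (xbar.mean() - 3 * sigma_between,
                xbar.mean() + 3 * sigma_between)
    s = data.std(axis=1, ddof=1)          # within-part standard deviations (S chart)
    return xbar, i_limits, mr, s

rng = np.random.default_rng(4)
# 15 parts x 20 features: part-to-part setup shifts plus within-part noise
parts = rng.normal(0, 1.5, size=(15, 1)) + rng.normal(0, 0.5, size=(15, 20))
xbar, (lcl, ucl), mr, s = three_way_limits(parts)
```

Because the I-chart limits come from the between-part moving range rather than the within-part range, a process like Figure 6-3 plots as stable rather than generating the false signals of Figures 6-4 and 6-5.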
Capability assessment for this scenario may present some added complications beyond the generic method described in
Section 3.
Often in situations where multiple identical features are being analyzed, the variation within the feature group is not
representative of the overall process variation due to other sources of variation (setups, tool changes, material variation).
In these situations the ‘short term’ estimate for capability provided by typical Cpk calculations provides an overly optimistic
view of the capability that cannot be relied upon.
Figure 6-7 shows this effect using the data from Figure 6-3, discussed in the previous section. The capability indices Cp
and Cpk are not representative of process performance; the Pp and Ppk indices are a better representation. Performance
is estimated at 54 defective parts per million.
Figure 6-7 – CAPABILITY ANALYSIS OF THE DATA FROM FIGURE 6-3
Overall capability: Pp = 1.43, PPL = 1.56, PPU = 1.29, Ppk = 1.29
Potential (within) capability: Cp = 4.01, CPL = 4.39, CPU = 3.63, Cpk = 3.63
Expected overall performance: PPM < LSL = 1.41, PPM > USL = 52.45, PPM total = 53.86 (observed PPM = 0)
In many situations an assessment of Ppk will provide adequate information on the overall process capability. This is
because the Ppk calculation uses each data point's deviation from the overall average, whereas the Cpk method only
looks at variation within the subgroups.
Occasionally a scenario may warrant a more complex assessment that takes into account both 'within' and 'between'
variation. This is most relevant in cases of borderline capability.
The method for calculating capability in this scenario involves calculating the 'within' group variability and the 'between'
group variability, and taking the square root of the sum of the variances to obtain a total variability known as
'between/within', i.e., sigma(B/W) = sqrt(sigma(between)^2 + sigma(within)^2) (the concept is illustrated in Eq. 11). This is
then used as the variation component in a regular Cpk calculation. This type of analysis is shown in Figure 6-8.
Figure 6-8 – BETWEEN/WITHIN CAPABILITY ANALYSIS OF THE SAME DATA
Overall capability: Pp = 1.43, PPL = 1.56, PPU = 1.29, Ppk = 1.29
B/W capability: Cp = 1.33, CPL = 1.46, CPU = 1.21, Cpk = 1.21
Expected B/W performance: PPM < LSL = 6.35, PPM > USL = 149.90, PPM total = 156.25 (expected overall PPM total = 53.86)
Using the data from the previous scenario (Figure 6-7), a between/within capability analysis produces a Cpk of 1.21,
which is more representative of process performance. The expected PPM defective is estimated at 156, as opposed to 54
produced by a Ppk analysis using the regular method of calculation.
From a practical perspective, where capability is clearly at a high level a regular Ppk calculation will usually suffice;
however, for borderline situations the method that considers between and within variation is advisable.
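The between/within calculation can be sketched as follows. This is a simplified illustration of the concept, not the exact algorithm used by statistical packages (which apply additional unbiasing constants); the simulated data are ours:

```python
import numpy as np

def between_within_cpk(data, lsl, usl):
    """Simplified sketch of between/within capability for data of shape
    (parts, features): sigma_bw = sqrt(sigma_between^2 + sigma_within^2)."""
    n = data.shape[1]
    sigma_within = np.sqrt(data.var(axis=1, ddof=1).mean())  # pooled within
    xbar = data.mean(axis=1)
    sigma_xbar = np.abs(np.diff(xbar)).mean() / 1.128        # d2 for n = 2
    # the averages' variation includes a within component of sigma_within^2/n
    sigma_between_sq = max(0.0, sigma_xbar**2 - sigma_within**2 / n)
    sigma_bw = np.sqrt(sigma_between_sq + sigma_within**2)
    mean = data.mean()
    return min((usl - mean) / (3 * sigma_bw),
               (mean - lsl) / (3 * sigma_bw))

rng = np.random.default_rng(5)
# simulated: part-to-part setup shifts (sd 1.5) plus within-part noise (sd 0.5)
data = rng.normal(0.0, 1.5, size=(15, 1)) + rng.normal(0.0, 0.5, size=(15, 20))
print(round(between_within_cpk(data, lsl=-9.0, usl=9.0), 2))
```

Because sigma_bw is never smaller than sigma_within alone, this index is always at least as conservative as a within-only Cpk, which is the behaviour seen in Figure 6-8.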
A more comprehensive guide on capability methods can be found in Implementing Six Sigma – Breyfogle 2003. ISBN 0-
471-26572-1.
6.2. Assessing Control and Capability of Variable Data by Process or Part Family
Process based studies may be acceptable to allow qualification by similarity to be undertaken, e.g., similar parts,
geometries, tolerances, and design characteristics. The supplier should liaise with their purchaser to confirm suitability for
this approach.
For manufacturers practicing cellular manufacturing of part families, Target I-MR charts can be more efficient than
operating separate product-specific charts. Rather than implement control charts for each distinct part number, a supplier
may choose to combine similarly made part numbers on the same chart. The basic assumptions of this method are that
these similar products share common processing methods and exhibit similar process behaviour and variation.
Tolerances and materials should also be similar, as differences may give rise to differing levels of capability. What follows
is an illustrated example.
An aerospace manufacturer produces a variety of machined products for several aerospace engine customers. The
company recently reorganized its operations into cells making common products formulated into part families. The part
families are a collection of specific products with common material specifications, characteristic tolerances as well as
sharing similar process operations. One family is the Housing Bushing family. The bushings are made out of brass and
press fitted into customer housings. The supplier selects a control chart as the control method for the outside diameter of
the parts.
The original process control approach utilized an I-MR Control Chart for each specific part number. With smaller lot sizes
being manufactured to reduce inventories, the manufacturer decides to utilize the Target I-MR Chart for the part family.
Table 6-3 shows a list of diameter characteristics in a part family manufactured in a cell:
Methodology
The Target I-MR Chart, Figure 6-9 (showing both Individuals and Moving Ranges) illustrates the initial 20 piece production
run executed during the week. The number of consecutive parts made for each part number is lower than would be
required for individual control charts per part number (e.g., 3, 4, 5 etc.). This is due to the quick-change set-up methods
employed by the manufacturer enabling production of individual items as opposed to batches of parts.
The data plotted on the I chart are the deviations from the nominal value for the part being measured.
The values plotted on the MR chart are the absolute differences between consecutive deviations (from the I chart). This
means there are no negative values on this chart.
This type of chart is a variation of the standard Individuals & Moving Range (I-MR) Chart (see 2.2).
For each part produced, the deviation from the nominal value for that part number is calculated and plotted (on the I
chart). Next the moving ranges between each point on the Individuals chart are calculated. These values are plotted on
the moving range chart (MR).
The control limits are then calculated in the same way as a regular I-MR Chart.
Thus all actual values that are measured are “normalized” by their nominal values. This allows different part numbers with
different feature nominals to be combined on the same chart.
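The normalization and limit calculation can be sketched directly from the description above. A minimal sketch (Python/numpy) using the first eight measurements from the bushing example:

```python
import numpy as np

def target_imr(values, nominals):
    """Sketch of the Target I-MR calculation: plot deviations from each
    part's nominal, then moving ranges of those deviations. Limits use the
    standard I-MR constants (d2 = 1.128, D4 = 3.267 for n = 2)."""
    dev = np.asarray(values) - np.asarray(nominals)   # I chart values
    mr = np.abs(np.diff(dev))                         # MR chart values
    mr_bar = mr.mean()
    ucl_x = dev.mean() + 2.66 * mr_bar                # 2.66 = 3 / d2
    lcl_x = dev.mean() - 2.66 * mr_bar
    ucl_mr = 3.267 * mr_bar                           # D4 x mean MR
    return dev, mr, (lcl_x, ucl_x), ucl_mr

# first eight measurements from the bushing example (Parts A and B)
values   = [.252, .249, .250, .501, .498, .500, .502, .497]
nominals = [.250, .250, .250, .500, .500, .500, .500, .500]
dev, mr, (lcl, ucl), ucl_mr = target_imr(values, nominals)
```

The deviations reproduce Row 3 of the worksheet (.002, -.001, 0, ...), and different nominals drop out entirely, which is what allows the part numbers to share one chart.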
Row 1 – VALUE: .252 .249 .250 .501 .498 .500 .502 .497 .976 .977 .974 .974 .250 .250 .249 .502 .500 .498 .975 .974
Row 2 – NOMINAL: .250 .250 .250 .500 .500 .500 .500 .500 .975 .975 .975 .975 .250 .250 .250 .500 .500 .500 .975 .975
Row 3 – VALUE - NOMINAL (Row 1 - Row 2): .002 -.001 0 .001 -.002 0 .002 -.003 .001 .002 -.001 -.001 0 0 -.001 .002 0 -.002 0 -.001
Row 4 – MOVING RANGE (blank for the first part): .003 .001 .001 .003 .002 .002 .005 .004 .001 .003 0 .001 0 .001 .003 .002 .002 .002 .001
Columns correspond to consecutive parts 1 through 20.
Figure 6-9 – TARGET I-MR CHART FOR THE BUSHING PROCESS EXAMPLE
It can be seen on the Target I-MR Chart in Figure 6-9 that the bushing process is in a state of statistical control. This
assures the manufacturer that the process is stable. All three part numbers exhibit a similar level of variation (precision)
and process location (average).
The calculations for the Target I-MR Chart control limits, Moving Range control limits, and Process Capability Indexes Cp
& Cpk are illustrated in Figure 6-10.
NOTE: The chart uses standard I-MR control limit calculations. Care should be taken for the calculation of Cpk. For
capability of the ‘normalized’ values to make sense the tolerances should also be normalized (i.e., expressed as
deviation from nominal). For example the lower specification limit for Part A would be -0.005 not 0.245 and the
upper specification limit 0.005 not 0.255.
Figure 6-10 – CONTROL LIMIT AND CAPABILITY CALCULATIONS
k = number of subgroups = 20
ET (Part A) = .010”  ET (Part C) = .010”
Cpk (the process mean sits below nominal, so Cpl is the governing index):
Part A: Cpl = (-.0001 - (-.005)) / (3 x .0017) = .96
Part B: Cpl = (-.0001 - (-.004)) / (3 x .0017) = .76
Part C: Cpl = (-.0001 - (-.005)) / (3 x .0017) = .96
Interpretation of Results
In Figure 6-9 the process appears to be in a state of statistical control due to the absence of patterns that indicate special
causes of variation. The supplier concludes that the three different part numbers are 'in family' and that grouping them on
the same chart is valid. However, the process capability shown in Figure 6-10 shows a need for improvement. The
tightest tolerance part, Part B, has a Cp = 0.78 and a Cpk = 0.76, while Parts A and C, which share the same tolerance
band, have a Cp = 0.98 and Cpk = 0.96. Given the goal of a minimum process Cpk of 1.33, and the fact that the overall
process is stable and centered, an investigation of the common cause sources of variation will be required to see what
can be changed to improve the overall process capability.
NOTE: Prior to calculating control limits and process capability indexes it is good practice - because the data displayed
are individual values - to perform a normality test. This is easily done using statistical software. Figure 6-11
shows the Probability Plot illustrating that the data can be judged to be a normal distribution (p-value 0.114 is
greater than the 0.05 threshold typically used – assuming a 5% alpha risk is acceptable).
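This normality check can be reproduced with standard statistical libraries. A sketch using scipy on the deviation-from-nominal values from the example above (the exact p-value depends on the test chosen; Minitab's probability plot uses the Anderson-Darling statistic, so the Shapiro-Wilk p-value here will differ from the 0.114 quoted):

```python
from scipy import stats

# deviation-from-nominal ("Delta") values from Row 3 of the bushing worksheet
deltas = [.002, -.001, 0, .001, -.002, 0, .002, -.003, .001, .002,
          -.001, -.001, 0, 0, -.001, .002, 0, -.002, 0, -.001]

result = stats.anderson(deltas, dist='norm')  # Anderson-Darling statistic
stat, p = stats.shapiro(deltas)               # Shapiro-Wilk p-value
print(f"AD={result.statistic:.3f} Shapiro p={p:.3f}")
```

As in the NOTE, a p-value above the chosen alpha (typically 0.05) supports treating the pooled deviations as approximately normal.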
Figure 6-11 – NORMAL PROBABILITY PLOT OF THE DEVIATION-FROM-NOMINAL (DELTA) VALUES
The capability of the process can be calculated from the deviation from target provided the process is stable. It is wise to
analyse both Cpk and Ppk indices in this situation to check that they are similar as seen in Figure 6-12 for the two
bushings that share the +/- 0.005” tolerance.
Figure 6-12 – CAPABILITY ANALYSIS OF THE NORMALIZED DATA (±0.005” TOLERANCE PARTS)
Process data: LSL = -0.005, USL = 0.005, sample mean = -0.0001, N = 20, StDev(Within) = 0.0017264, StDev(Overall) = 0.0014473
Potential (within) capability: Cp = 0.97, CPL = 0.95, CPU = 0.98, Cpk = 0.95
Overall capability: Pp = 1.15, PPL = 1.13, PPU = 1.17, Ppk = 1.13
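The Cpk/Ppk comparison can be reproduced from the normalized data. A sketch (Python/numpy) using the deviations from the bushing example; the within sigma comes from the average moving range and the overall sigma from the ordinary sample standard deviation:

```python
import numpy as np

def cpk_ppk(values, lsl, usl):
    """Sketch comparing within (Cpk) and overall (Ppk) capability for
    individuals data: sigma_within from the average moving range divided
    by d2 = 1.128, sigma_overall from the sample standard deviation."""
    x = np.asarray(values, dtype=float)
    sigma_within = np.abs(np.diff(x)).mean() / 1.128  # mean MR / d2
    sigma_overall = x.std(ddof=1)
    def index(sigma):
        return min((usl - x.mean()) / (3 * sigma),
                   (x.mean() - lsl) / (3 * sigma))
    return index(sigma_within), index(sigma_overall)

deltas = [.002, -.001, 0, .001, -.002, 0, .002, -.003, .001, .002,
          -.001, -.001, 0, 0, -.001, .002, 0, -.002, 0, -.001]
cpk, ppk = cpk_ppk(deltas, lsl=-0.005, usl=0.005)
```

With these data the sketch gives a within Cpk of about 0.95 and an overall Ppk of about 1.13, in line with the figures quoted above.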
Many of the methods used in this standard can be implemented using traditional ‘pencil and paper’ solutions. Control
charting is a relatively simple task requiring nothing more complex than calculation of averages and basic multiplication
and division.
Control charts can be manually created and plotted and many suggest this lends a level of understanding and
engagement to the deployment.
However, there are drawbacks as the use of these tools matures and demand increases:
• The risk of errors being made both in data capture and computation
• The cost of administration in keeping the manual charts up to date and replenished when complete
• The limited time available for analysis, more so with complex products with multiple characteristics
SPC systems make the task much easier and have the following advantages:
• Direct linkage to gauging for data input (either via interfaces such as RS232 standard interface or wireless
technologies)
• More advanced capability analysis methods (feature groups, and non-normal process capability analysis)
Systems tend to fall into categories of data collection and process monitoring (real time) and off line analytics. Additionally
tools are available that provide configurable management information dashboards containing Yield, Overall Equipment
Effectiveness (OEE) and other performance trends, Pareto, and other defect analysis in real time.
Systems may be provided by metrology vendors to add functionality to their equipment, or as standalone products. The
benefits of equipment manufacturers' proprietary systems include simple interfacing with their own offerings, whilst the
benefits of independent offerings tend to be flexibility and ease of configuration for multiple data formats from different
equipment vendors.
Generally speaking, a computer based solution tends to be more robust than a paper based one.
The statistical formulae provided in Tables 8-1 and 8-2 can be used for calculating the center lines and control limits on
commonly used control charts. Other methods may be used depending on the application.
(Eq. 8.1)
(Eq. 8.3)
(Eq. 8.4)
(Eq. 8.5)
(Eq. 8.6)
Only used if the range is calculated over a number of data points. This
will default to 0 for moving range between consecutive data points.
(Eq. 8.7)
(Eq. 8.9)
(Eq. 8.10)
(Eq. 8.11)
(Eq. 8.12)
(Eq. 8.13)
(Eq. 8.16)
(Eq. 8.17)
(Eq. 8.18)
(Eq. 8.19)
(Eq. 8.20)
(Eq. 8.21)
(Eq. 8.22)
(Eq. 8.23)
(Eq. 8.24)
Centre line
(Eq. 8.25)
(Eq. 8.26)
(Eq. 8.27)
C Chart
Centre line = cbar, the average count per sample (Eq. 8.28)
UCL = cbar + 3*sqrt(cbar) (Eq. 8.29)
LCL = cbar - 3*sqrt(cbar) (Eq. 8.30)
U Chart
Centre line = ubar, the average count per unit (Eq. 8.31)
UCL = ubar + 3*sqrt(ubar/n) (Eq. 8.32)
LCL = ubar - 3*sqrt(ubar/n) (Eq. 8.33)
(Eq. 8.34)
(Eq. 8.35)
Cp = (USL - LSL) / (6 * sigma-within) (Eq. 8.36)
Cpu = (USL - Xbar) / (3 * sigma-within) (Eq. 8.37)
Cpl = (Xbar - LSL) / (3 * sigma-within) (Eq. 8.38)
Cpk = min(Cpu, Cpl) (Eq. 8.39)
Pp = (USL - LSL) / (6 * sigma-overall) (Eq. 8.40)
Ppu = (USL - Xbar) / (3 * sigma-overall) (Eq. 8.41)
Ppl = (Xbar - LSL) / (3 * sigma-overall) (Eq. 8.42)
Ppk = min(Ppu, Ppl) (Eq. 8.43)
where sigma-within is the within-subgroup estimate of the standard deviation (e.g., Rbar/d2) and sigma-overall is the
sample standard deviation of all individual values.
Table 8-4 is used as a reference to determine the relevant values of the statistical constants in the formulae provided.
Subgroup
Size
A2 A3 d2 D3 D4 B3 B4
2 1.880 2.659 1.128 0 3.267 0 3.267
3 1.023 1.954 1.693 0 2.574 0 2.568
4 0.729 1.628 2.059 0 2.282 0 2.266
5 0.577 1.427 2.326 0 2.114 0 2.089
6 0.483 1.287 2.534 0 2.004 0.030 1.970
7 0.419 1.182 2.704 0.076 1.924 0.118 1.882
8 0.373 1.099 2.847 0.136 1.864 0.185 1.815
9 0.337 1.032 2.970 0.184 1.816 0.239 1.761
10 0.308 0.975 3.078 0.223 1.777 0.284 1.716
11 0.285 0.927 3.173 0.256 1.744 0.321 1.679
12 0.266 0.886 3.258 0.283 1.717 0.354 1.646
13 0.249 0.850 3.336 0.307 1.693 0.382 1.618
14 0.235 0.817 3.407 0.328 1.672 0.406 1.594
15 0.223 0.789 3.472 0.347 1.653 0.428 1.572
16 0.212 0.763 3.532 0.363 1.637 0.448 1.552
17 0.203 0.739 3.588 0.378 1.622 0.466 1.534
18 0.194 0.718 3.640 0.391 1.608 0.482 1.518
19 0.187 0.698 3.689 0.403 1.597 0.497 1.503
20 0.180 0.680 3.735 0.415 1.585 0.510 1.490
21 0.173 0.663 3.778 0.425 1.575 0.523 1.477
22 0.167 0.647 3.819 0.434 1.566 0.534 1.466
23 0.162 0.633 3.858 0.443 1.557 0.545 1.455
24 0.157 0.619 3.895 0.451 1.548 0.555 1.445
25 0.153 0.606 3.931 0.459 1.541 0.565 1.435
Table 9-1 contains some questions that can be used to assess the health of process control within a factory. The
question set is not exhaustive.
Leadership: Management provides the necessary leadership, infrastructure and environment for a robust process control
system. Management are able to see and deal with performance trends.
Assessment: discuss process control with business management.
• Is process control built into the business management system and procedures?
• Do they champion the focus on process control (adoption of PFMEA, Control Plan)?
• Are they making changes to enable better process control (e.g., systems and software solutions both in process and
for offline analysis)?
Reaction Plan: The actions required when control criteria are not met are clear, understood, and embedded.
Assessment: review the Control Plan and supporting documents.
• Is there evidence that any exceptions to process control criteria are actioned appropriately?
• Does the reaction plan call for any immediate actions to be carried out to determine the cause of the problem
(recovery actions) and also when further advice should be sought?
Foundational Activities: The organization effectively demonstrates foundational process control activities.
• Is the handling, storage, and packaging of parts well enough defined to avoid risk of damage, including application of
5S principles for workplace organization (i.e., sort, set in order, shine, standardize and sustain)?