VDA 5 Yellow Volume 3rd Edition 2020
Measurement and
Inspection Processes
Capability, Planning, Management
ISSN 0943-9412
Release: Online Document October 2020
Copyright 2020 by
Foreword
After more than a decade, the time has come for a fundamental revision of one of the stand-
ard works on inspection process capability. The focus when drawing up this 3rd edition of
VDA 5, with its new title “Measurement and Inspection Processes, Capability, Planning and
Management”, was on comprehensibility of methodology in order to achieve better applicabil-
ity for the user in practice. The VDA 5 was completely revised and updated with change no-
tices collected in the VDA QMC since 2011. The current changes from the standards envi-
ronment and technical development have been integrated during the review.
One of the innovations in VDA 5 is the division of topics into a main volume and a practical
handbook. The main volume gives users technical guidance and orientation in the procedure.
The practical handbook shows practical implementation of the topic from the main volume
using examples and use cases. The inspection planning method is structured so that its contents are taken into account right at the beginning of the product development process. Inspection process capability thus acts as a systems engineering tool for verification and validation in the early phase of a project. At the same time, for the first time, a connection to and consistency with the adjacent processes of test equipment management, test planning and inspection process management (including defined roles) has been established. The risk-based approach is appropriately and efficiently embedded in inspection process management, offers specific assistance in selecting the procedure for safeguarding inspection decisions, and allows a differentiated approach that remains mindful of economic requirements. The idea behind the VDA 5 volume is to provide as complete an overview as possible of the handling of proof of capability for measurement processes.
The following points were also implemented during the course of this revision:
• Clarification of terms and definitions, closely following VIM [17] and ISO 3534-1 [13]
• Transparency in the “Test system capability for inspection process capability” procedure
• Strategies for harmonisation with the AIAG Core Tool MSA (4th Edition) [1]
• Recommendations for the procurement of test systems (e.g. specifications)
• Transferability of proof of capability
• Handling of incapable measurement systems/processes
• Dealing with small tolerances (FT rule)
• Procedure in case of insufficient sample sizes for the measurement system test and the measurement process test (e.g. engine test bench)
• Procedure for small pre-series and production lots in development and production
• Consideration and assessment of continuous capability using stability measurements
It is necessary to comply with specified tolerances of individual parts and assemblies to guar-
antee the function of technical systems. According to ISO 8015 [32], it is assumed that the
tolerance limits correspond to the functional limits when defining the required tolerances in
the design process.
Inspection process capability is more than just the acceptance of test equipment; it also includes the handling of measurement uncertainty in product and manufacturing design. A complete measurement result consists of a determined measured value and the measurement uncertainty of the measurement process. Close to the tolerance limits, the measurement uncertainty means that no reliable statement can be made about compliance or non-compliance with the tolerances, which can lead to incorrect evaluations of measurement results. Different standards and guidelines contain requirements for estimating and considering the measurement uncertainty. For this reason, both the measurement system and measurement
process uncertainty must be taken into account as early as the planning stage of measure-
ment processes. In this respect, companies must address various questions in implementing
and certifying their quality management system. This document shows how to meet these
many demands. The procedures described here are based on ISO/IEC Guide 98-3 (2008-09)
[28] and DIN EN ISO 14253-1 [24].
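The decision situation near the tolerance limits described above can be sketched as a simple guard-band rule in the spirit of DIN EN ISO 14253-1: the acceptance zone is reduced by the expanded measurement uncertainty U on both sides, and measured values inside the guard bands permit no reliable conformance statement. The function name, limits and values below are purely illustrative, not taken from the standard.

```python
def conformity_decision(y, lower, upper, U):
    """Classify a measured value y against tolerance limits [lower, upper],
    taking the expanded measurement uncertainty U into account
    (guard-band approach; illustrative simplification)."""
    if lower + U <= y <= upper - U:
        return "conformance proven"
    if y < lower - U or y > upper + U:
        return "non-conformance proven"
    return "uncertainty range: no reliable statement possible"

# Hypothetical example: tolerance 10.00 +/- 0.05 mm, U = 0.01 mm
print(conformity_decision(10.02, 9.95, 10.05, 0.01))
print(conformity_decision(10.045, 9.95, 10.05, 0.01))
```

A value well inside the reduced acceptance zone is accepted; a value between the tolerance limit and the guard band falls into the uncertainty range, where neither conformance nor non-conformance can be proven.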
The topic of conformity according to DIN EN ISO 14253-1 [24] cannot be comprehensively addressed with the methods of measurement system analysis (MSA). Among the important reasons for this is that certain influencing variables, such as calibration uncertainty, the quality of the setting standards, error limits and temperature effects, are not sufficiently catered for. Furthermore, MSA methods assess individual components only separately, not the entire inspection process.
Even a comparison of the headings of the old and new requirements of DIN EN ISO 9001 [14] reveals significant differences. Previously, “control of monitoring and measuring equipment” was sufficient. Since 2015, however, DIN EN ISO 9001 [14] has referred to “resources for monitoring and measurement”. This clearly shows that it is no longer, as in the past, only about the monitoring and measuring equipment, but about all necessary resources: from spatial conditions, appropriately trained personnel, suitable test equipment and measuring equipment including software, to the assistive devices and methods involved in producing valid and reliable monitoring and measurement results. All of this must serve the conformity of products and, more recently, also of services, i.e. it must be ensured that only assured quality is delivered to the customer.
Table of contents
Foreword ............................................................................................................................... 4
Table of contents ................................................................................................................... 6
List of Illustrations.................................................................................................................10
List of tables .........................................................................................................................12
1 Standards and guidelines ..............................................................................................13
2 Benefits and scope ........................................................................................................14
3 Terms and definitions ....................................................................................................16
4 Inspection process management ...................................................................................25
4.1 Inspection process management tasks...................................................................26
4.1.1 Securing test results taking into account measurement uncertainty according to DIN EN ISO 14253-1 .....................................................................................26
4.1.2 Effect of the measurement uncertainty on the manufacturing process .............28
4.2 Roles and qualifications in inspection process management ..................................30
4.2.1 Roles in inspection process management .......................................30
4.2.2 Qualification in inspection process management .............................33
4.3 Risk-based safeguarding ........................................................................................35
4.3.1 Preselection of inspection processes for risk-based safeguarding ...................35
4.3.2 Procedure for risk-based safeguarding ............................................37
4.3.3 Complaint process of test systems, inspection processes in the application ....42
4.4 Inspection process planning ...................................................................................43
4.5 Inspection equipment management ........................................................................48
4.5.1 Test equipment management system ..............................................48
4.5.2 Calibration of test equipment ...........................................................49
4.6 Proof of capability of measurement processes .......................................................50
4.6.1 Analysis, grouping and modelling of inspection processes ..............................52
4.6.2 Measurement system and measurement process capability............................53
4.6.3 Transferability to new inspection processes ....................................55
4.6.4 Dealing with unattained inspection process capability .....................................58
4.6.5 Documentation of proof of capability ...............................................59
5 General procedure for inspection process capability...................................................60
5.1 Influences on the measurement uncertainty results ................................................60
5.1.1 Influencing variables in the measurement system ...........................................61
5.1.2 Influencing variables in the measurement process ..........................................64
5.2 Phases of inspection process capability .................................................................66
5.3 Standard uncertainties ...........................................................................................68
5.3.1 Method A (experimental determination) ..........................................68
5.3.2 Method B (use of prior information) .................................................69
5.4 Combined standard uncertainty ..............................................................................70
5.5 Expanded measurement uncertainty ......................................................................71
5.6 Uncertainty budget .................................................................................................72
5.7 Proof of conformity .................................................................................................72
5.8 Proof of capability of the measurement process .....................................................73
6 Measurement uncertainty determination in measurement process ................................75
6.1 Basic procedure .....................................................................................................75
6.2 Practical determination of typical standard uncertainties ........................................75
6.3 Influencing variables in measurement system ........................................................77
6.3.1 Maximum permissible error of the measurement system (MPE) – uMPE ..........77
6.3.2 Display resolution – uRE ...................................................................78
6.3.3 Calibration uncertainty of the standard – uCAL .................................................78
6.3.4 Repeatability at the standard – uEVR ...............................................79
6.3.5 Systematic measurement error – uBI ...............................................79
6.3.6 Standard uncertainty from linearity error – uLIN ...............................................80
6.3.7 Further influencing variables in the measurement system – uMS-REST ............83
6.3.8 Determination of the uncertainties according to the “measurement system test” (MS test) ..................................................................................................83
6.4 Measurement process influencing variables ...........................................................85
6.4.1 Repeatability on the test part – uEVO ...............................................86
6.4.2 Reproducibility – uAV ........................................................................86
6.4.3 Interaction – uIA ................................................................................86
6.4.4 Reproducibility of measurement systems – uGV ..............................................87
6.4.5 Stability of the measurement process – uSTAB (short-term stability) ................87
6.4.6 Inhomogeneity of the test part – uOBJ...............................................88
6.4.7 Temperature – uTEMP ........................................................................89
6.4.8 Other influencing variables in the measurement process – uMP-REST .............93
6.4.9 Determining the uncertainties according to the “Test Measurement Process” (Test MP) ....................................................................................................93
6.5 Typical measurement uncertainty budget ...............................................................94
6.6 Overview of typical measurement process models .................................................94
6.7 Preselection of measurement systems ...................................................................96
6.7.1 Motivation, requirements .................................................................96
6.7.2 Sources of information for determining important specifications of measuring equipment ..................................................................................................97
6.7.3 Characteristic values for the evaluation of the selection of measuring/test equipment and assistive devices ................................................................98
6.7.4 Categories of measuring equipment and sources of information of the specifications/characteristic values ............................................................99
7 Proof of capability of the measurement process ..........................................................100
7.1 Calculation of capability ratios ..............................................................................100
7.1.1 Capability ratio QMS for the measurement system ........................................100
7.1.2 Capability ratio QMP for the measurement process ......................................101
7.1.3 Capability ratios QMS and QMP with one-sided specification limits...............102
7.1.4 Minimum possible tolerance for measurement systems/measurement processes ..................................................................................................106
7.1.5 Capability of measurement processes and capability of manufacturing processes ..................................................................................................106
7.2 Evaluation of capability ratios ...............................................................................108
7.3 Documentation and reporting of proof of capability ...............................................109
7.3.1 Test report of the proof of capability ..............................................110
7.3.2 Documentation of the inspection process capability ......................................111
7.4 Handling of unsuitable measurement systems/processes ....................................111
7.4.1 Procedures for process optimisation .............................................112
7.4.2 Risk analysis and conditional approval ..........................................113
7.4.3 Reflection and, if necessary, coverage of the limit values .............................113
7.4.4 Coverage of the characteristic tolerances......................................114
7.4.5 Special strategies ..........................................................................114
8 Special measurement processes ...............................................................................117
8.1 Classification and mating......................................................................................117
8.2 Validation of measurement software ....................................................................120
8.3 Insufficient sample sizes for MS and MP test .......................................................121
8.4 Consideration of the measurement uncertainty in the development ......................122
9 Proof of capability of attribute inspection processes ....................................................124
9.1 Basic preliminary remarks ....................................................................................124
9.2 Proof of capability for attributive inspection processes .........................................125
9.3 Notes on the composition of a representative test lot ...........................................127
9.4 Notes on the composition of the test lot on the basis of conditional probabilities .127
9.5 Possible methods for the evaluation of attributive inspection processes ...............129
9.5.1 Methods for characteristics that have been made discrete ............................129
9.5.2 Methods for discrete characteristics ..............................................136
10 Assessment of continuous capability .......................................................................145
10.1 Methods ...............................................................................................................145
10.2 General notes on planning, implementation and documentation...........................146
10.3 Application of a stability chart (control chart) ......................................................147
10.4 Consideration in the uncertainty budget ...............................................................150
11 Index of formula symbols .........................................................................................151
12 References ..............................................................................................................154
13 Index........................................................................................................................157
List of Illustrations
Figure 2-1: Overview of the VDA 5 Chapters, new contents compared to the previous volume
(blue) ....................................................................................................................................14
Figure 3-1: Relationship between measurement system, measurement process and test
process based on VIM and ISO 3534 ...................................................................................16
Figure 3-2: Relationships in the attributive case analogous to Figure 3-1 .............................17
Figure 4-1: Consideration of measurement uncertainty at the specification limit ...................26
Figure 4-2: α and β errors in the test decision – as a graph ..................................................27
Figure 4-3: α and β errors in the test decision – as a table ...................................................27
Figure 4-4: Consideration of measurement uncertainty in the test decision ..........................28
Figure 4-5: Superimposition of process dispersion and measurement uncertainty ................29
Figure 4-6: Influence of increasing measurement uncertainty on the acceptance zone ........30
Figure 4-7: Roles in the test process management ...............................................................31
Figure 4-8: Preselection of test processes for risk-based safeguarding (read from left to right)
.............................................................................................................................................36
Figure 4-9: Exemplary Matrix for determining the level of protection.....................................41
Figure 4-10: Requirements for the specification of products .................................................43
Figure 4-11: Schematic sequence of test process planning (supplementary to graphic) .......44
Figure 4-12: Extended sequence of test process planning ...................................................46
Figure 4-13: Procedure of a measurement process capability ..............................................51
Figure 4-14: Ishikawa diagram with the 5M of the measuring technique ...............................52
Figure 4-15: Measurement system and measurement process capability .............................54
Figure 4-16: Spider’s web diagram for variation of the input parameters ..............................57
Figure 4-17: Handling of unsuitable measurement systems/measurement processes ..........58
Figure 5-1: Important influences on the measurement uncertainty results ............................60
Figure 5-2: Measurement errors for measurement in accordance with DIN EN ISO 14253-2
[25] .......................................................................................................................................62
Figure 5-3: Procedure for assessing the capability of test processes....................................66
Figure 5-4: Complete measurement result............................................................................71
Figure 5-5: Representation of the guard bands to prove conformity......................................73
Figure 6-1: Determination of the linearity with maximum bias ...............................................82
Figure 6-2: Determination of linearity with ANOVA ...............................................................83
Figure 6-3: Recommended position of the dimensional scale (2 standards) .........................84
Figure 6-4: Recommended position of the material measures (3 standards) ........................85
Figure 6-5: Influence of temperature on the test process ......................................................89
Figure 7-1: Unilateral tolerance ..........................................................................................103
Figure 7-2: Lower one-sided tolerance with ranges for calculating the capability quotient ..104
Figure 7-3: Upper one-sided tolerance with operating point/nominal value .........................105
Figure 7-4: Representation of the observed C-value Cp,obs above the actual C-value Cp,real as a function of QMP ...................................................................................................107
Figure 7-5: Handling of unsuitable measurement systems/processes ................................111
Figure 7-6: Schematic representation of the FT rule ...........................................................115
Figure 7-7: Reduction of the measurement uncertainty by increasing the number of repeat
measurements n* ...............................................................................................................116
Figure 8-1: General classification model .............................................................................118
Figure 8-2: Example: Result of a suitable measurement process .......................................119
Figure 8-3: Result of an unsuitable measurement process .................................................120
Figure 8-4: Effect is detectable ...........................................................................................122
Figure 8-5: Effect is not detectable .....................................................................................123
Figure 9-1: Possible wrong decisions depending on the capability of the production process
...........................................................................................................................................125
Figure 9-2: Characteristics that are discrete or have been made discrete ...................127
Figure 9-3: Meaningfulness in relation to the uncertainty as a function of the position of the part
in the tolerance ...................................................................................................................128
Figure 9-4: Selection of test parts for the signal detection method ......................................130
Figure 9-5: Results of the signal detection method .............................................................131
Figure 9-6: Value progression of the reference values with determined measurement
uncertainties .......................................................................................................................132
Figure 9-7: Bowker test results ...........................................................................................140
Figure 10-1: Stability chart as x̄ chart and individual/moving range chart ....................148
Figure 10-2: Example manual definition of the action limits with small fluctuations in the range
of 1 digit .............................................................................................................................149
List of tables
Table 1-1: Objectives of selected technical standards, recommendations and guidelines for the
evaluation of test equipment .................................................................................................13
Table 4-1: Recommendations for role-specific qualification in test process management .....34
Table 4-2: Example categories of the consequences of incorrect measurements results/test
decisions ..............................................................................................................................38
Table 4-3: Categories of probability of occurrence of incorrect measurement results/test
decisions ..............................................................................................................................39
Table 4-4: Example for determining the risk class ................................................................40
Table 4-5: Example for determining the risk class in development .......................................40
Table 5-1: General procedure for proving the capability of measurement processes ........67
Table 5-2: k-factors ..............................................................................................................71
Table 5-3: Example uncertainty budget ................................................................................72
Table 6-1: Recommendations for determining uncertainty components ................................76
Table 6-2: Example measurement process models and their uncertainty components .........95
Table 7-1: Relationship between Cp,real and Cp,obs for typical Cp values ......................108
Table 8-1: k values for 95.45% as a function of the degree of freedom ..............................121
Table 9-1: Result matrix for the Bowker Test ......................................................................139
Table 9-2: Results matrix for two examiners .......................................................................142
1 Standards and guidelines
Relevant standards and guidelines for quality management require knowledge of the meas-
urement uncertainty or proof of the capability of the measurement system or measurement
process, often also called capability. Requirements for measurement and test processes are
contained in the documents listed in Table 1-1 as examples.
Table 1-1: Objectives of selected technical standards, recommendations and guidelines for the evalua-
tion of test equipment
VDA 5 aims to combine the requirements and procedures of the existing standards and guidelines into a standardised and practical model for determining and considering the expanded measurement uncertainty. Where necessary, methods of capability analysis established in practice (see MSA [1] and company standards) are integrated. Appropriate answers are given to typical problems regarding the determination of standard uncertainties as well as the expanded measurement uncertainty.
2 Benefits and scope
Measurement systems and measurement processes must be adequately and comprehensively evaluated. This evaluation needs to include all factors that may affect the measurement result, including the calibration uncertainty of reference standards, their traceability to national and international measurement standards, the influence of the test part and the stability of the measurement process.
The benefit of suitable inspection processes is very high for the user, since reliable and correct measurement results form the basis of important decisions, such as:
Figure 2-1: Overview of the VDA 5 Chapters, new contents compared to the previous volume (blue)
The VDA 5 in its 3rd edition describes methods of inspection process management with spe-
cial consideration of a risk-based safeguarding. One of the focal points is inspection process
planning. Furthermore, it presents methods for the determination of the capability ratio for
measurement systems and processes based on characteristic tolerance and measurement
uncertainty.
The volume was primarily developed for geometric measurement procedures, but can also
be used for other measured quantities for which the essential boundary conditions for the
simplified determination of the measurement uncertainty in relation to GUM [28] are fulfilled:
3 Terms and definitions
The most important terms for the application of this document are defined below. Further-
more, the terms and definitions according to ISO 3534-1 [13], DIN ISO 10012 [10], the VIM (International Vocabulary of Metrology) [17], DIN V EN 13005 (GUM) [28], DIN EN ISO 14253
[24] and DIN 1319 [6-8] apply.
Figure 3-1: Relationship between measurement system, measurement process and test pro-
cess based on VIM and ISO 3534
The following illustration applies for the attributive case:
Figure 3-2: Relationships in the attributive case analogous to Figure 3-1
Most of the following terms are taken from standards (see the relevant literature reference). Some
terms are often colloquially referred to by other names. These terms are added in brackets and are
used in several places throughout the text.
User
Person with relevant qualifications who carries out the measurement and inspection process.
Standard uncertainty u(xi) [28]
(standard measurement uncertainty or uncertainty component)
Uncertainty of the result of a measurement, expressed as a standard deviation.
Expanded measurement uncertainty (measurement uncertainty) [28]
A characteristic value that identifies a range for the measurement result that can be expected
to comprise a large proportion of the distribution of values that could reasonably be attributed
to the measured variable.
Note 1: The proportion can be regarded as the coverage probability or confidence
level of the range.
Note 2: To associate a specific level of confidence to the range characterised by
the expanded measurement uncertainty requires explicit or implicit assump-
tions about the probability distribution characterised by the measurement
result and the combined standard uncertainty. The level of reliability that
can be attributed to this range can only be known to the extent that such
assumptions are justified.
Remark: The GUM [28] and DIN EN ISO 14253-1 [24] use the formula symbol U for the expanded measurement uncertainty. More recent standards, e.g. ISO 3534-2 [], refer to the upper tolerance limit as U. To avoid confusion, this document uses the symbol UMS for the expanded measurement uncertainty where the text refers to a measurement system and UMP where the text refers to a measurement process.
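The relationship between individual standard uncertainties, the combined standard uncertainty and the expanded measurement uncertainty (treated in detail in Sections 5.3 to 5.5) can be sketched as follows. The component names follow the symbols used in this document; the numeric values are purely illustrative, not taken from VDA 5.

```python
import math

# Illustrative standard uncertainty components of a measurement process
# (values in mm; hypothetical, chosen only to demonstrate the calculation):
u_components = {
    "u_CAL":  0.0010,  # calibration uncertainty of the standard
    "u_EVR":  0.0015,  # repeatability at the standard
    "u_AV":   0.0012,  # reproducibility (operators)
    "u_TEMP": 0.0008,  # temperature influence
}

# Combined standard uncertainty: root sum of squares of the
# (assumed uncorrelated) standard uncertainty components.
u_c = math.sqrt(sum(u**2 for u in u_components.values()))

# Expanded measurement uncertainty with coverage factor k = 2
# (approx. 95.45 % coverage for a normal distribution).
k = 2
U_MP = k * u_c

print(f"u_c  = {u_c:.4f} mm")
print(f"U_MP = {U_MP:.4f} mm")
```

The coverage factor k = 2 corresponds to the usual default in the GUM; other k-factors (see Table 5-2) apply for other coverage probabilities or distributions.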
Testing
(conformity assessment) [13, 28]
Determining one or more characteristics on an object included in the conformity assessment,
according to a certain procedure.
Conformity assessment by observation and assessment accompanied, where appropriate,
by measurement, testing or comparison.
Characteristic [17]
Distinguishing property
Measurement result Y [17]
Set of quantity values being attributed to a measured variable together with any other availa-
ble relevant information.
Note: A measurement result is generally expressed as a single measured value and a measurement uncertainty, Y = yi ± UMP. If the measurement uncertainty is considered negligible for some purpose, the measurement result may be expressed as a single measured value. In many fields, this is the common way of expressing a measurement result.
Bias / Bi [17]
Estimated value of a systematic measurement error.
MSA [1]
MSA stands for Measurement System Analysis. This is a guideline from QS-9000 for the assessment and acceptance of measurement systems.
ANOVA
ANOVA (Analysis of Variance) is a mathematical method for determining variances from
which standard uncertainties can be estimated.
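As an illustration of how an analysis of variance yields standard uncertainties, the following minimal Python sketch estimates the within-group (repeatability) and between-group (e.g. operator) components from a balanced one-way design. The function name and all numeric values are invented for illustration and are not taken from this volume:

```python
import statistics

def anova_components(groups):
    """One-way ANOVA variance components for equally sized groups.

    groups: list of lists, e.g. repeated measurements per operator.
    Returns (u_within, u_between): standard uncertainties estimated from
    the within-group (repeatability) and between-group variance components.
    """
    n = len(groups[0])                                    # measurements per group
    ms_within = statistics.mean(statistics.variance(g) for g in groups)
    group_means = [statistics.mean(g) for g in groups]
    ms_between = n * statistics.variance(group_means)
    var_between = max((ms_between - ms_within) / n, 0.0)  # clip negative estimates
    return ms_within ** 0.5, var_between ** 0.5

# three operators, four repeats each (invented numbers, for illustration only)
u_evr, u_av = anova_components([[10.01, 10.02, 10.00, 10.01],
                                [10.03, 10.04, 10.03, 10.05],
                                [10.00, 10.01, 10.02, 10.00]])
```

Negative between-group estimates are clipped to zero, as is usual when the between-group mean square falls below the within-group mean square.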
Measuring [17]
Process of experimentally obtaining one or more quantity values that can reasonably be attributed to a quantity.
Correct value [28]
Value recognised by agreement, attributed to a specific variable under consideration and
subject to an uncertainty appropriate to the purpose.
Note 1: A correct value is sometimes called assigned value, best estimate, agreed
value or reference value.
Note 2: To determine a correct value, numerous measurement results are often
evaluated.
Standard [17]
Realisation of the definition of a variable with specified variable value and associated meas-
urement uncertainty used as a reference.
Sample [41]
Sample (parts) define the quality limits (according to tolerance limits or limits agreed with the
customer).
The term “sample” may only be used for non-measurable (attributive) characteristics.
Reference part
A reference part is a representative test body or test part (e.g. component) with which a
measurement process can be tested, supported, regularly checked or analysed under series
conditions.
Calibration [17]
An operation which, under specified conditions, in a first step establishes a relationship be-
tween the variables provided by standards with measurement uncertainties and the corre-
sponding indications with their associated measurement uncertainties and in a second step,
uses this information to establish a relation for obtaining a measurement result from an indi-
cation.
Note: Calibration should not be confused with adjustment of a measurement sys-
tem, which is often wrongly called “self-calibration”.
Remark: Comparison measurement taken under specified conditions between a
more precise calibration device and the object to be calibrated in order to
estimate the systematic measurement error.
Adjustment [17]
A series of operations performed on a measurement system so that it provides prescribed
readings corresponding to values of a quantity to be measured.
Note 1: Adjustment of a measurement system should not be confused with calibra-
tion, which is a prerequisite for adjustment.
Note 2: After adjusting a measurement system, the measurement system usually
has to be recalibrated.
Remark: Adjustment eliminates the systematic errors of the calibration object that were detected during calibration. It includes all measures necessary to minimise the error of the display.
Setting
Setting means the calibrated actual value of the adjustment standard (material measure) is
transferred to the measuring machine under real operating conditions; the user prepares the
instrument for operation on site. Setting can include calibration and adjustment.
Resolution [17]
The smallest change of a measured variable that causes a noticeable change in the corre-
sponding display.
Measurement system [17]
A combination of measuring machines and often other equipment and, where necessary, re-
agents and utilities arranged and adapted to provide information to obtain readings within
specified intervals for quantities of specified kinds.
Inspection process
Comparison of the measurement result, taking into account the determined measurement uncertainty, with a given specification; carrying out the test and reaching a test decision.
Measurement stability (stability) [17]
Property of a measuring machine according to which its metrological characteristics remain constant over time.
Note: Measurement stability can be quantified in various ways as:
Example 1: The duration of a time interval over which a metrological property changes
by a given amount.
Example 2: Change of a property over a given time interval.
Remark: The verification of the measurement stability is demonstrated by continuous
monitoring of the measurement process capability (see Chapter 10).
Verification [17]
Provision of objective evidence that a unit of observation fulfils specified requirements.
Validation [17]
Verification that the specified requirements are appropriate for the intended purpose.
Control chart
A control chart, also referred to as quality control chart or QCC, is applied in statistical process control. A QCC generally consists of a level path and a dispersion path together with specified action limits. Statistical values such as sample means and sample standard deviations are plotted on the respective path of the QCC.
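The action limits of the level path can be sketched as follows. The 3-sigma mean-chart limits shown are one common convention, and the process level mu, dispersion sigma and subgroup size n are assumed inputs, not values from this volume:

```python
def control_limits(mu, sigma, n):
    """3-sigma action limits for the level path (mean chart) of a QCC,
    for subgroups of size n from a process with level mu and dispersion sigma."""
    half_width = 3.0 * sigma / n ** 0.5
    return mu - half_width, mu + half_width

# illustrative: level 10.0 mm, dispersion 0.02 mm, subgroups of 4
lcl, ucl = control_limits(10.0, 0.02, 4)  # approx. 9.97 and 10.03
```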
4 Inspection process management
The inspection process management has two central tasks (Chapter 4.1):
1. Securing test results as a necessary prerequisite for the assessment of product safety
and conformity (Chapter 4.1.1)
2. Ensuring the evaluation of the capability of inspection processes as a necessary pre-
requisite for the industrialisation of production processes in terms of economic pro-
duction (Chapter 4.1.2).
Inspection process management must be regulated in the form of processes, procedures and
responsibilities (Chapter 4.2 and Chapter 4.6). The effort for the inspection process manage-
ment should be commensurate with the relevance of the characteristic for the quality of the
final product (risk-based approach, Chapter 4.3). The inspection process management con-
sists of the following subprocesses:
Inspection processes are included in the entire product creation process (development and
production). A consistently implemented inspection process management brings numerous
benefits and advantages:
The liability risk is minimised (fewer β errors / type 2 errors in the test decision; a β error means that a test part is accepted although it is actually outside the specification, Chapter 4.1.1)
Capable and regulated inspection processes form the basis for ensuring efficient and
economical procedures and achieving significant competitive advantages. Manufac-
turing costs are reduced due to less scrap and rework.
Information gained supports the inspection process planning and production control
to a large extent and makes a considerable contribution to the company’s success.
The inspection process management effectiveness must be evaluated at planned intervals in
accordance with DIN EN ISO 9001 [14] or IATF 16949 [2].
4.1 Inspection process management tasks
Securing test results taking into account measurement uncertainty according
to DIN EN ISO 14253-1
The uncertainty of the measurement result (or of the attributive test) may lead to an incorrect test decision, resulting in test parts that are actually within the specification being rejected (type 1 or α error) or test parts outside the specification being accepted (type 2 or β error); see the following Figure 4-3 and Figure 4-2. Both incorrect decisions can have more or less serious technical, economic and legal consequences (liability). [49] [VDI/VDE 2600, Sheet 1:2013]
Figure 4-3: α and β errors in the test decision – as a table
Unless otherwise agreed between manufacturer and purchaser, the decision rules based on DIN EN ISO 14253-1:2018 apply: to reduce the risk of an α or β error, the measurement uncertainty must be determined and taken into account at the specification limits. Figure 4-4 shows an example of the acceptance zone.
One way to significantly reduce the risk of a β error is to narrow the specification range/tolerance to an acceptance range. The limit for confirming the conformity of a characteristic is then no longer the lower or upper specification limit (LSL/USL) but the lower or upper acceptance limit (AL). Measured values in the shaded uncertainty range and in the non-acceptance range are assessed as non-conformity of the characteristic. The definition of the guard band ensures that decisions on the conformity of characteristic values are made with a sufficiently low probability of error.
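The guard-banding described above can be sketched as follows. The default decision rule of DIN EN ISO 14253-1 corresponds to a guard band of one expanded uncertainty (g = 1); all numeric values below are illustrative only:

```python
def acceptance_limits(lsl, usl, u_mp, g=1.0):
    """Shrink the specification zone by a guard band g * U_MP at each limit
    (the default decision rule of DIN EN ISO 14253-1 corresponds to g = 1)."""
    al_lower = lsl + g * u_mp
    al_upper = usl - g * u_mp
    if al_lower > al_upper:
        raise ValueError("guard bands overlap: U_MP too large for the tolerance")
    return al_lower, al_upper

# illustrative: a 20 mm +/- 0.05 mm characteristic with U_MP = 0.008 mm
al_low, al_up = acceptance_limits(19.95, 20.05, 0.008)  # acceptance zone approx. 19.958 ... 20.042
```

Measured values between the acceptance limits confirm conformity; values in the remaining uncertainty ranges or outside the specification do not.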
Further details on the consideration of the measurement uncertainty in the test decision are
described in Chapter 5.7.
The measurement processes are influenced by the measurement uncertainty. The dispersion
that can be assigned to the production process – known as process-specific dispersion – is
overlaid by the measurement uncertainty (see Figure 4-5). Only the observed process disper-
sion (total process dispersion) is visible.
Figure 4-5: Superimposition of process dispersion and measurement uncertainty
The measurement uncertainty thus has a negative effect on quality assurance in two ways: on
the one hand, as the measurement uncertainty increases, an ever greater guard band from
the specification limit must be maintained to minimise the risk of a beta error. On the other
hand, the observed process dispersion increases with increasing measurement uncertainty,
so that more and more measurement results are observed near the specification limit (see
Figure 4-6).
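The superimposition can be expressed as a square addition of dispersions; a minimal sketch with illustrative values (the sigma values are assumptions, not from this volume):

```python
def observed_dispersion(sigma_process, u_measurement):
    """Variances add: the observed (total) process dispersion is the process-
    specific dispersion inflated by the dispersion of the measurement process."""
    return (sigma_process ** 2 + u_measurement ** 2) ** 0.5

# illustrative values: sigma_process = 0.010 mm, u = 0.004 mm
sigma_obs = observed_dispersion(0.010, 0.004)  # approx. 0.0108 mm
```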
The tolerance, the dispersion of the production process and the permissible measurement uncertainty must therefore be coordinated with each other, for the purpose of economic production, in such a way that a capable and stable production process is guaranteed. In addition, it is specified that a defined ratio of measurement uncertainty to the tolerance of the characteristic must not be exceeded for the verification of the capability of the inspection process.
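One common form of such a ratio relates twice the expanded uncertainty to the tolerance. The sketch below assumes this form and an illustrative 30 % limit; the actual acceptance limits are defined later in this volume or company-specifically:

```python
def capability_ratio(u_expanded, tolerance):
    """Q = 2 * U / T * 100 %: share of the tolerance consumed by the expanded
    measurement uncertainty at both specification limits together."""
    return 2.0 * u_expanded / tolerance * 100.0

# illustrative: U_MP = 0.012 mm on a tolerance of 0.1 mm
q_mp = capability_ratio(0.012, 0.1)  # 24 %, below an assumed 30 % limit
```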
Figure 4-6: Influence of increasing measurement uncertainty on the acceptance zone
The roles are listed below, with examples of responsibility (see Figure 4-7). Several roles can
be performed by one responsible person. In the event of conflicts of interest, impartiality and
confidentiality must be maintained (see ISO/IEC 17025:2017 [22]). The tasks are assigned to
roles by way of example but can also be assigned differently in companies in individual cases.
In addition, the role of the auditor, who carries out process audits for inspection process management and thus checks compliance with specifications, is described.
Figure 4-7: Roles in the test process management
Product Developer
Development and construction of the product
Determination of the product characteristics including specification limits (tolerance)
Determination of the relevance of the characteristic for the function of the product
(e.g. in the context of a design FMEA: see FMEA manual AIAG/VDA:2019 [45])
Solving technical tasks
Determination of the probability of the occurrence of products bordering on the speci-
fication limits (e.g. within the scope of a process FMEA: see FMEA manual
AIAG/VDA:2019 [45])
Carrying out inspection process planning based on the characteristics from product development and the planning of the production process
Evaluating the inspection process in the context of the risk-based safeguarding of test
decisions (determination of the risk class and the resulting degree of safeguarding)
Creating specifications for the test equipment including definition of the acceptance
criterion for the proof of measurement system capability (for universal measurement
systems such as a coordinate measuring machine: define representative characteris-
tics)
Initiation of the purchase order by the procurement department (test equipment)
Planning of the initial training on the test equipment by the supplier
Validation of the measurement system software e.g. by comparison measurement
with a known sample component or standard
Organisation of the initial acceptance of the measurement system including proof of
the measurement process capability
Transfer of the measurement system for monitoring to the test equipment manage-
ment system including definition of the boundary conditions for monitoring the test
equipment such as the calibration interval
Transfer of the measurement system to the test equipment operator
It is essential that the roles of product development, production process planning and inspec-
tion process planning are coordinated in order to attune tolerances, production dispersion
and measurement uncertainty in the sense of a capable production process (see Chapter
4.4).
Administrative activities concerning the test equipment
Organising regular monitoring of test equipment
Commissioning the calibration with the internal or external service provider
Identification of the calibration status
Checking compliance with the calibration date
Blocking faulty test equipment
In the case of non-compliant test equipment: triggering the associated process
Bears the main responsibility for measurement and test system/test process
Managing the measuring equipment/test equipment used
Introducing and implementing the required processes
Creating process descriptions
Ensuring the necessary employee qualifications
Determining test intervals
Blocking faulty test equipment
All employees must be suitably qualified for the roles assigned to them. The following Table
4-1 shows recommendations for role-specific qualifications.
Table 4-1: Recommendations for role-specific qualification in test process management
The following list describes the minimum requirements for the corresponding qualification
from the point of view of VDA 5 and does not claim to be exhaustive:
Quality management
Placing products on the market (product liability)
Monitoring and measurement resource requirements
Control of documented information
Specifying product characteristics
Conformity, non-conformity and their consequences
Release processes, response in case of non-conformity
Opportunities and risks
Continuous improvement
Measurement technology
Measuring and testing
Measurement uncertainty and its effect on testing
Factors influencing measurement uncertainty
Need for regular calibration
Qualification for test equipment management
Requirements for a calibration certificate
Defining calibration intervals
Need for documentation of calibration procedures
The risk-based safeguarding of inspection process capability may not be applied to the fol-
lowing characteristics.
In testing as part of the development and qualification phase:
Release check,
Type testing, and
Legal guidelines.
In the remaining development and production:
Special characteristics SC S
(Safety requirement/product safety/safety-relevant consequences, with immediate
danger to life and limb)
Special characteristics SC A
(Approval-relevant, legal and official requirements at the time the product is placed on
the market)
The highest degree of protection must be ensured for these characteristics.
Figure 4-8: Preselection of test processes for risk-based safeguarding (read from left to right)
If no product information is determined, or if adjustment and assembly aids are used where the resulting characteristic is monitored at a later date with a test device, the lowest degree of protection may be used (see Figure 4-8).
Safeguarding at the lowest risk (1) according to Figure 4-8:
Verification of capability for the measurement task (e.g. by means of the data sheet
for the measuring equipment)
Is not subject to the obligation to monitor test equipment
No statistical proof of capability required
Continuously ensure that it is damage-free and fully functional
The calibration uncertainty must be determined and documented
The calibration uncertainty must be taken into account for the statement of conformity
In case of a negative calibration result, a documented risk management process must
be initiated. This process must be evaluated in terms of its effectiveness and effi-
ciency.
Traceability must be ensured
The risk-based safeguarding of test decisions is based on the determination of a risk class for the respective inspection process along two dimensions:
1. the consequences and
2. the probability
of an incorrect test decision. Alternatively, the risk assessment for the respective inspection process can also be based on a preceding FMEA.
The risk class defines the degree of protection in the processes.
Table 4-2: Example categories of the consequences of incorrect measurement results/test decisions
The consequence of an incorrect test decision must be assessed by the technical bodies,
which can evaluate the relevance of the characteristic for the quality of the final product or
process.
The probability of occurrence of an incorrect test decision depends on the process perfor-
mance and the measurement uncertainty of the measurement process or measurement sys-
tem used. The estimation of the probability of occurrence of an incorrect test decision is car-
ried out, for example, according to Table 4-3.
Table 4-3: Categories of probability of occurrence of incorrect measurement results/test decisions
The probability of an incorrect test decision shall be assessed by the body having the compe-
tence to assess the capability of manufacturing processes and the measurement uncertainty
of inspection processes.
4.3.2.2 Derivation of the risk class for the individual inspection process
The risk class is determined depending on the consequences and probability of occurrence
of an incorrect test decision. Alternatively, the assessment of the risk for the respective in-
spection process can also be based on a preceding FMEA.
The evaluation of the risk class may differ in development from the evaluation in production.
The result of the risk assessment must be subjected to document control as documented in-
formation.
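The derivation of the risk class from the two dimensions can be sketched as a simple lookup. The mapping below is purely hypothetical and invented for illustration; the real categories and matrix are those of Table 4-2, Table 4-3 and the degree-of-protection matrix, or are company-specific:

```python
# Purely hypothetical mapping, for illustration only: the real categories and
# matrix are defined in Table 4-2, Table 4-3 and Figure 4-9 (or company-specific).
RISK_CLASS = {
    # (consequence category, probability category), 1 = low ... 3 = high
    (1, 1): "low",    (1, 2): "low",    (1, 3): "medium",
    (2, 1): "low",    (2, 2): "medium", (2, 3): "high",
    (3, 1): "medium", (3, 2): "high",   (3, 3): "high",
}

def risk_class(consequence, probability):
    """Combine the two dimensions of an incorrect test decision into a risk class."""
    return RISK_CLASS[(consequence, probability)]
```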
4.3.2.3 Degree of protection
Depending on the risk class of the characteristic, the effort and quality of the assurance of
measurement results / test decisions can be scaled according to the following matrix (Figure
4-9).
Complaint process of test systems, inspection processes in the application
The complaint process is used only if deviations from the standard process occur during the use or recalibration of test equipment. This may occur, for example, in the following cases:
If the test equipment is damaged during use so that it can no longer be calibrated. The results obtained between the last calibration and the time of damage can then no longer be safeguarded by a re-calibration.
If the error limit is exceeded during calibration.
If it is determined that the test equipment is no longer suitable for its intended use.
If one of the above-mentioned cases occurs, the responsible operator of the test equipment
must initiate measures for both the test equipment and the product.
The measures for the product shall include as a minimum:
Performing a risk assessment on the impact of the potential NOK test equipment on
the test result. This risk assessment must involve all relevant parties (planning body,
user and test equipment body). The result must be documented in a suitable way.
Introduction of measures, also retroactively, to ensure product and process quality.
The measures are to be documented and their effectiveness checked.
Provision of demonstrably suitable replacement test equipment (if necessary).
The measures for the test equipment include as a minimum:
The test equipment shall be marked as “blocked” and protected against unauthorised
use. If it is not possible to place a blocking mark on the test equipment, the test
equipment user must be informed about the errors on the test equipment and the fur-
ther procedure.
Inform all parties involved.
Document the calibration results before any repair or adjustment.
The authorised department (e.g. cost centre manager) decides on the further proce-
dure
o Repair,
o Scrapping,
o Continued use with limited measuring range.
This decision is made on the basis of advice from the test equipment officer and the test equipment office regarding repair, calibration, new procurement and/or scrapping of the test equipment.
In case of scrapping, the test equipment department documents this in the test equip-
ment monitoring system. Before the test equipment is scrapped, approval markings
and the test equipment accompanying card, if available, must be removed.
4.4 Inspection process planning
Inspection process planning as an integral part of the product development process is an es-
sential part of inspection process management. Inspection process planning describes a
possible path ranging from the specification of a characteristic (e.g. a geometric specification
in a drawing), the correct selection of the required resources with measuring and test equip-
ment, the complete proof of capability with continuous monitoring, up to a statement of con-
formity for a product manufactured according to this specification. Inspection process plan-
ning is thus both a basis for proving that the functional requirements of a product are fulfilled
and the basis for avoiding product liability risks.
Interdisciplinary cooperation between all responsible departments is necessary to be able to
fulfil this task effectively. In this way, department-specific comprehensive knowledge, e.g. on
the intended function of a component or characteristic, the specific properties of the
(planned) manufacturing processes and the expected environmental conditions, as well as a
comprehensive understanding of measurement processes can be taken into account to the
maximum extent possible.
In addition, a comprehensive, clear and correct specification of the characteristic to be tested, and an understanding of it, form the basis of all subsequent planning activities.
In this context, special attention is paid to the definition of the specification limit. This represents the variation limit of a characteristic up to which the functionality of a product is guaranteed. The mechanisms of inspection process planning, namely the components measurement process capability (determination and consideration of the influence of the inspection process) and test concept (determination and consideration of the characteristic values and their position relative to the specification limit), serve to prove compliance with this limit during the manufacture of a product.
In this context, information regarding the criticality of the characteristic to be tested (cc/s, cc/h, sc/f) is derived from risk considerations for the component or its production process and is used both in the preparation of the test concept and in the assessment of the criticality of the inspection process.
Since the focus of development during the preparation of the specification is on the “functionally appropriate” property, integrating the inspection process planning with its focus on the “testable” property can reduce the effort and, above all, the necessary iteration loops.
The flow chart above shows the main components of inspection process planning and their interrelationships. Furthermore, all participating roles are assigned as examples.
The test concept is created on the basis of the input variables described above. In addition to developing the test method (e.g. inline testing, offline testing, ...), determining the test frequency (100% testing/sample testing) and the response to NOK test results, the test concept also includes all information required to carry out the tests.
When defining the test concept, attributive tests should be avoided as far as possible in fa-
vour of measuring methods. Attributive tests have clear systemic weaknesses (see Chapter
9) and should therefore be used only in exceptional cases and with special consideration of
the criticality of the criterion to be tested.
If a test concept is available, the measurement concept can be derived from it with the es-
sential measurement process and measurement system components. The measurement
concept forms the basis for the specification of the measurement systems to be procured. In
this context, the definition of the required environmental conditions and the measurement un-
certainty of the measurement system and measurement process required for the fulfilment of
the testing task shall be explicitly stated. These steps allow a pre-selection of the measure-
ment system to be procured.
The safeguarding of the Inspection process can be planned on the basis of the measurement
concept, together with the test concept and, if necessary, further internal specifications, ac-
cording to Chapter 4.3.
The type of safeguarding influences the requirements and the procedure for validation of the
measurement system and the measurement process. When planning the validation, the fol-
lowing points are among those which must be considered:
The measurement system is procured on the basis of the specification described above. Ac-
ceptance criteria include proof of the above-described requirements for the capability of the
measurement system and the acceptance procedure described.
Carrying out the validation of the measuring equipment and, based on this, of the measurement process according to the planned risk-based safeguarding forms the validation phase in the inspection process planning.
Figure 4-12: Extended sequence of test process planning
A positive result of this validation is the prerequisite for the handover of an inspection process into series production. If this cannot be achieved, improvements must be made.
The last planning component in the inspection process planning is the definition of specifications for the test equipment management, especially in the field of calibration activities such as procedure and frequency. The risk associated with the inspection process and the results of the validation of the measurement process are significant influencing factors.
Furthermore, in this context, specifications are made for the continuous monitoring of the measurement systems (see Chapter 10).
The Inspection process planning is completed with the handover of the test equipment to the
operator after positive validation. All necessary documentation must also be handed over.
From this point on, the procedures defined within the scope of the inspection process plan-
ning for the application and monitoring of the measurement system and the measurement
process take place.
The sequence can be extended taking into account the above-mentioned explanations (see Figure 4-12).
The inspection process planning makes an essential contribution to specifications of the
measurement system or measurement process to be used. It is recommended to use the
specifications developed in the course of these planning activities in the form of a require-
ment specification as a relevant document in a procurement process.
4.5 Inspection equipment management
Inspection equipment management is one of the four supporting pillars of inspection process
management and an essential component for the evaluation of product safety and conform-
ity. The task of test equipment management is to provide suitable resources for ensuring
valid, reliable and comparable monitoring and measurement results.
The following topic areas are intended to ensure the quality, reliability and usability of the test
equipment:
a) Input for the organisation of gauge management
Regulating documents
Definition of responsibilities
Traceability (test equipment to test part)
b) General conditions for calibration activities
Requirements for testing laboratories
Competence and qualification of employees
Calibration and testing instructions
Calibration, maintenance
c) Approval process
Proof of capability
Initial acceptance and release of the test equipment
d) Monitoring test equipment
Test equipment management/test equipment monitoring system
Identification/marking of test equipment
Assignment of calibration point (internal, external) for test equipment
Adjustment of calibration interval based on experience
Error-free test software/validation
Calibration status
Standards, traceability and calibration chain
Calibration certificate
Usage decision
Reminder process
e) Response to a not-in-order (NOK) result (risk management)
Complaint process
Procedure for the detection of faulty test equipment
Repair
The test equipment must be managed in a test equipment management system. The follow-
ing minimum requirements must be met by this system:
All test equipment must be recorded in the system and must be clearly identifiable
The following information on the test equipment must be managed in the system
o Status of the test equipment
- In use
- Being tested
- Deactivated
- Scrapped*
o Approval decision for the use of the test equipment including the approver, ap-
proval date and the associated documentation such as the calibration certifi-
cate
o User or operator
o Date of the next calibration
o Calibration interval
The test equipment’s history must be recorded.
The data for the test equipment as well as records such as calibration certificates
must be archived and clearly assigned to the test equipment. They represent docu-
mented information according to VDA 1 [44].
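The minimum information listed above can be sketched as a record structure; the field and type names are illustrative, not prescribed by this volume:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Status(Enum):
    IN_USE = "in use"
    BEING_TESTED = "being tested"
    DEACTIVATED = "deactivated"
    SCRAPPED = "scrapped"

@dataclass
class TestEquipmentRecord:
    """Minimum data per item of test equipment, mirroring the list above."""
    equipment_id: str             # unique identification in the system
    status: Status
    approved_by: str              # approver of the usage decision
    approval_date: date
    calibration_certificate: str  # reference to the archived documented information
    operator: str                 # user or operator
    next_calibration: date
    calibration_interval_months: int
    history: list = field(default_factory=list)  # recorded equipment history
```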
An internal calibration laboratory may also continue to carry out calibrations. There is no
obligation to have calibration procedures of internal calibration laboratories accredited ac-
cording to ISO/IEC 17025:2017 [22].
The calibration certificates should meet the requirements of ISO/IEC 17025:2017 [22]:
Determination of the measurement uncertainties of the calibration processes and
statement of the measurement uncertainty in the calibration protocol
Regular audits to be conducted to review the requirements in accordance with
ISO/IEC 17025:2017 [22]
Internal calibrations offer the following advantages and opportunities within the company:
As a minimum, the verification shall take into account the following elements:
The foundation of any proof of capability is a detailed description of the measurement pro-
cess. Topics such as the variable to be tested, the measuring principle used, but also a de-
scription of the measuring procedure and the ambient conditions prevailing at the measuring
location are important.
Based on the description, the measurement process is analysed with regard to the expected
influencing parameters. These influencing parameters can thereby be identified. In addition,
a strategy for determining the measurement uncertainty for each influencing parameter is de-
veloped within the framework of the analysis of the measurement process. At the same time,
the analysis determines which measurement processes can be combined into measurement
process groups (see Chapter 0). It is strongly recommended to process the analysis in an in-
terdisciplinary team on the basis of an Ishikawa diagram for the 5M of the measuring tech-
nique (see Chapter 4.6.1).
The results of the analysis of the measurement process lead to a model of the measurement
process, which serves as a basis for the calculation of the measurement uncertainty of the
measurement process.
The individual uncertainty components are determined based on the measurement process model. This can be based on experiments (method A) or on previous knowledge (method B) (see Chapter 6).
The proof of measurement process capability is concluded with complete documentation of the entire process, from the process description to the result of the proof of capability (see Chapter 4.6.5).
Analysis, grouping and modelling of inspection processes
For the analysis and grouping of the inspection processes, it is important to identify where the system boundaries for the proof of capability are drawn, on the basis of the 5M of measurement technology (see Figure 4-14).
A strategy for building up a measurement uncertainty budget can then be derived on the basis of this classification of the parameters. If parameters are fixed, additional testing must be carried out whenever the actual values deviate from the fixed values. The measurement uncertainty can be derived by specific variation of influencing parameters for different characteristics, e.g. different users.
According to GUM [28], modelling is the creation of a closed mathematical equation and the
formation of partial derivatives. Chapters 5 – 7 of this volume describe a simplified approach
that is applicable to most geometric measurement systems.
Exclusion criteria for the application of these simplified equations are, for example:
• Non-linear relationships
• Correlations
• Non-normally distributed values for repeat measurements
• Processing of the measurement results (e.g. filtering)
If the simplified equations cannot be applied, modelling according to GUM [28] must follow.
The simplified procedure described in the following chapters is divided into two stages (see
Figure 4-15).
The two stages of the procedure make it possible to consider the influences of the
measurement system separately from the measurement uncertainty under operating
conditions. At the same time, after testing the measurement system capability it can already
be assessed whether the proof of capability for the measurement process can succeed (basic
requirement). In this way, the experimental effort spent on unsuitable measurement systems
can be reduced.
Figure 4-15: Measurement system and measurement process capability
Transferability to new inspection processes
Once the measurement uncertainty has been determined for one inspection process, the
question arises as to whether the measurement uncertainty determined can be transferred to
other similar inspection processes.
Example 1: A cylinder block is tested on two coordinate measuring machines from the same
manufacturer, of the same type and in the same measuring room. Can it be assumed that
the measurement uncertainty of both devices is comparable?
Example 2: A cylinder block is tested on two coordinate measuring machines from the same
manufacturer, of different types and in the same measuring room. Can it be assumed that
the measurement uncertainty of both devices is comparable?
Example 3: A new generation of cylinder blocks with differing characteristics will be tested on
the existing coordinate measuring machines for which a proof of capability has already been
issued. Can it be assumed that the determined measurement uncertainty also applies to the
new characteristics?
In general, transferability is only possible if the uncertainty contributions do not differ signifi-
cantly between the individual inspection processes. In order to evaluate the significance and
to be able to derive decisions, the uncertainty contributions on the Ishikawa diagram along
the 5M of the measurement technique must be considered by an interdisciplinary team (see
Figure 4-14).
It can be assumed in the case of example 1 that the determined measurement uncertainty
can be transferred without further testing.
For example 2, it must be assumed that the error limits of the different machine types have a
significant influence on the repeatability on the test part. In this case, it is recommended that
a separate capability test be carried out for each coordinate measuring machine. If
necessary, parts of the measurement uncertainty budget, such as the influence of
temperature strain on the measured parts, can be taken from the proof of capability that has
already been performed.
An alternative would be to determine the measurement uncertainty for the coordinate
measuring machine with the poorer repeatability on the test part and to transfer this
measurement uncertainty to the other coordinate measuring machine. The disadvantage of
this alternative is that the measurement uncertainty of the coordinate measuring machine
with the lower combined measurement uncertainty is overestimated, and the higher
measurement uncertainty must be taken into account as a guard band for both machines
when making the test decision.
For example 3, it must be checked whether the new characteristics are covered by the
existing proof of capability. This can be evaluated, for example, by means of a spider’s web
diagram (see Figure 4-16), which shows the relevant boundary conditions under which the
proof of capability was originally provided: in this example, characteristics in the value range
from 0 to 200 mm, the prevailing temperature conditions in the measuring room, a defined
temperature range of the measured part, one material and one coordinate measuring
machine.
In the following example, a proof of capability was carried out for the measurement of a bore
distance on the cylinder block in the range from 0 to 200 mm (see Figure 4-16, parameter
space highlighted in blue). Additional characteristics are now added with a distance of 170
mm (see Figure 4-16, inspection process 1) and a distance of 220 mm (see Figure 4-16, in-
spection process 2). In this case it should be checked whether the determined measurement
uncertainty can be transferred to inspection processes 1 and 2. It is assumed that the meas-
urement uncertainty for inspection process 1 can be transferred, since the new nominal value
and also all other influencing variables lie in the parameter space highlighted in blue. A trans-
fer to inspection process 2 cannot be done across the board, as the new nominal value is not
in the parameter space.
For inspection process 2, therefore, either a completely new proof of capability must be per-
formed, or the measurement uncertainty contribution must be re-determined as part of the
measurement uncertainty budget – experimentally or based on previous knowledge.
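The parameter-space check behind this transferability decision can be sketched as a simple range comparison. The following is only an illustration, not part of the VDA 5 methodology; parameter names and ranges are hypothetical:

```python
# Sketch of the transferability check illustrated by the spider's web diagram
# (Figure 4-16). All names and values are hypothetical; the proven parameter
# space and the process parameters come from the original proof of capability.

def covered(param_space, process):
    """True if every influencing variable lies inside the proven range."""
    return all(lo <= process[name] <= hi for name, (lo, hi) in param_space.items())

param_space = {"distance_mm": (0.0, 200.0), "part_temp_C": (18.0, 22.0)}

process_1 = {"distance_mm": 170.0, "part_temp_C": 20.0}  # inside the proven space
process_2 = {"distance_mm": 220.0, "part_temp_C": 20.0}  # nominal value outside

print(covered(param_space, process_1))  # True  -> uncertainty transferable
print(covered(param_space, process_2))  # False -> new proof or re-determination
```

A single influencing variable outside the proven space is sufficient to block across-the-board transfer, which mirrors the decision for inspection process 2 above.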
The interdisciplinary team must assess whether a completely new proof of capability must be
carried out. The criteria here are, for example:
• The relevance of the characteristic for the quality of the final product
• The reproducibility of the inspection processes
• The relevance of the uncertainty contribution in the measurement uncertainty budget,
and
• The effort involved in carrying out repeat measurements
Figure 4-16: Spider’s web diagram for variation of the input parameters
Dealing with unattained inspection process capability
If the proof of capability is negative, there are several possible courses of action to arrive at
a positive proof of capability (see Figure 4-17):
1. Improvement of the measurement system/measurement process
If the proof of capability is negative, the first step is to improve the measurement system and
the measurement process. This improves the capability ratio 𝑄𝑀𝑆 or 𝑄𝑀𝑃. By reducing the
measurement uncertainty, it may be possible to provide positive proof of capability without
having to adapt the capability ratio limit or the specification of the component.
2. Risk analysis with conditional approval (Chapter 7.4.2)
A conditional approval can only be considered if improvement of the measurement system or
the measurement process does not lead to a sufficient reduction of the measurement
uncertainty. The customer’s consent is also required.
Conditional approval is based on an acceptable breach of the capability ratio limit (Chapter
7.4.3) or on adjustment of the specification limits (Chapter 7.4.4). This can be either a
temporary or a permanent solution. On this basis, a risk analysis must be carried out and
documented in order to identify and evaluate the potential risks of the error.
Applying risk analysis with conditional approval increases the risk of producing components
that deviate from the nominal value.
3. Improvement/substitution of the production process
Independent of the inspection process, the process dispersion can be reduced by an im-
provement or substitution of the manufacturing process. In turn, by reducing the process dis-
persion, it is possible to accept a breach of the capability ratio limit without increasing the risk
of an incorrect test decision or the number of test results in the uncertainty range. This has a
positive effect on the risk analysis in case of a conditional approval.
5 General procedure for inspection process capability
In the case of tests for series monitoring and conformity tests, it must be ensured that “char-
acteristics with regard to tolerance are correctly and reliably identified as being OK (within
the specification limits) or not OK (outside the specification limits)”. It should be noted that in
addition to errors in the measured values caused by production process variations, errors
caused by the measurement process must also be taken into account. Errors due to the
measurement process make the measurement results and thus the test decision uncertain.
They must be known and may only be accepted up to an appropriate level of testing
tolerance. The following chapters present a procedure which aims to identify, quantify and
summarise the influencing variables of the inspection process in an appropriate manner. For
this purpose, individual uncertainty components are determined, presented in an appropriate
manner (uncertainty budget) and combined using statistical methods into the expanded
measurement uncertainty. The relationship between the expanded measurement uncertainty and
the characteristic tolerance allows a statement about the capability of the present inspection
process. In addition, the expanded measurement uncertainty in accordance with the rules of
ISO 14253-1 [24] can be used to demonstrate conformity.
Recurring and dominant influencing components of the measurement system and
measurement process are listed as examples in the following. The standard uncertainties
derived from them are described in detail in Chapter 6.
Use of standards
Standard: see terms in Chapter 3 (standard, reference standard, working standard)
Standards are usually idealised traceable material measures which are used for the calibra-
tion of measurement systems. They are characterised by a calibration certificate with indica-
tion of the measurement uncertainty.
The traceability chain is defined by the application of standards (setting masters on measure-
ment systems in production), working standards, reference standards and national stand-
ards.
Use of samples
Sample: see terms in Chapter 3
Sample (parts) define the quality limits (according to the tolerance limits or limits agreed with
the customer).
The term sample parts* may only be used for attributive characteristics (good/bad tests).
VDA 16 [41] describes samples of the maximum tolerable process situation. MTP samples
(limit samples) define the quality limit levels of the upper/lower tolerance limit (e.g. deter-
mined by customer or development specifications).
Limit samples are sample parts which exceed or fall below the tolerance limit, but barely
cause any impairment for the end customer. These samples must be tested at regular inter-
vals, be usable and correspond to the current quality level.
Note: *These sample definitions are only considered from a metrological point of
view here and not from the perspective of a product-related initial sample
test
Use of reference parts
Reference part: see terms Chapter 3
A reference part is a representative test body or test part (e.g. component) with which a
measurement process can be tested, supported, regularly checked or analysed under series
conditions.
As the use of standards is often not possible, or is very costly, in manufacturing and
assembly areas, the use of “reference parts” has proven itself in practice (gold part,
measuring aid, adjustment aid, functional part).
Usage is varied, e.g. for analyses, stability testing, for the adjustment of measurement sys-
tems in production machines that belong to the machine control, for internal comparisons of
measurement systems, etc.
Serial parts are often used as reference parts. These reference parts must then be marked
separately and provide reproducible measurement results.
The reference part must be checked before use with a traceable measurement system.
As a rule, the reference part may only be used for analysis or adjustment purposes. Calibra-
tion of a measurement system or a measurement uncertainty study of a measurement pro-
cess is only possible if the calibration uncertainty of the reference part can be determined or
estimated.
Figure 5-2: Measurement errors for measurement in accordance with DIN EN ISO 14253-2
[25] (resolution, reference standard, adjustment procedure at operating point(s), linearity
error/systematic measurement error, measurement repeatability)
The measurement errors in a measurement process consist of known and unknown meas-
urement errors from a number of different sources or causes. The traditional term “measure-
ment fault” is to be replaced by the term “measurement error” (DIN 1319-1 [6] or VIM, Chap-
ter 2.16 [17]). In the case of measuring machines or measurement systems, the errors
agreed or prescribed and permitted in various specifications or guidelines (e.g.
VDI/VDE/DGQ 2618 ff [50]) are also referred to as error limits.
Measurement results can show different types of measurement errors (see Figure 5-2):
𝐵𝑖 = 𝑥̅𝑔 − 𝑥𝑚
where 𝑥̅𝑔 is the arithmetic mean of the measured values and 𝑥𝑚 the reference value.
Outliers
Outliers are caused by non-repeatable incidents during the measurement. They can be
caused by interference – electrical or mechanical (e.g. voltage peaks, vibrations). A common
reason for outliers occurring is inadequacies such as incorrect reading, recording or incorrect
handling of measuring equipment by personnel. Outliers cannot be described in advance.
However, they may occur during the measurement test.
5.1.1.3 Measuring method/measuring procedure
The way in which a measurement is carried out or which measurement strategy is selected
has an influence on the measurement result. The mathematical methods used to determine
the measured value also influence the result, e.g.:
• Non-contact/tactile
• Measuring point arrangement
• Number of measuring points
5.1.2.1 Environment/surroundings
Important influencing components of the environment influencing the measurement process
are, for example:
• Temperature
• Lighting
• Vibrations
• Contamination
• Air humidity
With regard to the ambient conditions, the effects of temperature fluctuations on the meas-
ured part, measurement system and clamping device are particularly worthy of mention. This
leads to different measurement results in measurements of length at different temperatures.
If vibrations are suspected or present, they should be analysed and eliminated in advance so
that they do not affect the measurement result.
Chapter 6.4.7 contains proposals for the determination of the standard uncertainty from tem-
perature influences.
5.1.2.2 Person/examiner/operator
Influences of the operator on the measurement uncertainty result from the different abilities
and skills of the examiners when performing the measurement and are, for example:
5.1.2.4 Mounting device
If measuring machines are installed in such devices, they can also influence the measure-
ment result, e.g.:
• Averaging
• Max/min
• Filtering
• Outlier elimination
• Wear on standard/device
• Temperature change (time: winter/summer; day/night)
• Long-term drift
Testing of the long-term stability can also be used to prove the continued capability. For
more details on long-term stability, please refer to Chapter 10.
5.2 Phases of inspection process capability
The evaluation of measurement processes and the consideration of the measurement uncer-
tainty is performed according to the following table (Table 5-1).
Table 5-1: General procedure for proving the capability of measurement processes
The capability of the measurement system takes into account influences that originate from
the measurement system itself, as described in VIM 3.2 [17].
All relevant uncertainty influences that affect the measurement result shall be considered for
the demonstration of the measurement process capability. Furthermore, the characteristic
tolerances must be known both for the assessment of the measurement system capability
and for the proof of the measurement process capability.
The expanded measurement uncertainty 𝑈𝑀𝑃 is determined for the verification of the meas-
urement process capability and the capability ratio 𝑄𝑀𝑃 is used as an evaluation criterion.
The determined measurement uncertainty is available for consideration within the scope of
conformity decisions in accordance with DIN EN ISO 14253-1 [24].
Proof of continued capability (Chapter 10) is provided by continuous monitoring and should
make long-term influences apparent.
The standard deviation 𝑠𝑔 of n single measured values is calculated as:
𝑠𝑔 = √( 1/(𝑛−1) ⋅ ∑ᵢ₌₁ⁿ (𝑥𝑖 − 𝑥̅)² )
The standard deviation is included in the measurement uncertainty budget as standard
measurement uncertainty u(xi) if, as is usual in practical applications, the measurement re-
sult is determined by repeat measurements with single measured values.
𝑢(𝑥𝑖) = 𝑠𝑔
A smaller value for 𝑢(𝑥𝑖 ) is obtained by repeated measurements of single measured values
with subsequent averaging (Chapter 7.4.5.2).
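As a purely numerical illustration (made-up readings), the reduction obtained by averaging corresponds to the commonly used standard error of the mean, 𝑢(𝑥̅) = 𝑠𝑔/√𝑛:

```python
import statistics

# Hypothetical repeat measurements (mm); s_g is the sample standard deviation
# with divisor n-1, as in the formula above.
readings = [10.02, 10.05, 9.98, 10.01, 10.03, 9.99]
n = len(readings)

s_g = statistics.stdev(readings)   # standard uncertainty of a single value
u_single = s_g
u_mean = s_g / n ** 0.5            # uncertainty of the mean of n repeats (sketch)
```

Averaging over n repeats therefore reduces the standard uncertainty by a factor of √n, which is the effect referred to above for Chapter 7.4.5.2.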
The individual standard uncertainties are combined into the combined standard uncertainty
𝑢(𝑦) as the root of the sum of their squares:
𝑢(𝑦) = √( 𝑢(𝑥1)² + 𝑢(𝑥2)² + 𝑢(𝑥3)² + … ) = √( ∑ᵢ 𝑢(𝑥𝑖)² )
5.5 Expanded measurement uncertainty
The expanded measurement uncertainty U is usually given as a measure of the possible de-
viation of the true value from the measured value. This is calculated by multiplying the com-
bined measurement uncertainty by the coverage factor k (Table 5-2):
𝑈 = 𝑘 ⋅ 𝑢(𝑦)
The relationships shown in Table 5-2 apply.
A confidence level of 95.45% and thus factor k=2 is recommended for calculation of the
measurement system and measurement process capability. The limit values proposed in
Chapter 7 refer to the coverage factor k=2.
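As a numerical sketch (all budget values in µm are assumed), the combination and expansion can be reproduced as follows:

```python
# Minimal sketch of combining a hypothetical uncertainty budget (values in µm)
# into the combined standard uncertainty u(y) and expanding it with k = 2.
budget = {"u_RE": 0.5, "u_CAL": 0.8, "u_EVR": 1.2, "u_BI": 0.6}

u_combined = sum(u ** 2 for u in budget.values()) ** 0.5  # root sum of squares
k = 2                                                     # 95.45 % confidence
U = k * u_combined                                        # expanded uncertainty
```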
5.6 Uncertainty budget
An uncertainty budget serves the clear representation of the measurement system or meas-
urement process capability. Table 5-3 shows a pattern for a possible uncertainty budget.
Table 5-3: Example uncertainty budget
Figure 5-5: Representation of the guard bands to prove conformity
The width of the guard bands is calculated under the assumption that the probability of a
wrong decision within each guard band is a maximum of 5 %.
If the capability ratios 𝑄𝑀𝑃 required in this volume for measurement processes are complied
with, or exceeded by no more than a factor of two, a guard band factor 𝑔 = 1,65 can be
selected across the board. The width of the guard bands is then calculated according to the
formula
𝑔(𝐿/𝑈)(𝑅/𝐴) = 𝑔 ⋅ 𝑢𝑀𝑃 = 1,65 ⋅ 𝑢𝑀𝑃
Note 1: The guard band factor 𝑔 = 1,65 is not constant and depends on the size of
the actual measurement uncertainty. It applies as long as the expanded
measurement uncertainty is 𝑈 ≤ 30% of the tolerance and increases to 𝑔 = 2
when the expanded measurement uncertainty is 𝑈 = 50% of the tolerance.
Details can be found in DIN EN ISO 14253-1 [24]
Note 2: Customer and supplier can agree other guard band factors 𝑔 by special
agreement
Proof of conformity can be provided for individual values and thus also for 100% measure-
ments (“every part is measured”). In the case of random measurements, the individual con-
formity of the non-tested products cannot be guaranteed with regard to the tested character-
istic.
Note: The conformity of the products (of the tested characteristic) can also be en-
sured by the proof of a capable manufacturing process (VDA 4 [46] or
ISO22514-2 [35]). This is not covered in the present volume. However, a
prerequisite for these capability proofs is a suitable measurement process
with a sufficiently small 𝑄𝑀𝑃 (Chapter 7)
The capability ratio 𝑄 always describes the ratio of uncertainty to tolerance (VDA 6.1 [42]). It
should be noted that the expanded measurement uncertainty only indicates “half the disper-
sion”, i.e. the true value is to be found in a range from −𝑈 up to +𝑈. For this reason, the dou-
ble measurement uncertainty must always be compared with the tolerance.
𝑄 = 2𝑈 / 𝑇
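A minimal numerical illustration (tolerance and expanded uncertainty are assumed values):

```python
# Capability ratio Q = 2U / T (assumed numbers). The double expanded
# uncertainty is compared with the tolerance T because the true value may
# lie anywhere in -U ... +U around the measured value.
T = 0.2     # characteristic tolerance, mm (assumed)
U = 0.018   # expanded measurement uncertainty, mm (assumed)

Q = 2 * U / T   # 0.18, i.e. the uncertainty range uses 18 % of the tolerance
```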
In the case of attributive tests, the measurement process capability is demonstrated, as far
as possible, by means of special tests in accordance with Chapter 9.
6 Measurement uncertainty determination in the measurement process
6.1 Basic procedure
The following basic topics have been covered in the previous sections:
• the need to determine the expanded measurement uncertainty 𝑈𝑀𝑆 for a measure-
ment system and 𝑈𝑀𝑃 for a measurement process.
• the calculation of the expanded measurement uncertainties 𝑈𝑀𝑆 and 𝑈𝑀𝑃 using the
combined standard uncertainty 𝑢𝑀𝑆 or 𝑢𝑀𝑃 and the coverage factor k.
• the criteria for the capability ratios of the measurement system 𝑄𝑀𝑆 and measurement
process 𝑄𝑀𝑃
• a schematic procedure for the proof of capability of the measurement system and
measurement process.
In this section, a standardised approach is proposed which covers a large part of the uncer-
tainty components relevant in practice. Either Method B or Method A (Chapter 5.3) is used to
determine the uncertainties.
In cases where the prerequisites for the procedures presented in the following are not met,
the user must resort to the elementary procedures for determining the measurement uncer-
tainty, as described for example in GUM [28].
The uncertainty components considered must always correspond to the real measurement
process. Uncertainty components whose dispersion (variability) during the experimental pro-
cedure does not correspond to the dispersion in the real measurement process shall not be
used for the calculation of the measurement uncertainty but shall be corrected or otherwise
determined.
Table 6-1: Recommendations for determining uncertainty components
6.3 Influencing variables in the measurement system
The expanded measurement uncertainty UMP refers to the entire measurement process
(Chapter 4.6). The measurement system is evaluated separately as an essential component.
Its capability QMS (Chapter 7.1.1) can usually be determined more easily than the capability
of the measurement process, because only the uncertainties arising from the actual meas-
urement system (measuring machine, standard, setup, ...) are evaluated.
The procedure for the proof of capability of the measurement system is shown in an overview
in the Chapter 5.2. The determination of the individual standard uncertainties is discussed in
this chapter. The calculation of the expanded measurement uncertainty 𝑈𝑀𝑆 and the capabil-
ity ratio 𝑄𝑀𝑆 are presented in Chapter 7.1.1.
Testing of the individual uncertainty components of the measurement system can be omitted
if the maximum permissible error of the measurement system MPE is known, traceable and
documented for the measurement system. uMS is then determined on the basis of MPE.
The determination of the respective standard uncertainty is explained in more detail in the
following sections.
The maximum permissible error (MPE) or the error variation limit is the permitted extreme
value of a measurement error in relation to a known reference value. The MPE always de-
scribes a half width, i.e. the permitted errors are in the range −𝑀𝑃𝐸 … + 𝑀𝑃𝐸.
If the MPE is proven, documented and reliable, the determination of the individual
uncertainty components of the measurement system can be omitted. To this end, it must be
ensured that the certificate issued by the manufacturer or calibration service provider
contains at least the following information in addition to the declared MPE:
• Reference to the national/international standard used, which describes how the cali-
bration was performed, or alternatively a description of a validated calibration method
• The proof of traceability
Some example criteria that characterise a trustworthy maximum permissible measurement
error:
• With which standards (nominal values and calibration uncertainty) and at which oper-
ating points (calibration points) were how many repeat measurements carried out?
• What do the specified characteristic values contain and how are they to be under-
stood?
• Under which conditions (laboratory, ..., permissible temperature errors, range of air
humidity, ...) do the characteristic values apply?
• Is the usage decision made with or without calibration uncertainty?
• Is the resolution significantly lower than the stated maximum permissible measure-
ment error?
For their part, users must ensure that the MPE has a direct reference to the characteristic to
be tested. For example, the MPE of an outside micrometer specified in DIN 863 [21]
explicitly refers to the maximum permissible length error in practice, while 𝑀𝑃𝐸𝐸 and 𝑀𝑃𝐸𝑃
determined in accordance with ISO 10360-2 [30] only refer to the basic conditions defined in
that standard (probe, environment, test ball, contact points, test characteristic, ...) and its
measurement sequences, and cannot be applied directly in real practice (e.g. measurement
of parallelism).
Note: An orientation value for the resolution as a function of the maximum per-
missible measurement error could be in the order of RE ≤ 30% MPE.
The resolution RE of a display is the smallest change of a measured variable that causes a
noticeable change in the corresponding display (VIM 4.14 [37]). For analogue displays, this
is the smallest step size that can be reliably estimated (e.g. between two scale lines); for
digital displays, it is the smallest observable step size of the displayed digits.
The standard uncertainty due to the resolution is calculated as follows:
𝑢𝑅𝐸 = (1/√3) ⋅ (𝑅𝐸/2) ; with resolution RE
Note 1: As a precondition for the proof of capability, the resolution RE of the display
of a measurement system must not exceed 5% of the tolerance. Regard-
less of this precondition, the standard uncertainty must be calculated based
on the resolution and taken into account in the measurement uncertainty
budget.
Note 2: The resolution to be used for the calculation must be the resolution actually
used and not the maximum possible resolution.
Note 3: The resolution RE of a digital display does not have to correspond to the
smallest jump of the last digit. For example, by calculating a measured
value from several input signals, the measured value can be quantised in
larger steps (e.g. resolution 0.173 µm with steps of 10.267/ 10.440/ 10.613
µm).
Note 4: The resolution RE may, in the case of complex measurement systems,
have to be determined by a test if it is not clearly evident.
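The resolution formula can be illustrated numerically; the resolution and tolerance below are assumed values:

```python
import math

# Standard uncertainty from the resolution RE actually used (assumed value):
# rectangular distribution over the half-interval RE/2, as in the formula above.
RE = 0.001                       # mm, resolution of the display (assumed)
T = 0.1                          # mm, characteristic tolerance (assumed)

assert RE <= 0.05 * T            # precondition from Note 1: RE <= 5 % of T
u_RE = (RE / 2) / math.sqrt(3)   # still enters the uncertainty budget
```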
The calibration uncertainty (standard uncertainty) uCAL is the measurement uncertainty with
which the reference value of the standard is affected. Ideally, this is determined during cali-
bration of the standard and indicated on the calibration record.
• If the expanded uncertainty 𝑈𝐶𝐴𝐿 is specified in the protocol, it must be divided by the
associated coverage factor k:
𝑢𝐶𝐴𝐿 = 𝑈𝐶𝐴𝐿 / 𝑘
• The applicable k-value can be taken from the calibration documents.
• If only an interval (−a … +a) with limit value 𝑎 is assigned to the reference value, the
measurement uncertainty is determined via the rectangular distribution:
𝑢𝐶𝐴𝐿 = (1/√3) ⋅ 𝑎
𝑢𝐶𝐴𝐿 = (1/√3) ⋅ 𝑡𝑒 = 0,26 µ𝑚
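Both cases can be illustrated numerically. The certificate values are assumed; the interval half-width t_e = 0,45 µm is an assumption chosen so that the rectangular-distribution case reproduces the 0,26 µm quoted above:

```python
import math

# u_CAL from a calibration certificate (sketch). U_CAL, k and t_e are assumed
# values; t_e = 0.45 um reproduces the 0.26 um result stated in the text.
U_CAL, k = 0.9, 2                       # expanded uncertainty (um) and coverage factor
u_cal_from_certificate = U_CAL / k      # certificate states U_CAL and k

t_e = 0.45                              # half-width of the stated interval, um
u_cal_from_interval = t_e / math.sqrt(3)  # rectangular distribution

print(round(u_cal_from_interval, 2))    # 0.26
```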
Repeatability at standard – uEVR
The device-related dispersion behaviour of the measurement system is tested via the
repeatability at the reference, uEVR. At least 30 repeat measurements are made on a
standard or reference part under repeatability conditions, and uEVR is estimated from their
dispersion. Repeatability conditions analogous to VIM 2.20 [35] mean that the repeat
measurements are carried out at short intervals on the standard or reference part by the
same examiner in a completely identical manner. The standard must be re-inserted into the
measuring device at the same measuring position before each measurement.
𝑢𝐸𝑉𝑅 = √( 1/(𝑛−1) ⋅ ∑ᵢ₌₁ⁿ (𝑥𝑖 − 𝑥̅)² )
Note: In practice, the repeatability from a “MS test” is often carried out with 25 in-
stead of 30 repeat measurements. The resulting errors are small and are
neglected in this volume. For further information, please refer to GUM,
Chapter 6.3, Annex G [29].
The systematic measurement error (bias) uBi is derived from the distance of the arithmetic
mean of the repeat measurements from the reference value xm:
𝐵𝑖 = |𝑥̅ − 𝑥𝑚|
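Both quantities can be illustrated with made-up repeat measurements on a standard (all values hypothetical):

```python
import statistics

# Repeatability at the standard and bias (sketch with 30 made-up readings, mm).
x_m = 10.000                     # calibrated reference value of the standard
readings = [10.001, 9.999, 10.002, 10.000, 9.998,
            10.001, 10.000, 9.999, 10.002, 10.001] * 3   # 30 repeat values

u_EVR = statistics.stdev(readings)         # dispersion under repeatability conditions
Bi = abs(statistics.mean(readings) - x_m)  # systematic measurement error (bias)
```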
Note 2: Measuring machines accompanying production are often based on compar-
ative measurements. An “adjustment” of the instrument with the aid of an
adjustment standard (adjustment master) means “compensating” the sys-
tematic measurement error. A repeatability test with the same setting
standard then usually leads to a smaller bias. In this case, the traceability of
the setting master must be ensured, otherwise there is a risk that the bias
will be invalidly underestimated.
Various definitions of linearity can be found in literature. In the context of measurement and
inspection process capability, this is understood to mean the variability of the bias of a meas-
urement system over the application area. Bias and linearity can often only be separated by
very complex experiments. For this reason, practice-oriented procedures are presented here,
which pay particular attention to the question of the effects on the measurement uncertainty
in the inspection process.
There are several possibilities for determining the standard uncertainty:
• Information (previous knowledge) about the measurement system is available
(method B)
• A test is carried out (method A)
o Simplified bias and linearity analysis over the maximum error
o Analysis of bias and linearity with ANOVA methods
• The linearity error is given as expanded measurement uncertainty 𝑈𝑙𝑖𝑛 with coverage
factor 𝑘:
𝑢𝑙𝑖𝑛 = 𝑈𝑙𝑖𝑛 / 𝑘
6.3.6.2 Linearity from test (method A)
The linearity error is determined in the “measurement system test” (Chapter 6.3.8). In this
case, the linearity error describes the portion of the bias that is variable over the application
range, while the bias described in Chapter 6.3.5 is assumed to be constant. In practice,
these constant and variable parts of the bias are difficult to separate using acceptable experi-
mental effort.
This separation is omitted in the “simple linearity assessment” described below, which in
some cases may lead to an incorrect estimate of uncertainty. With the simple linearity evalu-
ation, however, a reduced linearity study is possible to secure the tolerance limits with two
standards (one in the range of each tolerance limit). More standards can be used at any time
to increase the quality of the test. In total, at least 30 measurements should be available, i.e.
at least 15 measurements per standard in the case of 2 standards and 10 measurements per
standard in the case of 3 standards. In this case, the linearity is not explicitly stated and is in-
cluded in 𝑢𝐵𝐼 .
The “linearity evaluation with ANOVA” estimates the constant bias 𝑢𝐵𝑖 and thus replaces the
determination of the bias according to Chapter 6.3.5. In addition, the variable part 𝑢𝐿𝐼𝑁 of the
bias and the repeatability on the reference part 𝑢𝐸𝑉𝑅 is estimated by means of an analysis of
variance. In this case, at least three reference parts/standards shall be measured several
times under repeatability conditions so that a total of at least 30 measured values is availa-
ble.
The actual values of the standards should be distributed approximately equidistantly over the
range of application of the measurement system, with the range of application exceeding the
tolerance range to the extent that parts which are outside the tolerance can plausibly be ex-
pected.
For both variants, the calibration uncertainty of the reference parts/standards should be sig-
nificantly less than 5% of the characteristic tolerance. The largest calibration uncertainty of
the reference parts/standards used is included as 𝑢𝐶𝐴𝐿 (Chapter 6.3.3) in the determination
of the combined uncertainty of the measurement system. The measurements must be car-
ried out under typical operating conditions of the measuring system.
Figure 6-1: Determination of the linearity with maximum bias
The repeatability 𝑠𝐸𝑉 is also calculated for each reference part/standard. The maximum
repeatability is included as 𝑢𝐸𝑉𝑅 (Chapter 6.3.4) in the calculation of the combined
uncertainty of the measurement system.
This calculation of the linearity error corresponds to a “worst-case” assumption for the case
that the linearity error follows a kind of characteristic curve and the maximum error was de-
termined in the test. The calculation is not applicable if the linearity error corresponds to a
predominantly random dispersion.
uBI = |B̅I̅| / √3
It must be assumed that this mean bias cannot be corrected and may change in a random
manner after a readjustment of the measurement system. Correctable fractions must be
eliminated before the test (Chapter 5.1.1.2).
A “simple analysis of variance” (see [53]) is used to determine the remaining dispersion
fractions. The dispersion of the group means (residual bias per reference part/standard) 𝑠𝐴
yields the variable proportion of the bias 𝑢𝐿𝐼𝑁:
𝑢𝐿𝐼𝑁 = 𝑠𝐴
The mean dispersion around the mean values of the individual standards (remaining
residuals) yields the repeatability on the reference part 𝑢𝐸𝑉𝑅 (Chapter 6.3.4):
𝑢𝐸𝑉𝑅 = 𝑠𝑅𝑒𝑠
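The decomposition described above can be sketched numerically. The following Python sketch is a simplified illustration, not part of this volume: it pools the deviations from the calibrated values of several standards, takes the overall mean bias for 𝑢𝐵𝐼, uses the standard deviation of the per-standard mean biases as a simplified stand-in for 𝑠𝐴 (without the full ANOVA correction for the within-group component), and takes the pooled within-standard standard deviation as 𝑠𝑅𝑒𝑠. Function and variable names are illustrative assumptions.

```python
import math

def linearity_anova(data):
    """Simplified one-way decomposition over k standards.
    data: list of (reference_value, [measurements]) tuples."""
    biases = []            # per-measurement bias, pooled over all standards
    group_means = []       # residual bias per standard
    ss_res, n_res = 0.0, 0
    for ref, meas in data:
        dev = [x - ref for x in meas]
        biases.extend(dev)
        m = sum(dev) / len(dev)
        group_means.append(m)
        ss_res += sum((d - m) ** 2 for d in dev)
        n_res += len(dev) - 1
    bi_mean = sum(biases) / len(biases)          # constant (mean) bias
    u_bi = abs(bi_mean) / math.sqrt(3)
    k = len(data)
    s_a = math.sqrt(sum((g - bi_mean) ** 2 for g in group_means) / (k - 1))
    u_lin = s_a                                  # variable part of the bias
    u_evr = math.sqrt(ss_res / n_res)            # pooled repeatability s_Res
    return u_bi, u_lin, u_evr

# Toy data with 3 standards (a real test needs at least 30 measurements):
u_bi, u_lin, u_evr = linearity_anova(
    [(0.0, [0.1, 0.1, 0.1]), (5.0, [5.2, 5.2, 5.2]), (10.0, [10.3, 10.3, 10.3])])
```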
Note: Linearisation and the resulting linearity evaluation with correction on the
measuring machine is described in DIN EN ISO 11095 [17] and is not dealt
with further in this VDA volume.
All other possible influences of the measurement system are to be considered separately, if
suspected or present, by measurement tests, from tables or manufacturer’s data.
The residual uncertainties uMS-REST must be clearly defined and individually named so that
they can be clearly assigned. The components of the residual uncertainties must not be con-
tained in other influencing variables.
Note: As soon as the value of a residual uncertainty falls below 10% of the largest
uncertainty component, the contribution to both the combined and ex-
panded measurement uncertainty is negligible (≤ 0,5%).
Typically, a test of the measurement system (MS test) needs to be carried out, which can be
designed differently depending on the objective. In principle, at least 30 measured values on
standards or reference parts must be recorded in this test; an exception is described in the
note to Chapter 6.3.4. All tests are carried out under repeatability conditions and, as far as
possible, at the site of the measurement process and under real conditions (see Chapter
6.3.4).
6.3.8.1 Test with a standard/reference part
If no linearity influences are to be expected, at least 30 repeat measurements are carried out
on a standard or reference part. 𝑢𝐵𝐼 and 𝑢𝐸𝑉𝑅 can be identified from this MS test. The stand-
ard or reference part used determines the calibration uncertainty 𝑢𝐶𝐴𝐿 . The standard uncer-
tainty due to linearity errors 𝑢𝐿𝐼𝑁 cannot be determined by this test.
Note: The MS test corresponds to the test from method 1 of the measurement
system analysis. If reliable data is already available, it can be used to deter-
mine the standard uncertainties 𝑢𝐵𝐼 and 𝑢𝐸𝑉𝑅.
For this test, the use of material measures is recommended, the actual values of which are
within the ranges ± 10% of the tolerance limits (Figure 6-3), ideally just outside the tolerance.
Before the test is carried out, the measurement system needs to be adjusted and linearised
according to the procedure described in DIN ISO 11095 [15].
xml Actual value of the material measure in the range of the lower tolerance limit L
xmu Actual value of the material measure in the range of the upper tolerance limit U
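The evaluation of this single-standard test can be illustrated with a short sketch (assumed names, not a prescribed implementation): the bias and the repeatability are computed directly from the repeat readings, while 𝑢𝑅𝐸, 𝑢𝐿𝐼𝑁 and 𝑢𝑀𝑆-𝑅𝐸𝑆𝑇 are omitted for brevity.

```python
import statistics

def ms_test_single_standard(readings, x_m, u_cal):
    """MS test on one standard: >= 30 repeat readings, calibrated actual
    value x_m, standard calibration uncertainty u_cal of the standard."""
    x_bar = statistics.fmean(readings)
    u_bi = abs(x_bar - x_m) / 3 ** 0.5    # rectangular distribution of the bias
    u_evr = statistics.stdev(readings)    # repeatability on the standard
    # combined standard uncertainty (u_RE, u_LIN, u_MS_REST omitted here)
    u_ms = (u_cal ** 2 + u_evr ** 2 + u_bi ** 2) ** 0.5
    return u_bi, u_evr, u_ms

readings = [10.02] * 15 + [10.04] * 15   # 30 repeat readings (toy data)
u_bi, u_evr, u_ms = ms_test_single_standard(readings, x_m=10.0, u_cal=0.005)
```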
6.3.8.3 Test with 3 and more standards
If linearity effects are to be expected and if these are to be determined experimentally
(method A), the test must be carried out with at least three standards/reference parts. A mini-
mum of 10 repeat measurements per standard/reference part are carried out. This MS test
can identify 𝑢𝐵𝐼 , 𝑢𝐸𝑉𝑅 and 𝑢𝐿𝐼𝑁 according to Chapter 6.3.6. The standards or reference parts
used determine the calibration uncertainty 𝑢𝐶𝐴𝐿 . The rule is that the largest of the standard
uncertainties 𝑢𝐶𝐴𝐿 is included in the measurement uncertainty budget.
For this test to determine linearity, it is recommended to use material measures whose actual
values are within the ranges ±10% of the tolerance limits or the tolerance centre (Figure 6-4).
Ideally, the material measures at the tolerance limits are slightly outside the limits.
xml Actual value of the material measure in the range of the lower tolerance limit L
xmm Actual value of the material measure in the range of the tolerance centre
xmu Actual value of the material measure in the range of the upper tolerance limit U
Repeatability on the test part – uEVO
The repeatability on the test part uEVO describes the dispersion behaviour of the entire
inspection process under repeatability conditions. To determine it, at least 30 repeat
measurements are carried out on several serial parts under repeatability conditions, and
uEVO is estimated from these with ANOVA. For details of the test, see Chapter 6.4.9.
Repeat condition analogous to VIM 2.20 [35] means that repeat measurements are to be car-
ried out at short intervals on real parts by the same examiner in a completely identical man-
ner. Before each measurement, the measured part must be placed in the measuring device
again at the same measuring position.
Note: The test parts used for the test should be distributed over the whole field of
application (Chapter 6.4.9). The test parts are to be clamped and un-
clamped during the measurement test and measured in the same position.
When performing several series of measurements, it must be ensured that
the results of the previous measurement are not known to the operator.
Reproducibility - uAV
The reproducibility uAV describes the variation of the results of different operators under
comparison conditions. To determine it, repeat measurements must be carried out on
several serial parts with several operators, whereby comparison conditions apply with
regard to the examiners. From the measured values, the operator reproducibility uAV is
estimated with ANOVA. For details of the test, see Chapter 6.4.9.
Comparison conditions analogous to VIM 2.22 and 2.24 [17] mean here that at the place of
use several examiners must perform repeat measurements at short intervals on real parts in
a completely identical manner. Before each measurement, the measured part must be
placed in the measuring device again at the same measuring position.
In the case of (partially) automated measurement processes, where the operator cannot in-
fluence the measurement result either through handling the part (e.g. clamping parts) or
through the actual execution, this component can be omitted.
Note: Measurement processes with (partially) automated (coordinate) measuring
machines are not to be regarded as operator-independent per se as long
as the components are positioned and/or clamped by examiners.
Interaction – uIA
Note 1: The interactions are determined in the MP test to Chapters 6.4.1 and 6.4.2.
Note 2: Causes for interactions include different test methods or serial parts show-
ing defect patterns that can influence the measurement result (e.g. surface
defects in length measurements).
If the measurement process is determined by several measurement systems, the reproduci-
bility between measurement systems uGV describes the dispersion between several
measuring points or measuring devices in an inspection process.
To determine this reproducibility, repeat measurements are carried out with several
measurement systems on identical serial parts, and uGV is estimated with ANOVA,
whereby comparison conditions apply with respect to the measurement systems. For
details of the test, see Chapter 6.4.9.
It should be noted that the different measurement systems must also be evaluated according
to the respective Chapter 6.3.
Note 1: Ideally, the components of 𝑢𝑀𝑆 should be determined by using the same
setting standards with different measurement systems. The components of
the measurement system with the largest 𝑢𝑀𝑆 are included in the calcula-
tion of 𝑢𝑀𝑃 . If this is not possible, the respective largest uncertainty compo-
nents (𝑢𝑅𝐸 , 𝑢𝐶𝐴𝐿 , 𝑢𝐸𝑉𝑅 , 𝑢𝐿𝐼𝑁 , ...) are taken into account.
Note 2: Further measurement errors can be observed when measuring at several
measuring points, when using different measurement systems or when us-
ing different measuring methods for the same measurement task. In order
to ensure that comparable measurement results are obtained for all sys-
tems and processes used, and within specified limits, these errors must be
analysed by means of measurement tests.
If during initial or basic testing it is suspected that the measurement results change over the
time of the measurement or between regular short-term adjustments (zero point adjustment,
offset adjustment, etc., see VIM 3.11 [17]) of the measurement system, this uncertainty
should be determined by means of a defined series of measurements.
The following experiments are suitable for the determination of the uncertainty due to stability
effects, whereby the selection of the experiment must fit the expected time-related change of
the measured values:
Short-term test with a representative and stable component at several points in time
with subsequent determination of the measurement uncertainty:
o Downstream stability test including representation in a control chart.
If the visual assessment of the change in dispersion behaviour is
not significant, the stability is taken into account in the uncertainty
budget with 𝑢𝑆𝑇𝐴𝐵 = 0.
If the change is significant, the ‘external dispersion’ 𝑠𝐴 from the ex-
tended Shewhart chart is determined with ANOVA and taken into ac-
count as 𝑢𝑆𝑇𝐴𝐵 .
o Modified MS test by expanding the number of repeat measurements over a
longer period of time. In this case, the uncertainty component 𝑢𝑆𝑇𝐴𝐵 is part of
the measurement system! Check whether this procedure is compatible with
the possible use of several standards and the maximum condition
max(𝑢𝑅𝐸 , 𝑢𝐸𝑉𝑅 ).
Modified MP test – the use of more than one examiner can be replaced by test
intervals, i.e. one examiner takes 2 measurements at 3 different times. The
short-term stability is then calculated with ANOVA as 𝑢𝑆𝑇𝐴𝐵 = 𝑢𝐴𝑉 .
MP test with a “D-optimal plan”, defining a further component “test intervals” > 1
Note: As a rule, the measurement stability (long-term stability) is not the subject of
testing in the case of a short-term consideration. This is described in more detail
in Chapter 10.
The inhomogeneity of the test part uOBJ is the uncertainty resulting from the variance of differ-
ent measurement points on the test part. This inhomogeneity can be determined by Method
A or Method B.
Inhomogeneity from preliminary information (Method B)
The uncertainty is calculated from the maximum error aOBJ. For dimensional metrology, this is
usually the uncertainty resulting from the maximum form error of the test part. If the maxi-
mum form error can be determined directly, the following applies:
uOBJ = aOBJ / 3
If the inhomogeneity of the test part is specified as expanded uncertainty UOBJ with coverage
factor k (e.g. for hardness reference blocks), then:
uOBJ = UOBJ / k
Inhomogeneity from experiment (Method A)
As an alternative to the use of preliminary information, an extension of the MP test can also
be used, with targeted measurements taken at several points on the part. The part influence
sOBJ is determined with the ANOVA method.
𝑢𝑂𝐵𝐽 = 𝑠𝑂𝐵𝐽
Note 1: The influence of inhomogeneity of the test part can be reduced by changing
the measurement strategy, e.g. dynamic measurement instead of two-point
measurement.
Note 2: When determining and evaluating the inhomogeneity, the influence of the
filter criteria (mathematical and mechanical) must be taken into account
and adjusted so that these correspond to the real measurement process.
Temperature - uTEMP
The following temperature sources may occur and affect the inspection process:
• Surroundings
o Temperature at the measuring point or in the immediate vicinity of the test
o Spatial temperature constancy at the measuring point
o Temporal temperature constancy at the measuring point
• Environment
o Solar radiation
o Ventilation (fluctuations)/air conditioning
o Shielding
• Person
o Thermal energy input through body temperature
o Number of people
• Production facility
o Temperatures in the production equipment that affect the component (mechanical stress, cooling lubricant, cleaning air)
o Temperatures in the production facility that affect the environment
• Stress
o Heat by applying, for example, forces or pressure during the measurement process
o Settled state of the measurement process before the measurement (take into account the warm-up phase of the measuring machine)
• Handling
o Examiner moves component by hand (heat transfer)
o Additional heat influences due to transport between machine and test equipment
These temperature sources can affect the inspection process in different ways:
• Convection
• Heat radiation
• Seasonal gradient
o Summer/winter temperature differences
o Time span between temperature detection and measuring time
o Setting duration (time interval)
o Measuring duration
• Spatial gradient
• Temperature measurement (location and method)
• Method of temperature compensation
The temperature sources listed above ultimately affect the elements of the inspection pro-
cess in various ways (see also Chapter 5.1):
• Change in electrical properties
o Electrical resistance
o Electrical power
If it is not possible to reduce or eliminate the influencing factors in advance, first include the
influencing variables in the uncertainty budget and, depending on their share in the overall
uncertainty, reduce them afterwards using improvement measures.
In addition, even with known temperature differences and applied corrections, the un-
certainties in the specified coefficients of thermal expansion become measurement
uncertainties in the length measurement. Coefficients are sometimes only approximate.
For example, the average value α = 11,5 ⋅ 10⁻⁶ 1/K applies for steel, but depending on
the alloy, the value can vary between α = 10 ⋅ 10⁻⁶ 1/K (X6Cr17, X23CrNi17) and
α = 14,5 ⋅ 10⁻⁶ 1/K (NiCr23Fe), even up to α = 16,5 ⋅ 10⁻⁶ 1/K (X5CrNiMo17-12-2)
for stainless steel.
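The practical effect of this alloy spread can be checked with a short calculation. The coefficient values are taken from the text; the part length and the temperature offset are assumed purely for illustration:

```python
def thermal_expansion(length_mm, alpha_per_K, delta_t_K):
    """Length change Delta_L = L * alpha * Delta_T of a part."""
    return length_mm * alpha_per_K * delta_t_K

# 100 mm steel part measured 5 K away from the 20 degC reference temperature:
dl_avg = thermal_expansion(100.0, 11.5e-6, 5.0)   # average steel coefficient
dl_max = thermal_expansion(100.0, 16.5e-6, 5.0)   # X5CrNiMo17-12-2
spread_um = (dl_max - dl_avg) * 1000.0            # difference in micrometres
```

Even this modest temperature offset produces a length difference of a few micrometres between the alloys, which is why an uncertain coefficient feeds directly into the length measurement uncertainty.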
6.4.7.3 Methods to determine measurement uncertainty by temperature
Since most materials change with temperature fluctuations, the standard uncertainty 𝑢 𝑇𝐸𝑀𝑃
due to temperature changes must be determined for all length measurements. When com-
paring a measured part with a reference (comparative measurement) or a scale (absolute
measurement), temperature influences have no effect only if both the measured part and the
reference or scale are made of the same material and both have the same temperature. If
this is not the case, the measurement result is affected by an error, which can sometimes
become quite significant and preferably needs to be corrected (temperature compensation).
To determine the temperature-related measurement uncertainty, depending on the existing
situation, various methods are found in the relevant standards and regulations; some are
listed below as examples, without any claim to completeness. The practical handbook
also contains examples of how to determine this uncertainty.
1. Determining the uncertainty from the temperature difference and the uncertainty of the
coefficient of thermal expansion according to DIN EN ISO 14253-2 [10]
2. Determining the temperature-related measurement uncertainty for absolute measure-
ments with correction (temp. comp.) of the linear expansion
3. Determining the temperature-related measurement uncertainty for comparative measure-
ments with correction (temp. comp.) of the linear expansion in accordance with ISO/IEC
Guide 98-3 (Annex H1) [29]
4. Determining the temperature-related measurement uncertainty at the same temperature
of the measured part and measuring machine in accordance with ISO/IEC Guide 98-3
(Annex H1) [29]
5. Determining the temperature-related measurement uncertainty for length measurements
on coordinate measuring machines (CMM) in accordance with DIN EN ISO 15530-3 [18]
6. Determining the temperature-related adjustment uncertainty for comparative measure-
ments
All other possible influences of the measurement process are to be considered separately, if
suspected or present, by measuring tests, from tables or manufacturer’s data.
The residual uncertainties uMP-REST must be clearly defined and individually named so that
they can be clearly assigned. The components of the residual uncertainties must not be con-
tained in other influencing variables.
Note: As soon as the value of a residual uncertainty falls below 10% of the largest
uncertainty component, the contribution to both the combined and ex-
panded measurement uncertainty is negligible (≤ 0,5%).
To determine the measurement process capability, a measurement process test (MP test) is
typically carried out to determine critical uncertainty components according to Method A. The
test can be designed differently depending on the components to be determined. In principle,
at least 30 measured values must be recorded in this test on serial parts. All experiments are
carried out under repeatability conditions and at the site of the measurement process under
real measurement conditions and are evaluated with ANOVA. The measured part must always
be clamped and unclamped (or repositioned) for each measurement and measured again at
exactly the same position.
In the simplest case, a test setup should be chosen in which several serial parts are meas-
ured several times by several operators (if relevant). Analysis of variance (ANOVA) can
be used to determine the uncertainty components 𝑢𝐸𝑉𝑂 , 𝑢𝐴𝑉 and 𝑢𝐼𝐴 . Typically, 𝑛 = 3 … 10
serial parts are measured by 𝑟 = 2 … 3 operators 𝑘 = 2 … 3 times and at least 𝑛 ⋅ 𝑟 ⋅ 𝑘 = 30
times in total. Care must be taken to ensure that all serial parts can be clearly identified, but
this must not be visible to the operator. The series parts should cover the tolerance plus the
expected excess range and furthermore should not show any defect patterns (e.g. surface
defects in length measurements) which could influence the measurement result. In the first
pass, the parts are measured by all operators, then the measurement is repeated by all oper-
ators alternately in further passes. The measurements must be carried out in such a way that
the operator cannot assign the measurements to the serial part, meaning that they can rec-
ord new measured values without bias in the event of repetitions and do not know the previ-
ously determined measured values.
Note 1: This test corresponds to the test performed in method 2 of the measure-
ment system analysis. If reliable data is already available, it can be used to
determine the standard uncertainties 𝑢𝐸𝑉𝑂 , 𝑢𝐴𝑉 and 𝑢𝐼𝐴 .
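The evaluation of such a crossed parts-by-operators test can be sketched with a plain balanced two-way ANOVA. This is a simplified illustration with assumed names, not the normative evaluation; negative variance components are truncated at zero, as is common practice.

```python
import math

def mp_anova(data):
    """Balanced crossed ANOVA for the MP test.
    data[i][j] is the list of k repeat readings of part i by operator j."""
    n, r, k = len(data), len(data[0]), len(data[0][0])
    grand = [x for part in data for cell in part for x in cell]
    gmean = sum(grand) / len(grand)
    cell_means = [[sum(cell) / k for cell in part] for part in data]
    part_means = [sum(row) / r for row in cell_means]
    oper_means = [sum(cell_means[i][j] for i in range(n)) / n for j in range(r)]
    # sums of squares: within cells, interaction, operators
    ss_e = sum((x - cell_means[i][j]) ** 2
               for i, part in enumerate(data)
               for j, cell in enumerate(part)
               for x in cell)
    ss_ia = k * sum((cell_means[i][j] - part_means[i] - oper_means[j] + gmean) ** 2
                    for i in range(n) for j in range(r))
    ss_op = n * k * sum((m - gmean) ** 2 for m in oper_means)
    ms_e = ss_e / (n * r * (k - 1))
    ms_ia = ss_ia / ((n - 1) * (r - 1))
    ms_op = ss_op / (r - 1)
    u_evo = math.sqrt(ms_e)                                # repeatability
    u_ia = math.sqrt(max(0.0, (ms_ia - ms_e) / k))         # interaction
    u_av = math.sqrt(max(0.0, (ms_op - ms_ia) / (n * k)))  # operator effect
    return u_evo, u_av, u_ia

# Toy data: 3 parts x 2 operators x 2 repeats (a real test needs >= 30 values),
# with a pure operator offset of 0.6 and no noise:
data = [[[0.0, 0.0], [0.6, 0.6]],
        [[1.0, 1.0], [1.6, 1.6]],
        [[2.0, 2.0], [2.6, 2.6]]]
u_evo, u_av, u_ia = mp_anova(data)
```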
Note 3: If, for various reasons, the capability indices 𝑄𝑀𝑆 and 𝑄𝑀𝑃 are related to the
manufacturing process dispersion rather than to the tolerance, and if this
manufacturing process dispersion is also determined from the serial parts
used in the test (see Chapter 7.1.3), then the serial parts may not be se-
lected specifically distributed over the tolerance, but must be taken from the
manufacturing process as a representative sample.
If necessary, further influencing variables can be added. Creating a test plan with suitable
software is recommended.
• What was the calibration uncertainty by which the actual value of the standard was
determined?
• Can the purchased measuring equipment be accepted and approved?
• Which uncertainty components have to be considered for standard measurement sys-
tems?
• Is the measurement system (measuring machine) or the measuring device suitable for
the tolerance(s) under production conditions?
• How great is the influence of the production parts on the measurement result or on
the measurement process capability?
• Which uncertainty components are to be considered in a conformity assessment
(measurement result within or outside tolerance)?
The measurement processes shown in the following are examples and can be
adapted, expanded and specified more precisely according to the company’s
specific needs.
Since the measurement process models can build on each other (e.g. model M2.2 for re-
lease at the time of delivery of the measurement system and M5.1 for approval as a se-
ries measurement process in the factory), different limit values for 𝑄𝑀𝑆 and 𝑄𝑀𝑃 can be as-
signed to the models. The limit values given below are examples and can be established in
agreement between customers and suppliers.
The models can reflect special situations. In special cases, it may be the case, for example,
that a customer provides reference materials where the calibration uncertainty is too high. In
this case, the responsibility for the calibration uncertainty is not in the hands of the supplier
and in model M2.3, the calibration uncertainty for the acceptance of the measurement sys-
tem is not applicable. The systematic measurement error must also be discussed and may
be omitted due to the uncertain calibration value.
Table 6-2: Example measurement process models and their uncertainty components
6.7 Preselection of measurement systems
Motivation, requirements
The pre-selection of the right measuring and test equipment together with the required indi-
vidual components and assistive devices is an important step in ensuring the entire measure-
ment system/measurement process is sufficiently suitable and will function reliably over the
planning period.
Unclear requirements and specifications from the past were most recently clarified by the
publication of the DIN EN ISO 9001:2015 [14] standard, which states in Chapter “7.1.5
Monitoring and measuring resources”, under “7.1.5.1 General”, that:
“The organization should determine and provide the resources needed for valid and reliable
monitoring and measuring results, where monitoring or measuring is used for evidence of
conformity of products and services to specified requirements”.
In this context, “resources” is to be understood as a comprehensive term which, in addition to
the actual measuring and test equipment, also includes, for example, the infrastructure with
environmental conditions, the personnel and their qualifications, as well as software and the
assistive devices used. Each must work together to ensure that valid and reliable monitoring
and measurement results can be achieved.
In the following sections, assistance and possibilities for the correct selection of measuring
and test equipment and, if necessary, assistive devices are described. In recent years, this ap-
proach has evolved from “in retrospect” to “anticipatory” in order to avoid misinvestments.
The procedure presented here makes use of known preliminary information according to
Method B. If this information is not available or not ‘trustworthy’, then proceeding according
to Method A and determining the uncertainty components experimentally is recommended.
Note 1: The proof that the measurement software is suitable is described in Chap-
ter 8.2 “Validation of software”.
Note 2: Information for suitable spaces with ambient conditions can be found in the
guidelines:
Sources of information for determining important specifications of measuring equipment
At the beginning, defined criteria should be used to assess whether the planned selection is
suitable for the intended task. For this purpose, the data of the measuring equipment and as-
sistive devices specified in standards, guidelines or manufacturer’s instructions is used.
1. Standards
DIN EN ISO 3611 [20] is a communication standard with requirements for the most
important design characteristics and metrological characteristics, without limit values
for measurement errors. These are described in DIN 863-1, which contains only the
limit values for measurement errors.
DIN 863-1 [23] – Micrometers – Part 1: Micrometers for external measurements; max-
imum permissible errors
A procedure in accordance with ISO/TR 14253-6:2012-11 [34] has been established
for demonstrating compliance with the specification.
2. Guidelines
The guidelines are derived from the standards. They describe, for example, the required
work steps for
• type testing,
• initial testing,
• calibration and monitoring.
3. Manufacturer specifications
Some manufacturers have adopted the specifications of the standards in their company
guidelines and sales brochures. This is the safest way for the user to understand and, if nec-
essary, check the specifications, as the details are usually clearly described in the standards.
In the event of changes in the data, it is essential that the user obtains clarification from the
manufacturer in order to understand exactly how these values were determined and under
what conditions they apply. If the manufacturer does not provide specifications, the focus
should be on determining the uncertainty of the measurement system UMS according to
Method A.
A certain difficulty in the targeted selection of measuring equipment and assistive devices is
the often imprecise or incorrectly used terms and definitions. For this reason, it is often not
clear what is meant by a manufacturer’s individual specification or by older standards and
guidelines. Therefore, it should always first be clarified how exactly these terms are to be
understood and how the information came about in a clear and comprehensible manner
before these terms and figures are used.
The following consideration lists only those criteria that are relevant for evaluating the
selection with regard to the required measuring accuracy for a specified measurement
task. Other important parameters that can play a role in an investment decision, such as
the measuring range, are not the subject of this analysis. This list does not claim to be
exhaustive, as other parameters (e.g. reversal span, hysteresis, sampling frequency,
point density, ...) may play an important role depending on the respective measuring
principle, measurement procedure and measuring equipment. The task of the measurement
technology experts is to recognise these and to classify them correctly.
MPE “Maximum Permissible Error”
Often the maximum permissible error of the measurement system (MPE) or the error varia-
tion limit is specified (see Chapter 6.3.1). If the maximum errors have been determined ‘relia-
bly’, they can be used for the preselection; otherwise, ambiguities have to be eliminated or
the measurement uncertainty of the measurement system has to be determined.
Accuracy classes
The term “accuracy class” is also used instead of “MPE” for measuring machines. This
number usually refers, as a percentage, to the full-scale value of the measuring range. The
accuracy class of a measuring machine determines the maximum expected deviation of a
measured value from the correct value of the physical variable to be measured, if the existing
error is caused by the measuring machine or its physical measuring principle itself. Basically,
a measuring device cannot be adjusted perfectly, and its properties can change over time
due to external factors. The classification into an accuracy class defines a quality character-
istic indicating to what extent these causes may lead to a measurement error.
Within the scope of the pre-selection, it is now necessary to select and define the correct ac-
curacy class required for the measurement process.
Measurement repeatability
In some documents the term repeat measurement or repeatability is used. Often, however,
the information does not clearly indicate or describe for which test part, how often, how and
under which conditions the measurement was repeated and with how many measured val-
ues a result was calculated and how. If the conditions for determining the measurement re-
peatability are unclear, it is recommended to determine the repeatability on the standard –
uEVR (see Chapter 6.3.4) and to determine the measurement uncertainty of the system.
There are basically two categories (Model 2.2 and Model 2.3 in Chapter 6.6) for classifying
measuring equipment with regard to its specifications and characteristic values when evalu-
ating a preselection for a certain measurement task: on the one hand, cases where the MPE
is known and reliable; on the other hand, cases where the measurement system or selected
uncertainty components of the measurement system are to be assessed in a first measure-
ment uncertainty study. The characteristic values are calculated in accordance with
Chapter 7.1.1.
Since only individual components of the measurement system may have been taken into ac-
count in the determination of the characteristic values, the permissible capability ratio QMS
may have to be reduced in order to have sufficient reserve in the subsequent measure-
ment/inspection process capability, which will take all components into account.
7 Proof of capability of the measurement process
To assess the metrological requirements of the measurement system and the measurement
process, the capability ratios 𝑄𝑀𝑆 for the measurement system and 𝑄𝑀𝑃 for the measurement
process are introduced. They are defined as the percentage ratio of the dispersion width
(twice the expanded measurement uncertainty) to the tolerance 𝑇.
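This definition can be written out as a small helper (a sketch with assumed names; the coverage factor k = 2 and the numeric values are illustrative assumptions, and the admissible limit values are to be agreed between customers and suppliers, cf. Chapter 6.6):

```python
def capability_ratio(u, tolerance, k=2):
    """Q = 2 * U / T * 100 %, with expanded uncertainty U = k * u (k = 2 assumed)."""
    U = k * u
    return 2 * U / tolerance * 100.0

# Example: u_MS = 0.0021 mm on a tolerance of 0.1 mm:
q_ms = capability_ratio(u=0.0021, tolerance=0.1)   # approx. 8.4 %
```

The resulting percentage is then compared with the agreed limit value (e.g. QMS_max or QMP_max, see the end of this chapter).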
In Chapter 7.1.5, with a two-sided tolerance for different QMP values, the relationships be-
tween the observed potential process capability 𝐶𝑃𝑜𝑏𝑠 of the manufacturing process and the
actual existing process capability 𝐶𝑃𝑟𝑒𝑎𝑙 are shown. As Figure 7-4 and Table 7-1 show, the
losses due to inadequate inspection process capability can be very large.
uMS = √( uCAL² + max{uEVR², uRE²} + uBI² + uLIN² + uMS-REST² )
Determining the uncertainty components of the measurement system can be omitted if MPE
is proven, documented and reliable.
uMS = uMPE = √( MPE² / 3 )
In case several MPE values influence the combined standard uncertainty of the measure-
ment system, this can be calculated using the following formula.
uMS = uMPE = √( uMPE1² + uMPE2² + ⋯ ) = √( MPE1²/3 + MPE2²/3 + ⋯ )
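The MPE-based evaluation can be sketched in a few lines (the function name is an assumption for illustration; each MPE is treated as a rectangular distribution, matching the formula above):

```python
def u_from_mpe(*mpe_values):
    """u_MS = u_MPE = sqrt(MPE1^2/3 + MPE2^2/3 + ...),
    rectangular distribution assumed for each MPE value."""
    return sum(m ** 2 / 3 for m in mpe_values) ** 0.5

u_single = u_from_mpe(0.004)          # one MPE: 0.004/sqrt(3)
u_double = u_from_mpe(0.004, 0.003)   # two MPE values combined
```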
Capability ratio 𝑸𝑴𝑷 for the measurement process
uMP = √( uCAL² + max{uEVR², uRE², uEVO²} + uBI² + uLIN² + uMS-REST²
+ uAV² + uGV² + uSTAB² + uOBJ² + uT² + uREST² + Σi uIA² )
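The full budget can be combined in one helper (an assumed sketch, not a prescribed implementation; components that were not determined are simply passed as 0):

```python
import math

def u_mp_budget(u_cal=0, u_evr=0, u_re=0, u_evo=0, u_bi=0, u_lin=0,
                u_ms_rest=0, u_av=0, u_gv=0, u_stab=0, u_obj=0,
                u_t=0, u_rest=0, u_ia=()):
    """Combined standard uncertainty u_MP of the measurement process,
    mirroring the budget formula above (u_ia: iterable of interaction terms)."""
    return math.sqrt(u_cal ** 2
                     + max(u_evr ** 2, u_re ** 2, u_evo ** 2)
                     + u_bi ** 2 + u_lin ** 2 + u_ms_rest ** 2
                     + u_av ** 2 + u_gv ** 2 + u_stab ** 2
                     + u_obj ** 2 + u_t ** 2 + u_rest ** 2
                     + sum(u ** 2 for u in u_ia))

# Example with only two dominant components (illustrative values):
u_mp = u_mp_budget(u_cal=0.003, u_evo=0.004)
```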
If 𝑢𝑀𝑆 has been determined solely from one or more MPE values according to Method B
(𝑢𝑀𝑆 = 𝑢𝑀𝑃𝐸 ), 𝑢𝑀𝑃 is calculated as follows:
uMP = √( uMPE² + uAV² + uGV² + uSTAB² + uOBJ² + uT² + uREST² + Σi uIA² )
Capability ratios 𝑸𝑴𝑺 and 𝑸𝑴𝑷 with one-sided specification limits
In the first case, the natural limit is treated as a specification limit and the calculation is per-
formed analogously to the two-sided case, as described in Chapters 7.1.1 and 7.1.2.
In cases 2 and 3, the formulae for the capability indices 𝑄𝑀𝑆 and 𝑄𝑀𝑃 must be modified to
handle unilateral specifications. The dispersion on the side of the production process that lies
towards the set specification limit is relevant here. The side of the production process that is
away from the specification may contain unusable or censored measurement data (measure-
ment range limitation, test break-off in force measurements) and is not taken into account.
Figure 7-1: Unilateral tolerance
If fewer production parts are available, the process variance can also be roughly esti-
mated on this basis. The parts should be taken from the manufacturing process as a
random sample. If there are too few parts to determine a reliable distribution model, a
normal distribution is assumed. The standard deviation 𝑠𝑝 based on these few parts
will generally underestimate the process variance. The best estimate of the standard
deviation based on the measured parts can be found with the following formula:
seff = √( (n − 1)/(n − 3) ) ⋅ sp
Where
o seff is the estimated standard deviation
o n is the number of measurements to calculate sp
o sp is the calculated standard deviation from the sample
ΔpU = ΔpL = 3 ⋅ seff = 3 ⋅ √( (n − 1)/(n − 3) ) ⋅ sp
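A quick sketch of this small-sample correction (names and values are assumptions for illustration):

```python
def effective_std(s_p, n):
    """Best estimate s_eff of the process standard deviation from n parts,
    per the correction formula above."""
    if n <= 3:
        raise ValueError("formula requires n > 3 parts")
    return ((n - 1) / (n - 3)) ** 0.5 * s_p

s_eff = effective_std(s_p=0.02, n=25)   # slightly larger than s_p
delta_p = 3 * s_eff                     # one-sided process spread
```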
The determination of the production process spread with fewer than 100 parts is only permit-
ted for the “start-up”. As soon as more than 100 parts are available, the acceptance must be
confirmed.
Alternatively, if no reliable data or not enough parts are available, the process vari-
ance 𝛥𝑝 and the process situation 𝑋𝑚𝑖𝑑 can be estimated from historical data of simi-
lar processes. This estimate must be documented in a comprehensible manner. The
determination of the production process variance from a specification is only allowed
for the “planning state”. As soon as more than 100 parts are available, the ac-
ceptance must be confirmed.
The process position X_mid is estimated from the 50 % quantile X_50% in the case of arbitrary distributions; in the case of symmetrical distributions (e.g. normal distribution), the arithmetic mean value x̄ can also be used.
The calculation of capability indices depends on whether the process is limited on one side at
the top or bottom.
Figure 7-2: Lower one-sided tolerance with ranges for calculating the capability ratio
In case of an upper one-sided specification limit, the capability indices are calculated with the
formula
Q_MS = U_MS / (C_p · Δp_U)        Q_MP = U_MP / (C_p · Δp_U)
In the case of a lower one-sided specification limit, the capability indices are calculated using
the formula
Q_MS = U_MS / (C_p · Δp_L)        Q_MP = U_MP / (C_p · Δp_L)
The resolution RE must be less than 1/10 of the one-sided specification interval (C_p · Δp_U) or (C_p · Δp_L).
The capability ratio corresponds to the ratio of this one-sided specification interval to the ex-
panded measurement uncertainty 𝑈𝑀𝑆 or 𝑈𝑀𝑃 (analogous to the ratio of the total tolerance
range to 2 ⋅ 𝑈𝑀𝑆 or 2 ⋅ 𝑈𝑀𝑃 in the two-sided case).
In the case of a lower one-sided specification limit with a given nominal value X_nom, the capability indices are calculated using the formula

Q_MS = U_MS / (X_nom − L)        Q_MP = U_MP / (X_nom − L)

The resolution RE must be less than 1/10 of the one-sided specification interval (U − X_nom) or (X_nom − L).
In order to classify measurement systems and measurement processes, it is recommended to calculate the minimum tolerance at which both the measurement system and the measurement process are still suitable. This is achieved by rearranging the formulas for QMS or QMP and inserting QMS_max or QMP_max, which yields the minimum possible tolerance for the measurement system TMS_min or the measurement process TMP_min:
T_MS_min = 2 · U_MS / Q_MS_max

T_MP_min = 2 · U_MP / Q_MP_max
Note: The minimum tolerance must always be seen in connection with the re-
spective measurement task.
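The capability ratio and the minimum tolerance derived from it can be sketched in Python. The limit Q_MP_max = 0.3 in the usage example is purely illustrative, since the guideline leaves the actual limit values to customer/supplier agreement:

```python
def q_ratio(U, T):
    """Capability ratio for a two-sided tolerance T: Q = 2*U / T (often stated in %)."""
    return 2 * U / T

def t_min(U, Q_max):
    """Minimum tolerance at which the system/process is still suitable: T_min = 2*U / Q_max."""
    return 2 * U / Q_max
```

With an expanded uncertainty U_MP = 0.003 mm and an assumed limit Q_MP_max = 0.3 (30 %), the minimum tolerance would be T_MP_min = 0.02 mm.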
Figure 7-4: Observed capability index C_p_obs plotted against the actual capability index C_p_real as a function of Q_MP.
On the basis of the course of the curve in Figure 7-4 it can be estimated, for example, that
with an actual capability index of 𝐶𝑃𝑟𝑒𝑎𝑙 = 2,20 and a capability ratio of the measurement pro-
cess of 𝑄𝑀𝑃 = 40% only a capability index of 𝐶𝑃𝑜𝑏𝑠 = 1,33 is observed. A much better agree-
ment is obtained with a 𝑄𝑀𝑃 of 10% with the observed capability index of 𝐶𝑃𝑜𝑏𝑠 = 2,09.
It was assumed for the graphical representation, in a simplified way, that the production pro-
cess is normally distributed. The 99.73% dispersion range needed to calculate the capability
index is thus estimated by six standard deviations.
For the observed standard deviation:

s_obs = √( s_real² + s_MP² )
From the curves (Figure 7-4), the C_p_real and C_p_obs values can be read off for typical C_p values as a function of Q_MP (Table 7-1).
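The relationship between observed and actual capability can be reproduced numerically. The sketch below assumes a normally distributed process (C_p = T / (6·s)), a coverage factor k = 2 (so u_MP = U_MP/2) and s_MP = u_MP; these are simplifying assumptions consistent with the figure, not normative definitions:

```python
import math

def cp_observed(cp_real, q_mp):
    """Observed Cp when a process with true capability cp_real is measured
    by a measurement process with capability ratio q_mp (as a fraction).

    Uses Cp = T/(6*s), Q_MP = 2*U_MP/T and U_MP = 2*u_MP (k = 2), with
    s_MP = u_MP. The tolerance T cancels out, so it is set to 1.
    """
    s_real = 1 / (6 * cp_real)   # from Cp_real = T / (6*s_real) with T = 1
    s_mp = q_mp / 4              # u_MP = U_MP/2 = (Q_MP*T/2)/2
    s_obs = math.sqrt(s_real ** 2 + s_mp ** 2)
    return 1 / (6 * s_obs)
```

Under these assumptions the sketch reproduces the example values from the text: C_p_real = 2.20 with Q_MP = 40 % yields C_p_obs ≈ 1.33, while Q_MP = 10 % yields C_p_obs ≈ 2.09.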
Table 7-1: Relationship between C_p_real and C_p_obs for typical C_p values
Note: The graph and the table only show correlations that are to be expected in
general and are not necessarily exactly correct in individual cases, because
it cannot be guaranteed that the possible measurement errors have actually
occurred in the specific case. It is therefore not permitted to backtrack from
an observed capability index 𝐶𝑃𝑜𝑏𝑠 to a capability index 𝐶𝑃𝑟𝑒𝑎𝑙 that may actu-
ally exist and use it for approvals of machines and processes.
If the capability ratios comply with the corresponding limit values, the measurement system and measurement process are classified as suitable.
Note 1: The limit values have deliberately not been specified in more detail. The
limit values proposed here are to be understood as guideline values which
cannot necessarily be generalised. The limit values must therefore be
agreed between customers and suppliers in each individual case. If the pro-
posed limit values are not realistic, individual agreements must be made
depending on the characteristic and its specification (large or small/very
small tolerances). The entire measurement process must always be considered. Both economic and technical considerations must therefore be taken into account when setting limit values. The variation limit should therefore be set as high as possible and only as low as necessary.
Note 2: If the critical capability index 𝐶𝑝𝑘 of the manufacturing process has been verified at a sufficiently high level (e.g. Cpk ≥ 2.0) with a suitable measurement
process, a separate consideration of the expanded measurement uncer-
tainty at the specification limits is no longer necessary, since the variance
of the measurement process is included in the process evaluation and no
parts are to be expected in the limit range of the tolerance.
Two documents result from the proof of capability:
the test report of the proof of capability for documenting the result
the complete documentation of the proof of capability with the aim of complete traceability of all parameters and tests
Both documents can be in purely digital form. The minimum requirements are described below.
Test report of the proof of capability
General information
Capability ratios:
o Combined measurement uncertainty of the measurement system 𝑢𝑀𝑆 and the
measurement/inspection process 𝑢𝑀𝑃
o Expanded measurement uncertainty of the measurement system 𝑈𝑀𝑆 and the
measurement/inspection process 𝑈𝑀𝑃
o Capability ratio of the measurement system 𝑄𝑀𝑆 and the measurement/in-
spection process 𝑄𝑀𝑃
o Applied coverage factor k
o Capability ratio limit of the measurement system 𝑄𝑀𝑆_𝑚𝑎𝑥 and the measure-
ment/inspection process 𝑄𝑀𝑃_𝑚𝑎𝑥
o Optional:
- Minimum tolerances of the measurement system 𝑇𝑀𝑆_𝑚𝑖𝑛 and the
measurement/inspection process 𝑇𝑀𝑃_𝑚𝑖𝑛
- Guard band to be maintained at the specification limits
- Applied guard band factor g
Decision on the capability of the measurement/inspection process: Inspection process
suitable/not suitable
The complete documentation of inspection process capability includes all elements of the
test report and additionally the following information:
All components of the uncertainty budget including input variables, calculation, result
and source
Relevant environmental conditions
Individual measurements for the MS test and MP test, including persons who con-
ducted the trials and the trial site
Procedure for monitoring the stability of the proof of capability
Measurement process optimisation, e.g. by selecting a more suitable measuring de-
vice (e.g. replacing a manual measuring device with an automated measuring device)
(Chapter 7.4.1)
Note: In this context, “customer” is understood to mean both the external and the
internal customer.
To improve the measurement system/process, a reduction of the standard uncertainties must be aimed for; this can be achieved, for example, via the following influence components:
Measured parts
Examiner/Operator
If it is determined after optimisation that the capability ratios cannot be achieved, a new risk analysis must be carried out. The basis for this is the risk analysis within the framework of the design or process FMEA, with emphasis on the severity of the fault. As a guiding question for the conditional release, those responsible must ask themselves whether the potentially larger deviations from the nominal value are acceptable.
In coordination with the respective customer (internal or external), conditional approvals can
be issued. The approvals can have both a temporary character (special release) and a per-
manent character.
The conditional approval may include proof of the effectiveness of measures to avoid the
consequences of errors. Such measures may include, for example:
An approval with increased variation limit must be agreed with the customer.
If the QMP exceeds the variation limit for the capability of a measurement process, a tempo-
rary approval (with conditions, if necessary) can still be granted in consultation with the cus-
tomer if the production process shows a very high capability index (Cpk≥ 2). In this case, how-
ever, the systematic error must be demonstrably small and the stability of the measurement
process must be monitored.
In case of small tolerances the procedure in accordance with Chapter 7.4.5.1 can be applied.
Coverage of the characteristic tolerances
If an optimisation and/or temporary approval of the measurement process is not possible, the
characteristic tolerance can be adapted to the new situation within the scope of a tolerance
consideration, if necessary. This change must be agreed with the customer if it affects cus-
tomer-relevant specifications.
Special strategies
7.4.5.1 Fine tolerance rule for measurement processes with small tolerances
Small tolerances and small geometric elements
“Small tolerance” is not a standardised term; it is meant to express that these tolerances are very small compared to standard conditions. A characteristic of small tolerances is that they are very difficult and costly to manufacture and measure. This means that the capability indices and capability ratios usual with standard tolerances are generally not achievable; physical and technical limits are often reached.
Small tolerances are often (but not only) found on small geometric elements. Small geomet-
ric elements are those where the measuring geometries available for a measurement are
very small and only a few data points can be recorded for reliable evaluation. Examples are:
Length measurements with very short evaluation lengths, radius measurements with very
small radius segments or angle measurements with very short leg lengths and less-than-
ideal surfaces. This is often aggravated by the fact that the start and end points for the re-
spective geometric element are often defined in a fuzzy manner and that no ideal shapes are
available due to surface errors, so that larger measurement errors must inevitably be ex-
pected.
It is not possible to determine a generally valid limit for small tolerances, since, in addition to the very small tolerance values, the geometry as well as the physical and technological conditions must also be considered in conjunction with the measurement task.
In the event that the usual capability ratio limits cannot be achieved with small tolerances, the FT rule can be applied after weighing up alternatives. This method represents only one possible approach; other approaches and solutions are also possible depending on the application.
Implementation/Application:
The basics for determining the capability ratio QMP correspond completely to the descriptions in Chapters 5 to 7; however, new parameters are determined and evaluated on the basis of this capability ratio and the specification. The maximum permissible measurement process variance ΔMP_max serves as the evaluation criterion.
Depending on the application, the following specifications must first be made via company
guidelines:
1) The limit tolerance TFT [in µm] defines the tolerance from which the FT rule should take effect. Using the limit tolerance TFT and the capability ratio limit QMP_max, the limit case G of the maximum permissible measurement process variance can be determined:

G = QMP_max · TFT [µm]
2) The limit correction coefficient y (in %) controls the extent to which the maximum permissible measurement process variance ΔMP_max is “corrected” as a function of the tolerance. It is an allowance added to the standard capability ratio limit.
Figure 7-6 schematically represents the FT rule. The blue straight line describes the relationship between the characteristic tolerance T and the maximum permissible measurement process variance ΔMP_max, taking into account the capability ratio limit QMP_max, i.e. the standard case. The dashed green line describes the maximum permissible measurement process variance corrected by y, ΔMP_max_korr, including the abort criterion for capability ratios QMP > 100%.
Furthermore, the scope of validity of the FT rule covers only actually existing measurement process variances that lie below the limit case G.
115
The capability of the inspection process is considered proven if the actual measurement process variance ΔMP is below the corrected maximum permissible measurement process variance ΔMP_max_korr and the capability ratio of the measurement process QMP does not exceed 100%.
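The decision logic of the FT rule can be sketched as follows. Note that the exact correction curve and the parameters Q_MP_max, T_FT and y are defined by company guidelines; the sketch assumes a simple percentage surcharge y below T_FT, capped by the abort criterion QMP ≤ 100 % — an illustrative assumption, not the normative rule:

```python
def ft_rule_capable(T, delta_mp, q_mp_max=0.30, t_ft=20.0, y=0.20):
    """Sketch of the FT-rule decision; T, delta_mp and t_ft in the same unit (e.g. um).

    q_mp_max, t_ft and y are hypothetical company parameters. For T >= t_ft the
    standard limit applies; below t_ft the limit gets the surcharge y, capped so
    that Q_MP = delta_mp / T never exceeds 100 %.
    """
    g = q_mp_max * t_ft                           # limit case G
    delta_max = q_mp_max * T                      # standard limit (blue line)
    if T >= t_ft:
        return delta_mp <= delta_max              # standard case, FT rule not active
    delta_max_korr = min(delta_max * (1 + y), T)  # abort criterion: Q_MP <= 100%
    return delta_mp <= delta_max_korr and delta_mp < g
```

With the assumed defaults, a tolerance of 10 µm and ΔMP = 2 µm would pass, while ΔMP = 4 µm would fail.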
The figure shows in general terms how increasing the number of measured values 𝑛∗ leads
to a reduction in the standard uncertainty.
The mathematical determination of the resulting measurement process optimisation is not trivial, since multiple measurements can affect the different uncertainty components differently, so it is recommended to repeat the acceptance test with multiple measurements.
Figure 7-7: Reduction of the measurement uncertainty by increasing the number of repeat
measurements n*
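The reduction shown in the figure follows the usual 1/√n behaviour of a mean value. The sketch below applies it to a single uncertainty component; as the text notes, only components that actually average out over repeats (e.g. repeatability) may be reduced this way, while systematic components are unaffected:

```python
import math

def reduced_component(u, n_star):
    """Standard uncertainty of the mean of n* repeat measurements: u / sqrt(n*).

    Valid only for components that average out over repeats; systematic
    components keep their full value in the uncertainty budget.
    """
    return u / math.sqrt(n_star)
```

For example, averaging n* = 4 repeats halves the repeatability component.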
8 Special measurement processes
8.1 Classification and mating
If the desired function (e.g. guide play of nozzle body and needle) cannot be technically and
economically achieved by means of direct process control due to small tolerances, and the
manufacturing processes are not capable of achieving the small tolerances, there are differ-
ent approaches to achieve them. The use of classification processes is one approach to solv-
ing this problem.
Typical applications
Diameter classification of rolling elements for the subsequent mating of a rolling bear-
ing
Drive cardan shaft (classification between joint piece and ball hub)
Height classification of shims for the use/generation of functional dimensions in as-
sembly
Mating of cylinder crankcase and crankshaft
Mating of cylinder and piston
Classification
Classification is the process by which parts are measured to 100% and assigned to dimen-
sional groups (or classes, varieties).
Mating (pairing/matching/metering)
The classification process is a prerequisite for mating. The parts to be combined (paired) are
100% read out (classified) and by joining (pairing) the corresponding class groups for each
part the small functional tolerances can be achieved (2 partners from corresponding classes
are mated).
Class jumper
During classification processes, there are always so-called “class jumpers” at the class
boundaries – even with the smallest measurement uncertainty – i.e. a classified part with a
dimension near the class limit can also enter ONE adjacent class during a subsequent meas-
urement due to the measurement uncertainty.
Class width
If the characteristic tolerance is divided into two or more groups, the width of a group is called the class width (CW). This must always be smaller than the functional class width.
The extended class width results from the coverage of the measurement uncertainty at the
class boundaries.
Reference value
The class width (CW), and NOT the characteristic tolerance, is used as the reference value for the evaluation of classification processes.
Resolution
The resolution of the display may be up to a maximum of 20% of the class width for classifi-
cation processes. Example: Class width 0.5 μm => maximum permissible resolution 0.1 μm
Linearity
The linearity of the measuring method must also be taken into account or checked during
classification processes. For this purpose, in the case of non-linear measurement systems,
several standards must be measured in the individual classes as part of the MS experiment.
e.g. one reference part per class with a nominal size at the respective class mid-point.
It must be ensured that each class can be measured reproducibly for non-linear systems.
The following capability ratio requirement therefore applies for assessing the classification process:

QMP-CLASS ≤ 1.0
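The two classification requirements stated above can be checked together. The sketch assumes that Q_MP-CLASS is formed like Q_MP but with the class width CW as the reference value, which follows from the “Reference value” paragraph above:

```python
def classification_checks(U_mp, cw, resolution):
    """Checks for a classification process, using the class width CW as reference.

    Q_MP-CLASS = 2*U_MP / CW must not exceed 1.0, and the display resolution
    may be at most 20 % of the class width.
    """
    q_class = 2 * U_mp / cw
    return q_class, (q_class <= 1.0 and resolution <= 0.2 * cw)
```

For a class width of 0.5 µm this reproduces the example above: the maximum permissible resolution is 0.1 µm.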
Note 1: The influence of the object (shape error) can have a major impact on the
inspection process, particularly in the case of classification processes,
since the test is typically always performed at the same position on the
component. This should be given special attention in the uncertainty
budget.
Note 2: Figure 8-2 and Figure 8-3 represent suitable and unsuitable classification processes. In Figure 8-2, the component can be sorted into a maximum of 2 classes. In Figure 8-3, the component can be sorted into a maximum of 3 classes and the inspection process is therefore not suitable.
Figure 8-3: Result of an unsuitable measurement process
The validation requirements apply both to third-party software and to software created within
the company.
Commercial standard software for general use, used within its intended scope, can be con-
sidered as sufficiently validated. [ISO 17025, P.42] [12]
If the software is an integral part of the measurement system and calibration is carried out on a standard (a material measure) which corresponds to the characteristic measured in use, the software is validated during the calibration.
8.3 Insufficient sample sizes for MS and MP test
If it is not possible to determine the standard measurement uncertainty in experiments with
sufficient statistical data (< 30 measurement values), due to small production lots, for exam-
ple, then the coverage factor k must be calculated using the effective degrees of freedom as
a quantile of the student-t-distribution instead of the standard distribution (see GUM Annex G
[29]). This increases the k-factor depending on the actual sample size. This applies both to
the calculation of the expanded measurement uncertainty of the measurement system 𝑈𝑀𝑆
as well as the measurement process 𝑈𝑀𝑃 .
In a simple experiment, the number of degrees of freedom f is the number of measurements n minus 1.

f = n − 1
Example: m = 15 repeat measurements are carried out on r = 1 reference part. The k value must be determined and adjusted from this.

f = (r · m) − 1 = (1 · 15) − 1 = 14

A coverage factor of k = 2,20 is taken from Table 8-1 (k values for 95.45% as a function of the degrees of freedom) for f = 14.
In practice, however, the problem arises that several experiments are carried out and therefore the effective degrees of freedom would have to be calculated using the Welch-Satterthwaite formula (GUM Annex G4.1). Simplified, the effective degrees of freedom f_eff can be approximately determined using the degrees of freedom f_min and the standard measurement uncertainty u_min of the test with the fewest measured values, according to the following calculation:

f_eff = (u_min / u_MP)⁴ · f_min
The effective degrees of freedom must always be rounded down to a whole number. The k-factor can then be taken from Table 8-1 using 𝑓𝑒𝑓𝑓 . If the test with the fewest number of measurements is the MS test, then 𝑢𝑚𝑖𝑛 = 𝑢𝐸𝑉𝑅 ; if the MP test has the fewest measured values, then 𝑢𝑚𝑖𝑛 = 𝑢𝐸𝑉𝑂 . If, due to the maximum condition, one of these components is omitted, the calculation of the effective degrees of freedom 𝑓𝑒𝑓𝑓 refers only to the remaining component.
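The simplified formula and the rounding rule can be sketched directly; function name and the example numbers are illustrative only:

```python
import math

def effective_dof(u_min, u_mp, f_min):
    """Simplified effective degrees of freedom per the formula above,
    f_eff = (u_min / u_MP)^4 * f_min, rounded down to a whole number."""
    return math.floor((u_min / u_mp) ** 4 * f_min)
```

For example, with u_min = 0.8·u_MP and f_min = 14 the effective degrees of freedom drop to 5, which leads to a noticeably larger k-factor from the table.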
1st question
How has the development characteristic changed due to the development processes? In other words, what effect 𝐸 do the development results have on the target figure? Example: By what amount could the spring stiffness be increased by an appropriate design (material/geometry)?
When designing the measurement process, the question must then be answered as to how
large the maximum uncertainty of the measurement process may be in order to be able to
prove this effect 𝐸. Simplified, it is assumed that the ranges of the expanded measurement
uncertainty for the two measurements before and after the change may not overlap. Then it
can be assumed that the true values before and after the change cannot be the same and an
effect has been detected. However, the magnitude of the effect is still subject to uncertainty.
If the expanded measurement uncertainties overlap, then the true values could lie within the
overlap range and thus be identical. In this case, an effect cannot be proven.
Figure 8-5: Effect is not detectable
Therefore, assuming that the measurement uncertainty before and after the change is the
same, the effect 𝐸 to be proven must be greater than twice the expanded measurement un-
certainty 𝑈𝑀𝑃
𝐸 ≥ 2 ⋅ 𝑈𝑀𝑃
or the expanded measurement uncertainty to demonstrate the effect 𝐸 must be less than half
the effect 𝐸.
U_MP ≤ (1/2) · E
Thus, the following requirement applies to the ratio of the expanded measurement uncertainty U_MP of the measurement process to the effect 𝐸 to be resolved:

2 · U_MP / E ≤ 1
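The non-overlap criterion can be written as a one-line check. The generalisation to unequal uncertainties before and after the change is an assumption based on the non-overlap argument in the text, which reduces to E ≥ 2·U_MP when both are equal:

```python
def effect_detectable(effect, U_before, U_after=None):
    """True if the expanded-uncertainty ranges before and after the change
    cannot overlap: E >= U_before + U_after (E >= 2*U_MP for equal uncertainties)."""
    if U_after is None:
        U_after = U_before
    return effect >= U_before + U_after
```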
2nd question
Does the design characteristic meet the target value within the specified limit values? In other words: can the development result be evaluated as OK or NOK? There are one-sided or two-sided limited target values, comparable to the lower and upper specification limits in production.
To answer this question, the decision (OK/NOK) must take into account the measurement
uncertainty uMP at the specification limits. Reference is made to Chapter 5.7. The develop-
ment result must lie within the conformance zone. If the result is within the tolerance, but also
within the safety zones (guard bands 𝑔𝐿𝐴 and 𝑔𝑈𝐴 ), compliance with the specification is not
proven.
It follows that in the case of two-sided development specifications 𝑈𝐸 and 𝐿𝐸 the expanded measurement uncertainty 𝑈𝑀𝑃 must be less than half the development tolerance 𝑇𝐸 = 𝑈𝐸 − 𝐿𝐸 , otherwise there is no conformance zone:

2 · U_MP / T_E < 1
9 Proof of capability of attribute inspection processes
9.1 Basic preliminary remarks
In terms of a zero-defect strategy and/or a philosophy of continuous improvement of pro-
cesses (CIP), attributive testing is not suitable.
Reactions only occur after the limit values have been exceeded, i.e. rework or scrap
has already been produced. In the event of a variation limit being exceeded, the ex-
tent to which it has been exceeded is not apparent.
If gauges have lead dimensions, pseudo rejects are produced and production costs
are increased
Variations and their changes in the process can neither be recorded nor evaluated as
long as a relatively high reject rate does not occur
The sensor technology is often too imprecise to be able to evaluate with attributive
tests with small tolerances (forces during application are often different or too high,
discrimination ability of the sensory perceptions is too low)
Therefore, the aim should be to replace attributive testing, wherever possible, with an indicating measuring instrument in order to control a process in the sense of a zero-defect strategy or CIP before non-conforming products (rework or rejects) arise.
The result of attributive tests is highly influenced by the individual handling of the respective
examiner, the variable characteristics and properties of the test parts and often takes place in
a less-than-ideal environment. Therefore, even with attributive testing, high accuracy require-
ments cannot be met. An empirical value for geometric variables is, for example, that for
basic tolerance levels smaller than IT 9 (IT: ISO tolerance in accordance with DIN ISO 286-
1), the attributive testing is too uncertain and should be replaced by measuring tests as far as
possible. The use of attributive inspection processes for detection of faults with FMEA sever-
ity 9 or 10 is explicitly not recommended, or requires further safeguards. In this case, test
methods based on variable data are preferable.
If attributive inspection processes are essential, the following points should be taken into ac-
count in planning and implementation:
9.2 Proof of capability for attributive inspection processes
Whether proof of capability for attributive testing is possible at all, with restrictions, or not at
all, depends to a large extent on whether standards and correspondingly graded or suitable,
meaningful test parts are available for the respective test characteristic.
The currently common procedures and methods for the proof of capability are not universally
applicable. Depending on the task, one or more methods have to be used, for some complex
tasks/contexts, task-specific solutions have to be developed and validated. None of these
known methods can guarantee a “100% secure” test decision.
In contrast to variable measurement methods, the process capability of the manufacturing process can neither be proven nor maintained with attributive testing, so it cannot be ensured that only a few components are produced close to the limit.
Figure 9-1: Possible wrong decisions depending on the capability of the production process
Thus, limit values are never generally “sufficiently secure” and, like the methods to be ap-
plied, must be agreed with the customer. Ultimately, however, even if proof of capability has
been provided, it cannot be guaranteed that no faulty parts will reach the customer.
In the case of proof of capability for attributive inspection processes, there is a distinction
made between two situations.
Attributive inspection processes with discretised results for test characteristics that are in principle also measurable (e.g. limit gauges). The results of the attributive test decisions can then be compared with the reference measurements of a test lot determined in the proof of capability.
The measurement uncertainty can be determined and proof of capability can be pro-
vided analogous to the tests to be measured.
Typically, the signal detection method or the analytical method is recommended for
this purpose (see Chapter 9.5.1).
Attributive inspection processes with purely discrete results (e.g. visual test of non-
measurable characteristics) can only be compared with reference decisions of a refer-
ence examiner or reference team.
The measurement uncertainty cannot be determined and a proof of capability is limited to simulating the inspection process with known test lots. This means that attributive test equipment analyses can be examined for their capability by means of probability-based characteristic values. With the following methods, however, this does not take place in the physical dimension as with process capability, but in the probability-theoretical dimension.
The characteristic values are limited to more or less subjective criteria with which the test results are compared with expected results. Comparisons between examiners and against the reference examiner(s) are common.
Typically, for nominal scales, the Cohen’s or Fleiss’ kappa method and the effective-
ness method are recommended. For ordinal scales the Kendall’s W method addition-
ally applies. The prerequisite for this is that, ideally, the test lot and the test sequence
sufficiently reflect the real situation (for example, representative ratio of defect types,
typical working environment, usual cycle time).
Kappa is a central characteristic value for the nominal case. There are different formulations (Fleiss’ kappa or Cohen’s kappa), but they coincide in the simplest case. Fleiss’ kappa is applicable to more extensive cases than Cohen’s kappa, for example when more than two inspection processes are to be compared or the agreement of an examiner with him/herself is to be evaluated. In practice, the Fleiss method is therefore recommended. A characteristic value for the ordinal case is Kendall’s W.
The characteristic values kappa and Kendall’s W are corrected for chance agreement, i.e. they would fall to zero if the agreement of the test decisions could be explained by pure chance. A characteristic value of 1 means that all agreement beyond pure coincidence has been found. Significance tests exist to check whether the null hypothesis (“the agreement can be explained by pure chance”) can be rejected for small values.
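The chance correction can be illustrated for the simplest case, Cohen’s kappa for two examiners with nominal ratings. This is the standard textbook formula, shown here only to make the correction concrete:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two examiners with nominal ratings of the same parts.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
    p_e the agreement expected by chance from the marginal frequencies.
    """
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

Two examiners who agree on 75 % of parts but whose marginal rates already predict 50 % chance agreement obtain kappa = 0.5, not 0.75 — exactly the correction described above.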
The long-term verification of the attributive characteristic values in terms of stability
can also be carried out with these methods, if comparable (ideally identical) lots of
test parts are available over longer periods of time and the entire procedure is carried
out in the same way.
Figure 9-2: Characteristics that are discrete or have been made discrete
Carrying out the proof of capability must be based on a representative test lot. This test lot
shall present all the characteristics and types of defects of the production process in a repre-
sentative proportion so as not to distort the probabilities of occurrence of the test results.
Combined with the requirement that a sufficient number of test parts in the critical area of
measurement uncertainty are also present, it follows that often a very large number of tests
and test parts are necessary to obtain sufficiently usable data. If attributive inspection processes are also to assess the accuracy and not only the repeatability, the test parts need reference evaluations based either on a variable reference measurement or, if that is not possible, on the reference decision of a reference examiner or even reference testing teams.
It follows from this that a meaningful proof of capability for an attributive inspection process
can, if at all, only be achieved with considerable effort.
The following graphic illustrates the risks involved in creating a test lot:
Figure 9-3: Meaningfulness in relation to the uncertainty as a function of the position of the part in the
tolerance
– Due to the position of the part, the uncertainty is not apparent, although it is very large.
The inspection process seems to be secure, but it is not
– The uncertainty leads to different results due to the borderline situation but is very small.
The inspection process appears to be insecure, but it is not
– The probability of a wrong decision for a given uncertainty is 0%; the object therefore only allows the statement that the uncertainty is smaller than the distance of the characteristic value from the specification limit. Since, at a typical confidence level of 95%, only 2.5% of the decisions fall unilaterally outside the expanded measurement uncertainty (i.e. one in 40 replicates), sufficient repeat measurements are required to determine the uncertainty with sufficient quality.
– If the test part lies exactly on the tolerance limit, the probability of a wrong decision is
50%, regardless of the size of the measurement uncertainty. The part does not contribute an-
ything to the determination of the measurement uncertainty, but falsely gives the impression
of an unsuitable inspection process.
– The probability of “error not detected” is between 0% and 50%. If the true value is known, this part can be used to determine the measurement uncertainty or to estimate the capability.
– The probability of “false alarm/pseudo error” is between 0% and 50%. If the true value is known, this part can be used to determine the measurement uncertainty or to estimate the capability.
– The probability of a wrong decision is 0% for a given uncertainty, the situation corre-
sponds to .
9.5 Possible methods for the evaluation of attributive inspection pro-
cesses
Due to the limitations mentioned in the previous chapters, only typical methods and areas of
application are listed here, but no explicit recommendation for a specific procedure is given.
Furthermore, a single procedure is often not sufficient to evaluate all aspects of an attributive
inspection process.
It follows from the remarks in Chapter 9.3 that methods which do not concede any wrong decisions to the examiners, but require absolute agreement with other examiners and with the reference decision (e.g. the Short Method), are not suitable for an assessment of measurement uncertainty. Such methods are only suitable for a quick overview (screening) of whether there are significant shortcomings in the inspection process, but not for proving the capability of the inspection process.
Scope of application
The signal detection method is applicable when technically measurable characteristics are inspected in a simplified attributive manner. The test decision chooses from two categories (yes/no, good/bad, ...).
The objective
The objective of this method is to identify, by comparison of the attributive test results with the measured characteristic values, the range in which a clear decision in the attributive test is not possible.
Prerequisite
The prerequisite for this method is a test lot of at least 50 parts that can be inspected repeat-
edly and that cover the entire scope of the attributive test of this characteristic. In the exam-
ple case of an attributive form gauge (setting gauge, straightening gauge), haptic/tactile test
(switch actuation) or visual test (colour), the entire tolerance field should be covered and, in
addition, exceeded to such an extent that bad parts that clearly exceed the tolerance field are
also clearly identified. This ensures that the applicability of the gauge is checked in the entire
field of application. It makes sense to keep the dimensional distances sufficiently small in the
range of the expected uncertainty so that an appropriate determination of the uncertainty
range can be made.
Figure 9-4: Selection of test parts for the signal detection method
In addition, all parts must be measurable and must also be measured for the evaluation. It must be possible to carry out this measurement with a sufficiently small and known measurement uncertainty. The test parts of the test lot are clearly marked, but the markings must not be recognisable to the examiners.
Execution
A reference value must be determined for each test part using a measurement process. The test parts are evaluated by at least two examiners in at least two runs each, in random order, and the evaluations are recorded. If no examiner influence is to be expected, four runs shall be performed in random order.
Characteristic values
After completion of the tests, the parts are sorted in ascending order of the reference value. Assuming that wrong decisions occur near the tolerance limits, the range from the last part assessed completely consistently in all test runs up to the first part that is again assessed completely consistently is determined. If the attributive inspection process checks both specification limits, this range is determined at both specification limits. The mean width of this range is taken as the uncertainty range and is simply equated with 2 · U_ATTR. The capability index of the measurement process Q_MP is calculated analogously to Chapter 7.1.2 according to the formula:

Q_MP = 2 · U_ATTR / T
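The evaluation described above can be sketched in a few lines. A minimal sketch for a single tolerance limit, assuming the decisions are recorded as "OK"/"NOK" strings per part; the function name and the example data are hypothetical:

```python
def signal_detection(parts, tolerance):
    """Evaluate the signal detection method at a single tolerance limit.

    parts: list of (reference_value, decisions) tuples, where decisions holds
    all attributive judgements ("OK"/"NOK") recorded for that part.
    Returns the uncertainty range 2*U_ATTR and the capability index Q_MP.
    """
    ordered = sorted(parts, key=lambda p: p[0])
    # parts whose judgements were not identical across all runs/examiners
    inconsistent = [i for i, (_, d) in enumerate(ordered) if len(set(d)) > 1]
    if not inconsistent:
        raise ValueError("no inconsistent decisions: 2*U_ATTR cannot be determined")
    lo = max(min(inconsistent) - 1, 0)                  # last fully consistent part below
    hi = min(max(inconsistent) + 1, len(ordered) - 1)   # first fully consistent part above
    two_u_attr = ordered[hi][0] - ordered[lo][0]        # width of the uncertainty range
    return two_u_attr, two_u_attr / tolerance           # Q_MP = 2*U_ATTR / T
```

For a bilateral characteristic, the same evaluation would be carried out at each limit and the mean width taken, as described in the text.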
Figure 9-5: Results of the signal detection method
Figure 9-6: Value progression of the reference values with determined measurement uncertainties
As one of the few procedures for attributive tests, the signal detection method can actually determine the measurement uncertainty and the capability index of an attributive inspection process.
Because the measurement uncertainty of the attributive inspection process is known, a guard band can be applied in the design of the gauge to reduce the probability of wrong decisions.
Disadvantages of the procedure
– The attributive test must evaluate exactly one characteristic. If, for example, a setting gauge checks position, squareness and a dimensional size (e.g. diameter or width), and these assessments cannot be distinguished in the case of a negative test decision, this method cannot be used.
– The assessment can only be made in two categories (e.g. "OK" and "NOK").
– The gradation of the test parts essentially determines the measurement uncertainty that is obtained. If the gradation is too coarse or the number of parts too small, the measurement uncertainty may be significantly overestimated.
– If no inconsistent decisions are found, the measurement uncertainty cannot be determined. For simplification, it can then be assumed that the range 2 · U is smaller than the distance between three consecutive test parts in the region of the tolerance limit; this distance is equated with 2 · U_ATTR when calculating Q_MP.
– Comparisons with the reference values in the sense of "is the decision right?" must be treated with caution, as a gauge may have a wear allowance or safety allowance.
9.5.1.2 Method of advanced signal detection
Scope of application
The method of advanced signal detection is applicable when measurable characteristics are inspected in a simplified attributive manner, typically with cylindrical limit gauges (applicable for each limit position) or individual gauges (e.g. test pins). The test decision is a choice between two categories (yes/no, good/bad, ...).
The objective
The objective of advanced signal detection comprises two stages:
Stage 1: The width of the area of non-conformance in which the examiners do not reach clear decisions is determined. In addition, it is evaluated whether the examiners' decisions are correct.
Stage 2: The measurement uncertainty for the intended conformity decisions is determined with all relevant uncertainty components.
Prerequisite
– Clearly defined and dimensionally graded test parts with reference measurements must be available (taken from production or produced specifically).
– Traceability (e.g. via limit plug gauges calibrated by DAkkS (accreditation body) or via internal traceable calibration) must be ensured.
– 9 test parts per limit/working point must be available with a defined gradation (≤ 5% of T).
Note: From a statistical point of view, the result improves with smaller gradations of the test parts (e.g. 2.5% of T), but more test parts (9 → 18) must then be available.
– The test parts must lie in the analysis range of ±20% around the limit/working point.
Execution
Stage 1 GR&R:
1. Determine test parts according to specifications
2. Select required test parts from production or manufacture them specifically
3. Carry out reference measurements with sufficiently accurate measuring equipment
4. Assign test parts to the corresponding limits
5. Record the decisions of the 3 examiners in at least 2 test runs, typically blind
6. Evaluate the decisions: areas of non-conformance dx and, if applicable, fx
7. Evaluate the results, make decisions, document
Characteristic values
Stage 1 GR&R:
%GRR = (dx + fx) / T · 100 %
with
o dx = width of the area of non-conformance at the respective limit/working point
o fx = range with incorrect (wrong) decisions compared to the reference measurements and tolerance limits
Stage 2:
U_MS and U_MP, determined from all relevant influencing variables u_xi
– Uncertainty range of the test system at the respective limit compared to the tolerance, taken from the area of non-conformance of the individual retests of all examiners (= characteristic value 1)
– By comparison with the reference measurements/tolerance limits, it must be checked whether the decisions of the examiners are correct (= characteristic value 2)
– Secured conformity decisions based on the measurement uncertainty (MU)
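The stage 1 characteristic value is a simple ratio. A minimal sketch, assuming dx and fx have already been read off the decision records and are expressed in the same unit as the tolerance T; the function name and the example numbers are hypothetical:

```python
def percent_grr(dx, fx, tolerance):
    """Stage 1 characteristic value of the advanced signal detection method:
    %GRR = (dx + fx) / T * 100, where dx is the width of the area of
    non-conformance and fx the range with wrong decisions compared to the
    reference measurements and tolerance limits."""
    return (dx + fx) / tolerance * 100.0
```

For example, with dx = 0.02 mm, fx = 0.01 mm and T = 0.4 mm, the result is 7.5 %GRR.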
9.5.1.3 Analytical method
Scope of application
The analytical method is applicable when measurable characteristics are inspected in a simplified attributive manner. The test decision is a choice between two categories (yes/no, good/bad, ...).
The objective
The uncertainty range and the bias of the gauge shall be determined by means of a performance curve of the test system.
Prerequisite
The test pieces are intended to cover the range of application of the gauge, and a reference value must be determined for each test piece in accordance with Chapter 9.5.1.1. The method can be used for both unilateral and bilateral limits. In the following, the method is described in a simplified way using a lower specification limit.
Execution
At least 8 parts must be selected from the available test parts, showing acceptance numbers a+ from 0 to 20 at m = 20 repeat inspections. The smallest part should have the acceptance number a+ = 0, the largest a+ = 20. The parts in between have acceptance numbers 1 ≤ a+ ≤ 19. The reference values of the parts are ideally equidistant.
The acceptance probabilities of the selected objects are determined as follows:
P_a = (a+ + 0.5) / m   if a+ / m < 0.5 and a+ ≠ 0
P_a = 0.5              if a+ / m = 0.5
P_a = (a+ − 0.5) / m   if a+ / m > 0.5 and a+ ≠ 20
The probabilities P_a are plotted over the reference values in a normal probability plot and approximated with a normal distribution. The 95% uncertainty range and the bias can then be read from the probability plot.
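The graphical fit in the probability plot can also be carried out numerically as a probit regression. A minimal sketch using only the Python standard library, with the continuity-corrected acceptance probabilities as given in the text; the function name is hypothetical, and a real evaluation would also inspect the goodness of fit:

```python
from statistics import NormalDist

def analytic_probit_fit(ref_values, accept_counts, m=20):
    """Fit the performance curve of the analytical method by probit regression.

    ref_values: reference value of each part; accept_counts: acceptance
    number a+ out of m repeat inspections.  Returns (mu, sigma): the 50%
    acceptance point (used for the bias) and the dispersion of the fitted
    normal performance curve.
    """
    nd = NormalDist()
    xs, zs = [], []
    for x, a in zip(ref_values, accept_counts):
        # continuity-corrected acceptance probabilities (see text)
        if a / m < 0.5 and a != 0:
            p = (a + 0.5) / m
        elif a / m > 0.5 and a != m:
            p = (a - 0.5) / m
        elif a / m == 0.5:
            p = 0.5
        else:
            continue  # a+ = 0 or a+ = m: the probit is infinite, these parts anchor the ends
        xs.append(x)
        zs.append(nd.inv_cdf(p))  # probit transform
    # least-squares straight line z = (x - mu) / sigma
    n = len(xs)
    xbar, zbar = sum(xs) / n, sum(zs) / n
    slope = (sum((x - xbar) * (z - zbar) for x, z in zip(xs, zs))
             / sum((x - xbar) ** 2 for x in xs))
    sigma = 1.0 / slope
    mu = xbar - zbar * sigma
    return mu, sigma
```

The 95% dispersion range of the fitted curve, 2 · 1.96 · sigma, can then be taken as 2 · U_MP, and the distance of mu from the specification limit as the bias.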
Characteristic values
The 95% dispersion range of the performance curve corresponds to 2 · U_MP, from which the capability index Q_MP can be calculated.
Assessment and limit values
The capability index Q_MP is evaluated according to Chapter 7.2. In this case, the bias of the performance curve cannot be taken into account as a measure of dispersion and must be evaluated explicitly as an error. A significance test can be used to check whether the detected bias is significant or merely a random effect.
9.5.2.1 Short method
Scope of application
The short method can be used for characteristics that are discrete or made discrete, but it cannot determine any measurement uncertainty. On the contrary, in many publications the measurement system is declared unsuitable as soon as an undoubtedly existing measurement uncertainty becomes recognisable. This is only partly correct, because the size of the measurement uncertainty remains unknown. The short method is therefore more suitable for rough screening or stability monitoring.
The objective
The short method compares the test decisions of several examiners on several parts. If all decisions are identical, the inspection process is considered suitable.
Prerequisite
20 parts are required to cover the application area. The test parts of the test lot are clearly marked, but the markings must not be recognisable to the examiners. The notes from Chapters 9.3 and 9.4 apply.
Execution
At least two examiners shall alternately test all parts at least twice under repeat conditions in
random order. The typical setup consists of two examiners who alternately perform 2 series
of measurements on all 20 parts.
Characteristic values
This method provides only the number n≠ of non-conformances as a characteristic value, i.e. the number of parts for which the test decisions are not identical.
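The characteristic value n≠ is a plain count. A minimal sketch, assuming all judgements per part are collected in one list; the function name is hypothetical:

```python
def count_nonconformances(decisions):
    """Characteristic value n≠ of the short method: the number of parts
    whose attributive judgements are not identical across all examiners
    and runs.  decisions[i] holds all judgements recorded for part i."""
    return sum(1 for judgements in decisions if len(set(judgements)) > 1)
```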
9.5.2.2 Effectiveness
Scope of application
The method of effectiveness is applicable to characteristics that are discrete or made discrete, but it cannot determine any measurement uncertainty. It determines in a simple way how many test decisions are correct, how many pseudo errors occurred ("OK" parts detected as "NOK", Type 1 error) and how many bad parts were not detected (undetected "NOK" parts, Type 2 error). Large test scopes are necessary to arrive at a meaningful statement, because small percentages must be verified for pseudo errors and undetected bad parts. The procedure can be evaluated in parallel with the methods in Chapters 9.5.2.3 to 9.5.2.5 using the data recorded there.
The objective
The procedure should ensure that the probability of pseudo errors and undetected bad parts is low. These wrong decisions are therefore simply counted, but not statistically evaluated.
Prerequisite
At least 50 parts are necessary for the implementation; at least 100–200 parts are recommended for small error risks (≤ 5%). The notes from Chapters 9.3 and 9.4 apply.
Execution
At least two examiners shall alternately test all parts at least twice under repeat conditions in
random order.
Characteristic values
The numbers of decisions counted:
n= "correct"
nf− "false bad" (pseudo errors, Type 1 errors)
nf+ "false good" (unrecognised bad parts, Type 2 errors)
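The three counts can be accumulated in a single pass over the recorded decisions. A minimal sketch, assuming "OK"/"NOK" strings and a reference decision per part; the function name is hypothetical:

```python
def effectiveness_counts(decisions, reference):
    """Count the effectiveness characteristic values over all individual
    decisions.  decisions[i]: list of judgements ("OK"/"NOK") for part i;
    reference[i]: reference decision for part i.
    Returns (n_correct, n_false_bad, n_false_good)."""
    n_correct = n_false_bad = n_false_good = 0
    for judgements, ref in zip(decisions, reference):
        for d in judgements:
            if d == ref:
                n_correct += 1
            elif ref == "OK":            # "OK" part judged "NOK": pseudo error (Type 1)
                n_false_bad += 1
            else:                        # "NOK" part judged "OK": undetected bad part (Type 2)
                n_false_good += 1
    return n_correct, n_false_bad, n_false_good
```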
9.5.2.3 Bowker test
Scope of application
The Bowker test can be used for characteristics that are discrete or made discrete, but it cannot determine any measurement uncertainty. No reference decisions are necessary for the test parts. The examiners are compared in pairs.
The objective
The Bowker test checks whether there are significant differences between different examin-
ers. Whether the individual tests have led to the correct result in each case is not taken into
account.
Prerequisite
At least 40 parts are necessary for the implementation; larger sample sizes are recommended. The notes from Chapters 9.3 and 9.4 apply.
Execution
At least two examiners shall alternately test all parts three times under repeat conditions in
random order.
Each of the 40 results of examiner A or B is assigned to one of three classes:
Class 1: all 3 repetitions gave a "good" result
Class 2: no uniform result within the 3 repetitions
Class 3: all 3 repetitions gave a "bad" result
The results of the tests can then be summarised in a table:
Characteristic values
This table is now tested for symmetry using the Bowker test. The only characteristic value provided by the procedure is the test decision of the Bowker test.
If there are no significant differences between the examiners, the frequencies n_ij determined in the table are sufficiently symmetrical with respect to the main diagonal.
The null hypothesis to be tested, H0: m_ij = m_ji (i, j = 1 … 3 with i ≠ j), states that the expected frequencies m_ij that are symmetrical to the main diagonal are identical. The test statistic is compared with the 1 − α quantile of the χ² distribution with 3 degrees of freedom.
χ² = Σ_{i>j} (n_ij − n_ji)² / (n_ij + n_ji) > 8.603
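The Bowker test statistic is straightforward to compute from the class table. A minimal sketch for a k × k frequency table, assuming rows belong to one examiner and columns to the other; the function name is hypothetical, and cells with n_ij = n_ji = 0 are skipped because they contribute nothing:

```python
def bowker_statistic(n):
    """Bowker test statistic for a k x k frequency table n[i][j]:
    sum over i > j of (n_ij - n_ji)^2 / (n_ij + n_ji)."""
    k = len(n)
    chi2 = 0.0
    for i in range(k):
        for j in range(i):
            s = n[i][j] + n[j][i]
            if s > 0:  # empty symmetric pairs contribute nothing
                chi2 += (n[i][j] - n[j][i]) ** 2 / s
    return chi2
```

The resulting value is then compared with the quantile of the χ² distribution with 3 degrees of freedom given in the text.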
9.5.2.4 Kappa method
Scope of application
The kappa method is suitable for nominal characteristics, i.e. the characteristic categories do
not have a natural ranking. The results are therefore only checked for consistency. Example:
Within the scope of product audits, the colour of products is checked for conformity with the respective reference sample. Audit teams from three different shifts are provided with a test lot, and the results are compared in order to decide which team still needs instruction and whether all teams meet a minimum kappa value.
The objective
Kappa compares the observed proportion of agreement between the examiners with the randomly expected agreement. If a sufficiently large beyond-chance proportion of agreement is found, the inspection process is considered suitable.
Prerequisite
An evaluation makes sense from about 100 test parts and 3 examiners with repetition. However, this recommendation is heuristic in the sense that it depends on the risk of misclassifications and not exclusively on theoretical limits. The setup of the trial must be agreed between the customer and the supplier. The test parts should contain all result categories in representative proportions and also cover the limit ranges of the categories. In order to assess the correctness of the test decisions (comparison of examiner with reference), the test parts can be assigned a reference decision by a reference examiner or a reference team. The test parts must be clearly marked, but the markings must not be recognisable to the examiners. The notes from Chapters 9.3 and 9.4 apply.
Execution
A pseudo-random sequence for presenting the test parts to the examiners should be used in order to avoid recognition of the parts and the mere repetition of earlier results. The examiners evaluate the test parts alternately per run, in random order, under the typical conditions of the real inspection process.
Characteristic values
κ = (P_observed − P_random) / (1 − P_random)

The possible value range is from −1 to 1.
Table 9-2: Results matrix for two examiners
P_observed = (a + d) / (a + b + c + d)
The Cohen count simply compares the paired decisions per part and run; for example, for part 17 in the second run, the decisions of the two examiners are compared and the result is assigned to one of the four fields. If there are more than two examiners (e.g. examiners A, B, C), they can therefore only be compared in pairs (A×B, B×C, C×A).
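For two examiners, Cohen's kappa follows directly from the four fields of the results matrix. A minimal sketch, assuming a = both "OK", d = both "NOK", b and c the two off-diagonal fields, and the usual estimate of the random agreement from the marginal proportions; the function name is hypothetical:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa from a 2x2 agreement table: a = both examiners 'OK',
    d = both 'NOK', b and c = the two disagreement fields."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # randomly expected agreement from the marginal proportions
    p_random = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_random) / (1 - p_random)
```

For example, 95 agreements out of 100 paired decisions (a = 45, b = 3, c = 2, d = 50) give a kappa of about 0.9, which illustrates how much of the raw 95% agreement is "used up" by chance agreement.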
When counting according to Fleiss, all decisions are considered independent and paired agreements are counted per part. This also makes it possible to use one examiner's repeated decisions to determine both a measure of agreement with themselves (A×A, B×B, C×C) and a measure of agreement across all examiners involved (A×B×C).
The aim is to get kappa as close to 1 as possible. Kappa limits of ≥ 0.75...0.9 are applied after consultation with customers, but they cannot completely exclude misclassification. There is consensus in the relevant sources that a kappa of 0.9 or above is considered acceptable. Other limit values can be defined by agreement between customer and supplier.
There are further developments of the basic formula given above for special cases, such as the existence of an external standard. The kappa method according to Fleiss is suitable for all practice-relevant combinations of examiners and repetitions and is therefore more universally applicable than the Cohen method. For a more detailed description of this complex topic, reference is made to the original sources (see [27], [32]).
Comparing the actual agreement with the randomly expected agreement is preferable to merely determining the proportion of agreement, which would be partly random and would therefore provide only an insufficient assessment of the attributive inspection process.
9.5.2.5 Kendall's W
Scope of application
Kendall's rank-correlation analysis evaluates characteristics whose categories have a natural ranking. In the case of non-conformance, the extent of the difference is therefore also relevant: the greater the error in the rankings, the worse the characteristic value Kendall's W. For example: products are audited and defects are classified into three categories according to severity ("A, B, C defects"). An error from A to C is then worse than one from A to B. Audit teams from three different shifts are provided with a test lot at regular intervals, and the results are compared to decide, using Kendall's W, which team still needs instruction and whether all teams always meet an internal minimum value for W.
The objective
The Kendall characteristic W compares the proportion of agreement between examiners with
the randomly expected agreement by rank on the scale of values.
Prerequisite
The requirements described above for the kappa method apply. The test parts should contain all result categories in a representative manner and also cover the border areas of the categories. In his original publication [Kendall1939], however, Kendall also examines data sets smaller than 100 test parts.
Execution
In the simplest case, the scale of values (at least 3 levels) is mapped to the natural numbers, with one representing the lowest rating. A linear shift of the evaluation scale has no effect on the result.
Characteristic values
Kendall's W (coefficient of concordance) is a characteristic value that also takes rank ties (coincident values) into account. The formula is laborious to evaluate (because sorting is necessary) and is usually calculated using appropriate software [53].

W = 12 · Σ_{i=1}^{N} (T_i − T̄)² / (m² · (n³ − n) − m · t)
The possible value range is from 0 to 1.
The rank length t_jk is the number of cases that share rank k for examiner j.
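Despite the sorting, the computation fits in a short function. A minimal sketch with average ranks and the standard tie correction, assuming ratings[j][i] is the ordinal rating of part i by examiner j; the function name is hypothetical:

```python
def kendalls_w(ratings):
    """Kendall's coefficient of concordance W for ratings[j][i]: the ordinal
    rating of part i by examiner j.  Ties receive average ranks, and the
    standard tie-correction term is subtracted from the denominator."""
    m = len(ratings)        # number of examiners
    n = len(ratings[0])     # number of parts
    rank_sums = [0.0] * n
    tie_term = 0.0
    for row in ratings:
        order = sorted(range(n), key=lambda i: row[i])
        ranks = [0.0] * n
        i = 0
        while i < n:
            j = i
            while j + 1 < n and row[order[j + 1]] == row[order[i]]:
                j += 1                       # extend the group of tied values
            avg = (i + j) / 2 + 1            # average rank of the tied group
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            t = j - i + 1
            tie_term += t ** 3 - t           # tie-correction contribution
            i = j + 1
        for idx in range(n):
            rank_sums[idx] += ranks[idx]
    mean = sum(rank_sums) / n
    s = sum((r - mean) ** 2 for r in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n) - m * tie_term)
```

Complete agreement between the examiners gives W = 1; purely random rankings give values near 0.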
10 Assessment of continuous capability
The assessment of continuous capability, previously often referred to as stability monitoring
or measurement stability, has gained in importance due to the standard requirements in ISO
9001:2015. In order to constantly secure product quality and meet the requirements of the
standards, the continuous capability of the measurement and inspection processes must be
ensured.
The proof of capability prior to the start of series production presented in the previous chap-
ters usually only covers a relatively short period of a few minutes to a few hours, so that the
uncertainty at the time of measurement is sufficiently described. However, influencing varia-
bles of the testing process can change over a longer period of time (days to months), which
can cause the measurement or inspection process to lose its original capability. Causes include environmental changes, drift, wear and tear, ageing, contamination, etc.
For this reason, proof of continuous capability after series production has started is neces-
sary in order to be able to detect significant changes in the influencing variables in good time.
If the continuous capability has been proven over a certain period of time, the proof of capa-
bility in accordance with Chapter 7 does not have to be repeated during this time.
Depending on the type and result of the stability test, it may be useful to adjust the calibration
cycle.
For the smallest tolerances, it may be necessary to carry out a new adjustment before each measurement. Stability monitoring is then no longer necessary.
10.1 Methods
Various methods can be used to demonstrate continued capability. If a single method cannot detect all changes, several methods need to be combined. The test frequency and test
intervals should be determined according to the risk assessment (see Chapter 4) and the ex-
pected stability of the inspection process and can range from several tests per shift to one
test per year. The determination of the methods used and the test intervals shall be justified
and documented in a comprehensible manner.
The following methods are proposed as examples:
Regular calibration
Regular calibration of the measuring equipment is essential within the scope of the pro-
cesses for test equipment monitoring (see Chapter 4), but in many cases it alone is not suffi-
cient for comprehensive stability monitoring, since the calibration is carried out under ideal
conditions (calibration laboratory) and not under actual conditions of use. In addition, only in-
dividual components of the measurement system (e.g. measuring device or adjustment mas-
ter, test standard) are assessed, and not the entire measurement and inspection process.
Preventive maintenance
Prior to maintenance, a stability test must be carried out to ensure that the capability was
maintained until the last measurement. After maintenance, the corresponding proof of capa-
bility must be repeated, depending on the scope of maintenance.
Requalification of employees
It must be ensured that the examiners are trained in the measurement process and that
these competencies are maintained. A regular comparison of the examiners is necessary to
ensure a uniform evaluation of the defect patterns, particularly in attributive tests.
– All significant influencing variables (drift, wear, ageing of material, electronic components, contamination, changes in environmental conditions, ...) shall be evaluated to determine whether they can cause a change in the inspection process over time. The ranking of the influencing components in the uncertainty budget can be a good aid for assessing the important influencing variables: all significant standard uncertainties are listed there by size and can be used for systematic processing.
– Risk assessment and consideration of the expected stability of the inspection process and, if necessary, definition of the test parts
– Evaluation of which methods from Chapter 10.1 are suitable for the analysed influencing variables in order to detect a change over time in good time
– Determination of the test frequency and test intervals
– Determination of a reaction plan in the case of unstable or unsuitable measurement processes that cannot be continued
– Regular implementation of the defined monitoring method(s) and documentation of the results
– Regular evaluation and analysis, and initiation of corrective measures in the case of measurement processes that are not stable or not suitable for continued use
– Revision, documentation and archiving of the results; check at regular intervals whether the defined methods are effective or whether there is potential for improvement
The measurement stability chart corresponds methodically and mathematically to the well-known quality control chart, which is also used in the SPC framework for the control of production processes. However, since no classic product quality characteristic is evaluated here and the chart is not used for control in the actual sense, the term quality control chart is avoided and replaced by the term stability chart.
The points from Chapter 10.2, from the analysis of the influencing variables to the definition of the reaction plan, apply. The implementation and evaluation are as follows:
– Experience values from comparable inspection processes with the same stability part
– Calibration data of the stability part, if a calibrated reference part or standard is used
– Results on the measurement process location x̄ from the MS test, if the same reference part is used
– Measurement process location from a preliminary run over a significant period of time (at least 20 measurements/samples with all significant influencing factors)
Note: In the context of a preliminary run, the actual behaviour of the inspection process must be observed and documented over a significant period of time. For example, repeat measurements can be taken at fixed intervals (e.g. every 15 minutes, once per hour, ...) and over a fixed period of time (e.g. at least one working day, all shifts relevant to the operation, up to several working days). The action limits and, if necessary, the adjustment intervals can then be determined from the data obtained.
If a preliminary run is used, an initial revision of the test frequency and of the test and adjustment intervals can already be carried out at this point.
– If sufficiently documented experience from similar measurement processes is available, appropriate action limits can be adopted from these measurement processes.
– If the dispersion is small compared to the resolution RE, so that the values in the control chart only fluctuate by ±1 digit, the limits can appropriately be set to ±2.5 digits.
– When specifying the action limits manually, it must be ensured that the dispersion (uncertainty) of the measurement process cannot increase compared to the proof of capability and the associated approval, as the continued capability might then no longer be given.
Figure 10-2: Example manual definition of the action limits with small fluctuations in the range of 1 digit
– As long as the action limits are not exceeded and there are no further serious stability violations (run, trend, middle third, ...) or irregularities, the inspection process is considered stable/continuously capable and can be continued.
– If the recorded measurement results are extremely stable when using standards and/or calibrated reference parts, an extension of the calibration intervals may appear appropriate. This requires a more exact alignment of the test points and test methods for stability monitoring and calibration; only if it is ensured that the stability measurements also represent the calibration measurements can a clear decision be made. Alternatively, the test interval can be extended depending on the risk assessment.
– In the case of stability violations or other irregularities, the causes must be analysed, corrective measures must be implemented and documented and, if necessary, the test facility must be cleaned/improved and readjusted. It may also be necessary to shorten the calibration interval or the test interval.
– If instabilities still occur despite optimisation of the test and adjustment intervals, the measuring equipment must be improved.
– If possible and useful, the findings should be transferred to comparable inspection processes.
Note: The term "adjustment" must not be equated with calibration here. It includes calibration, but also more extensive adjustment and correction measures on the elements of the measurement system and the measurement process. Before/at the beginning of and after the adjustment, the measurement system must be calibrated in order to detect changes in the inspection process!
Documentation, archiving
The recorded and evaluated data must be documented and archived.
If a stability chart is used for the demonstration of continued capability as proposed in Chapter 10.3, the measurement uncertainty shown in it may replace several components of the uncertainty budget, e.g. u_RE, u_EVR, u_MS_REST, u_AV, u_IA, u_STAB and u_T. To determine the standard measurement uncertainty from the long-term stability u_LSTAB, the standard deviation of all measured values (n ≥ 30) entered in the stability chart can be used in a simplified way:

u_LSTAB = s_LSTAB

The extent to which the components are actually replaced by u_LSTAB depends on the setup of the stability test, the stability parts used and the examiners performing the test, and must be checked in each individual case.
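The simplified estimate u_LSTAB = s_LSTAB is a plain sample standard deviation over the stability chart values. A minimal sketch using the Python standard library, enforcing the n ≥ 30 requirement from the text; the function name is hypothetical:

```python
from statistics import stdev

def u_lstab(stability_values):
    """Simplified standard uncertainty from long-term stability:
    u_LSTAB = s_LSTAB, the sample standard deviation of all values
    entered in the stability chart (n >= 30 required)."""
    if len(stability_values) < 30:
        raise ValueError("at least 30 stability measurements are required")
    return stdev(stability_values)
```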
11 Index of formula symbols
Symbol | Name | Term
uMP-REST | Standardunsicherheit aus weiteren Einflüssen | standard uncertainty from other influences not included in the analysis of the measurement process
uSTAB | Standardunsicherheit aus der Vergleichbarkeit zu unterschiedlichen Zeitpunkten (Kurzzeitstabilität) | standard uncertainty from stability of the measurement system (short-term stability)
uLSTAB | Standardunsicherheit aus fortdauernder Eignung (Langzeitstabilität) | standard uncertainty from long-term stability of the measurement system
uTEMP | Standardunsicherheit aus der Temperatur | standard uncertainty from temperature
QMP-KLASS | Eignungsgrenzwert Messprozess bei Klassenbildung | capability ratio limit of the measurement process for class formation
T | Toleranz | tolerance
TMPmin | Minimal zulässige Messprozesstoleranz | minimum permissible tolerance of the measurement process
TOLMSmin | Minimal zulässige Messsystemtoleranz | minimum permissible tolerance of the measurement system
TE | Entwicklungstoleranz | development tolerance
k | Erweiterungsfaktor | coverage factor
g | Schutzabstandsfaktor | guard band factor
gUA | Schutzabstand an der oberen Spezifikationsgrenze U für Annahme | guard band on the upper specification limit U for acceptance
gLA | Schutzabstand an der unteren Spezifikationsgrenze L für Annahme | guard band on the lower specification limit L for acceptance
gUR | Schutzabstand an der oberen Spezifikationsgrenze U für Rückweisung | guard band on the upper specification limit U for rejection
gLR | Schutzabstand an der unteren Spezifikationsgrenze L für Rückweisung | guard band on the lower specification limit L for rejection
a | Grenzwert | variation limit
a+ | Annahmezahl | acceptance number
b | Verteilungsfaktor | distribution factor
LA | Untere Annahmegrenze unter Berücksichtigung des Schutzabstands | lower acceptance limit taking the guard band into account
UR | Obere Rückweisegrenze unter Berücksichtigung des Schutzabstands | upper rejection limit taking the guard band into account
UA | Obere Annahmegrenze unter Berücksichtigung des Schutzabstands | upper acceptance limit taking the guard band into account
LE | Untere Grenze der Entwicklungsspezifikationen | lower limit of the development specifications
UE | Obere Grenze der Entwicklungsspezifikationen | upper limit of the development specifications
12 References
[1] AIAG – Chrysler Corp., Ford Motor Co., General Motors Corp. 2010. Measurement Systems Analysis, Reference Manual. 4th edition. Michigan, USA.
[2] AIAG – Automotive Industry Action Group. 2016. IATF 16949:2016. Anforderungen für die Lieferkette.
[3] DAkkS – Deutsche Akkreditierungsstelle. 2010. Richtlinie DAkkS-DKD-R 4-3, Blatt 10.1: Kalibrieren von Messmitteln für geometrische Messgrößen. Kalibrieren von Bügelmessschrauben mit planparallelen oder sphärischen Messflächen. DAkkS, Braunschweig. Retrieved September 23, 2020 from https://www.dakks.de/sites/default/files/dakks-dkd-r_4-3_blatt_10.1_20101221_v1.1.pdf.
[4] DAkkS – Deutsche Akkreditierungsstelle. 2013. EA-4/02 M: 2013: Ermittlung der Messunsicherheit bei Kalibrierungen (Deutsche Übersetzung). Deutsche Akkreditierungsstelle, Braunschweig.
[5] Deutscher Kalibrierdienst. 1998. DKD-4: Rückführung von Mess- und Prüfmitteln auf nationale Normale. DKD bei der PTB, Braunschweig.
[6] DIN – Deutsches Institut für Normung. 1989. DIN 55350-12:1989-03: Begriffe der Qualitätssicherung und Statistik; Merkmalsbezogene Begriffe. Beuth Verlag, Berlin.
[7] DIN – Deutsches Institut für Normung. 1995. DIN 1319-1: Grundlagen der Messtechnik – Teil 1: Grundbegriffe. Beuth Verlag, Berlin.
[8] DIN – Deutsches Institut für Normung. 1996. DIN 1319-2: Grundlagen der Messtechnik – Teil 2: Begriffe für die Anwendung von Messgeräten. Beuth Verlag, Berlin.
[9] DIN – Deutsches Institut für Normung. 1996. DIN 1319-3: Grundlagen der Messtechnik – Teil 3: Auswertung von Messungen einer einzelnen Messgröße, Messunsicherheit. Beuth Verlag, Berlin.
[10] DIN – Deutsches Institut für Normung. 1998. ISO/TR 14253-2: Geometrical product specifications (GPS) – Inspection by measurement of workpieces and measuring equipment – Part 2: Guide to the estimation of uncertainty in GPS measurement, in calibration of measuring equipment and in product verification. Beuth Verlag, Berlin.
[11] DIN – Deutsches Institut für Normung. 1999. DIN 863-3:1999-04: Prüfen geometrischer Größen – Meßschrauben – Teil 3: Bügelmeßschrauben, Sonderausführungen; Konstruktionsmerkmale, Anforderungen, Prüfung. Beuth Verlag GmbH, Berlin.
[12] DIN – Deutsches Institut für Normung. 2004. DIN EN ISO 10012:2004-03: Messmanagementsysteme – Anforderungen an Messprozesse und Messmittel (ISO 10012:2003). Dreisprachige Fassung EN ISO 10012:2003. Beuth Verlag, Berlin.
[13] DIN – Deutsches Institut für Normung. 2005. DIN EN ISO 9000:2005: Qualitätsmanagementsysteme – Grundlagen und Begriffe. Beuth Verlag, Berlin.
[14] DIN – Deutsches Institut für Normung. 2005. DIN EN ISO/IEC 17025:2005-08: Allgemeine Anforderungen an die Kompetenz von Prüf- und Kalibrierlaboratorien (ISO/IEC 17025:2005). Deutsche und Englische Fassung EN ISO/IEC 17025:2005. Beuth Verlag, Berlin.
[15] DIN – Deutsches Institut für Normung. 2006. DIN ISO 3534-1 to 3534-3: Statistik – Begriffe und Formelzeichen. Beuth Verlag, Berlin.
[16] DIN – Deutsches Institut für Normung. 2008. DIN EN ISO 9001:2008: Qualitätsmanagementsysteme – Anforderungen. Beuth Verlag, Berlin.
[17] DIN – Deutsches Institut für Normung. 2008. DIN ISO 11095:2008-04: Lineare Kalibrierung unter Verwendung von Referenzmaterialien (ISO 11095:1996); Text Deutsch und Englisch. Beuth Verlag, Berlin.
[18] DIN – Deutsches Institut für Normung. 2009. DIN EN ISO 15530-3:2009-07: Geometrische Produktspezifikation und -prüfung (GPS) – Verfahren zur Ermittlung der Messunsicherheit von Koordinatenmessgeräten (KMG) – Teil 3: Anwendung von kalibrierten Werkstücken oder Normalen. Beuth Verlag, Berlin.
[19] DIN – Deutsches Institut für Normung. 2010. DIN ISO/IEC Guide 99:2007: Internationales Wörterbuch der Metrologie (VIM). Beuth Verlag, Berlin.
[20] DIN – Deutsches Institut für Normung. 2011. DIN EN ISO 3611:2011-03: Geometrische Produktspezifikation (GPS) – Längenmessgeräte: Bügelmessschrauben – Konstruktionsmerkmale und messtechnische Merkmale (ISO 3611:2010); Deutsche Fassung EN ISO 3611:2010. Beuth Verlag GmbH, Berlin.
[21] DIN – Deutsches Institut für Normung. 2012. DIN EN ISO/IEC 17024:2012-11: Konformitätsbewertung – Allgemeine Anforderungen an Stellen, die Personen zertifizieren (ISO/IEC 17024:2012); Deutsche und Englische Fassung EN ISO/IEC 17024:2012. Beuth Verlag GmbH, Berlin.
[22] DIN – Deutsches Institut für Normung. 2016. DIN ISO 22514-1:2016-08: Statistische Methoden im Prozessmanagement – Fähigkeit und Leistung – Teil 1: Allgemeine Grundsätze und Begriffe (ISO 22514-1:2014); Text Deutsch und Englisch. Beuth Verlag, Berlin.
[23] DIN – Deutsches Institut für Normung. 2017. DIN 863-1:2017-02: Geometrische Produktspezifikation (GPS) – Messschrauben – Teil 1: Bügelmessschrauben; Grenzwerte für Messabweichungen. Beuth Verlag, Berlin.
[24] DIN – Deutsches Institut für Normung. 2017. DIN EN ISO/IEC 17025:2017: Allgemeine Anforderungen an die Kompetenz von Prüf- und Kalibrierlaboratorien (ISO/IEC 17025:2017). Deutsche und Englische Fassung EN ISO/IEC 17025:2017. Beuth Verlag, Berlin.
[25] DIN – Deutsches Institut für Normung. 2018. DIN 32937:2018-04: Mess- und Prüfmittelüberwachung – Planen, Verwalten und Einsetzen von Mess- und Prüfmitteln. Beuth Verlag GmbH, Berlin.
[26] DIN – Deutsches Institut für Normung. 2018. DIN EN ISO 14253-1: Geometrische Produktspezifikation (GPS) – Prüfung von Werkstücken und Messgeräten durch Messen – Teil 1: Entscheidungsregeln für die Feststellung von Übereinstimmung oder Nichtübereinstimmung mit Spezifikationen. Beuth Verlag, Berlin.
[27] Joseph L. Fleiss and Jacob Cohen. 1973. The Equivalence of Weighted Kappa and the Intraclass Correlation Coefficient as Measures of Reliability. Educational and Psychological Measurement 33, 3, 613–619. DOI: https://doi.org/10.1177/001316447303300309.
[28] ISO – International Organization for Standardization. 1998. ISO 3650:1998-12: Geometrical Product Specifications (GPS) – Length standards – Gauge blocks. Beuth Verlag, Berlin.
[29] ISO – International Organization for Standardization. 2008. ISO/IEC Guide 98-3:2008. Beuth Verlag, Berlin.
[30] ISO – International Organization for Standardization. 2008. ISO/WD 22514-7: Capability and performance – Part 7: Capability of measurement processes. Geneva.
[31] ISO – International Organization for Standardization. 2009. ISO 10360-2:2009: Geometrical product specifications (GPS) – Acceptance and reverification tests for coordinate measuring machines (CMM) – Part 2: CMMs used for measuring linear dimensions.
[32] ISO – International Organization for Standardization. 2010. ISO/TR 14468:2010: Selected illustrations of attribute agreement analysis (ICS 03.120.30).
[33] ISO – International Organization for Standardization. 2011. ISO 8015:2011(en): Geometrical product specifications (GPS) – Fundamentals – Concepts, principles and rules. Beuth Verlag, Berlin.
[34] ISO – International Organization for Standardization. 2012. ISO/TR 14253-6:2012-11: Geometrische Produktspezifikation (GPS) – Prüfung von Werkstücken und Messgeräten durch Messen – Teil 6: Allgemeine Grundsätze für die Annahme und Zurückweisung von Messgeräten und Werkstücken. Beuth Verlag, Berlin.
[35] ISO – International Organization for Standardization. 2017. ISO 22514-2:2017-02: Statistical methods in process management – Capability and performance – Part 2: Process capability and performance of time-dependent process models. Beuth Verlag, Berlin.
[36] JCGM – Joint Committee for Guides in Metrology. [VIM3] 2.20 repeatability condition of measurement. Retrieved September 23, 2020 from https://jcgm.bipm.org/vim/en/2.20.html.
[37] JCGM – Joint Committee for Guides in Metrology. [VIM3] 4.14 resolution. Retrieved September 23, 2020 from https://jcgm.bipm.org/vim/en/.
[38] JCGM – Joint Committee for Guides in Metrology. 2008. JCGM 100:2008: Evaluation of measurement data – Guide to the expression of uncertainty in measurement.
[39] M. G. Kendall. 1938. A new measure of rank correlation. Biometrika 30, 1-2, 81–93. DOI: https://doi.org/10.1093/biomet/30.1-2.81.
[40] M. G. Kendall and B. B. Smith. 1939. The problem of m rankings. Ann. Math. Statist. 10, 3, 275–287. DOI: https://doi.org/10.1214/aoms/1177732186.
[41] VDA – Verband der Automobilindustrie. 2013. VDA Band 5.1: Rückführbare Inline-Messtechnik im Karosseriebau. Ergänzungsband zu VDA Band 5, Prüfprozesseignung. VDA e.V., Berlin.
[42] VDA – Verband der Automobilindustrie. 2016. VDA Band 16: Dekorative Oberflächen von Anbau- und Funktionsteilen im Außen- und Innenbereich von Automobilen. VDA e.V., Berlin.
[43] VDA – Verband der Automobilindustrie. 2016. VDA Band 6.1: QM-Systemaudit Serienproduktion. VDA e.V., Berlin.
[44] VDA – Verband der Automobilindustrie. 2018. VDA Band 1: Dokumentierte Information und Aufbewahrung. 4th fully revised edition. VDA e.V., Berlin.
[45] VDA – Verband der Automobilindustrie. 2019. AIAG & VDA FMEA-Handbuch: Design-FMEA, Prozess-FMEA, FMEA-Ergänzung – Monitoring & Systemreaktion. VDA e.V., Berlin.
[46] VDA – Verband der Automobilindustrie. 2020. VDA Band 4 Ringbuch: Sicherung der Qualität in der Prozesslandschaft. 3rd revised and expanded edition, 2020. VDA e.V., Berlin.
[47] VDA – Verband der Automobilindustrie. 2020. VDA Besondere Merkmale (BM) 04/2020: Besondere Merkmale, Prozessbeschreibung. 2nd updated edition, April 2020. VDA e.V., Berlin.
[48] VDI – Verein Deutscher Ingenieure e.V. 2001. VDI/VDE/DGQ 2618 Blatt 10.1: Prüfmittelüberwachung – Prüfanweisung für Bügelmessschrauben. VDI/VDE-Gesellschaft Mess- und Automatisierungstechnik, Düsseldorf.
[49] VDI – Verein Deutscher Ingenieure e.V. 2005. VDI/VDE 2627 Blatt 2: Messräume – Leitfaden zur Planung, Erstellung und zum Betrieb. VDI/VDE-Gesellschaft Mess- und Automatisierungstechnik, Düsseldorf.
[50] VDI – Verein Deutscher Ingenieure e.V. 2013. VDI/VDE 2600 Blatt 1: Prüfprozessmanagement – Identifizierung, Klassifizierung und Eignungsnachweise von Prüfprozessen. Beuth Verlag, Berlin.
[51] VDI – Verein Deutscher Ingenieure e.V. 2014. VDI/VDE/DGQ 2618 Blatt 11.1: Prüfmittelüberwachung – Prüfanweisung für mechanische Messuhren. Beuth Verlag, Berlin.
[52] VDI – Verein Deutscher Ingenieure e.V. 2015. VDI/VDE 2627 Blatt 1: Messräume – Klassifizierung und Kenngrößen – Planung und Ausführung. VDI/VDE-Gesellschaft Mess- und Automatisierungstechnik, Düsseldorf.
[53] VDMA – Verband Deutscher Maschinen- und Anlagenbau. 2020. Draft VDMA 8720:2020-09: Leitfaden zur Klärung der Eigenschaften, Anforderungen und Abnahme von Messsystemen und Messprozessen. VDMA.
[54] Peter T. Wilrich and Hans-Joachim Henning, Eds. 1987. Formeln und Tabellen der angewandten mathematischen Statistik. 3rd, completely revised edition by P.-Th. Wilrich and H.-J. Henning. Springer, Berlin, New York.
Index

A
Absolute measurement · 92
Action limits · 24, 148, 149
Adjustment · 22
Adjustment master · 61, 80, 145
ANOVA · 20, 52, 69, 80, 81, 82, 86, 87, 88, 89, 93
Area of non-conformance · 133, 134
Attributive test
  Gauge · 20
Auflösung (resolution) · 151, 152
Averaging · 65, 69, 112

B
Bediener (operator) · 151
Bias · 20, 79, 80, 81, 82, 135, 136

C
C value · 107
Calibration · 18, 21, 22, 32, 33, 35, 36, 37, 42, 47, 48, 49, 50, 61, 62, 77, 78, 79, 97, 120, 133, 145
Calibration uncertainty · 14, 37, 62, 77, 78, 81, 84, 85, 94
Capability · 13, 23, 32, 36, 39, 48, 50, 53, 60, 65, 67, 68, 77, 106, 111, 113, 126, 128, 129, 145, 146, 147, 149, 150
Capability ratio
  Capability ratios · 67, 74, 99, 100, 101, 105, 106, 107, 110, 119
Characteristic · 19, 36, 64, 105, 109, 110, 120, 132
Characteristic values · 43, 127
Classification · 117
CMM · 32
Combined measurement uncertainty · 70, 101
Comparative measurement · 21, 32, 80, 92
Conformity · 19, 28, 32, 35, 60, 72, 73, 96, 141
Conformity assessment · 19
Conformity decisions · 68, 133, 134
Control chart · 24, 147, 149
Correct value · 21
Correction · 83, 92, 149
Coverage probability · 19, 71

D
Definitions · 16, 80, 98
D-optimal design · 69
D-optimal plan · 69

E
Eignung (suitability) · 152
Eignungskennwert
  Eignungskennwerte (capability ratios) · 152
Einflüsse (influences) · 152
Environment · 23, 60, 64, 90, 91
Evaluation methods · 65
Examiner · 19, 64, 79, 86, 88, 90, 113, 130, 133, 134, 136, 138, 139, 140, 141, 142, 146
expanded measurement uncertainty · 60, 67, 70, 71, 73, 74, 75, 77, 80, 88, 100, 101, 119, 123

F
Fehlergrenzwert (error limit value) · 151
Form error · 88, 108
Formula symbol · 19, 110, 151

G
Grenzwert der Messabweichung (limit of measurement error) · 151
Guidelines · 13, 63, 96, 97, 98
GUM · 16, 19, 37, 53, 68, 70, 75, 79, 110, 121

I
Influences · 23, 53, 60, 64, 67, 83, 91, 93, 101, 108, 109
Influencing components · 61, 62, 64, 68, 69, 73, 94, 112, 133, 146
Interactions · 68, 86, 87

K
Kalibrierung (calibration) · 151

L
Limit values · 23, 71, 78, 94, 97, 108, 109, 112, 113, 123, 124, 125, 132, 134, 136, 137, 138, 140, 142, 144
Linear expansion · 90, 91, 92
Linearitätsabweichung (linearity deviation) · 151
Linearity · 80, 81, 85, 118
Linearity error · 62, 80, 81, 82
Linearity testing · 82
Literature · 15, 80, 138

M
Material measures · 61, 84, 85, 112
Maximum permissible measurement error · 23, 77, 98, 99
Measured part · 60, 64, 86, 91, 92, 93, 112
Measured values · 23, 28, 60, 81, 83, 87, 93, 120, 121, 122, 146, 150
Measurement error
  systematic · 20, 23, 52, 62, 63, 77, 79, 80, 94, 99
Measurement fault · 15, 63
Measurement method · 18, 64
Measurement procedure · 18, 64, 87, 98, 112
Measurement process · 14, 28, 43, 50, 51, 65, 73, 86, 94, 106, 108, 114, 147
Measurement process capability · 13, 23, 24, 32, 43, 45, 51, 53, 54, 67, 71, 72, 73, 74, 93, 94, 102, 108, 150
Measurement process models · 94
Measurement repeatability · 20, 62, 99
Measurement result · 17, 18, 19, 20, 21, 23, 52, 60, 64, 65, 67, 69, 86, 87, 92, 93, 94
Measurement software · 96, 120
Measurement stability · 23, 24, 88, 113, 145, 147
Measurement system capability · 23, 32, 45, 53, 67
Measurement systems · 23, 32, 53, 61, 62, 63, 65, 69, 73, 78, 87, 101, 111, 118
Measurement uncertainty · 13, 15, 17, 18, 19, 20, 21, 22, 23, 28, 29, 32, 35, 37, 38, 39, 45, 49, 50, 51, 52, 53, 55, 56, 60, 61, 67, 68, 70, 71, 72, 73, 74, 75, 77, 78, 80, 83, 85, 87, 92, 93, 98, 99, 100, 101, 105, 107, 109, 110, 112, 117, 121, 122, 123, 125, 126, 127, 128, 129, 130, 132, 133, 136, 137, 138, 150
Measuring equipment · 22, 33, 36, 62, 94, 98, 99, 112, 145
Measuring instrument drift · 63
Measuring machine · 22, 23, 77, 83, 92, 94, 99, 112, 124, 145
Measuring points · 69, 87, 112
Messabweichung, systematische (systematic measurement error) · 151, 152
Messobjekt (measurement object) · 151
Messstellen (measuring points) · 151
Messunsicherheit (measurement uncertainty) · 152
Method 1 · 84
Method 2 · 69, 93
Method 3 · 93
Method A
  ANOVA · 51, 68, 75, 80, 81, 85, 88, 89, 93, 96, 98
Method B · 51, 68, 69, 75, 80, 88, 96, 101
minimal tolerance · 106
Mounting device · 65
MPE · 23, 77, 80, 99, 100
MSA · 13, 20

N
Normal (standard) · 151

O
Operator · 19, 60, 64, 69, 86, 93, 113
Operator influence · 93
Outliers · 63, 149

P
Person · 23, 64, 90
Points in time · 87, 88
Process potential · 106
Proof of capability · 23, 25, 37, 43, 47, 50, 52, 53, 55, 56, 57, 73, 77, 78, 98, 100, 110, 122, 124, 125, 126, 127, 145, 146, 149
Prüfobjekt (test object) · 151

R
Random errors · 60
Reference standard · 21, 61, 62
Reference values · 110, 135
Repeat measurements · 52, 53, 57, 68, 79, 84, 85, 86, 87, 88, 112, 128, 135
Repeatability · 20, 69, 79, 81, 82, 83, 86, 99, 127, 147
Reproducibility · 20, 57, 65, 69, 86, 87, 101, 140
Residues · 83
Resolution · 22, 62, 77, 78, 98, 105, 106, 110, 118, 149

S
Setting · 22, 150
Setting standard · 80
small tolerances · 109, 118
Special measurement processes · 117
Stability · 24, 37, 65, 87, 88, 111, 113, 126, 145, 146
Standard · 14, 21, 32, 48, 49, 61, 62, 63, 65, 77, 78, 79, 81, 82, 83, 84, 85, 99, 120, 146, 147
Standard distribution · 63, 71, 78, 103, 104, 121, 135
Standard measurement uncertainty · 18, 69, 79, 121, 150
Standards · 13, 17, 19, 65, 92, 97, 98
Surroundings · 64, 90, 113

T
Temperatur (temperature) · 152
Temperature · 52, 55, 64, 89, 90, 92, 113
Temperature influences · 64, 89, 91, 92
Test characteristic · 19, 77, 125
Test part · 14, 20, 25, 26, 48, 64, 69, 86, 88, 89, 90, 92, 99, 124, 125, 127, 128, 133, 134, 141, 143, 146, 149
Testing · 19, 23, 35, 97, 124, 125, 130
Thermal expansion coefficient · 90
Tolerance · 15, 24, 28, 29, 31, 32, 50, 60, 73, 74, 78, 81, 84, 93, 94, 98, 100, 102, 106, 108, 109, 110, 114, 117, 123, 124, 134, 145
Toleranz (tolerance) · 152
True value · 20

U
Uncertainty budget · 18, 51, 60, 68, 72, 79, 87, 91, 110, 146, 150
Uncertainty component · 18, 51, 60, 68, 75, 76, 77, 83, 85, 87, 88, 93, 94, 95, 99, 100, 108, 110, 133
Uncertainty range · 28, 130, 134, 135
Unsicherheitsbereich (uncertainty range) · 152
unsuitable · 53, 111, 112, 128

V
Validation · 24, 32, 45, 47, 48, 96, 120
Vergleichbarkeit (comparability) · 151, 152
Verification · 24, 36, 50
Verteilungsfaktor (distribution factor) · 152
Vibrations · 63, 64, 68, 113
VIM · 16, 63, 67, 78, 79, 86, 87

W
Wiederholbarkeit (repeatability) · 151
Wiederholpräzision (repeatability precision) · 151
Working standard · 21, 61

Z
Zeitpunkte (points in time) · 152