Conference Paper · January 2017


STATISTICAL QUALITY CONTROL IN INDUSTRY AND SERVICES

Authors: M. Ivette Gomes
– CEAUL and DEIO, FCUL, Universidade de Lisboa, Portugal
(ivette.gomes@fc.ul.pt)
Fernanda Figueiredo
– Faculdade de Economia, Universidade do Porto,
and CEAUL, Universidade de Lisboa, Portugal
(otilia@fep.up.pt)

The term quality is here identified with fitness for use, i.e. products should satisfy the requirements of their users. In quality control, we are always confronted with the quality of design versus the quality of conformance. It is with this second type of quality that we deal in the area of statistical quality control (SQC). The main objective of SQC is to guarantee quality in production and service organizations through the use of adequate statistical techniques.

Quality characteristics describe, either individually or jointly, the fitness for use of a certain product or service. They can be physical (weight, voltage), sensorial (flavor, appearance, color) or time-oriented (reliability function). The standard procedure in SQC is the following: (1) observe the relevant quality characteristics; (2) compare those observations with pre-determined specifications, the so-called quality norms; (3) take appropriate actions whenever there exists a significant difference between the real and the expected performances.

Quality is often pointed out as the key factor for the success of an organization. For more details, see Montgomery (2009), Gomes et al. (2010), and Gomes (2011a,b), among others. Statistical production control (at low cost) implies a reduction in manufacturing costs and an increase in productivity.

The main objective of SQC is the systematic reduction of the variance of any relevant quality characteristic. The most usual statistical techniques in the field of quality are acceptance sampling, statistical process control (SPC), regression analysis, analysis of variance, time-series analysis, reliability, and experimental design, including Taguchi's designs. The typical evolution in the use of statistical techniques in SQC can be summarized in the following way: (1) at the lowest level of maturity, there are only modest applications of acceptance sampling; (2) as maturity increases, sampling inspection is intensified; (3) next comes SPC, with the systematic use of control charts already developed by others; (4) when the process attains stability and maturity increases further, it is then usual to develop different experimental designs and to assess the reliability of the final product. This enables the improvement and optimization of production.

We begin with a brief historical introduction. Immediately after the Industrial Revolution, only inspection was usual, to identify defective products and prevent their sale to consumers. Next, the use of SPC was intensified. In 1924, at the Bell Laboratories, Shewhart developed the concept of the control chart and, more generally, SPC, with the introduction of the first statistical control chart, the so-called Shewhart chart, shifting the attention from the product to the production process (Shewhart, 1931). Dodge and Romig (1959), also at the Bell Laboratories, introduced and developed sampling inspection as an alternative to 100% inspection. Other pioneers were W.E. Deming, J.M. Juran, P.B. Crosby and K. Ishikawa. But it was during the Second World War that there was a generalized use and acceptance of SQC, mainly in the manufacturing industries. SQC was then largely used in the USA and considered instrumental in the defeat of Japan. In 1946, the American Society for Quality Control was founded, and this gave a huge push to the generalization and improvement of SQC methods. After the Second World War, Japan was confronted with shortages of food and housing, and its factories were in ruins. The Japanese evaluated and corrected the causes of such a defeat. The quality of products was an area where the USA had definitely surpassed Japan, and this was one of the items they tried to correct, rapidly becoming masters in inspection sampling and SQC, and leaders of quality
around 1970. More recently, the efforts in quality development, which were initially centered on goods (manufactured or consumed products), were expanded to include services, i.e. work developed for the benefit of others. Quality developments have also been devoted to the motivation of workers, a key element in the expansion of the Japanese industry and economy. Quality is more and more the prime decision factor in consumer preferences, and it is often pointed out as the key factor for the success of organizations. The implementation of production quality control clearly leads to a reduction in manufacturing costs, and the money spent on control is almost irrelevant. At the moment, quality improvement in all areas of an organization, a philosophy known as Total Quality Management (TQM), is considered crucial (see Vardeman and Jobe, 1999). The challenges are obviously difficult, but modern SQC methods surely provide a basis for a positive answer to them. SQC is at this moment much more than a set of statistical instruments. It is a global way of thinking of the workers in an organization, with the objective of making things right the first time. This is mainly achieved through the systematic reduction of the variance of relevant quality characteristics.

SQC is nowadays routinely used in big organizations which, apart from their specific quality departments, have contracts with researchers who facilitate the development of simple yet sophisticated procedures to assure a better quality of their products.

Among the large variety of simple instruments that promote quality, we first mention the cause-and-effect or fishbone diagrams: if we want to achieve a certain objective, or if we have in mind a certain quality problem, all possible factors that lead to such an objective or such a problem need to be organized in a scheme of the type:

[Fishbone scheme: branches labelled Working method, Workers, Equipment and Measurements, all pointing to the Quality characteristic.]

We further mention the flowchart or organigram, an instrument for the clear and fast identification of a process or for the detection of problems in a process, showing the chronological steps of a certain operation. Pareto diagrams are also quite relevant methods in SPC. Assume that our interest lies in the classification of the causes underlying failures in a certain production process. The classes or categories are, for instance: lack of experience in monitoring the production line; lack of experience in management; non-weighted experiences; incompetence; other reasons (such as negligence, fraud, ...); unknown reasons. We are then facing a qualitative variable, and given that we are interested in the cause underlying each failure, we can easily build a frequency table, where the ordering of the categories is arbitrary. In applications regarding quality, it is sensible to order those categories decreasingly according to their frequency of occurrence, beginning with the most common category and ending with the least common. The frequency table is in this particular case given by:

Underlying cause             Freq.   Rel. freq.   Cum. prop.
Incompetence                   698     .477         .477
Non-weighted experiences       314     .215         .692
Management inexperience        236     .161         .853
In-line lack of experience     111     .076         .929
Unknown reason                  83     .057         .986
Other causes                    21     .014        1.000
Total                        1,463   1.000

The Pareto diagram is a bar graph, usually with vertical bars, placed from left to right by decreasing height. Pareto graphs are very popular in SQC. The heights of the bars often represent the frequency of the problems in the production process (number of defectives, accidents, failures, ...). Since the bars are placed in decreasing order of height, it is easier to identify the areas with the most severe problems. In Figure 1, and for the data in the previous table, a Pareto diagram is presented. Beyond the bars, decreasingly ordered, we can see there the cumulative proportion of failures, the so-called cum-line.

The main objective of a Pareto diagram is to establish priorities among the dif-
ferent possible causes of problems in the production process.
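As an illustration (this sketch is ours, not part of the original text), the decreasing ordering and the cumulative proportions of the frequency table above can be reproduced in a few lines of Python:

```python
# Sketch: building the Pareto ordering and cum-line for the
# failure-cause data of the frequency table above.
counts = {
    "Incompetence": 698,
    "Non-weighted experiences": 314,
    "Management inexperience": 236,
    "In-line lack of experience": 111,
    "Unknown reason": 83,
    "Other causes": 21,
}

total = sum(counts.values())  # 1,463 failures in all
ordered = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

cum = 0.0
rows = []
for cause, freq in ordered:
    cum += freq / total
    rows.append((cause, freq, round(freq / total, 3), round(cum, 3)))

for cause, freq, rel, cum_prop in rows:
    print(f"{cause:28s} {freq:4d} {rel:.3f} {cum_prop:.3f}")
```

Plotting the ordered bars together with the cumulative proportions then yields the Pareto diagram with its cum-line.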

Figure 1: Pareto diagram associated with the process of failures (bars of heights 698, 314, 236, 111, 83 and 21, in decreasing order, together with the cum-line of cumulative percentages).

Regarding additional statistical techniques suitable for the achievement of quality, we first mention the histogram, a graph with vertical bars that is the statistical image of the probability density function of a quantitative variable. The main objective of the histogram is to show the shape of the distribution underlying a sample of quantitative data. The first step in the construction of a histogram is the definition of the class intervals (categories) to which the data belong, given an adequate class interval range, based on the available sample.
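This first step can be sketched as follows, assuming Sturges' rule for the number of class intervals and a simulated sample (both are our choices for illustration; the text does not prescribe them):

```python
import math
import random

# Hypothetical sample of a quantitative characteristic (simulated here).
random.seed(1)
sample = [random.gauss(74.0, 0.01) for _ in range(200)]

# Sturges' rule: k = ceil(1 + log2 n) class intervals (an assumption,
# not a choice made in the text).
k = math.ceil(1 + math.log2(len(sample)))
lo, hi = min(sample), max(sample)
width = (hi - lo) / k                       # common class-interval range

counts = [0] * k
for x in sample:
    j = min(int((x - lo) / width), k - 1)   # close the last class at hi
    counts[j] += 1
```

The `counts` vector gives the heights of the vertical bars of the histogram.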
We further mention the probability paper graph (also called a Q-Q plot), a graphical technique for model selection. With adequate modifications, this method can be used for complete and censored data, either continuous or discrete. It is merely a method of linearization of the cumulative distribution function (CDF) underlying the available sample. On the basis of the ascendingly ordered sample, (x_{1:n}, x_{2:n}, ..., x_{n:n}), and for a location-scale model with CDF F((x − λ)/δ), the cloud of points

(x_{i:n}, y_i = F^{-1}(i/(n + 1)) =: F^{-1}(p_i)), 1 ≤ i ≤ n,

is plotted. If the graph shows a linear relationship between x_{i:n} and y_i, 1 ≤ i ≤ n, we have an informal validation of the postulated CDF F(·). The intersection with one of the axes and the slope of the fitted line provide initial estimates of λ and δ. More generally, the linear relationship can appear between a known function of x_{i:n} and another known function of p_i. When the graph on probability paper is clearly non-linear, something that leads to the rejection of the postulated model F(·), we can still get additional information from the graph. For details, see,
for instance, Bury (1975). Further messages provided by a QQ-plot are: (1) if two lines with different slopes appear, the underlying model may be a mixture of two populations from F but with different scales and possibly different locations; (2) a linear relationship in the central part of the graph, with slight deviations from the line at one (or both) extremes, means censoring on one (or both) sides without our knowledge; it can also suggest an underlying truncated model F; (3) a convex graph can mean that the underlying CDF is more asymmetric to the right than F, or that an upper location parameter should have been considered (then the replacement of x_{i:n} by x_{n:n} − x_{i:n} in the QQ-plot will lead to a straight line in the graph); (4) similarly, a concave graph can mean that the underlying CDF is more asymmetric to the left than F, or that a lower location parameter should have been considered (then the replacement of x_{i:n} by x_{i:n} − x_{1:n} in the QQ-plot will lead to a straight line in the graph).
We finally mention the control charts, which are bi-dimensional plots, with time plotted on the horizontal axis versus a relevant numerical quantity (like any quality measure associated with the process under study) plotted on the vertical axis. The points are usually connected by a line, leading to graphs of the type:

[Chart sketch: the sample quality characteristic is plotted against the sample number or time, between three horizontal lines: UCL = Upper Control Limit, CL = Central Line, LCL = Lower Control Limit.]

We next go on with a brief introduction to control charts. In all production processes, no matter how well designed and maintained, there always exists an inherent variability. To differentiate between the inevitable random causes and the deterministic causes in a production process, Shewhart designed, in 1924, the first control chart. It is a simple graphical method which enables the operator to detect the existence of deterministic causes. If there is only the intervention of random causes, the process is said to be in control (IN-state), and the production process continues. If deterministic causes are detected, the process is said to be out of statistical control (OUT-state). It is then necessary to detect and eliminate those causes, preferably without interrupting the production process. A control chart is a statistical test, performed along time, of the hypothesis H0: IN-state versus H1: OUT-state, on the basis of an adequate and simple statistic, W.
Generally speaking, the basic methodology associated with a control chart is the following: (1) sample the process along time, and represent graphically a measure W associated with the process (an average, a percentage, a maximum); (2) given the chosen statistic, determine, on the basis of adequate statistical methodology, and possibly taking into account historical data, a central line (CL) and two other lines, the so-called lower control limit (LCL) and upper control limit (UCL); (3) analyze the data statistically; note that any preliminary data analysis in SQC is essentially graphical, through histograms and QQ-plots.

If there are measurements outside the control limits, it is necessary to detect the deterministic cause that provoked such a problem and find the best way of removing it. If measurements are inside the control limits, in principle it is not necessary to take any action, and production can continue regularly. However, even when all points are inside the control limits, if their behavior is non-random there is an indication that the process is out of control, i.e. in an OUT-state. If the reason for such a problem is found and eliminated, the production process will surely have a better performance. Randomness tests applied to the control chart points are thus crucial in SQC. As mentioned before, there is an intimate relationship between control charts and hypothesis testing. Essentially, a control chart is a test of the hypothesis H0: the process is IN-control vs H1: the process is OUT-of-control, performed along time. We again have at play the α-risk associated with the type I error, i.e. the probability of deciding for the OUT-state when the state is IN, and the β-risk associated with the type II error, i.e. the probability of deciding for the IN-state when the process is OUT-of-control.
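The monitoring logic just described can be sketched as follows; the rule of nine consecutive points on the same side of the central line is one classical randomness rule, chosen here only for illustration (the text does not fix any particular rule):

```python
# Sketch: signal an OUT-state when a point falls beyond the control
# limits, or when the points behave non-randomly; here the run rule
# "nine consecutive points on the same side of the CL" illustrates the
# randomness check.
def out_of_control(points, lcl, cl, ucl, run=9):
    if any(w < lcl or w > ucl for w in points):
        return True                      # measurement outside the limits
    streak, prev = 0, 0
    for w in points:
        if w == cl:
            streak, prev = 0, 0          # a point on the CL breaks the run
            continue
        side = 1 if w > cl else -1
        streak = streak + 1 if side == prev else 1
        prev = side
        if streak >= run:
            return True                  # non-random behaviour detected
    return False
```

For example, ten points all slightly above the central line would signal an OUT-state even though every point lies inside the control limits.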
We next mention the simplest example of a control chart, associated with a manufacturing process of motor segments. The relevant quality characteristic is the external diameter of those segments. The process needs to be controlled with an external diameter of 74 mm and an associated standard deviation equal to 0.01 mm (on the basis of a prior data analysis or due to management norms). We can then use the Shewhart chart: every half-hour, a sample of five segments is collected, and the average (x̄5) of their external diameters is computed, these averages being plotted in a control chart (the X̄-chart). These averages are estimates of the process mean value µ, the parameter to be monitored. The center line is CL = 74 mm (the target). If we assume that the data are normally distributed, i.e. X ∼ N(µ = 74, σ = 0.01), the control limits can be determined on the basis that, under H0, X̄ is normal with µ_x̄ = 74 mm and σ_x̄ = 0.01/√5 ≈ 0.0045. Under an IN-state, 100(1 − α)% of the sample mean diameters are expected to lie in the interval between 74 + 0.0045 ξ_{α/2} and 74 − 0.0045 ξ_{α/2}, with ξ_{α/2} the (α/2)-quantile of a N(0, 1). For α = 0.002 (the most common value in the British literature), ξ_{α/2} = −3.09, and consequently the control limits are UCL_{0.002} = 74.0139 and LCL_{0.002} = 73.9861. In the American literature, and under the normality condition, the value 3.09 is replaced by 3, the so-called 3-sigma control limits being used: UCL = 74.0135, LCL = 73.9865.
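These numerical values can be checked directly; as in the text, σ_x̄ is rounded to 0.0045 before the limits are computed:

```python
from math import sqrt

# X-bar chart limits from the text: target mu0 = 74 mm, sigma = 0.01 mm,
# samples of n = 5 diameters, so sigma_xbar = 0.01/sqrt(5), rounded to
# 0.0045 as in the text.
mu0, sigma, n = 74.0, 0.01, 5
sigma_xbar = round(sigma / sqrt(n), 4)        # 0.0045

# alpha = 0.002 limits (quantile 3.09) and the American 3-sigma limits.
ucl_002, lcl_002 = mu0 + 3.09 * sigma_xbar, mu0 - 3.09 * sigma_xbar
ucl_3s, lcl_3s = mu0 + 3 * sigma_xbar, mu0 - 3 * sigma_xbar

print(round(ucl_002, 4), round(lcl_002, 4))   # 74.0139 73.9861
print(round(ucl_3s, 4), round(lcl_3s, 4))     # 74.0135 73.9865
```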
The main problem when devising a control chart is thus to find adequate statistics, most of the time associated with computational simplicity, with a known sampling distribution under the validity of the IN-control state, so that we can easily find the control limits, which obviously provide a confidence interval for a measure of location or scale of such a statistic. The most often used statistics are, for qualitative characteristics, percentages and totals, and, for quantitative characteristics, the average, the empirical standard deviation, the range, and upper and lower order statistics, obviously including the maximum and the minimum.
The choice of the parameters involved in the control charts, namely the size n of the samples to collect (the so-called rational subgroups), the sampling interval, h, and the control limits, L, are thus the crucial points to deal with in SPC. The type of sampling is also very important. Just as a simple example, let us consider that we are dealing with a production process where the time to produce a single piece is too long. Then moving averages and moving ranges provide simple and adequate measures of location and scale. Similarly, we can speak of moving minima and moving maxima control charts. We are then inducing dependence among the summary statistics considered along the inspection process.
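Such moving statistics can be sketched as follows (the span w = 3 and the diameters are our illustrative choices):

```python
# Moving averages (location) and moving ranges (scale) of span w,
# computed over overlapping windows of consecutive observations.
def moving_stats(xs, w=3):
    ma = [sum(xs[i:i + w]) / w for i in range(len(xs) - w + 1)]
    mr = [max(xs[i:i + w]) - min(xs[i:i + w])
          for i in range(len(xs) - w + 1)]
    return ma, mr

# Hypothetical diameters; successive windows overlap, which induces the
# dependence among the plotted statistics mentioned in the text.
ma, mr = moving_stats([74.01, 73.99, 74.00, 74.02, 73.98], w=3)
```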
Main primary characteristics. Given a control statistic, W, to which is associated a continuation region C := (LCL, UCL), with θ = θ0 denoting the IN-state, we have: (1) α-risk = P(W ∉ C | θ = θ0); (2) characteristic curve β(θ) = P(W ∈ C | θ), θ ∈ Θ, with β(θ0) = 1 − α; (3) NSS or RL (number of samples to signal, or run length): the number of samples collected from the instant of (re)beginning of the process (instant 0) up to and including the sample responsible for the emission of the out-of-control signal; (4) ANSS or ARL = E[NSS]; (5) TS (time to signal): the time elapsing between the (re)beginning of the process and the instant at which the sample responsible for the emission of the out-of-control signal is collected; (6) ATS (average time to signal) = E[TS].
Since NSS = NSS(θ) is a geometric random variable, with support {1, 2, ...} and parameter 1 − β(θ), we get ANSS(θ) = 1/(1 − β(θ)), for any sampling policy. The same does not happen with the TS and the ATS.
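As an illustration (ours, not a computation from the text), for an X̄-chart with 3-sigma limits under normality we have 1 − β(θ0) = α ≈ 0.0027, so the in-control ARL is about 370; for a mean shift of one standard error of the mean, β = Φ(3 − 1) − Φ(−3 − 1), and the ARL drops to about 44:

```python
from statistics import NormalDist

# ANSS(theta) = 1/(1 - beta(theta)); at theta = theta0, 1 - beta = alpha.
def anss(beta):
    return 1.0 / (1.0 - beta)

nd = NormalDist()
alpha = 2 * nd.cdf(-3)             # P(|Z| > 3) ~ 0.0027 for 3-sigma limits
arl_in = anss(1 - alpha)           # in-control ARL ~ 370.4 samples

# beta(d) for a mean shift of d standard errors, 3-sigma limits.
def beta_shift(d, L=3):
    return nd.cdf(L - d) - nd.cdf(-L - d)

arl_shift = anss(beta_shift(1.0))  # ARL ~ 43.9 for a one-standard-error shift
```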
The main sampling policies are the FSI policy, in which the sampling intervals (intervals between any pair of consecutive observations of the control statistic) are fixed and equal to some d > 0, and the VSI policy, in which the sampling intervals are variable and depend upon the location of the collected observations. If the value of the control statistic is far from the central line, but not far enough to provoke the emission of an out-of-control signal, the collection of a new sample is anticipated. If the value of the summary statistic is close to the central line, the collection of a new sample is delayed. Moreover, the process itself can be correlated.
Much more could be said beyond the FSI and VSI policies, including topics like the consideration of robust methods in SPC (Figueiredo and Gomes, 2004a,b, 2006, 2009, 2016) and the use of the bootstrap methodology (Efron, 1979) in the design of reliable control charts (Figueiredo and Gomes, 2015).

Acknowledgements. Research partially supported by National Funds through FCT, Fundação para a Ciência e a Tecnologia, project UID/MAT/00006/2013 (CEA/UL).

REFERENCES

[1] Bury, K.V. (1975). Statistical Models in Applied Science. John Wiley and Sons.
[2] Dodge, H.F. & Romig, H.G. (1959). Sampling Inspection Tables, Single and Dou-
ble Sampling, 2nd edition. John Wiley & Sons.
[3] Efron, B. (1979). Bootstrap methods: another look at the jackknife. Ann. Statist.
7, 1–26.

[4] Figueiredo, F. & Gomes, M.I. (2004a). Estimação robusta dos limites de uma carta de controlo. In Rodrigues, P. et al. (eds.), Estatística com Acaso e Necessidade, Edições S.P.E., 249–257.
[5] Figueiredo, F. & Gomes, M.I. (2004b). The total median in Statistical Quality Control. Applied Stochastic Models in Business and Industry 20:4, 339–353.
[6] Figueiredo, F. & Gomes, M.I. (2006). Box-Cox transformations and robust control charts in SPC. In Pavese et al. (eds.), Advanced Mathematical and Computational Tools in Metrology VII, pp. 35–46, World Scientific, New Jersey.
[7] Figueiredo, F. & Gomes, M.I. (2009). Monitoring industrial processes with robust
control charts. Revstat 7: 2, 151–170.
[8] Figueiredo, F. & Gomes, M.I. (2015). Control charts implemented on the basis of
a bootstrap reference sample. In Filius, L., Oliveira, T. & Skiadas, C.H. (Eds.),
Stochastic Modeling, Data Analysis and Statistical Applications, ISAST editions,
505–514.
[9] Figueiredo, F. & Gomes, M.I. (2016). The total median statistic to monitor contaminated normal data. Quality Technology and Quantitative Management 13:1, 78–87.
[10] Gomes, M.I. (2011a). Acceptance Sampling. In Lovric, M. (ed.), International
Encyclopedia of Statistical Science, Part 1, pp. 5–7, Springer.
[11] Gomes, M.I. (2011b). Statistical Quality Control. In Lovric, M. (ed.), Interna-
tional Encyclopedia of Statistical Science. Part 19, pp. 1459–1463, Springer.
[12] Gomes, M.I., Figueiredo, F. & Barão, M.I. (2010). Controlo Estatístico da Qualidade, 2nd edition. Edições S.P.E.
[13] Ishikawa, K. (1985). What is Total Quality Control? The Japanese Way. Prentice-Hall.
[14] Montgomery, D.C. (2009). Statistical Quality Control: a Modern Introduction.
Wiley.
[15] Shewhart, W.A. (1931). Economic Control of Quality of Manufactured Product. Van Nostrand, New York.
[16] Swift, J.A. (1995). Introduction to Modern Statistical Quality Control and Man-
agement. St. Lucie Press.
[17] Vardeman, S.B. & Jobe, J.M. (1999). Statistical Quality Assurance Methods for Engineers. John Wiley & Sons.
