Industrial Statistics: A Computer-Based Approach with Python (Statistics for Industry, Technology, and Engineering), Ron S. Kenett
Statistics for Industry, Technology, and
Engineering
Series Editor
David Steinberg
Tel Aviv University, Tel Aviv, Israel
Editorial Board
V. Roshan Joseph
Georgia Institute of Technology, Atlanta, GA, USA
Ron S. Kenett
Neaman Institute, Haifa, Israel
Christine M. Anderson-Cook
Los Alamos National Laboratory, Los Alamos, USA
Bradley Jones
SAS Institute, JMP Division, Cary, USA
Fugee Tsung
Hong Kong University of Science and Technology, Hong Kong, Hong Kong
Industrial Statistics
A Computer-Based Approach with Python
Ron S. Kenett
KPA Ltd., Ra’anana, Israel
Shelemyahu Zacks
Binghamton University, McLean, VA, USA
Peter Gedeck
University of Virginia, Falls Church, VA, USA
This work is subject to copyright. All rights are solely and exclusively
licensed by the Publisher, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in
any other physical way, and transmission or information storage and
retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed.
The publisher, the authors, and the editors are safe to assume that the
advice and information in this book are believed to be true and accurate
at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, expressed or implied, with respect to the
material contained herein or for any errors or omissions that may have
been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
A Introduction to Python
B List of Python Packages
C Code Repository and Solution Manual
D Bibliography
Index
List of Abbreviations
AIC Akaike Information Criterion
ANOVA Analysis of Variance
ANSI American National Standards Institute
AOQ Average Outgoing Quality
AOQL Average Outgoing Quality Limit
AQL Acceptable Quality Level
ARIMA Autoregressive Integrated Moving Average
ARL Average Run Length
ASN Average Sample Number
ASQ American Society for Quality
ATE Average Treatment Effect
ATI Average Total Inspection
BECM Bayes Estimation of the Current Mean
BI Business Intelligence
BIBD Balanced Incomplete Block Design
BIC Bayesian Information Criterion
BLUP Best Linear Unbiased Prediction
BN Bayesian Network
BP Bootstrap Population
CAD Computer-Aided Design
CADD Computer-Aided Drawing and Drafting
CAM Computer-Aided Manufacturing
CART Classification And Regression Trees
CBD Complete Block Design
c.d.f. cumulative distribution function
CED Conditional Expected Delay
cGMP Current Good Manufacturing Practices
CHAID Chi-square Automatic Interaction Detector
CI Condition Indicator
CIM Computer-Integrated Manufacturing
CLT Central Limit Theorem
CMM Coordinate Measurement Machines
CMMI Capability Maturity Model Integrated
CNC Computerized Numerically Controlled
CPA Circuit Pack Assemblies
CQA Critical Quality Attribute
CUSUM Cumulative Sum
DACE Design and Analysis of Computer Experiments
DAG Directed Acyclic Graph
DFIT Difference in Fits distance
DLM Dynamic Linear Model
DoE Design of Experiments
DTM Document Term Matrix
EBD Empirical Bootstrap Distribution
ETL Extract-Transform-Load
EWMA Exponentially Weighted Moving Average
FDA Food and Drug Administration
FDA Functional Data Analysis
FPCA Functional Principal Component Analysis
FPM Failures Per Million
GFS Google File System
GRR Gage Repeatability and Reproducibility
HPD Highest Posterior Density
HPLC High-Performance Liquid Chromatography
IDF Inverse Document Frequency
i.i.d. independent and identically distributed
InfoQ Information Quality
IPO Initial Public Offering
IPS Inline Process Control
IQR InterQuartile Range
ISC Short-Circuit Current of Solar Cells (in Ampere)
KS Kolmogorov–Smirnov Test
LCL Lower Control Limit
LLN Law of Large Numbers
LQL Limiting Quality Level
LSA Latent Semantic Analysis
LSL Lower Specification Limit
LTPD Lot Tolerance Percent Defective
LWL Lower Warning Limit
MAE Mean Absolute Error
m.g.f. moment generating function
MLE Maximum Likelihood Estimator
MSD Mean Squared Deviation
MSE Mean Squared Error
MTBF Mean Time Between Failures
MTTF Mean Time To Failure
NID Normal Independently Distributed
OAB One-Armed Bandit
OC Operating Characteristic
PCA Principal Component Analysis
p.d.f. probability density function
PERT Project Evaluation and Review Technique
PFA Probability of False Alarm
PL Product Limit Estimator
PPM Defects in Parts Per Million
PSE Practical Statistical Efficiency
QbD Quality by Design
QMP Quality Measurement Plan
QQ-Plot Quantile vs. Quantile Plot
RCBD Randomized Complete Block Design
Regex Regular Expression
RMSE Root Mean Squared Error
RSWOR Random Sample Without Replacement
RSWR Random Sample With Replacement
SE Standard Error
SL Skip Lot
SLOC Source Lines of Code
SLSP Skip Lot Sampling Plans
SPC Statistical Process Control
SPRT Sequential Probability Ratio Test
SR Shiryaev–Roberts
SSE Sum of Squares of Errors
SSR Sum of Squares around the Regression Model
SST Total Sum of Squares
STD Standard Deviation
SVD Singular Value Decomposition
TAB Two-Armed Bandit
TF Term Frequency
TTC Time Till Censoring
TTF Time Till Failure
TTR Time Till Repair
TTT Total Time on Test
UCL Upper Control Limit
USL Upper Specification Limit
UWL Upper Warning Limit
WSP Wave Soldering Process
Contents
1 The Role of Statistical Methods in Modern Industry
1.1 Evolution of Industry
1.2 Evolution of Quality
1.3 Industry 4.0 Characteristics
1.4 Digital Twin
1.5 Chapter Highlights
1.6 Exercises
2 Basic Tools and Principles of Process Control
2.1 Basic Concepts of Statistical Process Control
2.2 Driving a Process with Control Charts
2.3 Setting Up a Control Chart: Process Capability Studies
2.4 Process Capability Indices
2.5 Seven Tools for Process Control and Process Improvement
2.6 Statistical Analysis of Pareto Charts
2.7 The Shewhart Control Charts
2.7.1 Control Charts for Attributes
2.7.2 Control Charts for Variables
2.8 Process Analysis with Data Segments
2.8.1 Data Segments Based on Decision Trees
2.8.2 Data Segments Based on Functional Data Analysis
2.9 Chapter Highlights
2.10 Exercises
3 Advanced Methods of Statistical Process Control
3.1 Tests of Randomness
3.1.1 Testing the Number of Runs
3.1.2 Runs Above and Below a Specified Level
3.1.3 Runs Up and Down
3.1.4 Testing the Length of Runs Up and Down
3.2 Modified Shewhart Control Charts for X̄
3.3 The Size and Frequency of Sampling for Shewhart Control
Charts
3.3.1 The Economic Design for X̄-charts
3.3.2 Increasing the Sensitivity of p-charts
3.4 Cumulative Sum Control Charts
3.4.1 Upper Page’s Scheme
3.4.2 Some Theoretical Background
3.4.3 Lower and Two-Sided Page’s Scheme
3.4.4 Average Run Length, Probability of False Alarm, and
Conditional Expected Delay
3.5 Bayesian Detection
3.6 Process Tracking
3.6.1 The EWMA Procedure
3.6.2 The BECM Procedure
3.6.3 The Kalman Filter
3.6.4 The QMP Tracking Method
3.7 Automatic Process Control
3.8 Chapter Highlights
3.9 Exercises
4 Multivariate Statistical Process Control
4.1 Introduction
4.2 A Review of Multivariate Data Analysis
4.3 Multivariate Process Capability Indices
4.4 Advanced Applications of Multivariate Control Charts
4.4.1 Multivariate Control Charts Scenarios
4.4.2 Internally Derived Target
4.4.3 External Reference Sample
4.4.4 Externally Assigned Target
4.4.5 Measurement Units Considered as Batches
4.4.6 Variable Decomposition and Monitoring Indices
4.5 Multivariate Tolerance Specifications
4.6 Tracking Structural Changes
4.6.1 The Synthetic Control Method
4.7 Chapter Highlights
4.8 Exercises
5 Classical Design and Analysis of Experiments
5.1 Basic Steps and Guiding Principles
5.2 Blocking and Randomization
5.3 Additive and Non-additive Linear Models
5.4 The Analysis of Randomized Complete Block Designs
5.4.1 Several Blocks, Two Treatments per Block: Paired Comparison
5.4.2 Several Blocks, t Treatments per Block
5.5 Balanced Incomplete Block Designs
5.6 Latin Square Design
5.7 Full Factorial Experiments
5.7.1 The Structure of Factorial Experiments
5.7.2 The ANOVA for Full Factorial Designs
5.7.3 Estimating Main Effects and Interactions
5.7.4 2^m Factorial Designs
5.7.5 3^m Factorial Designs
5.8 Blocking and Fractional Replications of 2^m Factorial Designs
5.9 Exploration of Response Surfaces
5.9.1 Second Order Designs
5.9.2 Some Specific Second Order Designs
5.9.3 Approaching the Region of the Optimal Yield
5.9.4 Canonical Representation
5.10 Evaluating Designed Experiments
5.11 Chapter Highlights
5.12 Exercises
6 Quality by Design
6.1 Off-Line Quality Control, Parameter Design, and the Taguchi
Method
6.1.1 Product and Process Optimization Using Loss
Functions
6.1.2 Major Stages in Product and Process Design
6.1.3 Design Parameters and Noise Factors
6.1.4 Parameter Design Experiments
6.1.5 Performance Statistics
6.2 The Effects of Non-linearity
6.3 Taguchi’s Designs
6.4 Quality by Design in the Pharmaceutical Industry
6.4.1 Introduction to Quality by Design
6.4.2 A Quality by Design Case Study: The Full Factorial Design
6.4.3 A Quality by Design Case Study: The Desirability Function
6.4.4 A Quality by Design Case Study: The Design Space
6.5 Tolerance Designs
6.6 Case Studies
6.6.1 The Quinlan Experiment
6.6.2 Computer Response Time Optimization
6.7 Chapter Highlights
6.8 Exercises
7 Computer Experiments
7.1 Introduction to Computer Experiments
7.2 Designing Computer Experiments
7.3 Analyzing Computer Experiments
7.4 Stochastic Emulators
7.5 Integrating Physical and Computer Experiments
7.6 Simulation of Random Variables
7.6.1 Basic Procedures
7.6.2 Generating Random Vectors
7.6.3 Approximating Integrals
7.7 Chapter Highlights
7.8 Exercises
8 Cybermanufacturing and Digital Twins
8.1 Introduction to Cybermanufacturing
8.2 Cybermanufacturing Analytics
8.3 Information Quality in Cybermanufacturing
8.4 Modeling in Cybermanufacturing
8.5 Computational Pipelines
8.6 Digital Twins
8.7 Chapter Highlights
8.8 Exercises
9 Reliability Analysis
9.1 Basic Notions
9.1.1 Time Categories
9.1.2 Reliability and Related Functions
9.2 System Reliability
9.3 Availability of Repairable Systems
9.4 Types of Observations on TTF
9.5 Graphical Analysis of Life Data
9.6 Nonparametric Estimation of Reliability
9.7 Estimation of Life Characteristics
9.7.1 Maximum Likelihood Estimators for Exponential TTF
Distribution
9.7.2 Maximum Likelihood Estimation of the Weibull
Parameters
9.8 Reliability Demonstration
9.8.1 Binomial Testing
9.8.2 Exponential Distributions
9.9 Accelerated Life Testing
9.9.1 The Arrhenius Temperature Model
9.9.2 Other Models
9.10 Burn-In Procedures
9.11 Chapter Highlights
9.12 Exercises
10 Bayesian Reliability Estimation and Prediction
10.1 Prior and Posterior Distributions
10.2 Loss Functions and Bayes Estimators
10.2.1 Distribution-Free Bayes Estimator of Reliability
10.2.2 Bayes Estimator of Reliability for Exponential Life
Distributions
10.3 Bayesian Credibility and Prediction Intervals
10.3.1 Distribution-Free Reliability Estimation
10.3.2 Exponential Reliability Estimation
10.3.3 Prediction Intervals
10.3.4 Applications with Python: Lifelines and pymc
10.4 Credibility Intervals for the Asymptotic Availability of Repairable Systems: The Exponential Case
10.5 Empirical Bayes Method
10.6 Chapter Highlights
10.7 Exercises
11 Sampling Plans for Batch and Sequential Inspection
11.1 General Discussion
11.2 Single-Stage Sampling Plans for Attributes
11.3 Approximate Determination of the Sampling Plan
11.4 Double Sampling Plans for Attributes
11.5 Sequential Sampling and A/B Testing
11.5.1 The One-Armed Bernoulli Bandits
11.5.2 Two-Armed Bernoulli Bandits
11.6 Acceptance Sampling Plans for Variables
11.7 Rectifying Inspection of Lots
11.8 National and International Standards
11.9 Skip-Lot Sampling Plans for Attributes
11.9.1 The ISO 2859 Skip-Lot Sampling Procedures
11.10 The Deming Inspection Criterion
11.11 Published Tables for Acceptance Sampling
11.12 Sequential Reliability Testing
11.13 Chapter Highlights
11.14 Exercises
A Introduction to Python
A.1 List, Set, and Dictionary Comprehensions
A.2 Scientific Computing Using numpy and scipy
A.3 Pandas Data Frames
A.4 Data Visualization Using pandas and matplotlib
B List of Python Packages
C Code Repository and Solution Manual
Bibliography
Index
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
R. S. Kenett et al., Industrial Statistics, Statistics for Industry, Technology, and
Engineering
https://doi.org/10.1007/978-3-031-28482-3_1
Preview
Industrial statistics is a discipline that must adapt to, and provide enhanced capabilities for, modern industrial systems. This chapter
presents the evolution of industry and quality in the last 300 years. The
transition between the four industrial revolutions is reviewed as well as
the evolution of quality from product quality to process, service,
management, design, and information quality. To handle the new
opportunities and challenges of big data, a current perspective of
information quality is presented, including a comprehensive InfoQ
framework. The chapter concludes with a presentation of digital twins
which are used in industry as a platform for monitoring, diagnostic, prognostic, and prescriptive analytics. The Python code used in this and
the following chapters is available from https://gedeck.github.io/
mistat-code-solutions/IndustrialStatistics.
Supplementary Information
The online version contains supplementary material available at
(https://doi.org/10.1007/978-3-031-28482-3_1).
And God saw everything that he had made, and, behold, it was
very good (Genesis I, 31).
Inspection was indeed the leading quality model for many centuries. A
vivid picture of inspection in action is depicted in Syndics of the
Drapers’ Guild, a 1662 oil painting by Rembrandt that one can admire in the Rijksmuseum in Amsterdam. A second important milestone, in which specifications of parts were set before final assembly, is attributed to Eli Whitney (1765–1825), an American inventor, mechanical engineer, and manufacturer. Whitney is known in the USA as the inventor of the concept of mass production of interchangeable parts. In 1797, the US government, threatened by war with France, solicited 40,000 muskets from private contractors because the two national armories had produced only 1000 muskets in 3 years. Whitney offered to supply 10,000 muskets in 2 years. He designed machine tools enabling unskilled workmen to make parts that were checked against specifications. The integration of such parts made a musket. Any part would fit in any musket of similar design. The workforce was now split
into production and inspection teams. A third milestone, 120 years
later, was the introduction of statistical process control charts by
Shewhart (1926). Following this third milestone, attention shifted from
quality of product, and inspection, to process quality, and statistical
process control. Sixty years later, on the basis of experience gained at Western Electric, Joseph Juran formulated the Quality Trilogy as a universal approach for managing quality. This marked the start of quality management and was a precursor to total quality management and Six Sigma. A key contributor to this movement was W. Edwards Deming who, together with Juran, had huge success in implementing quality management principles in devastated post-World War II Japan (Deming 1982, 1991; Juran 1986, 1995). In a further development, in the 1960s, a Japanese engineer, Genichi Taguchi, introduced to industry methods for statistically designed experiments aimed at improving products and processes by achieving design-based robustness properties (Godfrey 1986; Taguchi 1987). These methods were originally suggested, in agriculture, by R. A. Fisher, the founder of modern statistics, and greatly developed in the chemical industry by his student and son-in-law, G. E. P. Box (Fisher 1935; Box et al. 2005).
In 1981 Taguchi came to Bell Laboratories, the research arm of Western
Electric, to share his experience in robust design methodologies. His
seminars at Holmdel, New Jersey, were attended by only a dozen
people. His English was poor and his ideas so new that it took time to
understand his methods. At that time, industry was mostly collecting
data on finished product quality with only some data on processes.
Thirty years later, industry started facing the big data phenomenon.
Sensors and modern data analysis systems offered new options for
process and product control. This led to considerations of integrated
models combining data from different sources (Godfrey and Kenett
2007). With data analytics and manufacturing execution systems
(MES), the business of quality started shifting to information quality
(Kenett 2008). To handle this, Kenett and Shmueli (2014) introduced a
framework labeled “InfoQ”. Technically, the definition of InfoQ is the
derived utility (U) from an application of a statistical or data analytic
model (f), to a dataset (X), given the research goal (g); InfoQ = U(f(X∣g)).
On this basis, data scientists can help organizations generate
information quality from their data lakes.
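The InfoQ equation can be made concrete with a toy Python sketch. Everything below (the demand data X, the naive mean-forecast model f, and the absolute-error utility U) is a hypothetical stand-in chosen for illustration, not part of the InfoQ framework itself:

```python
import statistics

# InfoQ = U(f(X|g)): utility U of applying model f to data X given goal g.
# The data, model, and utility here are hypothetical stand-ins.

X = [12, 15, 11, 14, 16, 13, 15, 14]   # X: weekly demand observations
g = "forecast next-week demand"         # g: the research goal

def f(X, g):
    """Model f applied to X given goal g: a naive mean forecast."""
    return statistics.mean(X)

def U(forecast, actual=14):
    """Utility U: negative absolute forecast error (higher is better)."""
    return -abs(forecast - actual)

info_q = U(f(X, g))
print(info_q)  # -0.25
```

The point of the sketch is that InfoQ is a property of the whole quadruple (g, X, f, U): changing the goal, the data, the model, or the utility changes the information quality, not merely the model's fit.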
To assess the level of InfoQ in a specific study, Kenett and Shmueli
(2014) proposed eight dimensions of InfoQ:
1. Data resolution: The measurement scale and level of aggregation of
the data relative to the task at hand must be adequate for the study.
For example, consider data on daily purchases of over-the-counter
medications at a large pharmacy. If the goal of the analysis is to
forecast future inventory levels of different medications when re-stocking is done on a weekly basis, then weekly aggregated data are preferred to daily aggregated data.
2. Data structure: The data can combine structured quantitative data with unstructured, semantic-based data. For example, in assessing the reputation of an organization, one might combine data derived from the stock exchange with data mined from text, such as newspaper archives or press reports. Doing so enhances information quality.
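The data-resolution point in the pharmacy example can be sketched with pandas; the daily purchase counts and dates below are invented for illustration:

```python
import pandas as pd

# Hypothetical daily purchase counts for one over-the-counter medication
days = pd.date_range("2023-01-02", periods=14, freq="D")  # two Mon-Sun weeks
daily = pd.Series([20, 18, 25, 22, 19, 30, 28,
                   21, 17, 24, 26, 23, 29, 27], index=days)

# Re-stocking is weekly, so aggregate to weekly totals before forecasting
weekly = daily.resample("W").sum()
print(weekly.tolist())  # [162, 167]
```

Forecasting from `weekly` matches the resolution of the re-stocking decision; the daily series would answer a different question than the one the study goal poses.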
For more on InfoQ see Kenett and Shmueli (2016). An application of the
information quality framework to chemical process engineering is
presented in Reis and Kenett (2018).
In the 2010s, organizations started hiring data scientists to leverage
the potential in their data and data scientists started getting involved in
organizational infrastructures and data quality (Kenett and Redman
2019). Systems Engineering in the Fourth Industrial Revolution: Big Data, Novel Technologies, and Modern Systems Engineering is discussed in Kenett et al. (2021a). In summary, quality models evolved
through the following milestones: (1) Product quality, (2) Process
quality, (3) Management quality, (4) Design quality, and (5) Information
quality. The chapters in this book cover the tools and methods of
industrial analytics supporting this quality evolution.
1.3 Industry 4.0 Characteristics
Industry 4.0, the so-called fourth industrial revolution, relies on three
basic elements:
Sensor technology that can extensively measure products and
processes online.
Flexible manufacturing capabilities—such as 3D printing—that can
efficiently produce batches of varying size.
Analytics that power the industrial engine with the capability to
monitor, diagnose, predict, and optimize decisions.
One significant analytic challenge is data integration. Sensors may
collect data with different time cycles. Dynamic time warping (DTW)
and Bayesian Networks (BN) can fuse the collected data into an
integrated picture (Kenett 2019). In analytic work done in industry, data is collected either actively or passively, and models are developed using empirical, first-principles, or hybrid approaches. The
industrial cycle provides opportunities to try out new products or new
process set-ups and, based on the results, determine follow-up actions.
It is, however, important to make sure that analytic work is reviewed
properly to avoid deriving misleading conclusions, which could be very
costly and/or time-consuming. For example, a lithium battery
manufacturer discovered it had uncalibrated test equipment evaluating
end-of-the-line products. The company was able to avoid a major recall
by using the plant’s control charts to precisely identify the problematic
batches. To avoid shipping immature products, or defective batches,
good diagnostic capabilities are vital for monitoring and identifying the
cause of any reported problems.
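Dynamic time warping can be illustrated with a minimal, self-contained sketch. This is the textbook dynamic-programming formulation, not a specific pipeline from Kenett (2019), and the two sensor streams are invented:

```python
# A minimal dynamic time warping (DTW) distance between two sensor
# streams; illustrative textbook formulation with invented signals.
def dtw_distance(a, b):
    """Cost of the best monotone alignment of sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of aligning the first i points of a with first j of b
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],       # stretch b
                                 D[i][j - 1],       # stretch a
                                 D[i - 1][j - 1])   # advance both
    return D[n][m]

# The same ramp signal observed at two different sampling rates
fast = [0, 1, 2, 3, 4]
slow = [0, 2, 4]
print(dtw_distance(fast, slow))  # 2.0
```

The distance stays small even though the series have different lengths, because DTW stretches the slower stream to align with the faster one; an aligned view of multi-rate sensor data can then feed a Bayesian network or other monitoring model.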
Analytic challenges in systems engineering and industrial
applications include (Kenett et al. 2021b):
Engineering design
Manufacturing systems
Decision-support systems
Shop-floor control and layout
Fault detection and quality improvement
Condition-based maintenance
Customer and supplier relationship management
Energy and infrastructure management
Cybersecurity and security
These challenges require monitoring products and processes; designing new products and processes; and improving products and processes.
The next section presents an approach designed to attain all these objectives: the digital twin, also known as a surrogate model.
1.6 Exercises
Exercise 1.1 Describe three work environments where quality is
assured by 100% inspection of outputs (as opposed to process control).
References
ANSI (2010) ANSI/ISA-95.00.01 enterprise-control system integration – part 1:
models and terminology. Technical report. American National Standards Institute,
Washington
Box GEP, Hunter JS, Hunter WG (2005) Statistics for experimenters: design,
innovation, and discovery, 2nd edn. Wiley, Hoboken
[zbMATH]
Davis S (1997) Future perfect: tenth anniversary edition, updated edn. Basic Books,
Reading
Fisher RA (1935) The design of experiments. Oliver and Boyd, Ltd., Edinburgh
Godfrey AB (1986) Report: the history and evolution of quality in AT&T. AT&T Tech J
65(2):9–20. https://doi.org/10.1002/j.1538-7305.1986.tb00289.x
[Crossref]
Hermann M, Pentek T, Otto B (2015) Design principles for industrie 4.0 scenarios: a
literature review. Working Paper No. 01/2015. Technische Universität Dortmund,
Dortmund. https://doi.org/10.13140/RG.2.2.29269.22248
Hints R, Vanca M, Terkaj W, Marra ED (2011) A virtual factory tool to enhance the
integrated design of production systems. In: Proceedings of the DET2011 7th
international conference on digital enterprise technology, Athens, pp 28–30
IMT S (2013) Are virtual factories the future of manufacturing? Technical report.
Jain S, Shao G (2014) Virtual factory revisited for manufacturing data analytics. In:
Proceedings of the 2014 winter simulation conference, WSC ’14. IEEE Press,
Savannah, pp 887–898
[Crossref]
Juran JM (1986) The quality trilogy: a universal approach to managing for quality. In:
40th annual quality congress – American Society for quality control, ASQC
transactions of: 19–21 May 1986, Anaheim, California, 1st edn. American Society for
Quality Control, Inc., Milwaukee
Juran JM (ed) (1995) A history of managing for quality, 1st edn. ASQ Press, Milwaukee
Kenett RS (2008) From data to information to knowledge. Six Sigma Forum Magazine,
pp 32–33
Kenett RS (2019) Applications of Bayesian networks. Trans Mach Learn Data Mining
12(2):33–54
Kenett RS, Bortman J (2021) The digital twin in industry 4.0: a wide-angle
perspective. Qual Reliab Eng Int 38(3):1357–1366. https://doi.org/10.1002/qre.2948
[Crossref]
Kenett RS, Coleman S (2021) Data and the fourth industrial revolution. Significance
18(3):8–9. https://doi.org/10.1111/1740-9713.01523
[Crossref]
Kenett RS, Redman TC (2019) The real work of data science: turning data into
information, better decisions, and stronger organizations, 1st edn. Wiley, Hoboken
[Crossref]
Kenett RS, Shmueli G (2014) On information quality. J R Stat Soc A Stat Soc 177(1):3–
38. https://doi.org/10.1111/rssa.12007
[MathSciNet][Crossref]
Kenett RS, Shmueli G (2016) Information quality: the potential of data and analytics
to generate knowledge, 1st edn. Wiley, Chichester
[Crossref]
Kenett RS, Zonnenshain A, Fortuna G (2018b) A road map for applied data sciences
supporting sustainability in advanced manufacturing: the information quality
dimensions. Procedia Manuf 21:141–148. https://doi.org/10.1016/j.promfg.2018.02.104
[Crossref]
Kenett RS, Swarz RS, Zonnenshain A (eds) (2021a) Systems engineering in the fourth industrial revolution: big data, novel technologies, and modern systems engineering, 1st edn. Wiley, Hoboken
Santner TJ, Williams BJ, Notz WI (2003) The design and analysis of computer
experiments. Springer series in statistics. Springer, New York. https://doi.org/10.
1007/978-1-4757-3799-8
Shewhart WA (1926) Quality control charts. Bell Syst Tech J 5(4):593–603. https://doi.org/10.1002/j.1538-7305.1926.tb00125.x
[Crossref]
Terkaj W, Tolio T, Urgo M (2015) A virtual factory approach for in situ simulation to
support production and maintenance planning. CIRP Ann Manuf Technol 64:451–454.
https://doi.org/10.1016/j.cirp.2015.04.121
[Crossref]
Volvo Group Global (2017) Virtual twin plant shorten lead times. https://www.volvogroup.com/en/news-and-media/news/2017/mar/virtual-twin-plant-shorten-lead-times.html
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
R. S. Kenett et al., Industrial Statistics, Statistics for Industry, Technology, and
Engineering
https://doi.org/10.1007/978-3-031-28482-3_2
Preview
Competitive pressures are forcing many management teams to focus on
process control and process improvement, as an alternative to
screening and inspection. This chapter discusses techniques used effectively in industrial organizations that have adopted these concepts. Classical control charts, quality control, and quality planning
tools are presented along with modern statistical process control
procedures including new statistical techniques for constructing
confidence intervals of process capability indices and analyzing Pareto
charts. Throughout the chapter, a software piston simulator is used to
demonstrate how control charts are set up and used in real-life
applications.
Supplementary Information
The online version contains supplementary material available at
(https://doi.org/10.1007/978-3-031-28482-3_2).
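As a minimal sketch of the kind of control-chart setup the chapter demonstrates, the following computes X̄-chart limits from the grand mean and average range. The subgroup measurements are invented, and this is the standard Shewhart construction with the tabulated A2 factor (0.577 for subgroups of size 5), not the book's piston-simulator example:

```python
# X-bar chart limits from hypothetical subgroup data (5 observations per
# subgroup), using the tabulated A2 constant for subgroups of size 5.
subgroups = [
    [10.1, 9.9, 10.0, 10.2, 9.8],
    [10.0, 10.1, 9.9, 10.0, 10.1],
    [9.8, 10.2, 10.0, 9.9, 10.1],
]
A2 = 0.577  # standard control-chart factor for subgroup size n = 5

xbars = [sum(s) / len(s) for s in subgroups]     # subgroup means
ranges = [max(s) - min(s) for s in subgroups]    # subgroup ranges
xbarbar = sum(xbars) / len(xbars)                # center line (grand mean)
rbar = sum(ranges) / len(ranges)                 # average range

ucl = xbarbar + A2 * rbar   # upper control limit
lcl = xbarbar - A2 * rbar   # lower control limit
print(round(lcl, 3), round(xbarbar, 3), round(ucl, 3))  # 9.814 10.007 10.199
```

Subgroup means falling outside (lcl, ucl) signal a potentially out-of-control process; in the chapter, the piston simulator plays the role of the data source that these limits monitor.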