

Data-Driven Framework for Predictive
Maintenance in Industry 4.0 Concept

Van Cuong Sai(B) , Maxim V. Shcherbakov, and Van Phu Tran

Volgograd State Technical University, Lenin Avenue 28, 400005 Volgograd, Russia
svcuonghvktqs@gmail.com
http://www.vstu.ru

Abstract. Supporting the operation of equipment at the operational stage with minimal costs is an urgent task for various industries. As machines and systems in modern manufacturing become more advanced and complicated, traditional approaches to the maintenance of complex systems (corrective and preventive maintenance) lose their effectiveness. The latest trends in maintenance lean towards condition-based maintenance (CBM) techniques. This paper describes a framework for building predictive maintenance models for proactive decision support based on machine learning and deep learning techniques. The proposed framework is implemented as a package for R and provides several features for creating and evaluating predictive maintenance models. All features of the framework belong to one of the following groups: data validation and preparation, data exploration and visualization, feature engineering, data preprocessing, and model creation and evaluation. The use case provided in the paper highlights the benefits of the framework for proactive decision support in estimating the remaining useful life (RUL) of a turbofan engine.

Keywords: Condition-based maintenance (CBM) · Predictive maintenance (PdM) · Industry 4.0 · Internet of Things (IoT) · Remaining useful life (RUL) · Data-driven method · Machine learning · Deep learning

1 Introduction
In the current manufacturing world, systems are becoming more and more complex, especially machine systems. This complexity is a source of various incidents and faults that cause considerable damage to equipment, the environment, and people. Failure of some parts of a system can affect all of its operations. Under such conditions, the classical maintenance approaches (corrective and preventive maintenance) largely lose their effectiveness.
In corrective maintenance, interventions are performed only when the critical component is fully worn out and fails. This minimizes the number of unnecessary part replacements or repairs, since maintenance is only carried out as needed. However, this approach can also lead to unexpected and lengthy losses of production, safety risks as a system nears failure, or expensive repairs and replacements.

The reported study was supported by RFBR research projects 19-47-340010 r a.
© Springer Nature Switzerland AG 2019
A. G. Kravets et al. (Eds.): CIT&DS 2019, CCIS 1083, pp. 344–358, 2019.
https://doi.org/10.1007/978-3-030-29743-5_28
In preventive maintenance (time-based or planned maintenance), interventions are scheduled at periodic intervals regardless of the assets' health condition, and thus the service life of the critical components is not fully utilized [11].
Therefore, in order to prevent such risks, a more efficient maintenance strategy needs to be applied to the system. One effective strategy to enhance the reliability of the system is to develop and utilize intelligent systems that perform the functions of predictive analytics and predictive maintenance. This strategy allows for the transition from time-based preventive maintenance to condition-based maintenance (CBM). Condition-based maintenance, also known as predictive maintenance (PdM), is a maintenance strategy that provides an assessment of the system's condition, based on data collected from the system by continuous monitoring, in order to optimize the availability of process machinery and greatly reduce the cost of maintenance.
The key to implementing predictive maintenance is the ability to assess equipment health and discover detailed information about current or future faults from the collected data. Nowadays, with the trend of smart manufacturing and the rapid development of information and communication technologies (ICT), companies are increasingly applying various types of sensors and information technologies to capture data at all stages of production. This allows large amounts of data about the health condition of the equipment to be collected. Simultaneously, technologies such as the Internet of Things (IoT), Internet of Services (IoS), artificial intelligence (AI), and data mining (DM), which are all inherent in Industry 4.0, are being leveraged with "Big Data" to facilitate a more adaptable and smart maintenance policy.
Remaining useful life (RUL) prediction of equipment is the key technology for realizing condition-based maintenance (CBM). Accurate prediction of RUL plays an increasingly crucial role in intelligent health management systems for the optimization of maintenance decisions. Thus, RUL prediction of equipment is of great significance for guaranteeing production efficiency, reducing maintenance costs, and improving plant safety [9,10].
At present, the main methods used for RUL estimation are physics-based failure models (model-based methods) and data-driven methods. Model-based methods attempt to set up mathematical or physical models to describe the degradation processes of machinery and update the model parameters using measured data [3,4]. Commonly used models include the Markov process model [2], the Wiener process model [12], etc. However, model-based methods are difficult to apply to complex systems, as damage propagation processes and equipment dynamic responses are complex. To address this issue, data-driven prognostic methods represent the system degradation process using machine learning algorithms. These methods rely on the assumption that the statistical characteristics of the data are relatively consistent unless a fault occurs. Literally hundreds of papers propose new machine learning algorithms, suggesting methodological
advances and accuracy improvements for RUL prediction. Methods such as the autoregressive (AR) model [8], deep neural networks [7,17], and support vector machines [5,15] are used. Yet their conclusions are based only on certain situations and on specific data.
In machine learning, the "No Free Lunch" theorem states that no one-size-fits-all machine learning algorithm is best for all problems and every dataset. It is difficult to determine which learning algorithm, or what type of learning algorithm, should be selected among the many competing ones. Moreover, an accurate model today could become inaccurate tomorrow, and a model's predictive accuracy depends on the relevance, sufficiency, and quality of the training and test data. Therefore, proper preprocessing strategies and model selection are the foundation of the construction of a robust, accurate model.
This paper presents a developed framework (a package in R) for predictive maintenance in the Industry 4.0 concept (the PdM framework). The PdM framework helps engineers and domain experts to easily analyze and utilize multiple multivariate time series sensor data, and to rapidly develop and test predictive maintenance models based on RUL estimation using machine learning and deep learning algorithms, for proactive decision-support systems that optimize the maintenance and service of machines.
R is a free software environment for statistical computing and graphics supported by the R Foundation for Statistical Computing [13]. R is one of the most powerful machine learning platforms and is used by top data scientists around the world. A large number of algorithms are available and ready to use, and many packages expand its functionality for data processing and predictive modeling. One of them is caret [6], which implements functions to streamline the model training process for complex regression and classification problems.

2 Proposed Framework
2.1 Task Statement and Methodology
We assume we have sensor readings over the total operational life of similar equipment (run-to-failure data). We denote the set of instances by ID. For an instance i ∈ ID, we consider a multi-sensor time series X^(i) = {X_1^(i), X_2^(i), ..., X_{T^(i)}^(i)}, where T^(i) is the length of the time series, corresponding to the total operational life (from start to end of life), and X_t^(i) ∈ R^(n+k) is an (n + k)-dimensional vector corresponding to the n sensors related to the equipment state and the k sensors related to the operational conditions at time t: X_t^(i) = {s_1^(i), s_2^(i), ..., s_n^(i), c_1^(i), c_2^(i), ..., c_k^(i)}.
Table 1 presents the data table schema for storing such data, allowing the implementation of the proposed framework. Figure 1 shows the architecture of the proposed framework.

Table 1. The data table schema for the predictive maintenance framework.

Column            Description
id                Unit identifier: a unique number that identifies an instance
timestamp         A timestamp (days, cycles, ...) when the data was obtained
s1, s2, ..., sn   The n sensor columns related to the equipment state at each time step
c1, c2, ..., ck   The k sensor columns related to the operating conditions


Because we know when each instance in ID will fail, for an instance i we can compute the RUL at each time step t, defined as the instance's total lifetime minus its elapsed life at that time: RUL_t^(i) = T^(i) − t.
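The labeling rule above can be sketched in a few lines; the following stand-alone Python snippet (illustrative only, not part of the R package) builds the RUL target for one run-to-failure instance:

```python
def rul_labels(total_life):
    """RUL at time step t (1..T) is the total lifetime minus the
    elapsed life: RUL_t = T - t, reaching 0 at the moment of failure."""
    return [total_life - t for t in range(1, total_life + 1)]

# An instance observed for 5 cycles until failure:
print(rul_labels(5))  # [4, 3, 2, 1, 0]
```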
In order to produce RUL estimates, we must determine a model f(·) of the form described below that captures the functional relationship between the sensor values and the RUL at time t for an instance i:

RUL_i(t) = f(s_{i,1}(t), s_{i,2}(t), ..., s_{i,n}(t), c_{i,1}(t), c_{i,2}(t), ..., c_{i,k}(t))

Once we have the historical run-to-failure data with RUL labels at each time
step, we can build and train models using machine learning and deep learning
technologies. The next step is to find a predictive model that can accurately
predict the RUL from new sensor data coming in from the currently monitored
operating instances similar to the historical monitored instances in ID.
For a currently monitored operating instance similar to the historical monitored instances in ID, the length T^(i) corresponds to the elapsed operational life up to the latest available sensor reading (the measurements are truncated some unknown amount of time before the instance fails).

2.2 Predictive Maintenance Framework (PdM)


The proposed framework (PdM) for predictive maintenance in the concept of Industry 4.0 implements the tools shown in Fig. 1. The PdM framework helps engineers and domain experts to easily analyze and utilize multiple multivariate time series sensor data to develop and test predictive maintenance models that estimate remaining useful life (RUL) in order to prevent equipment failures. These models are developed by accessing historical failure data stored in local files, in an SQL database, on cloud storage systems such as Amazon S3, on a Hadoop Distributed File System, etc., in the format presented in Table 1. The following assumptions are required to create accurate predictive maintenance models: (1) the historical data should capture the degradation evolution of the critical component over time, and (2) the historical data should contain a sufficient number of training instances to build representative models of the desired critical component's behavior.

Fig. 1. System architecture for predictive maintenance in concept of Industry 4.0.



This framework is developed to solve the following tasks for predictive maintenance:
1. Analyze data: to load, summarize and visualize multiple multivariate time
series data,
2. Prepare data: for data preparation including data cleaning, feature engineer-
ing and data transforms,
3. Build and evaluate predictive maintenance models: for using a large number of
machine learning algorithms which are available in R, including linear, non-
linear, trees, ensembles for regression; for re-sampling methods, algorithm
evaluation metrics and model selection, for algorithm tuning and ensemble
methods.
The advantage of all functions implemented in the PdM framework is a universal interface across different tasks. The proposed framework can be used for any multiple-component system.
The schematic diagram implementing this framework is shown in Fig. 2. The
predictive maintenance routine starts with data acquisition using sensors and
data acquisition boards. Appropriate signal processing techniques will then be
applied to extract representative features and/or system model parameters from
the collected data. These features will be used to create accurate predictive
maintenance models. These obtained models utilize the current health status of
a given critical component to predict its future condition and plan maintenance
actions before breakdown for the optimal use of the equipment in real time.

Fig. 2. The schematic diagram of the predictive maintenance system.

Downloading and Installation. To install and use the PdM framework, run this code:
> install.packages("devtools")
> devtools::install_github("forvis/PdM")
> library(PdM)

3 Use Case Example


In this section we present an end-to-end use case applying some key features of the PdM framework to analyze multiple multivariate time series sensor data and to build and evaluate predictive maintenance models for remaining useful life (RUL) estimation of a turbofan engine.
The framework has several built-in datasets available as data frames. For illustration, we will use the Turbofan Engine Degradation Simulation data set FD001, released by the Prognostics Center of Excellence at NASA's Ames Research Center [14]. It contains simulated sensor data for different turbofan engines generated over time and consists of a training dataset (train_data1), a testing dataset (test_data1), and ground truth data (truth_data1). The training dataset includes run-to-failure sensor measurements from degrading turbofan engines of the same type, recorded as multiple multivariate time series. The testing data, however, contain only the engines' operating data, without failure events recorded. Both the training set and the test set are stored using the data table schema presented in Table 1. The ground truth data is a vector containing the true remaining useful life of each engine in the testing data. The training and testing sets each contain 100 turbofan engines.
The first step is to load and check if the input datasets are correctly formatted
in accordance with the data table schema presented in Table 1:
> library(PdM)
> data(train_data1)
> data(test_data1)
> data(truth_data1)
> validate_data(train_data1)
[1] TRUE
> validate_data(test_data1)
[1] TRUE
The validate_data() function checks that the input data contain the necessary columns and that the composite primary key values are not duplicated. When the input data are loaded successfully and correspond to the required format, we can look at descriptive statistics using the summarize_data() function:
> summarize_data(train_data1)
++++ Summary statistics ++++
==============================
- Number of observations: 20631
- Number of variables: 26
==============================

The summarize_data() function shows key descriptive statistics for each column: the number of observations and variables, and a data frame containing the descriptive statistics of each of the columns, such as data types, number of missing values, min, max, mean, median, percentiles, and skewness.
If summarize_data() shows that the dataset has missing values in some columns, we would do well to impute them using the handle_misv() function. This function provides different imputation algorithms: removing rows with missing values, replacing the missing values with the mean, median, or mode of the column, k-nearest neighbors (knn), bagged regression, etc.
Here is an example of using the knn algorithm to impute missing data in the training dataset:
> train_data1 <- handle_misv(train_data1, method = 'knn')
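For intuition, the simplest of the imputation strategies listed above, replacing missing entries with the column mean, can be sketched language-agnostically (Python is used here for illustration; the column values are made up):

```python
def impute_mean(column):
    """Replace missing entries (None) with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

# A sensor column with one missing reading:
print(impute_mean([1.0, None, 3.0]))  # [1.0, 2.0, 3.0]
```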
Now that the missing values are handled, our dataset is ready to visualize using the universal visualize_data() function. This function allows us to plot different types of charts, e.g. scatter plots, line graphs, combined line-and-point graphs, boxplots, histograms, etc., for the multiple multivariate time series sensor data, with the results appearing as panels in a larger figure. Each panel corresponds to one variable (sensor channel).
Here is an example of using visualize_data() to plot line graphs for the training dataset:
> visualize_data(train_data1, id = 1:10, type = 'l')
Fig. 3. Sensor readings over time for the first 10 engines in the training set.

Figure 3 shows all 24 sensor channels (21 sensors describing the health condition of the engines and 3 sensors describing the conditions in which the engines are operating) for the first 10 engines (10 lines in each subplot, one for each engine) from the training set, plotted against time.
From the results of the data summary and visualization we can draw the following conclusions:
– The variables do not have the same scale, and the sensor readings are noisy. This means that we should transform the data in order to best expose its structure to machine learning algorithms.
– The following variables are constant in the training set, meaning that the operating condition was fixed and/or the sensor was broken or inactive: c3, s1, s5, s10, s16, s18, s19.
– The sensor s6 is practically constant.
We can check and discard these variables from the analysis, for both the training and testing datasets, using the process_data() function:
# Check and remove variables with a zero variance
> c(train_data1, test_data1) %<-% process_data(train_data1,
    test_data1, method = 'zv')
## The following variables with a zero variance are
## removed: c3, s1, s5, s10, s16, s18, s19

# Check and remove variables with a near zero variance
> c(train_data1, test_data1) %<-% process_data(train_data1,
    test_data1, method = 'nzv')
## The following variables with a near zero variance are
## removed: s6
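The idea behind the zero- and near-zero-variance filters can be illustrated with a small self-contained Python sketch (the dominance threshold and toy data are assumptions for illustration; caret's actual nzv criterion also considers the frequency ratio of the two most common values):

```python
def low_variance_columns(table, dominance=0.95):
    """Names of columns that are constant (zero variance) or whose most
    frequent value accounts for more than `dominance` of the rows
    (near-zero variance)."""
    drop = []
    for name, values in table.items():
        counts = {}
        for v in values:
            counts[v] = counts.get(v, 0) + 1
        if len(counts) == 1 or max(counts.values()) / len(values) > dominance:
            drop.append(name)
    return drop

# Toy data: c3 is constant, s6 is nearly constant, s2 varies freely.
data = {"c3": [1, 1, 1, 1], "s6": [0, 0, 0, 1], "s2": [3, 1, 4, 2]}
print(low_variance_columns(data, dominance=0.7))  # ['c3', 's6']
```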

The process_data() function also provides a number of useful data transform methods via its method argument, e.g. the Box-Cox transform, the Yeo-Johnson transform, min-max normalization, dividing values by the standard deviation, subtracting the mean from values, transforming the data to its principal components, etc.
Before building any predictive maintenance model for predicting RUL, we should check that our data contain enough useful information to allow us to distinguish between healthy and failing states of the engines. If they don't, it is unlikely that any model built with the sensor data will be useful for our purposes. The visualize_data() function also allows us to compare the distribution of sensor values in "healthy" engines to a similar set of measurements when the engines are close to failure:
> visualize_data(train_data1, id = 1:100, type = 'hf',
    n_step = 20)

Fig. 4. Distribution of the healthy vs failing sensor values.

Figure 4 shows the distribution of the values of all sensor channels for each engine in the training set, where healthy values (in green) are those taken from the first 20 time steps of the engine's lifetime and failing values are from the last 20 time steps (n_step = 20). It is apparent that these two distributions are quite different for some sensor channels.
Also, the correlation between attributes can be calculated using the visualize_correlation() function:
> visualize_correlation(train_data1, method = 'circle')

Fig. 5. Correlation matrix of training dataset input attributes

Figure 5 shows that many of the attributes are strongly correlated. Many methods perform better if highly correlated attributes are removed.
We can find and remove attributes with an absolute correlation above 0.75 using the find_redundant() function as follows:
> train_data1 <- find_redundant(train_data1, cf = 0.75)
## The following highly correlated features are removed: s4,
## s7, s11, s12, s13, s14
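The principle behind such a filter, dropping one member of each highly correlated pair, can be sketched as follows (a simplified greedy version in Python; the actual selection rule used by find_redundant() may differ, e.g. caret's findCorrelation() considers mean absolute correlations when deciding which member of a pair to drop):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def redundant_columns(table, cutoff=0.75):
    """Greedily mark the later column of each pair whose |r| exceeds the cutoff."""
    names, drop = list(table), set()
    for i, a in enumerate(names):
        if a in drop:
            continue
        for b in names[i + 1:]:
            if b not in drop and abs(pearson(table[a], table[b])) > cutoff:
                drop.add(b)
    return sorted(drop)

# s7 is a perfect multiple of s4, so it is flagged as redundant:
data = {"s4": [1, 2, 3, 4], "s7": [2, 4, 6, 8], "s9": [4, 1, 3, 2]}
print(redundant_columns(data))  # ['s7']
```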
With the missing values handled and redundant features removed, our datasets are now ready to undergo variable transformations if required. Here is an example of using the process_data() function to transform the training and testing datasets (min-max normalization):
# MinMax normalization
> c(train_data1, test_data1) %<-% process_data(train_data1,
    test_data1, method = 'range')
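A key point of min-max normalization in a train/test setting is that the scaling parameters must be learned on the training set only and then applied unchanged to the test set; a minimal Python sketch (made-up values):

```python
def minmax_fit(train_column):
    """Learn the scaling range on the training data only."""
    return min(train_column), max(train_column)

def minmax_apply(column, lo, hi):
    """Scale values to [0, 1] using the training-set range, so the train
    and test sets are transformed with the same parameters."""
    return [(v - lo) / (hi - lo) for v in column]

lo, hi = minmax_fit([10.0, 20.0, 30.0])          # fit on a training column
print(minmax_apply([10.0, 25.0, 30.0], lo, hi))  # [0.0, 0.75, 1.0]
```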

Before building predictive maintenance models, we should generate labels for the training data, i.e. the remaining useful life (RUL), using the calculate_rul() function:
> train_data1 <- calculate_rul(train_data1)
After this step, the dataset is ready for building any predictive maintenance model, but we have no idea which algorithms will do well on this task. Let's check several different algorithms, e.g. linear regression (lm), regression trees (rpart), support vector machines (svm), and random forests (rf), using the unified train_model() function:
> models <- train_model(RUL ~ ., train_data1, method = c('lm',
    'rpart', 'svmRadial', 'rf'))
The train_model() function is based on the train() function from the caret package [6]. This means that train_model() allows using any algorithm available in R, and can be used to: evaluate, using re-sampling, the effect of model tuning parameters on performance; choose the "optimal" model across these parameters; and estimate model performance from a training set. To see which models train_model() supports, run the following:
# Looking for available algorithms in caret
> models <- paste(names(getModelInfo()), collapse = ', ')
> models
To make predictions on data omitted from the training set, we use the following statements:
> predictions <- create_prediction(models,
    new_data = test_data1)
> evaluation <- evaluate_model(truth_data1, predictions)
The evaluate_model() function returns a list with the following variables: accuracy – a data frame with the accuracy table; plot – a visualization of predicted and actual values; prd – a prediction-realization diagram showing a scatterplot of forecast vs. actual values (Figs. 6 and 7).
> evaluation$accuracy
## Model   MAE      MdAE     RMSE     sMAPE
## lm      25.97517 31.25047 31.25047 19.54203
## rpart   21.69112 18.12015 28.31998 34.15792
## svm     17.52203 14.60832 23.60905 24.27818
## rf      19.47539 13.66242 26.49968 25.18338

> evaluation$plot

Fig. 6. Visualisation of machine id vs. RUL covering both predicted and actual values
of dataset.

Fig. 7. Prediction-realization diagram for FD001 data. Different colors and marks to
show forecasts relating to different forecasting methods.

> evaluation$prd
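For reference, the error measures reported in the accuracy table can be computed as follows (stand-alone Python versions of common definitions; sMAPE in particular has several variants, and the one used by evaluate_model() may differ):

```python
def mae(actual, pred):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root mean squared error."""
    return (sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)) ** 0.5

def smape(actual, pred):
    """Symmetric mean absolute percentage error, in percent."""
    return 100 / len(actual) * sum(
        abs(a - p) / ((abs(a) + abs(p)) / 2) for a, p in zip(actual, pred))

actual, pred = [100.0, 50.0], [90.0, 60.0]
print(mae(actual, pred))   # 10.0
print(rmse(actual, pred))  # 10.0
```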
The results show that SVM has the lowest errors (MAE, MdAE, RMSE, sMAPE). We can look at the default parameters of this algorithm:
> print(models$svm)

We can improve the accuracy of the best algorithm (svm in this case) by tuning its parameters using grid search:
# Tune the SVM sigma and C parameters
# Use expand.grid to specify the search space
> grid <- expand.grid(sigma = c(.01, .015, 0.2),
    C = seq(1, 10, by = 1))
# Train the svm
> svm_grid <- train_model(RUL ~ ., train_data1,
    method = 'svmRadial',
    tuneGrid = grid)
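The grid search itself is conceptually simple: evaluate every (sigma, C) combination and keep the best one. A Python sketch, with a made-up error surface standing in for cross-validated RMSE:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Evaluate score_fn at every grid point and return the combination
    with the lowest error, mimicking what tuneGrid does in caret."""
    best, best_score = None, float("inf")
    for combo in product(*param_grid.values()):
        params = dict(zip(param_grid, combo))
        score = score_fn(**params)
        if score < best_score:
            best, best_score = params, score
    return best, best_score

# Hypothetical error surface standing in for cross-validated RMSE:
fake_rmse = lambda sigma, C: (sigma - 0.015) ** 2 + (C - 3) ** 2
grid = {"sigma": [0.01, 0.015, 0.2], "C": list(range(1, 11))}
print(grid_search(grid, fake_rmse))  # ({'sigma': 0.015, 'C': 3}, 0.0)
```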
Using print(svm_grid) we can see the optimal model with the final parameter values selected for this model (sigma and C in this case). Once we have an accurate model on our test harness, we can save it to a file so that we can load it later and make predictions, using the saveRDS() and readRDS() functions:
# Save the model to disk
> saveRDS(svm_grid, './final_model.rds')
# Load the model
> model <- readRDS('./final_model.rds')
In addition to machine learning algorithms, the PdM framework also provides some tools for building predictive maintenance models based on deep learning algorithms, e.g. create_tensor(), train_lstm(), train_cnn(), etc. For more details about all functions of the PdM framework, use the following code to access the documentation pages:
> help(package = 'PdM')
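As background, a create_tensor()-style step for LSTM/CNN models typically slices each series into fixed-width overlapping windows; the following Python sketch is an assumption about that kind of preprocessing, not the package's actual implementation:

```python
def sliding_windows(series, width):
    """Slice one time series (a list of per-step feature vectors) into
    overlapping windows of length `width`, giving the usual input shape
    (n_windows, width, n_features) for an LSTM or 1-D CNN."""
    return [series[i:i + width] for i in range(len(series) - width + 1)]

# A univariate series of 4 steps cut into windows of width 3:
s = [[0.1], [0.2], [0.3], [0.4]]
print(sliding_windows(s, 3))  # [[[0.1], [0.2], [0.3]], [[0.2], [0.3], [0.4]]]
```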

4 Conclusion
We conclude that the proposed PdM framework can be applied for proactive decision support. The proposed method was applied to predict the remaining useful life of equipment in the context of industrial predictive maintenance.
In future work, we will wrap the functionality of the proposed predictive maintenance framework into a graphical user interface. This will enable users to conduct all steps of the predictive maintenance workflow from a browser without writing code.

References
1. Allaire, J., Chollet, F.: R Interface to ‘Keras’. R package version 2.2.4 (2018).
https://CRAN.R-project.org/package=keras
2. Dui, H., Si, S., Zuo, M., Sun, S.: Semi-Markov process-based integrated importance
measure for multi-state systems. IEEE Trans. Reliab. 64(2), 754–765 (2015)

3. Hanachi, H., Liu, J., Banerjee, A., Chen, Y., Koul, A.: A physics-based modeling
approach for performance monitoring in gas turbine engines. IEEE Trans. Reliab.
64(1), 197–205 (2015)
4. Huang, Z., Xu, Z., Wang, W., Sun, Y.: Remaining useful life prediction for a
nonlinear heterogeneous Wiener process model with an adaptive drift. IEEE Trans.
Reliab. 64(2), 687–700 (2015)
5. Khelif, R., Chebel-Morello, B., Malinowski, S., Laajili, E., Fnaiech, F., Zerhouni,
N.: Direct remaining useful life estimation based on support vector regression.
IEEE Trans. Industr. Electron. 64(3), 2276–2285 (2017)
6. Kuhn, M.: caret: Classification and Regression Training. R package version 6.0-82
(2019). https://CRAN.R-project.org/package=caret
7. Li, X., Ding, Q., Sun, J.: Remaining useful life estimation in prognostics using deep
convolution neural networks. Reliab. Eng. Syst. Saf. 172, 1–11 (2018)
8. Long, B., Xian, W., Jiang, L., Liu, Z.: An improved autoregressive model by particle
swarm optimization for prognostics of Lithium-Ion batteries. Microelectron. Reliab.
53(6), 821–831 (2013)
9. Malhi, A., Yan, R., Gao, R.: Prognosis of defect propagation based on recurrent
neural networks. IEEE Trans. Instrum. Meas. 60(3), 703–711 (2011)
10. Qian, Y., Yan, R., Hu, S.: Bearing degradation evaluation using recurrence quantifi-
cation analysis and Kalman filter. IEEE Trans. Instrum. Meas. 63(11), 2599–2610
(2014)
11. Soh, S., Radzi, N., Haron, H.: Review on scheduling techniques of preventive main-
tenance activities of railway. In: Fourth International Conference on Computational
Intelligence, Modelling and Simulation. IEEE, pp. 310–315, Kuantan, Malaysia,
September 2012. https://doi.org/10.1109/CIMSim.2012.56
12. Si, X., Wang, W., Chen, M., Hu, C., Zhou, D.: A degradation path-dependent app-
roach for remaining useful life estimation with an exact and closed-form solution.
Eur. J. Oper. Res. 226(1), 53–66 (2013)
13. The Comprehensive R Archive Network. https://cran.r-project.org/. Accessed 20
Jan 2019
14. Turbofan engine degradation simulation data set. https://c3.nasa.gov/dashlink/
resources/139/. Accessed 18 Jan 2019
15. Wang, S., Zhao, L., Su, X., Ma, P.: Prognostics of Lithium-Ion batteries based
on battery performance analysis and flexible support vector regression. Energies
7(10), 6492–6508 (2014)
16. Wickham, H.: ggplot2: Elegant Graphics for Data Analysis. Springer, New York
(2016). https://doi.org/10.1007/978-0-387-98141-3
17. Yu, J., Mo, B., Tang, D., Liu, H., Wan, J.: Remaining useful life prediction for
Lithium-Ion batteries using a quantum particle swarm optimization-based particle
filter. Qual. Eng. 29(3), 536–546 (2017)
