
Original article

Textile Research Journal, 2015, Vol. 85(13) 1367–1380
© The Author(s) 2015
Reprints and permissions: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/0040517514553879
trj.sagepub.com

A model for predicting drying time period of wool yarn bobbins using computational intelligence techniques

Ugur Akyol1, Pınar Tüfekci2, Kamil Kahveci3 and Ahmet Cihan4

Abstract
In this study, a predictive model has been developed using computational intelligence techniques for the prediction of
drying time in the wool yarn bobbin drying process. The bobbin drying process is influenced by various drying param-
eters, 19 of which were used as input variables in the dataset. These parameters affect the drying time of yarn bobbins,
which is considered as the target variable. The dataset, which consists of these input and target variables, was collected
from an experimental yarn bobbin drying system. Firstly, the input variables most effective on the target variable, namely the best feature subset of the dataset, were investigated by using a filter-based feature selection method. As a result, the five most important parameters were obtained as the best feature subset. Afterwards, the most successful method for predicting the drying time of wool yarn bobbins with the highest accuracy was explored amongst 16 computational intelligence methods applied to the best feature subset. Finally, the best performance was achieved by the REP tree method, which attained the minimum error and the shortest time taken to build the model.

Keywords
prediction of drying time, wool, bobbin, feature selection, machine learning regression method, REP tree method

Drying of solids is one of the oldest and most common unit operations found in diverse processes such as those used in the agricultural, ceramic, chemical, food, pharmaceutical, pulp and paper, mineral, polymer, and textile industries.1 The drying process is used in a number of stages in the textile industry, including the bobbin drying process. The purpose of drying is to remove the water inside bobbins while keeping the quality of the textile without damage. Convection dryers are the most common type of dryers used for drying textiles and carpets. Convection dryers pass a hot air stream over the surface of the material to be dried.2

Drying of a wet textile material is a time-consuming process and also requires high energy input; it constitutes one of the primary cost elements among textile finishing operations.3–5 All industries consider how to optimize their processes to reduce overall operating costs. In this challenging environment, optimization studies on drying enable reduced energy consumption and help companies to be much more competitive in the textile manufacturing market. In this context, prediction of the drying time of textile bobbins will assist optimization of labor planning and manufacturing of the bobbins. It will also facilitate efficient energy management, so that the sector can establish appropriate budget allocations for the cost of energy usage. Moreover, determination of drying process parameters is of great practical importance for predicting the drying time of yarn bobbins.

1Department of Mechanical Engineering, Namık Kemal University, Çorlu, Turkey
2Department of Computer Engineering, Namık Kemal University, Çorlu, Turkey
3Department of Mechanical Engineering, Trakya University, Edirne, Turkey
4Department of Mechanical Engineering, Beykent University, İstanbul, Turkey

Corresponding author:
Ugur Akyol, Department of Mechanical Engineering, Çorlu Faculty of Engineering, Namık Kemal University, 59860 Çorlu, Tekirdağ, Turkey.
Email: uakyol@nku.edu.tr

In the literature, several studies have been undertaken to determine drying process and drying moisture transfer parameters and the drying time period for textile products. The effects of different drying parameters on the drying process of viscose bobbins were determined to optimize operating conditions.6 The results showed that both the drying temperature and pressure had significant effects on the drying rate. Ribeiro and Ventura modeled the convective drying process of textile bobbins numerically by using various drying parameters.4 Sousa et al. analyzed the convective drying process of crude cotton fabric as a function of various drying parameters, such as initial moisture content, drying air velocity, and temperature.7 Also, studies on the investigation of moisture transfer and diffusion parameters can be found in the literature for drying of solid,8 food,9,10 and wood products.11,12 Sahin et al. developed a simple method for determination of the drying time of multidimensional food products by considering an analogy between the heat diffusion and moisture transfer models.13 In another study conducted by Sahin et al.,14 a new analytical model was developed for predicting drying times of irregularly shaped multidimensional moist solids. Laing et al. studied the determination of drying time for apparel fabrics.15 The research in the literature reveals that the study in this paper appears to be the first to predict the duration of the yarn bobbin drying process in textile finishing by using the methods applied in this study.

Optimization of a drying process is significant for reducing energy consumption. Starting from this point, in this study a predictive model is developed using computational intelligence techniques. First, the most important parameters for the drying process among the 19 parameters recorded in the empirical drying process were determined using a feature selection method. For this purpose, the original feature set consisting of all the drying process parameters was applied to a filter-based feature selection method, which contains a CFS SubSet Evaluator with several search procedures to determine the reduced feature subsets. Afterwards, the enhancement of learning achievement with the selected feature subsets was measured, evaluated, and then compared across 16 machine learning regression methods. As a result of the feature selection process, the duration of the yarn bobbin drying process could be predicted accurately by using the determined parameters. Finally, a predictive model was developed by choosing the most important parameters and the most successful machine learning regression method, which can build the model with minimum error in minimum time.

The remainder of this paper is organized as follows. In the next section, materials and methods are elaborated, whereas the experimental work is described in the following section. We then provide a discussion of the study, before summarizing conclusions in the final section.

Materials and methods

System description

The experiments were conducted in a pressurized hot-air bobbin dryer, which was constructed as a prototype for previous studies,3,6 as shown in Figure 1. In this dryer system, ambient air was directed to a 25 kW electrical heater by a centrifugal fan, and the air pressure was supplied by a compressor. The power of the heating process is controlled by a solid-state relay. The heaters are PID (proportional integral derivative) controlled. After the heater, air enters a bobbin carrier system where the bobbins are dried. In order to reduce heat losses, the surfaces of both the heater and the bobbin carrier were insulated with glass wool. An analog pressure sensor with 0.5% sensitivity was used to measure the pressure of the drying air. Control of the drying system was provided by a PLC (programmable logic controller). After the carrier, the drying air first enters a cooling exchanger to reduce the relative humidity of the air. Afterwards, the drying air enters a separator, where water droplets suspended in the air are separated from it. The carrier has been placed on an analog load-cell with 600 kg capacity and 1 g sensitivity, so the weights of the bobbins can be measured continuously.

The experimental study has been carried out for a bobbin outer diameter of 18 cm, for drying air temperatures of 80°C, 90°C, and 100°C, for effective drying pressures of 1 bar, 2 bar, and 3 bar, and for a constant volumetric flow rate of 67.5 m3/h per bobbin. Wool yarn bobbins have been used in the experimental study. In order to measure the temperature variations inside the bobbins, seven copper–constantan thermocouples were embedded into one of the bobbins with equal spacings in both the radial and azimuthal directions. The relative humidity of the drying air at the outlet of the bobbin carrier and at the outlet of the separator was monitored using humidity transmitters.

The parameters related to the drying process of wool bobbins considered in this study were defined as follows.

Drying air pressure. Drying air pressure (DAP) is one of the most important drying parameters because it affects the drying time considerably. In this study, drying

Figure 1. Schematic view of the experimental bobbin dryer system.

experiments were conducted for various drying air pressures: 0.5 bar, 1 bar, 2 bar, and 3 bar.

Drying air temperature. Drying air temperature (DAT) is also one of the most important drying parameters among the drying conditions; it affects the drying time significantly. In this study, drying experiments were conducted for various drying air temperatures: 80°C, 90°C, and 100°C.

Thermocouples. Seven copper–constantan thermocouples (T-1 to T-7), equally spaced in both the radial and azimuthal directions and equidistant from both the lower and upper surfaces of the bobbin, are used in the bobbin to measure the temperature distribution during the drying process.

Carrier inlet temperature (CIT). A Cu–Ni thermocouple was used to enable feedback control of the inlet temperature of the drying air at the carrier inlet, where the bobbins are placed and dried.

Carrier temperature (CT). This is the temperature of the drying air measured in the bobbin carrier during the drying process.

Carrier outlet temperature (COT). This is the temperature of the drying air measured at the outlet of the bobbin carrier during the drying process.

Separator outlet temperature (SOT). This is the temperature of the drying air measured at the outlet of the separator during the drying process.

Fan outlet temperature (FOT). This is the temperature of the drying air measured at the outlet of the fan during the drying process.

Carrier outlet humidity (COH). A humidity sensor with 4–20 mA output and 0.1 g/m3 sensitivity is used to measure the relative humidity of the drying air just after the carrier. This sensor is used to determine the water content of the drying air just before the cooling exchanger.

Separator outlet humidity (SOH). The moisture in the drying air condenses on the surface of a cooling exchanger; the purpose of this process is to reduce the relative humidity of the air. Afterwards, the drying air enters a separator, where water droplets suspended in the air are separated from it. A humidity sensor with

4–20 mA output and 0.1 g/m3 sensitivity is used to measure the relative humidity of the drying air just after the separator, in order to determine the amount of water discharged from the dryer during the drying process.

Instantaneous mass (IM). This is the mass value of the bobbins, measured instantaneously during the drying process.

Equilibrium mass (EM). This is the mass of the bobbins in equilibrium with the environmental conditions at the drying temperature and pressure.

Moisture ratio. The dimensionless moisture ratio is defined as: MR = (M − Me)/(Mo − Me), where M is the instantaneous moisture content, Mo the initial moisture content, and Me the equilibrium moisture content.

Drying time period. This is the drying time period (DTP) of bobbins subjected to the hot-air drying process until the desired moisture content within the bobbin is reached.

Feature selection

Data sets may vary in dimension from two to hundreds of features, and many of these features may be irrelevant or redundant. Feature subset selection decreases the data set dimension by removing irrelevant and redundant features from an original feature set. The objective of feature subset selection is to procure a minimal set of the original features. Using the decreased set of original features enables machine learning (ML) algorithms to operate faster and more effectively. Moreover, it helps to predict more correctly by increasing the learning accuracy of ML algorithms and improving result comprehensibility.16 The basic stages of the feature selection process (subset generation, subset evaluation, and stopping criteria with validation) are presented in Figure 2.

Figure 2. Feature selection process. [The flowchart runs from the original feature set through subset generation (search procedures: Best First, Exhaustive Search, Genetic Search, Random Search, Scatter Search, Greedy Stepwise Forward Direction), subset evaluation (CFS Subset Evaluator, a filter method), and a stopping criterion, to validation with ML regression methods, yielding the best subset.]
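The moisture ratio defined above can be computed directly from logged masses; a minimal sketch in plain Python (variable names and readings are illustrative, not from the paper):

```python
# Minimal sketch: the dimensionless moisture ratio MR = (M - Me)/(Mo - Me).
def moisture_ratio(m, m0, me):
    """m: instantaneous mass, m0: initial mass, me: equilibrium mass."""
    if m0 == me:
        raise ValueError("initial and equilibrium mass must differ")
    return (m - me) / (m0 - me)

# MR falls from 1 (start of drying) toward 0 (equilibrium).
masses = [10.0, 8.5, 7.0, 5.5, 4.6]  # hypothetical readings, kg
m0, me = masses[0], 4.5
ratios = [moisture_ratio(m, m0, me) for m in masses]
```

As drying proceeds and the bobbin mass approaches the equilibrium mass, MR decreases monotonically toward zero.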

Subset generation. The feature selection process starts with an original feature set that includes n features. Subset generation is a process that uses a search strategy to produce candidate feature subsets of the original feature set for evaluation. In principle, the best subset of the original feature set can be found by evaluating all the possible feature subsets, that is, all 2^n candidate subsets. This search is known as exhaustive search, which is too costly and impracticable if the original feature set contains many features.17 To find the optimal subset of the original feature set, more realistic, easier, and more practical search procedures have been developed.18 In this study, to select the candidate feature subsets for the evaluation stage, several search procedures are chosen, namely Best First (BF), Exhaustive Search (ES), Genetic Search (GS), Random Search (RS), Scatter Search (SS), and Greedy Stepwise Forward Direction (GSFD), as indicated in Figure 2.

Subset evaluation. After the candidate subsets are generated, an evaluation algorithm determines a current best subset. The evaluation algorithms are divided into two broad groups, filter and wrapper methods, based on their dependence on the inductive algorithm that will finally use the selected subset. In this study, a filter-based feature selection method, the CFS Subset Evaluator, is applied to the data set of the yarn bobbin

dryer system. CFS uses a correlation-based heuristic to evaluate the subsets of features. According to the calculations of feature–class and feature–feature correlations, a heuristic search strategy is applied by the CFS evaluator to obtain a good subset of features. The evaluator prefers subsets of features that are highly correlated with the class and have low intercorrelation with each other.19

Stopping criteria and validation. A stopping criterion is needed to stop the search and prevent an exhaustive search of subsets. The feature selection process halts by outputting the selected subset of features, which is then validated. The validation stage checks the validity of the selected subset and compares the results of several ML algorithms to find the best feature subset.17

Machine learning regression methods

In this study, regression models are generated to choose the best predictive model, which can predict the drying time of wool bobbins with the highest prediction accuracy. For this purpose, several ML regression algorithms, shown in Table 1, are applied to the selected subsets to determine first the best subset and then the best regression algorithm. These methods are divided into five categories, namely functions, lazy-learning algorithms, meta-learning algorithms, rule-based algorithms, and tree-based learning algorithms, as stated by the WEKA statistical analysis package.20

Table 1. Machine learning regression methods used in this study

Group      Method                      Abbreviation
Functions  Isotonic regression         IR
           Least median square         LMS
           Linear regression           LR
           Multilayer perceptron       MLP
           Pace regression             PR
           Radial basis function       RBF
           Simple linear regression    SLR
Lazy       IBk linear NN search        IBk
           Kstar                       K*
           Locally weighted learning   LWL
Meta       Additive regression         AR
           Bagging REP tree            BREP
Rules      Decision table              DT
           Model trees rules           M5R
Trees      Model trees regression      M5P
           REP trees                   REP

Functions. Functions contain algorithms that are based on mathematical models. The isotonic regression (IR) method chooses the feature which results in the lowest squared error.21 The least median square (LMS) method minimizes the median squared error.22 The linear regression (LR) method deals with weighted instances to create a prediction model.23 The multilayer perceptron (MLP) method maps input data onto relevant outputs.24 The pace regression (PR) method creates a regression model by using a cluster analysis to enhance the statistical basis.25 The radial basis function neural network (RBF) has emerged as a variant of the neural network.24 Simple linear regression (SLR) generates the regression model with the lowest squared error.26

Lazy-learning algorithms. Lazy-learning algorithms delay dealing with the training data until a query is answered. They store the training data in memory and find relevant data in the database to answer a particular query. IBk, Kstar (K*), and locally weighted learning (LWL) are instance-based algorithms for regression that are considered lazy-learning algorithms. IBk uses the k-nearest neighbors algorithm (k-NN), and K* uses a similarity function based on an entropy-based distance function.27,28 LWL assigns instance weights for a regression model.29

Meta-learning algorithms. Meta-learning algorithms integrate different kinds of learning algorithms to improve the performance of existing learning algorithms. Additive regression (AR) is a meta-learning algorithm which produces predictions by combining contributions from an ensemble (collection) of different models.28 Bagging (BREP) is applied to tree-based methods such as REP trees to reduce the variance associated with prediction and, therefore, increase the accuracy of the resulting predictions.30

Rule-based algorithms. Rule-based algorithms use decision rules for the regression model. For example, a decision table (DT) algorithm summarizes the dataset with a decision table, and a model trees rules (M5R) algorithm uses a decision list.31,32

Decision tree-based algorithms. Decision trees make predictions via a tree structure: leaves of the tree illustrate classifications and branches denote conjunctions of features. The M5P algorithm creates a regression model by splitting the input space progressively, based on divide-and-conquer decision tree methodology.33 REP creates a regression model using node statistics such as information gain or variance reduction measured in the top-down phase.34
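The paper cites the CFS evaluator but does not reproduce its scoring formula; in Hall's correlation-based feature selection, on which WEKA's CFS Subset Evaluator is based, a k-feature subset is scored by the merit k·r_cf / sqrt(k + k(k−1)·r_ff), where r_cf is the mean feature–class correlation and r_ff the mean feature–feature intercorrelation. A minimal sketch under that assumption:

```python
import math

# Sketch of the correlation-based merit used by CFS (Hall's formulation);
# treat the exact formula as an assumption, since the paper only cites CFS.
def cfs_merit(class_corrs, pairwise_corrs):
    """class_corrs: |correlation| of each subset feature with the target.
    pairwise_corrs: |correlation| of each feature pair within the subset."""
    k = len(class_corrs)
    r_cf = sum(class_corrs) / k
    r_ff = sum(pairwise_corrs) / len(pairwise_corrs) if pairwise_corrs else 0.0
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)

# Example with correlations reported later in Table 7 of this study:
# corr(EM, DTP) = 0.801, corr(IM, DTP) = 0.604, corr(EM, IM) = 0.753.
merit_em_im = cfs_merit([0.801, 0.604], [0.753])
```

The merit rises with class correlation and falls with redundancy among the chosen features, which is exactly the preference stated above for subsets highly correlated with the class and weakly intercorrelated.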

Prediction accuracy

In this paper, the prediction accuracy is evaluated by using the mean absolute error (MAE) and the root mean squared error (RMSE). The MAE is the average of the absolute differences between predicted and actual values over all test cases, without considering their direction:35

MAE = (|a1 − c1| + |a2 − c2| + ... + |an − cn|) / n     (1)

The RMSE is a frequently used measure of the differences between values predicted by a model or estimator and the values actually observed from the process being modeled or estimated:35

RMSE = sqrt[ ((a1 − c1)^2 + (a2 − c2)^2 + ... + (an − cn)^2) / n ]     (2)

In equations (1) and (2), a is the actual value of the output and c is the predicted value of the output. In all error measurements, a lower value means a more precise model, with a value of 0 depicting a statistically perfect model.36

Methodology

In order to find the most successful predictive model, the dataset was first evaluated using cross-validation methodology. The traditional cross-validation methodology splits the data into two mutually exclusive subsets, for training and testing. In the general case of k-fold CV, at each machine learning experiment one subset is used for validation (i.e. to test the predictive model) and the rest for training.37 The cross-validation estimate of the overall accuracy is calculated as the average of the k individual accuracy measures, as in:

CV = (1/k) * (A_1 + A_2 + ... + A_k)     (3)

where CV stands for the cross-validation accuracy, k is the number of folds used, and A_i is the accuracy measure of each fold. Tenfold cross-validation (CV) was applied as the validation scheme, where the dataset is partitioned into equal-sized subsets.

Comparative experiments and the results

The purpose of this study is to choose a minimal model which correctly predicts the drying time period of wool yarn as the response. For this purpose, two main stages have been used experimentally. The first is to obtain the best feature subset of the original feature set, which has 20 parameters related to the drying process of wool bobbins. The second is to find the best machine learning regression method, which predicts the drying time period of wool yarn with the minimum errors. The flow diagram of the prediction process is shown in Figure 3.

Dataset

The dataset used in this study consists of 19 input variables and a target variable. It corresponds to data received every 10 s from the control system of the yarn bobbin dryer system illustrated in Figure 1. The input variables affect the full drying time period (DTP) of wool bobbins, which is considered the target variable in the dataset. The dataset was collected from 20 experiments for the drying of wool bobbins of different weights. Each experiment was carried out with samples comprising eight wool yarn bobbins with hollow cylindrical shapes, for four different effective drying air pressures (DAP = 0.5, 1, 2, and 3 bar) at three different drying air temperatures (DAT = 80, 90, and 100°C), as shown in Table 2.

Data pre-processing

Data pre-processing covers the cleaning, integration, and transformation of data so that quality data are used in ML algorithms; it has a very large impact on the success of predictions. The original dataset consists of 20 files formatted as Excel (.xls) files. At the beginning of the pre-processing stage, each data set, belonging to one of the 20 different experiments, is cleaned of any noisy and incompatible data. Then the data sets of the 20 experiments are merged into an integrated data set, eliminating any duplicated data. After that, the integrated .xls dataset is transformed into the .arff format that is necessary for processing in the WEKA tool.20 At the end of pre-processing, the integrated dataset is composed of 18,942 data points with 20 total parameters, collected from the experiments for the drying of wool bobbins of different weights.

Feature selection process

Subset generation and evaluation. After the data pre-processing, the dataset is subjected to the feature selection process to eliminate irrelevant and redundant features and to reduce the size of the dataset. First, we have computed some simple statistics of the original

Figure 3. Flow diagram of the prediction process. [The diagram proceeds from the raw data source (control system of a bobbin dryer system) through data pre-processing, the feature selection process, regression modelling with tenfold CV, and performance evaluation, to the predictive model (best subset and best regression method).]
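The error measures of equations (1) and (2) and the fold averaging of equation (3) can be sketched in plain Python (illustrative code, not from the paper):

```python
import math

def mae(actual, predicted):
    """Mean absolute error, as in equation (1)."""
    n = len(actual)
    return sum(abs(a - c) for a, c in zip(actual, predicted)) / n

def rmse(actual, predicted):
    """Root mean squared error, as in equation (2)."""
    n = len(actual)
    return math.sqrt(sum((a - c) ** 2 for a, c in zip(actual, predicted)) / n)

def cross_validation_score(fold_scores):
    """Average of the k per-fold accuracy measures, as in equation (3)."""
    return sum(fold_scores) / len(fold_scores)

# Hypothetical drying-time predictions (in seconds) against actual values:
actual = [3600.0, 7200.0, 10800.0]
predicted = [3700.0, 7100.0, 10800.0]
errors = (mae(actual, predicted), rmse(actual, predicted))
```

RMSE penalizes large deviations more heavily than MAE, which is why the study reports both measures when ranking the regression methods.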



Table 2. Bobbins used in the experiments at constant DAP and DAT conditions

Experiment   DAP (bar)   DAT (°C)   Bobbins (g)
1            0.5         100        999.04
2            0.5         90         1003.48
3            1           100        1002.15
4            1           90         1007.30
5            1           80         1014.02
6            1           100        571.78
7            1           90         574.72
8            1           80         578.55
9            2           100        1007.38
10           2           90         1013.72
11           2           80         1020.71
12           2           100        574.76
13           2           90         578.38
14           2           80         582.37
15           3           100        1013.38
16           3           90         1019.72
17           3           80         1026.71
18           3           100        578.76
19           3           90         583.38
20           3           80         587.37

Table 3. Basic statistics of the dataset

Parameter   Min     Max     Mean        Std Dev
DAP         0.5     3       1.87        0.962
DAT         80      100     90.06       8.003
T-1         20.9    100.6   81.654      14.424
T-2         18.6    100.3   77.175      18.815
T-3         18.4    100.5   73.01       20.864
T-4         18.4    100.3   68.314      21.652
T-5         17.9    99.7    63.938      21.866
T-6         18      99.9    58.278      20.245
T-7         18.2    99.6    53.795      18.39
CIT         23.5    101.3   87.023      10.194
CT          20.7    75.2    55.804      11.351
COT         23      79.4    55.45       12.402
SOT         21.4    40.2    35.018      2.957
FOT         25      68.4    48.725      8.137
COH         8.24    99.97   47.131      38.093
SOH         15.63   99.86   70.141      15.42
IM          3.29    19.33   10.13       3.088
EM          4.57    8.21    7.164       1.538
MR          0.335   1       0.408       0.269
DTP         3600    17,340  11,261.959  4104.809

feature set, as denoted in Table 3. Afterwards, the CFS SubSetEval, a filter-based evaluator, with the six search procedures BF, ES, GS, RS, SS, and GSFD, was applied to the original feature set, and the selected subsets (SS0, SS1, ..., SS10) were obtained, as shown in Table 4.

As can be seen in Table 4, the selected subset SS0, which consists of the variables T-7, FOT, EM, and DTP, is chosen by the BF, ES, GS, RS, and SS search procedures as the same subset. No threshold criterion has been pre-determined for these search procedures. However, a threshold on the number of selected input variables has been pre-determined, from 1 to 10, for the search procedure GSFD, which has selected 10 different subsets in total for these thresholds in the forward direction (selected subsets SS1, SS2, ..., SS10).

Validation stage. After the selected subsets SS0, SS1, ..., SS10 were obtained, the best feature subset, which has the highest performance for the prediction of the drying time period, was explored among them. For this purpose, these subsets were applied to the ML regression methods indicated in Table 1. The results, evaluated using MAE values, were compared with each other to determine the best feature subset in the validation stage of the feature selection process, as shown in Table 5. For evaluation of the results in Table 5, tenfold cross-validation (CV) has been used. Hence, firstly, the prediction accuracy of the original feature set with the full set of parameters (20 variables) is measured by the learning algorithms, and the best performance is obtained by the DT algorithm with an MAE value of zero. Then the prediction accuracies of the selected subsets (SS0, ..., SS10) are also measured by the learning algorithms and compared with each other.

The purpose of the comparison is to find the best predictive model, whose performance is determined by applying the best feature subset, which has the smallest MAE error with the minimum number of variables. Hence, the highest performances with zero MAE are obtained by IBk for the subset SS7; BREP for the subsets SS4 and SS5; DT for the subsets SS7, SS8, SS9, and SS10; and REP for the subsets SS4, SS5, and SS6, as illustrated in Figure 4. As a result of the comparison in Figure 4, the selected subset SS4 with five variables, which contains the four input variables EM, MR, IM, and DAP together with the target variable DTP, is found to be the best feature subset.

Table 6 denotes the covariance matrix of the best subset SS4, which indicates that the parameters are

Table 4. Selected subsets obtained by CFS SubsetEval with the six search procedures

The BF, ES, GS, RS, and SS procedures all select the same subset, SS0 = {T-7, FOT, EM}, together with the target variable DTP. GSFD, run with thresholds of 1 to 10 input variables, selects the nested subsets SS1 to SS10:

Subset   Input variables (plus target DTP)
SS0      T-7, FOT, EM
SS1      EM
SS2      EM, MR
SS3      EM, MR, IM
SS4      EM, MR, IM, DAP
SS5      EM, MR, IM, DAP, SOT
SS6      EM, MR, IM, DAP, SOT, FOT
SS7      EM, MR, IM, DAP, SOT, FOT, DAT
SS8      EM, MR, IM, DAP, SOT, FOT, DAT, SOH
SS9      EM, MR, IM, DAP, SOT, FOT, DAT, SOH, T-7
SS10     EM, MR, IM, DAP, SOT, FOT, DAT, SOH, T-7, CT
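Greedy stepwise forward selection, which produced the nested subsets SS1 to SS10 above, can be sketched generically. The scoring function below is a hypothetical stand-in for the CFS merit used in the study, with toy weights chosen only for illustration:

```python
def greedy_forward_selection(features, score, max_size):
    """At each step, add the feature that most improves the subset score.
    `score` maps a tuple of feature names to a number (higher is better);
    here it stands in for the CFS merit used in the study."""
    selected = []
    while len(selected) < max_size:
        candidates = [f for f in features if f not in selected]
        if not candidates:
            break
        best = max(candidates, key=lambda f: score(tuple(selected + [f])))
        selected.append(best)
    return selected

# Toy additive score with hypothetical per-feature weights, standing in
# for feature-class correlation; not values from the paper.
weights = {"EM": 0.80, "MR": 0.75, "IM": 0.60, "DAP": 0.03}
score = lambda subset: sum(weights[f] for f in subset)
order = greedy_forward_selection(list(weights), score, 3)
```

Because each step keeps all previously chosen features, the subsets produced at thresholds 1, 2, 3, ... are nested, which matches the structure of SS1 to SS10 in Table 4.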

not independent. Table 7 illustrates the parameters' cross-correlations. Figure 5 illustrates a pairwise scatter plot of the best subset. When Table 7 and Figure 5 are examined together, the highest correlation among the features of the best subset SS4 is observed between IM and EM (0.75). Moreover, the highest correlations with the target variable (DTP) are observed for EM (0.80) and IM (0.60).

Regression modelling

After choosing the best feature subset, the regression methods are re-applied to the best subset SS4 and the results are compared with each other to choose the most successful regression method with the highest performance, as shown in Table 8. According to this comparison, BREP and REP are the most successful methods, with MAE values of zero. These methods are followed by IBk, M5R, M5P, K*, MLP, AR, DT, IR, LMS, LR, PR, LWL, SLR, and RBF, respectively. The RBF method is found to be the poorest-performing regression method, with the maximum MAE and RMSE errors for the prediction of the drying time period.

Results and discussion

The results show that two regression methods, BREP and REP, produce the same high performance with the smallest values of the MAE and RMSE error measures. In Table 8, the methods are also compared with each other according to the time taken to build the model. As can be seen from the table, the REP algorithm creates the learning model in 0.06 s, and the BREP algorithm produces the learning model in 1.4 s. Hence, building the model with the REP algorithm takes less time than with the BREP algorithm. As a result, the best model can be built by the REP algorithm, in less time, with the best subset SS4, which includes the smallest number of attributes and the smallest values of the MAE and RMSE error measures.

Finally, Figure 6 illustrates the scatter plots of the actual and predicted drying time period (DTP) of wool yarn bobbins for the developed predictive model with the most successful method, the REP algorithm, for the best

Table 5. Results (MAE) of the ML regression methods for the original feature set (OFS) and the selected subsets (SS0, ..., SS10)

Groups      Method  OFS        SS0       SS1       SS2       SS3       SS4       SS5       SS6       SS7       SS8       SS9        SS10
                    (20 var.)  (4 var.)  (2 var.)  (3 var.)  (4 var.)  (5 var.)  (6 var.)  (7 var.)  (8 var.)  (9 var.)  (10 var.)  (11 var.)
Functions   IR      1697.96    1697.96   1697.96   1697.96   1697.96   1697.96   1697.96   1697.96   1697.96   1697.96   1697.96    1697.96
            LMS     1485.13    2077.99   2028.19   2026.35   2012.52   1701.43   1704.75   1696.80   1698.75   1585.70   1520.36    1512.97
            LR      1312.93    1943.82   1994.56   1997.88   1990.39   1837.28   1837.28   1597.13   1621.00   1462.55   1406.99    1363.78
            MLP     44075.00   1865.32   2261.95   2267.61   1470.52   768.83    464.95    573.14    110.52    122.16    25750.00   41679.00
            PR      1314.01    1943.82   1994.56   1997.88   1990.39   1837.28   1837.30   1597.13   1621.00   1462.55   1407.42    1363.78
            RBF     3453.71    2552.41   2024.72   2029.16   2048.32   2661.50   2656.89   2671.87   2666.62   2654.12   2650.51    2354.96
            SLR     1994.56    1994.56   1994.56   1994.56   1994.56   1994.56   1994.56   1994.56   1994.56   1994.56   1994.56    1994.56
Lazy        IBk     0.10       14.72     421.62    459.63    44531.00  0.05      46388.00  41671.00  0.00      0.90      0.90       0.48
            K*      41641.00   375.34    1629.81   1585.59   1354.50   680.04    250.36    105.91    18.55     13.60     15888.00   30072.00
            LWL     1981.20    1862.11   1997.14   2036.60   2033.83   1898.22   1922.58   1902.98   1906.92   1911.13   1910.54    1927.95
Meta        AR      844.44     1243.27   1496.73   1496.73   1496.73   943.50    943.50    957.59    799.67    799.67    799.67     799.67
            BREP    42887.00   16.63     421.96    425.03    112.26    0.00      0.00      0.01      42887.00  42887.00  42887.00   42887.00
Rules       DT      0.00       1082.80   2040.31   2035.93   1909.73   1052.38   481.23    392.17    0.00      0.00      0.00       0.00
            M5R     24.97      48.73     429.88    428.99    205.99    32021.00  32021.00  32021.00  13058.00  13789.00  13058.00   13058.00
Trees       M5P     38.40      62.00     437.86    431.79    199.65    20.47     20.47     20.47     38.45     38.45     38.45      38.45
            REP     42887.00   21.56     421.62    421.15    118.59    0.00      0.00      0.00      42887.00  42887.00  42887.00   42887.00
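The MAE values reported in Table 5 (and the RMSE values reported later in Table 8) follow the standard definitions of these error measures. A minimal sketch, using made-up drying-time values rather than the study's dataset:

```python
import math

def mae(actual, predicted):
    # Mean absolute error: average magnitude of the prediction errors.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # Root mean squared error: penalizes large errors more heavily than MAE.
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical drying-time values (minutes), for illustration only
actual = [120.0, 150.0, 90.0]
predicted = [118.0, 155.0, 92.0]
print(mae(actual, predicted))   # 3.0
print(rmse(actual, predicted))  # ~3.317
```

Because RMSE squares the residuals, a single large error dominates it; comparing the two columns of Table 8 therefore hints at how evenly a method's errors are distributed.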

Figure 4. The subsets with MAE value of zero.

subset SS4, which includes the four input variables EM, MR, IM, and DAP; on this subset the REP algorithm also built the model in less time than the other algorithms.

These input variables are presented in a ranking order list created by the ReliefF Ranking Filter, as shown in Figure 7. The ReliefF Ranking Filter evaluates the worth of an attribute by repeatedly sampling an instance and considering the value of the given attribute for the nearest instances of the same and of a different class.38 The relative importance of the parameters in the best subset (SS4) is therefore determined as an ordered list. According to this list, EM is found to be the most important parameter in the prediction of the drying time period of wool yarn bobbins, followed by the IM, MR, and DAP parameters, respectively.

Table 6. Covariance matrix of the best subset

       EM    MR    IM     DAP   DTP
EM     2     0     4      0     5056
MR           0     0      0     91
IM                 10     0     7655
DAP                       1     133
DTP                             2E+07

Table 7. Correlation matrix of the best subset

       MR     IM     DAP    DTP
EM     0.067  0.753  0.108  0.801
MR            0.570  0.175  0.082
IM                   0.033  0.604
DAP                         0.034

In a similar study,6 according to the experimental results of a viscose bobbin drying process, the drying time period is strongly dependent on drying air pressure, drying air temperature, and drying air flow rate, and an increase in these parameters shortens the drying time period considerably. Furthermore, the simulation of the drying behavior of wool yarn bobbins and the determination of the optimum operating conditions have also been investigated using thermodynamic analysis.39 Those drying experiments show that the drying time period is strongly dependent on drying air temperature and drying air pressure.

It has been seen in previous studies,6,39 in which thermodynamic approaches were used, that DAP is one of the most important parameters in the yarn bobbin drying process. In our study, DAP was also found to be an important parameter; however, the effects of EM, IM, and MR on the drying time period were more significant.

Conclusion

In this study, a predictive model for predicting the time period of the wool yarn bobbin drying process was developed in two steps using computational intelligence techniques with optimum accuracy. The first step was to determine the best feature subset, comprising the most important parameters for the drying process among the 20 parameters. As a result of the feature selection process, the best subset (SS4), containing the most important input variables EM, IM, MR, and DAP together with the target variable (DTP), five variables in total, was obtained by the Greedy Stepwise search method with forward direction and the CFS SubSetEval evaluator.
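The CFS merit behind the CfsSubsetEval evaluator rewards subsets whose features correlate strongly with the target but weakly with one another; combined with greedy forward search, the selection step can be sketched as follows. This is a simplified illustration with hypothetical correlation values, not Weka's implementation:

```python
import math

def cfs_merit(subset, target_corr, feature_corr):
    # CFS merit: k*avg|corr(f, target)| / sqrt(k + k(k-1)*avg|corr(f, f')|)
    k = len(subset)
    r_cf = sum(abs(target_corr[f]) for f in subset) / k
    pairs = [frozenset((f, g)) for i, f in enumerate(subset) for g in subset[i + 1:]]
    r_ff = sum(abs(feature_corr[p]) for p in pairs) / len(pairs) if pairs else 0.0
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)

def greedy_forward(features, target_corr, feature_corr):
    # Greedy stepwise search, forward direction: keep adding the feature that
    # most improves the merit; stop when no addition improves it.
    selected, best = [], 0.0
    while True:
        gains = [(cfs_merit(selected + [f], target_corr, feature_corr), f)
                 for f in features if f not in selected]
        if not gains:
            break
        merit, f = max(gains)
        if merit <= best:
            break
        selected.append(f)
        best = merit
    return selected

# Hypothetical correlations: 'a' and 'b' predict the target and are nearly
# independent; 'c' is weak and redundant with both.
t_corr = {"a": 0.9, "b": 0.5, "c": 0.2}
f_corr = {frozenset(("a", "b")): 0.1,
          frozenset(("a", "c")): 0.9,
          frozenset(("b", "c")): 0.9}
print(greedy_forward(["a", "b", "c"], t_corr, f_corr))  # ['a', 'b']
```

The redundant feature 'c' is rejected even though it carries some predictive correlation, which mirrors why CFS kept only a handful of the 20 candidate parameters.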

Figure 5. Pairwise scatter plot of the best subset SS4.
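The REP tree learner discussed throughout builds a regression tree on one portion of the data and then applies reduced-error pruning against a held-out set. The pruning idea alone can be sketched as follows; this is a simplified illustration of the principle, not Weka's REPTree implementation, and the tree here is built by hand rather than learned:

```python
class Node:
    # A regression-tree node: a leaf predicts `value`; an internal node
    # routes x to the left child when x[feature] < threshold.
    def __init__(self, value, feature=None, threshold=None, left=None, right=None):
        self.value = value  # mean target of the training rows that reached this node
        self.feature, self.threshold = feature, threshold
        self.left, self.right = left, right

    def is_leaf(self):
        return self.left is None

def predict(node, x):
    while not node.is_leaf():
        node = node.left if x[node.feature] < node.threshold else node.right
    return node.value

def reduced_error_prune(node, rows):
    # Bottom-up pruning on held-out rows (x, y): collapse a subtree to a leaf
    # whenever the leaf's mean prediction does no worse on those rows.
    if node.is_leaf() or not rows:
        return node
    node.left = reduced_error_prune(
        node.left, [(x, y) for x, y in rows if x[node.feature] < node.threshold])
    node.right = reduced_error_prune(
        node.right, [(x, y) for x, y in rows if x[node.feature] >= node.threshold])
    leaf_error = sum((y - node.value) ** 2 for _, y in rows)
    subtree_error = sum((y - predict(node, x)) ** 2 for x, y in rows)
    return Node(node.value) if leaf_error <= subtree_error else node

# Hand-built tree: split on x[0] < 5, children predict 10 and 20, node mean 15.
tree = Node(15.0, feature=0, threshold=5.0, left=Node(10.0), right=Node(20.0))
# These held-out rows do not support the split, so it is pruned away.
pruned = reduced_error_prune(tree, [([1.0], 15.0), ([9.0], 15.0)])
print(pruned.is_leaf())  # True
```

Pruning against data the tree never saw is what keeps the REP method both accurate and fast to build: subtrees that only memorized training noise are discarded.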

Table 8. Results of different ML regression methods on the best subset SS4

Groups      Regression method   MAE       RMSE      Time (s)
Functions   IR                  1697.96   2202.08   0.96
            LMS                 1701.43   2462.98   24.51
            LR                  1837.28   2343.51   0.07
            MLP                 768.83    1260.96   7.91
            PR                  1837.28   2343.51   0.11
            RBF                 2661.50   3215.51   0.59
            SLR                 1994.56   2459.20   0.03
Lazy        IBk                 0.05      7.27      0.01
            K*                  680.04    834.26    0.00
            LWL                 1898.22   2368.37   0.00
Meta        AR                  943.50    1157.60   0.39
            BREP                0.00      0.00      1.40
Rules       DT                  1052.38   1393.04   1.50
            M5R                 9.87      14.04     4.64
Trees       M5P                 20.47     40.51     1.24
            REP                 0.00      0.00      0.06

The second step of this study was to identify the most successful machine learning regression method for predicting the drying time period of wool yarn bobbins. The best feature subset, found in the first step, was applied to the 16 machine learning algorithms in order to compare their results. As a result of this comparison, the REP tree algorithm built the model in less time than the BREP algorithm. Hence, the REP tree algorithm was the most successful algorithm, with the highest performance in terms of both accuracy and time.

Finally, the time period of the wool yarn bobbin drying process was predicted with a model that achieved the best performance, obtained by determining the most important parameters and the most successful algorithm, the one that builds the model with minimum error in minimum time. As a result, production planning of yarn bobbins and reduced energy consumption can possibly be achieved by predicting the drying time period of yarn bobbins and determining the optimum drying conditions.

Future work on this subject is planned to involve the investigation of feature subset selection and learning methods for different kinds of yarn bobbins to

Figure 6. Scatter plot of actual DTP and predicted DTP by the developed predictive model in this study.

Figure 7. Ranking order list of the best subset (SS4) based on a ReliefF ranking filter.

compare predictive accuracy and learner efficiency. Additionally, the effects of the most important parameters on the energy consumption and efficiency of the drying process will be investigated in our future studies.

Funding

This work was supported by TÜBİTAK (grant number 108M274).

References

1. Mujumdar AS. Handbook of industrial drying, 2nd ed. New York: Marcel Dekker, 1995.
2. Mujumdar AS. Handbook of industrial drying, 3rd ed. New York: CRC Press, 2006.
3. Akyol U, Cihan A and Shaliyev R. Thermophysical parameter estimation of a wool bobbin during convective drying process. Inverse Prob Sci Eng 2010; 18: 227–240.
4. Ribeiro J and Ventura JMP. Evaluation of textile bobbins drying processes: experimental and modeling studies. Drying Technol 1995; 13: 239–265.
5. Oğulata RT. Utilization of waste-heat recovery in textile drying. Appl Energy 2004; 79: 41–49.
6. Akyol U, Kahveci K and Cihan A. Determination of optimum operating conditions and simulation of drying in a textile drying process. J Text Inst 2013; 104: 170–177.
7. Sousa LHCD, Lima OCM and Pereira NC. Analysis of drying kinetics and moisture distribution in convective textile fabric drying. Drying Technol 2006; 24: 485–497.
8. Dincer I and Dost S. A modeling study for moisture diffusivities and moisture transfer coefficients in drying of solid objects. Int J Energy Res 1996; 20: 531–539.
9. Dincer I, Sahin AZ, Yilbas BS, et al. Exergy and energy analysis of food drying systems. Progress report 2, KFUPM Project #ME/ENERGY/203, 2000.
10. Dincer I, Hussain MM, Sahin AZ, et al. Development of a new moisture transfer (Bi–Re) correlation for food drying applications. Int J Heat Mass Transfer 2002; 45: 1749–1755.
11. Dincer I. Moisture loss from wood products during drying. Part I: moisture diffusivities and moisture transfer coefficients. Energy Source 1998; 20: 67–75.
12. Dincer I. Moisture loss from wood products during drying. Part II: surface moisture content distributions. Energy Source 1998; 20: 77–83.
13. Sahin AZ, Dincer I, Yilbas BS, et al. Determination of drying times for regular multi-dimensional objects. Int J Heat Mass Transfer 2002; 45: 1757–1766.
14. Sahin AZ and Dincer I. Prediction of drying times for irregular shaped multi-dimensional moist solids. J Food Eng 2005; 71: 119–126.
15. Laing RM, Wilson CA, Gore SE, et al. Determination of the drying time of apparel fabrics. Text Res J 2007; 77: 583–590.
16. Han J and Kamber M. Data mining: Concepts and techniques. San Francisco: Morgan Kaufmann Publishers, 2001.
17. Dash M and Liu H. Consistency-based search in feature selection. Artif Intell 2003; 151: 155–176.
18. Yun C, Oh B, Yang J, et al. Feature subset selection based on bio-inspired algorithms. J Inf Sci Eng 2011; 27: 1667–1686.
19. Hall MA and Holmes G. Benchmarking attribute selection techniques for discrete class data mining. IEEE Trans Knowl Data Eng 2003; 15: 1437–1447.
20. Weka 3: Data mining software in Java. Machine Learning Group at the University of Waikato, www.cs.waikato.ac.nz/ml/weka/ (accessed 28 March 2014).
21. Stout QF. Isotonic regression via partitioning. Algorithmica 2013; 66: 93–112.
22. Portnoy S and Koenker R. The Gaussian hare and the Laplacian tortoise: computability of squared-error versus absolute-error estimators. Stat Sci 1997; 12: 279–300.
23. Wilkinson GN and Rogers CE. Symbolic descriptions of factorial models for analysis of variance. Appl Stat 1973; 22: 392–399.
24. Haykin S. Neural networks, a comprehensive foundation, 2nd ed. New Jersey: Prentice Hall, 1999.
25. Ekinci S, Celebi UB, Bal M, et al. Predictions of oil/chemical tanker main design parameters using computational intelligence techniques. Appl Soft Comput 2011; 11: 2356–2366.
26. Smola AJ and Schölkopf B. On a kernel-based method for pattern recognition, regression, approximation and operator inversion. Algorithmica 1998; 22: 211–231.
27. Cleary JG and Trigg LE. K*: an instance-based learner using an entropic distance measure. In: 12th international conference on machine learning, Tahoe City, CA, 9–12 July 1995, pp.108–114.
28. Friedman JH. Stochastic gradient boosting. Technical report, Stanford University, Statistics Department, Palo Alto, CA, 1999.
29. Atkeson C, Moore A and Schaal S. Locally weighted learning. Artif Intell Rev 1997; 11: 11–73.
30. Breiman L. Bagging predictors. Mach Learn 1996; 24: 123–140.
31. Kohavi R and John GH. Wrappers for feature subset selection. Artif Intell 1997; 97: 273–324.
32. Holmes G, Hall M and Frank E. Generating rule sets from model trees. In: 12th Australian joint conference on artificial intelligence, Sydney, Australia, 6–10 December 1999, pp.1–12.
33. Bhattacharya B and Solomatine DP. Neural network and M5 model trees in modelling water level–discharge relationship. Neurocomputing 2005; 63: 381–396.
34. Witten IH and Frank E. Data mining: Practical machine learning tools and techniques with Java implementations. San Francisco: Morgan Kaufmann Publishers, 2005.
35. Challagulla VUB, Bastani FB, Yen IL, et al. Empirical assessment of machine learning based software defect prediction techniques. In: 10th IEEE international workshop on object-oriented real-time dependable systems, Sedona, AZ, 2–4 February 2005, pp.263–270.
36. Liu H, Gopalkrishnan V, Quynh KTN, et al. Regression models for estimating product life cycle cost. J Intell Manuf 2009; 20: 401–408.
37. Alpaydın E. Introduction to machine learning, 2nd ed. Cambridge, MA: MIT Press, 2010.
38. Robnik-Sikonja M and Kononenko I. An adaptation of Relief for attribute estimation in regression. In: Fisher D (ed) Machine learning: proceedings of the 14th international conference on machine learning (ICML'97), Nashville, TN, 1997.
39. Akyol U, Akan AE and Durak A. Simulation and thermodynamic analysis of a hot-air textile drying process. J Text Inst 2014 (in press). DOI: 10.1080/00405000.2014.916062.