J Braz. Soc. Mech. Sci. Eng. (2013) 35:111–121
DOI 10.1007/s40430-013-0012-3

TECHNICAL PAPER

Modeling and analysis of correlations between cutting parameters and cutting force components in turning AISI 1043 steel using ANN

Miloš Madić • Miroslav Radovanović

Received: 5 May 2011 / Accepted: 20 October 2011 / Published online: 29 March 2013
© The Brazilian Society of Mechanical Sciences and Engineering 2013

Abstract Predictive modeling is essential to better understanding and optimization of machining processes. Modeling of cutting forces has always been one of the main problems in metal cutting theory. In this paper, artificial neural networks (ANNs) were used for modeling correlations between cutting parameters and cutting force components in turning AISI 1043 steel. Cutting force components were predicted by changing cutting speed, feed rate, depth of cut and cutting edge angle under dry conditions. In order to improve the generalization capabilities of the ANN models, Bayesian regularization is used in ANN training. Considering the experimental data for ANN training, five ANN models were tested. For evaluating the predictive performance of the ANN models, three performance criteria were given consideration. The overall mean absolute percentage error for the cutting force components was around 3 %. This study concludes that a Bayesian regularized ANN of quite basic architecture, using small training data, is capable of modeling multiple outputs with high prediction accuracy.

Keywords Prediction · Cutting force components · Artificial neural networks · Bayesian regularization

List of symbols
ap    Depth of cut, mm
f     Feed rate, mm/rev
Fc    Main cutting force, N
Ff    Feed force, N
Fp    Passive force, N
H     Number of hidden neurons
I     Number of input neurons
M     Measured (experimental) values
MAPE  Mean absolute percentage error, %
N     Number of data for training
O     Number of output neurons
P     Predicted values by ANN
PI    Performance index
R     Correlation coefficient
Ra    Average surface roughness, μm
re    Tool nose radius, mm
RMSE  Root mean squared error
SSE   Sum of squared errors
SSW   Sum of squares of the ANN weights
v     Cutting speed, m/min

Greek symbols
α, β  Bayesian hyperparameters
γ     Rake angle, °
κ     Cutting edge angle, °
λ     Angle of inclination, °
μ     Marquardt adjustment parameter

Subscripts
tr    Performance using data for ANN training
ts    Performance using data for ANN testing
Technical Editor: Alexandre Abrão.

M. Madić (✉) · M. Radovanović
Faculty of Mechanical Engineering, University of Niš, A. Medvedeva 14, 18000 Niš, Serbia
e-mail: madic@masfak.ni.ac.rs

M. Radovanović
e-mail: mirado@masfak.ni.ac.rs

1 Introduction
Machining is one of the most important and widely used manufacturing processes. Predictive modeling is essential to better understanding and optimization of machining processes. Modeling of machining processes has attracted the attention of a number of researchers in view of its significant contribution to the overall cost of the product [21]. In turning processes, modeling and prediction of cutting performance such as cutting forces, tool wear and surface quality are of high importance. The study of cutting forces is critically important in turning operations, because cutting forces are in direct relation with surface quality, machined piece dimensions, tool wear, tool breakage, cutting temperature, self-excited and forced vibrations and, moreover, with the power requirements of the machine tool. A large number of parameters influence the cutting force, such as cutting speed, feed rate, depth of cut, rake angle, tool nose radius, cutting edge inclination angle, physical and chemical characteristics of the workpiece, chip breaker geometry, etc.; therefore, it is a very difficult task to develop a proper analytical cutting force model [28]. In situations where the variables to be studied have complex or nonlinear relationships, artificial neural networks (ANNs) are an efficient tool when compared with other classical prediction methods [1]. The ability to learn nonlinear relationships in a cutting operation without going deep into mathematical complexity, or requiring prior assumptions on the functional form of the relationship between inputs, in-process parameters and outputs, makes the ANN an attractive alternative choice for modeling cutting processes [22].

Although ANNs possess many attractive characteristics and advantages, there are some drawbacks and limitations of ANN-based modeling. It is widely reported that the data collected for ANN training, data pre-processing, the types of activation functions, the initialization of weights, the parameters of the training algorithm and the ANN architecture strongly influence the effectiveness and performance of the trained ANN. Above all, increased attention has been especially directed to finding the best architecture [5]. Many ANN parameters must be determined with few guidelines and no standard procedure for defining the ANN architecture. This commonly results in the ANN architecture being mostly determined by the trial and error method [24], which is very time-consuming. While this is a common procedure in determining ANN architecture, it is our opinion that the trial and error search should be constrained by considering the number of data available for training the ANN, in addition to following the rules of thumb given in the literature. Furthermore, to improve the generalization of ANNs, Bayesian regularization (BR) can be applied in ANN training.

2 Literature review

In the past years, ANNs have proven to be an efficient modeling tool and have become increasingly popular in the predictive modeling of machining processes. Early papers on the use of ANNs in modeling the turning process by Luong and Spedding [16] and Tarng et al. [30] showed that ANNs offer great possibilities and advantages in modeling and prediction of process parameters. Modeling of cutting forces using the ANN methodology and experimental data was given by Szecsi [28]. In the ANN model development, the training and architectural parameters were varied to obtain the optimal ones. It was shown that an ANN with one hidden layer having seven (eight) hidden neurons was sufficient to model 12 input machining parameters and three output parameters (cutting force components). Hans Ray et al. [11] showed some advantages of using the Levenberg–Marquardt (LM) ANN training algorithm over the standard backpropagation (BP) algorithm. The experimental results and the predicted values for cutting forces were much more accurate than those obtained earlier with BP. Chien and Chou [8] developed a predictive ANN model for the machinability of 304 stainless steel, i.e., for the prediction of surface roughness, cutting force and tool life. The authors found that the estimated force was within an error of 5.4 %. Al-Ahmari [2] compared regression analysis (RA), response surface methodology (RSM) and ANN models for the prediction of tool life, cutting force and surface roughness. It was found that the ANN models were superior to the RSM and RA models. Similar conclusions were drawn by Bajić et al. [3], who tested RA and ANN models for their ability to interpolate and extrapolate. They also showed that, although trained with a small data set, the ANN was capable of achieving accurate predictions. Sharma et al. [27] developed an ANN model for the estimation of cutting forces and surface roughness in hard turning. The optimal ANN architecture was found using a linear programming method by minimizing the test error on the testing data, the training time and the mean square error on the training data. The model achieved an overall accuracy of 76.4 %. Ezugwu et al. [9] used an ANN approach to model the correlation between cutting and process parameters in high-speed machining of Inconel 718 alloy. They developed an ANN model with a correlation coefficient between the model prediction and experimental values ranging from 0.6595 for cutting force to 0.9976 for nose wear prediction. Hao et al. [10] developed an ANN model for the prediction of cutting force with a self-propelled rotary tool in turning. In order to improve the performance of the cutting force model, a hybrid of BP and a genetic algorithm was applied. Lin et al. [14] used an abductive neural network for predicting surface roughness and cutting force. The advantage of using the abductive network was that these networks are self-configured to form an optimal network hierarchy using a predicted square error criterion. However, the comparison of experimental and predicted cutting force showed that the abductive network yielded an average absolute error of 3.38 %, whereas the RA model yielded 3.14 %.
Based on the previous studies, the abilities of ANNs for modeling the machining process may be summarized as follows:

1. An ANN is able to model complex and nonlinear relationships between inputs and outputs, including modeling of multiple outputs.
2. ANN model prediction accuracy is generally better when compared to RA and RSM.
3. The development of an accurate ANN model is feasible even with a small training data set.
4. The ANN model does not need any preliminary assumptions on the functional form of the input/output relationships.

Also, it should be noted that trial and error remains the most frequently applied method for determining the ANN architecture.

Table 1 summarizes the previous studies that dealt with cutting force prediction in turning using ANN models. From Table 1 one can see that the researchers applied ANNs of very different architectures, trained with small and large data sets. In some situations very basic ANN architectures provided an accurate and reliable model. On the other hand, there were situations where very complex ANN architectures with a huge number of hidden neurons were used.

In this paper, using the experimental data, ANN models were developed for the prediction of cutting force components during longitudinal turning of AISI 1043 steel. Four cutting parameters were considered as ANN model inputs: cutting speed [v (m/min)], feed rate [f (mm/rev)], depth of cut [ap (mm)] and cutting edge angle [κ (°)]. The ANN model attempted to predict the cutting force components: main cutting force [Fc (N)], feed force [Ff (N)] and passive force [Fp (N)].

Table 1 Application of ANN for modeling the cutting force in turning

References | Model inputs | ANN architecture | Data for ANN training | Training algorithm
Tarng et al. [30] | Undeformed chip thickness, chip width, cutting speed, tool rake angle | 4-11-7-2 | 1,100 experimental data | BP
Luong and Spedding [16] | Hardness, tool rake angle, tool side cutting edge angle, feed rate | 4-x-3 | Machining data handbook | BP
Szecsi [28] | Tensile strength, hardness, cutting tool nose radius, clearance angle, rake angle, major cutting edge angle, minor cutting edge angle, major cutting edge inclination angle, cutting speed, feed rate, type of machined material, average flank wear | 12-8-3 or 12-7-3 | 3,200 experimental data | BP
Hans Ray et al. [11] | Cutting speed, feed rate, depth of cut | 3-7-7-2 | 24 experimental data | LM
Chien and Chou [8] | Cutting speed, feed rate, depth of cut | 3-8-1 | 56 experimental data | BP
Lin et al. [14] | Cutting speed, feed rate, depth of cut | Abductive neural network | – | –
Ezugwu et al. [9] | Cutting speed, feed rate, depth of cut, cutting time, coolant pressure | 5-10-10-7 | 68 experimental data | LM with Bayesian regularization
Hao et al. [10] | Cutting speed, feed rate, depth of cut, tool inclination angle | 4-21-3 | 192 total experimental data | BP
Al-Ahmari [2] | Cutting speed, feed rate, depth of cut, tool nose radius | 4-73-45-3 | 28 experimental data | BP
Sharma et al. [27] | Approaching angle, cutting speed, feed rate, depth of cut | 4-20-2 | 30 experimental data | BP
Bajić et al. [3] | Cutting speed, feed rate, depth of cut | 3-20-1 | 18 experimental data | Resilient BP

The greatest challenge of this work was to develop an ANN model of quite basic architecture, considering the limited experimental data, which would be able to accurately model the correlations between the cutting parameters and the three cutting force components simultaneously.

3 Cutting force

The cutting force occurs during the cutting process due to the resistance of the workpiece material. The cutting force in turning is influenced by many factors, such as the cutting parameters, tool-related parameters, properties of the workpiece material and environmental parameters. To this end, a great deal of research has been performed to quantify the effect of these parameters on the cutting force. Figure 1 shows the influential factors on the cutting force.

Fig. 1 Cause and effects diagram of factors that influence the cutting force

For the present work, the cutting parameters (depth of cut, cutting speed, feed rate) and a tool parameter (cutting edge angle) were selected for studying their influences on the cutting force components. In the turning process, the cutting force F can be decomposed, in the coordinate system of the machine, into three components (Fig. 2):

Fc, the main cutting force (tangential force), acting on the rake face of the tool in the direction of motion of the workpiece;
Ff, the feed force (axial force), acting on the tool in the direction of tool movement;
Fp, the passive force (radial force), acting on the tool in the radial direction, normal to the machined surface.
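Because the tangential, axial and radial components are mutually orthogonal, the magnitude of the resultant cutting force follows directly from them; as a worked example (our arithmetic, using the values of trial 1 in Table 3):

|F| = \sqrt{F_c^2 + F_f^2 + F_p^2} = \sqrt{1671^2 + 702^2 + 475^2} \approx 1874\ \mathrm{N}.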
4 Experimental setup

The experiments for measuring the cutting force components were conducted on the universal lathe "Potisje" PA-C30, with a power of 11 kW. The longitudinal turning experiments were performed using a cutting tool with tool holder PCLNR3225P12 and insert CNNM120408P25 (4025), with rake angle γ = −6°, angle of inclination λ = −6° and tool nose radius re = 0.8 mm. The workpiece material used in this experiment was AISI 1043 steel. The chemical composition and the mechanical properties of the selected material are given in Table 2.

A workpiece in the form of a bar with a diameter of 60 mm was held in the machine with a chuck and center to minimize run-out and maximize rigidity. All experiments were conducted without cooling and lubrication agents. The cutting forces were measured with a three-component force dynamometer, Kistler type 9441, mounted on the lathe via a custom-designed adapter for the tool holder, creating a very rigid tooling fixture. The charge signal generated at the dynamometer was amplified using a Kistler type 5007A amplifier. The amplified signal was acquired and sampled using a Hewlett Packard HP 9000/300 computer. Before conducting the measurements, all the measuring instruments were calibrated. The experimentally measured cutting force values are given in Table 3.

Table 2 Chemical composition and mechanical properties of the workpiece material AISI 1043 steel

Chemical composition
%C 0.45 | %Mn 0.65 | %Si 0.25 | %P 0.045 | %S 0.045

Mechanical properties
Yield strength, MPa: 360
Ultimate strength, MPa: 650
Brinell hardness, HB: 206

Fig. 2 Components of the cutting force

Table 3 Experimental data

Trial no. | ap (mm) | v (m/min) | f (mm/rev) | κ (°) | Fc (N) | Ff (N) | Fp (N)
1 | 1.5 | 143 | 0.499 | 95 | 1,671 | 702 | 475
2 | 0.75 | 143 | 0.499 | 95 | 840 | 294 | 338
3 | 1.5 | 143 | 0.124 | 95 | 535 | 356 | 182
4 | 0.75 | 143 | 0.124 | 95 | 300 | 178 | 158
5a | 1.5 | 94 | 0.499 | 95 | 1,663 | 765 | 523
6 | 0.75 | 94 | 0.499 | 95 | 837 | 317 | 413
7 | 1.5 | 94 | 0.124 | 95 | 575 | 394 | 185
8 | 0.75 | 94 | 0.124 | 95 | 330 | 205 | 153
9 | 1 | 116 | 0.249 | 95 | 674 | 374 | 266
10 | 1 | 116 | 0.249 | 95 | 672 | 367 | 273
11a | 1 | 116 | 0.249 | 95 | 676 | 371 | 282
12 | 1 | 116 | 0.249 | 95 | 682 | 377 | 282
13 | 1.5 | 143 | 0.499 | 85 | 1,472 | 631 | 414
14 | 0.75 | 143 | 0.499 | 85 | 843 | 353 | 350
15 | 1.5 | 143 | 0.124 | 85 | 502 | 352 | 143
16a | 0.75 | 143 | 0.124 | 85 | 301 | 201 | 137
17 | 1.5 | 94 | 0.499 | 85 | 1,532 | 756 | 420
18 | 0.75 | 94 | 0.499 | 85 | 904 | 417 | 400
19 | 1.5 | 94 | 0.124 | 85 | 542 | 384 | 141
20 | 0.75 | 94 | 0.124 | 85 | 317 | 217 | 140
21 | 1 | 116 | 0.249 | 85 | 628 | 387 | 233
22 | 1 | 116 | 0.249 | 85 | 644 | 381 | 233
23a | 1 | 116 | 0.249 | 85 | 639 | 401 | 226
24 | 1 | 116 | 0.249 | 85 | 630 | 371 | 230
25a | 1.5 | 143 | 0.499 | 75 | 1,460 | 690 | 328
26 | 0.75 | 143 | 0.499 | 75 | 801 | 390 | 311
27 | 1.5 | 143 | 0.124 | 75 | 492 | 359 | 118
28 | 0.75 | 143 | 0.124 | 75 | 282 | 206 | 106
29 | 1.5 | 94 | 0.499 | 75 | 1,567 | 886 | 401
30 | 0.75 | 94 | 0.499 | 75 | 847 | 459 | 346
31 | 1.5 | 94 | 0.124 | 75 | 511 | 367 | 109
32 | 0.75 | 94 | 0.124 | 75 | 286 | 217 | 110
33 | 1 | 116 | 0.249 | 75 | 615 | 429 | 186
34a | 1 | 116 | 0.249 | 75 | 618 | 401 | 198
35 | 1 | 116 | 0.249 | 75 | 619 | 421 | 192
36 | 1 | 116 | 0.249 | 75 | 631 | 422 | 199

a Data for ANN testing

5 ANN based modeling

5.1 Overview of ANNs

Artificial neural networks, originally developed to mimic basic biological neural systems, are massively parallel systems that consist of simple processing units, called neurons, linked by weighted connections. Each neuron receives input signals (information) from other neurons and a bias adjustment, processes them locally through a transfer (activation) function and generates an output that can be seen as a reflection of the local information stored in the connections. The output signal of a neuron is fed to other neurons as input signals via connections. Although the capability of a single neuron is limited, a synchronous assembly of neurons has universal function approximation capability.

There are about 30 different ANN architectures, of which the multilayer perceptron (MLP) is the most popular. Every ANN is designed for modeling a concrete problem. For an ANN to react appropriately when unknown input data are presented to it, it must be trained. Essentially, training is the process of determining the weights of the connections and the bias adjoined to every neuron. There is a great number of training algorithms which can be used to train an MLP, such as the BP algorithm and its variations, conjugate gradient algorithms, quasi-Newton algorithms and the LM algorithm. The LM algorithm belongs to the algorithms that converge very fast (especially for small and medium-sized ANNs), with less danger of entrapment in a local minimum before reaching the global minimum of the error surface, while at the same time it can provide high prediction accuracy. Furthermore, methods such as BR and early stopping can be used to improve the generalization of ANNs.

A detailed description of ANNs can be found in numerous references, including Haykin [12] and Bishop [6].
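The neuron computation just described (weighted inputs plus a bias, passed through a transfer function) can be made concrete with a minimal sketch. The layer sizes and the tanh/linear transfer functions mirror the 4-4-3 network examined later in the paper, but the weights below are random placeholders, so the printed numbers are meaningless; this illustrates the computation only, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder weights and biases for a 4-4-3 MLP (random, for illustration only).
W1, b1 = rng.normal(size=(4, 4)), np.zeros(4)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(4, 3)), np.zeros(3)   # hidden layer -> output layer

def mlp_forward(x):
    """Forward pass: tanh ('tansig') hidden layer, linear ('purelin') output layer."""
    h = np.tanh(x @ W1 + b1)   # each hidden neuron: weighted sum + bias, then transfer function
    return h @ W2 + b2         # each output neuron: linear combination of hidden outputs

# One cutting regime as input: [ap (mm), v (m/min), f (mm/rev), kappa (deg)]
x = np.array([1.5, 143.0, 0.499, 95.0])
print(mlp_forward(x))          # three outputs standing in for Fc, Ff, Fp
```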
5.2 Architecture of ANN and optimization

Specifying the internal architecture requires determining the number of hidden layers and the number of neurons in each hidden layer. An illustration of an ANN architecture with two hidden layers is given in Fig. 3. This ANN architecture consists of four layers: the input layer, two hidden layers and the output layer. The four neurons of the input layer stand for the four cutting parameters of the case study, which are depth of cut, cutting speed, feed rate and cutting edge angle. The three neurons of the output layer stand for the predicted cutting force component values of the case study.

It has been shown that at most two hidden layers are needed to approximate a particular set of functions to a given accuracy [13], and this therefore reduces the problem of defining the ANN architecture to one of choosing the number of hidden neurons.

Fig. 3 Illustration of MLP network architecture with layers and neurons

The number of neurons in the hidden layers determines the expressive power of the ANN model and its generalization capability. Generally, the number of neurons in each hidden layer depends upon the complexity of the function being approximated. The number of hidden layer neurons is usually found with a trial and error approach, in which a large number of different architectures are examined and compared to one another. However, an upper bound on the number of hidden neurons is of the order of the number of training samples used. The number of neurons in the hidden layer increases the number of connections and weights to be fitted. This number cannot be increased without limit, because one may reach a situation where the number of connections to be fitted is larger than the number of data pairs available for training. Though the neural network can still be trained, the case is mathematically undetermined [26].

For a well generalized ANN model, there should be about ten times as many training data points as there are weights in the network [6]. Masters [20] stated that the required minimum ratio of the number of training samples to the number of connection weights should be two, and the minimum ratio of the optimum training sample size to the number of connection weights should be four.

This study prefers to apply different ANN architectures by following the guidelines given in previous theoretical and practical investigations. The recommended number of neurons (H) in the hidden layers for I input neurons, O output neurons and N training data is given in Table 4.

Table 4 Recommendations for determining the number of hidden neurons

Number of hidden neurons, H | References
2I + 1 | Lippmann [15]
2I | Wong [32]
I | Tang and Fishwick [29]
(3/4)·I | Salchenberger et al. [25]
I·log2(N) | Marchandani and Cao [19]
(I·O)^(1/2) | Masters [20]

Since the number of input neurons in the case study is four, the number of output neurons is three and, considering the available data for training, the following ANN architectures were developed: 4-2-3, 4-3-3, 4-4-3, 4-2-2-3 and 4-3-3-3.
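As an illustration of how these rules of thumb narrow the search, the sketch below evaluates the Table 4 recommendations for the present case (I = 4 inputs, O = 3 outputs, N = 30 training data); the function and dictionary names are ours, not part of any toolbox:

```python
import math

I, O, N = 4, 3, 30  # inputs, outputs and training data in the present case study

# Rules of thumb from Table 4 for the number of hidden neurons H.
rules = {
    "Lippmann [15]":             2 * I + 1,
    "Wong [32]":                 2 * I,
    "Tang and Fishwick [29]":    I,
    "Salchenberger et al. [25]": 3 * I / 4,
    "Marchandani and Cao [19]":  I * math.log2(N),
    "Masters [20]":              math.sqrt(I * O),
}

for reference, h in rules.items():
    print(f"{reference}: H ≈ {h:.1f}")
```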

5.3 Bayesian regularization

Bayesian regularization is often used to improve the generalization in neural networks, especially when there is a limited amount of data [4]. The basic idea in BR is that the true underlying function between the input and output parameters is assumed to have a degree of smoothness. By keeping the network weights and biases small, the network response will be smooth. This approach involves modifying the objective function, which is normally chosen to be the sum of squared errors. With regularization, the objective function becomes

F = \beta \cdot \mathrm{SSE} + \alpha \cdot \mathrm{SSW}, \quad (1)

where SSE is the sum of squared errors, SSW is the sum of squares of the network weights, and α and β are parameters optimized in the Bayesian framework of MacKay [17, 18]. Training with BR yields important parameters such as SSE, SSW and the number of effective parameters used in the neural network, which can be used to reduce guesswork in the selection of the number of neurons in the hidden layer [4, 23]. Besides, Bayesian regularized ANNs have additional advantages. They are difficult to overtrain, as an evidence procedure provides an objective criterion for stopping training. It has been shown mathematically that they do not strictly need a test set, as they produce the best possible model most consistent with the data [7].
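A minimal sketch of the regularized objective of Eq. (1) is given below. The error, weight and hyperparameter values are illustrative placeholders only; in the actual training, α and β are re-estimated by the evidence framework of MacKay [17, 18] (as done by the 'trainbr' procedure used later), which is not reproduced here:

```python
import numpy as np

def regularized_objective(errors, weights, alpha, beta):
    """Eq. (1): F = beta * SSE + alpha * SSW."""
    sse = np.sum(errors ** 2)     # sum of squared errors
    ssw = np.sum(weights ** 2)    # sum of squares of the network weights
    return beta * sse + alpha * ssw

# Illustrative values only (not from the paper).
errors = np.array([12.0, -8.5, 3.1])         # prediction errors m_i - p_i
weights = np.array([0.4, -1.2, 0.05, 0.7])   # all network weights, flattened
print(regularized_objective(errors, weights, alpha=0.01, beta=1.0))
```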


5.4 ANN design and training

Throughout this study, the available input/output data set (36 data) was divided randomly into two sets: a training data set with 30 data and a test data set with six data. The MATLAB software package was used for the development of MLPs with one and two hidden layers and different numbers of hidden neurons. In general, the networks have four neurons in the input layer, corresponding to each of the four cutting parameters, and three neurons in the output layer, corresponding to each of the cutting force components.

For all ANNs, the linear transfer function 'purelin' and the tangent sigmoid transfer function 'tansig' were used in the output and hidden layers, respectively. Using tangent sigmoid transfer functions in the hidden layers allows the ANN to perform nonlinear modeling, whereas for prediction purposes it is sufficient to use linear transfer functions in the output layer.

The weights and biases of the ANNs were initialized to small random values. The LM training algorithm was used together with BR to train the ANNs, using the 'trainbr' procedure. The initial value of the Marquardt adjustment parameter (μ) was 0.005, with decrease and increase factors of 0.1 and 10, respectively. The training was stopped when μ became larger than 10^10.

5.5 Methodology used to compare the ANNs

In order to determine the best ANN architecture three


criteria were given consideration. The selection of the best
ANN model was carried out in terms of (i) prediction
accuracy, (ii) network complexity and (iii) convergence
speed. The strongest criterion was prediction accuracy,
network complexity and network convergence speed,
respectively. For assessing the ANN prediction accuracy
three statistical measures were calculated, correlation
coefficient (R), root mean squared error (RMSE), and mean
absolute percentage error (MAPE). These statistics are
defined by:
R = \frac{\sum_{i=1}^{N} (m_i - \bar{m})(p_i - \bar{p})}{\left[\sum_{i=1}^{N} (m_i - \bar{m})^2\right]^{1/2} \left[\sum_{i=1}^{N} (p_i - \bar{p})^2\right]^{1/2}}, \quad (2)

\mathrm{RMSE} = \left[\frac{1}{N} \sum_{i=1}^{N} (m_i - p_i)^2\right]^{1/2}, \quad (3)

\mathrm{MAPE}\,(\%) = \frac{1}{N} \sum_{i=1}^{N} \frac{|m_i - p_i|}{|m_i|} \times 100, \quad (4)

where m and p are the measured and predicted values, respectively, the bars indicate mean values and N is the sample size.

The calculated values for the training and testing data are given in Table 5.

Table 5 Prediction accuracy of the developed ANN models on training and testing data

Statistic | R_tr | R_ts | RMSE_tr | RMSE_ts | MAPE_tr | MAPE_ts
Cutting force component Fc
4-2-3 | 0.996 | 0.996 | 26.945 | 42.089 | 2.766 | 2.496
4-3-3 | 0.997 | 0.999 | 27.081 | 18.116 | 2.760 | 2.801
4-4-3 | 0.998 | 1 | 23.994 | 10.284 | 2.146 | 0.808
4-2-2-3 | 0.997 | 0.999 | 29.701 | 25.002 | 2.835 | 3.273
4-3-3-3 | 0.999 | 1 | 16.073 | 20.890 | 1.811 | 1.664
Cutting force component Ff
4-2-3 | 0.943 | 0.988 | 19.976 | 32.123 | 9.594 | 7.406
4-3-3 | 0.982 | 0.991 | 29.293 | 28.374 | 5.876 | 4.683
4-4-3 | 0.991 | 0.998 | 21.472 | 13.587 | 4.288 | 2.246
4-2-2-3 | 0.963 | 0.997 | 42.285 | 18.189 | 6.814 | 3.787
4-3-3-3 | 0.998 | 0.990 | 10.520 | 27.622 | 2.230 | 4.323
Cutting force component Fp
4-2-3 | 0.956 | 0.891 | 15.821 | 57.806 | 11.946 | 16.234
4-3-3 | 0.985 | 0.961 | 18.872 | 35.031 | 7.210 | 7.812
4-4-3 | 0.989 | 0.964 | 16.103 | 33.224 | 5.053 | 6.137
4-2-2-3 | 0.981 | 0.951 | 21.598 | 38.961 | 8.319 | 7.761
4-3-3-3 | 0.987 | 0.947 | 17.432 | 40.064 | 4.838 | 8.174
In order to assess the prediction accuracy of the developed ANN models, a performance index (PI) is proposed in the form:

\mathrm{PI} = \frac{R_{tr} + R_{ts}}{2} - \frac{1}{\mathrm{RMSE}_{tr} + \mathrm{RMSE}_{ts}} - \frac{1}{\mathrm{MAPE}_{tr} + \mathrm{MAPE}_{ts}}. \quad (5)

The smaller the value of PI, the better the ANN model in terms of prediction accuracy. The PI values are given in Table 6.

Table 6 PI for the developed ANN models

ANN model | 4-2-3 | 4-3-3 | 4-4-3 | 4-2-2-3 | 4-3-3-3
Fc | 0.792 | 0.796 | 0.634 | 0.815 | 0.691
Ff | 0.888 | 0.871 | 0.811 | 0.859 | 0.820
Fp | 0.851 | 0.851 | 0.771 | 0.853 | 0.794
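The statistics of Eqs. (2)–(4) and the index of Eq. (5) are straightforward to compute; a minimal sketch with our own helper names, assuming the reading of Eq. (5) reconstructed above, is:

```python
import numpy as np

def r_coefficient(m, p):
    """Eq. (2): correlation coefficient between measured m and predicted p."""
    dm, dp = m - m.mean(), p - p.mean()
    return np.sum(dm * dp) / np.sqrt(np.sum(dm ** 2) * np.sum(dp ** 2))

def rmse(m, p):
    """Eq. (3): root mean squared error."""
    return np.sqrt(np.mean((m - p) ** 2))

def mape(m, p):
    """Eq. (4): mean absolute percentage error, in %."""
    return np.mean(np.abs(m - p) / np.abs(m)) * 100

def performance_index(m_tr, p_tr, m_ts, p_ts):
    """Eq. (5): smaller PI means better prediction accuracy."""
    return ((r_coefficient(m_tr, p_tr) + r_coefficient(m_ts, p_ts)) / 2
            - 1 / (rmse(m_tr, p_tr) + rmse(m_ts, p_ts))
            - 1 / (mape(m_tr, p_tr) + mape(m_ts, p_ts)))
```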
The number of connection weights in the ANN model is considered as the network complexity. When choosing the final architecture, the model with fewer hidden neurons should be chosen, because, for two ANNs with similar errors on the training sets, the simpler one is likely to predict better on new cases [6]. Generally, for both practical and theoretical reasons, it is always desirable to use well trained, smaller ANN models. Network complexity was assessed by calculating the ratio of the number of training data to the number of ANN weights. Network convergence speed was measured as the number of iterations required for ANN convergence. A comparison of the developed ANN models in terms of network complexity and convergence speed is illustrated in Fig. 4.

Fig. 4 ANN models network complexity and convergence speed
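The network complexity criterion can be made concrete by counting the fitted parameters of each candidate architecture; the sketch below includes biases in the count, which is one common convention and an assumption on our part, so the ratios are indicative rather than the exact values plotted in Fig. 4:

```python
def n_parameters(layers):
    """Number of connection weights plus biases for an MLP, e.g. layers = [4, 4, 3]."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(layers[:-1], layers[1:]))

N_TRAIN = 30
for arch in ([4, 2, 3], [4, 3, 3], [4, 4, 3], [4, 2, 2, 3], [4, 3, 3, 3]):
    n_par = n_parameters(arch)
    print("-".join(map(str, arch)), n_par, round(N_TRAIN / n_par, 2))
```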
As can be seen, the 4-4-3 and 4-3-3-3 ANN models proved to be the most effective models, since, despite having the lowest ratio of training data to number of weights, they were successfully trained with the fewest iterations. However, considering the previous three criteria, the ANN model having the 4-4-3 architecture was chosen as the best model.

5.6 Testing and performance of the 4-4-3 ANN model

The performance of the 4-4-3 ANN model for the prediction of cutting force components using the entire data set, in the form of regression plots, is shown in Fig. 5.

Fig. 5 Correlations between the predicted and the experimentally measured cutting force components using the entire data set: a main cutting force; b feed force; c passive force

With this analysis it is possible to determine the response of the ANN model with respect to the experimentally measured values. The correlation coefficient of around 0.99 for all three cutting force components confirms the prediction accuracy of the developed ANN model.

The results of the prediction of the selected ANN in relation to the experimental data of the test data set (not originally used for ANN training) are given in Table 7.

Table 7 Comparison between ANN predicted and experimental results on the test data set

Exp. Fc (N) | ANN Fc (N) | Exp. Ff (N) | ANN Ff (N) | Exp. Fp (N) | ANN Fp (N)
1,663 | 1,645 | 765 | 750 | 523 | 466
676 | 668 | 371 | 361 | 282 | 272
301 | 301 | 201 | 197 | 137 | 132
639 | 649 | 401 | 401 | 226 | 229
1,460 | 1,448 | 690 | 714 | 328 | 385
618 | 617 | 401 | 414 | 198 | 197
R = 0.999 (Fc) | R = 0.998 (Ff) | R = 0.964 (Fp)

The results confirm the good generalization capability of the selected ANN model.
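As a quick plausibility check (ours, not part of the paper), the test-set correlation coefficients of Table 7 can be recomputed directly from the tabulated pairs, e.g. for the passive force:

```python
import numpy as np

# Passive force Fp test pairs from Table 7: experimental vs. ANN-predicted values (N).
fp_exp = np.array([523, 282, 137, 226, 328, 198], dtype=float)
fp_ann = np.array([466, 272, 132, 229, 385, 197], dtype=float)

print(round(np.corrcoef(fp_exp, fp_ann)[0, 1], 3))   # approx. 0.964, as reported in Table 7
```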

6 Simulation and analysis

The selected 4-4-3 ANN model was developed to predict the cutting force components based on the cutting conditions, with a high degree of accuracy within the scope of the cutting conditions investigated in the study. Thus, the influence of the cutting parameters on the cutting force components can be studied using the ANN model. The effects of depth of cut, cutting speed, feed rate and cutting edge angle on the cutting force components are illustrated in Fig. 6.

6.1 Effect of cutting parameters on cutting force components

Figure 6 indicates that as the depth of cut, feed rate and cutting edge angle increase, the main cutting force and passive force also increase. As the depth of cut and feed rate increase, the feed force increases, while increasing the cutting edge angle results in a decreasing feed force. The cutting speed has a negative impact on all three cutting force components, i.e., as the cutting speed increases, the cutting force components decrease. As the depth of cut increases, the chip thickness becomes significant, which causes the growth of the volume of deformed metal, and hence the cutting force components increase. As the feed rate increases, the section of the sheared chip increases and the metal resists rupture more, which leads to an increase in the cutting force components.

Fig. 6 ANN prediction of the influence of cutting parameters on cutting force components: a depth of cut; b cutting speed; c feed rate; d cutting edge angle

With an increase of the cutting speed, the cutting force components decrease. This is due to the softening of the workpiece material as the temperature in the cutting zone increases, and in part due to a decrease in the tool–chip contact length [31]. With an increase of the cutting edge angle, the main cutting force and passive force increase, while the feed force decreases. The feed rate has the maximum influence on the main cutting force and feed force, followed by the depth of cut, cutting speed and cutting edge angle. The passive force is most influenced by the feed rate and cutting edge angle, whereas the influences of depth of cut and cutting speed are negligible.
7 Conclusions

In this paper, a review of ANN-based techniques for developing cutting force prediction models is given. Examples of studies are presented, highlighting the abilities and limitations of using ANNs in predictive modeling of the turning process, focusing on the prediction of cutting force components. The determination of the number of layers and neurons in the hidden layers by the trial and error method can be constrained by considering the number of data available for training. In addition, combining the LM algorithm with BR might also prove beneficial for both ANN training (improved generalization) and ANN architecture determination. In this study, five different ANN architectures were applied: 4-2-3, 4-3-3, 4-4-3, 4-2-2-3 and 4-3-3-3. All developed ANN models showed high prediction accuracy. However, based on the three performance criteria, the 4-4-3 ANN model was selected as the best model.

A very good performance was achieved with the 4-4-3 ANN model, with correlation coefficients between the model predictions and the experimental values of 0.998 for the main cutting force, 0.992 for the feed force and 0.984 for the passive force. With a total of 36 available data, randomly divided into 30 data for training and six data for testing, it was found that the ANN could be trained successfully. This study also shows that the ANN model is capable of accurate prediction of multiple outputs, with an average MAPE of around 3 % for the three cutting force components on the test data.

In summary, a quite basic ANN architecture, trained with LM with BR using small training data, is capable of modeling multiple outputs with high prediction accuracy. Thus, the influence of the cutting parameters on the cutting force components can be simulated and analyzed using the developed ANN model. Depth of cut and feed rate influence the main cutting force, feed force and passive force positively. Cutting speed influences all three cutting force components negatively. Cutting edge angle influences the main cutting force and passive force positively, and the feed force negatively.

Acknowledgments This work was carried out within the project TR 35034, financially supported by the Ministry of Education and Science of the Republic of Serbia.

References

1. Aguiar PR, Paula WCF, Bianchi EC, Ulson JAC, Cruz CED (2010) Analysis of forecasting capabilities of ground surfaces valuation using artificial neural networks. J Braz Soc Mech Sci Eng 32(2):146–153
2. Al-Ahmari AMA (2007) Predictive machinability models for a selected hard material in turning operations. J Mater Process Technol 190(1–3):305–311
3. Bajić D, Lela B, Cukor G (2008) Examination and modeling of the influence of cutting parameters on the cutting force and surface roughness in longitudinal turning. J Mech Eng 54(5):322–333
4. Beale MH, Hagan MT, Demuth HB (2004) Neural network toolbox user's guide. The MathWorks Inc, Massachusetts
5. Benardos PG, Vosniakos GC (2007) Optimizing feedforward artificial neural network architecture. Eng Appl Artif Intell 20(3):365–382
6. Bishop CM (1995) Neural networks for pattern recognition. Oxford University Press, Oxford, p 482
7. Burden F, Winkler D (2008) Bayesian regularization of neural networks. In: Livingstone D (ed) Artificial neural networks: methods and applications. Humana Press, Totowa, p 254
8. Chien W-T, Chou C-Y (2001) The predictive model for machinability of 304 stainless steel. J Mater Process Technol 118(1–3):442–447
9. Ezugwu EO, Fadare DA, Bonney J, Da Silva RB, Sales WF (2005) Modeling the correlation between cutting and process parameters in high-speed machining of Inconel 718 alloy using an artificial neural network. Int J Mach Tools Manuf 45(12–13):1375–1385
10. Hao W, Zhu X, Li X, Turyagyenda G (2006) Prediction of cutting force for self-propelled rotary tool using artificial neural networks. J Mater Process Technol 180(1–3):23–29
11. Hans Ray K, Sharma RS, Srivastava S, Patvardhan C (2000) Modeling of manufacturing processes with ANN for intelligent manufacturing. Int J Mach Tools Manuf 40(6):851–868
12. Haykin S (1999) Neural networks: a comprehensive foundation. Prentice Hall, New Jersey, p 842
13. Hornik K (1989) Multilayer feedforward networks are universal approximators. Neural Netw 2(5):359–366
14. Lin WS, Lee BY, Wu CL (2001) Modeling the surface roughness and cutting force for turning. J Mater Process Technol 108(3):286–293
15. Lippmann RP (1987) An introduction to computing with neural nets. IEEE Acoust Speech Signal Process Mag 4(2):4–22
16. Luong LHS, Spedding TA (1995) A neural-network system for predicting machining behavior. J Mater Process Technol 52(2–4):585–591
17. MacKay DJC (1992) Bayesian interpolation. Neural Comput 4(3):415–447
18. MacKay DJC (1992) A practical Bayesian framework for backpropagation networks. Neural Comput 4(3):448–472
19. Marchandani G, Cao W (1989) On hidden nodes for neural nets. IEEE Trans Circuits Syst 36(5):661–664
20. Masters T (1993) Practical neural network recipes in C++. Academic Press, San Diego, p 493
21. Merchant ME (1998) An interpretative look on 20th century research on modeling of machining. Mach Sci Technol 2(2):157–163
22. Mukherjee I, Ray PK (2006) A review of optimization techniques in metal cutting processes. Comput Ind Eng 2(1–2):15–34
23. Özel T, Karpat Y (2005) Predictive modeling of surface roughness and tool wear in hard turning using regression and neural networks. Int J Mach Tools Manuf 45(4–5):467–479
24. Pontes FJ, Silva MB, Ferreira JR, Paiva AP, Balestrassi PP, Schönhorst GB (2010) A DOE based approach for the design of RBF artificial neural networks applied to prediction of surface roughness in AISI 52100 hardened steel turning. J Braz Soc Mech Sci Eng 32(5):503–510
25. Salchenberger LM, Cinar EM, Lash NA (1992) Neural networks: a new tool for predicting thrift failures. Decis Sci 23(4):899–916
26. Sha W, Edwards KL (2007) The use of artificial neural networks in materials science based research. Mater Des 28(6):1747–1752
27. Sharma VS, Dhiman S, Sehgal R, Sharma SK (2008) Estimation of cutting forces and surface roughness for hard turning using neural networks. J Intell Manuf 19(4):473–483
28. Szecsi T (1999) Cutting force modeling using artificial neural networks. J Mater Process Technol 92–93:344–349
29. Tang Z, Fishwick PA (1993) Feedforward neural nets as models for time series forecasting. ORSA J Comput 5(4):374–385
30. Tarng YS, Wang TC, Chen WN, Lee BY (1995) The use of neural networks in predicting turning forces. J Mater Process Technol 47(3–4):273–289
31. Trent EM (1991) Metal cutting, 3rd edn. Butterworth-Heinemann Ltd, Oxford, p 273
32. Wong FS (1991) Time series forecasting using backpropagation neural networks. Neurocomputing 2(4):147–159
