Artificial Neural Network Based On Genetic Learning Polyetheretherketone Composite Materials
DOI 10.1007/s00170-007-1304-5
ORIGINAL ARTICLE
Received: 26 June 2006 / Accepted: 2 November 2007 / Published online: 1 December 2007
© Springer-Verlag London Limited 2007
a geometrical relationship when correlating the flank wear on the cutting tool with changes in workpiece dimensions. Briceno et al. [5] showed the feasibility of the radial basis network (RBN) in the modelling of metal cutting operations. ANN models were proposed by Oktem et al. [6], Tsai et al. [7], and Suresh et al. [8] to predict surface roughness during the end milling process. In particular, Oktem et al. [6] and Suresh et al. [8] used an ANN together with a genetic algorithm (GA) to minimize the surface roughness.

A literature survey of ANN applications shows that only a small number of works is devoted to orthogonal cutting, and none of them is applied to polymeric composite materials. Modelling of the orthogonal cutting of composite materials is implemented in this work. By definition, the cutting is orthogonal when the tool has a position angle of 90° and an inclination angle of 0°, enabling in this way a two-dimensional representation of the plane deformation of the chip, as referred by Childs et al. [11]. To describe this process, Merchant's physical-mathematical approximation model has been used to obtain the vector analysis of the cutting forces and stresses of the machining process. However, for some applications, such as composite materials machining, the Merchant model is not satisfactory. Furthermore, some machining parameters assume discrete values, making the use of polynomial approximations inappropriate. The proposed alternative is an approximation model based on an ANN.

The objective of this work is to develop an ANN model based on evolutionary learning, applied to the machining of polymeric composite materials. Section 2 presents the model overview, the topology of the ANN, the input and output parameters, the data pre-processing, and the dynamics of the ANN. Section 3 describes the ANN learning process based on genetic search; the learning variables, the code format, the fitness function, and the genetic algorithm are its main aspects. The results of the learning procedure and the discussion of the best configuration for the ANN are presented in Section 4, where the influence of the regularization term associated with the biases in hidden and output nodes is studied and the testing of the optimal ANN topology is presented and discussed. Finally, the achievements obtained with the developed approach are summarized in Section 5.

2 Artificial neural networks model

2.1 Model overview

In the present work, a set of experimental results in turning of polyetheretherketone (PEEK) composite materials has been obtained, considering as process parameters the cutting speed, feedrate, type of insert of the tool, and type of workpiece material. A set of these experimental results is then used in the learning algorithm of the ANN to establish the relationship between the referred input parameters and the output machining parameters: cutting force, feed force, and chip thickness after cutting. The learning procedure is performed as an optimization algorithm based on a genetic search. After the learning process, the optimal ANN is tested using another set of experimental data. Figure 1 shows the flowchart of the learning and testing procedures of the proposed ANN model.

2.2 Neural network topology

The artificial neural network (ANN) is a computational structure inspired by biological neural systems. An ANN is formed of simple and highly interconnected processors called neurons. These neurons are connected to each other by weighted links denoted synapses. These connections establish the relationship between the input data $O_i^{inp}$ and the output data $O_j^{out}$. The biases in the neurons of the hidden and output layers, $b_k^{(1)}$ and $b_k^{(2)}$ respectively, are adjusted during data processing. Input, hidden, and output layers form the topology of the ANN used in this work. Figure 2 shows the topology of the ANN together with the input and output parameters.

Sigmoid activation functions are used in the hidden and output layers. The relative error determined by comparing experimental and numerical results is used to monitor the learning process as an optimization process; the objective is a complete model of the machining process. A regularization function related to the biases in the hidden and output neurons is introduced to improve the learning process.

2.3 Input and output parameters

In order to sufficiently train the network, a considerable number of input-output data pairs is considered. Two materials are used in the experimental procedure: (1) unreinforced polyetheretherketone (PEEK) and (2) polyetheretherketone reinforced with 30% glass fibres (PEEK GF 30), supplied by ERTA®. Table 1 shows the mechanical and thermal properties of these materials.

The orthogonal cutting tests were carried out on extruded workpieces with a diameter of 50 mm and a length of 100 mm, using a polycrystalline diamond (PCD) insert tool and a cemented carbide (K20) tool. A type SDJCL 2020 K16 tool holder was used. The tool geometry was as follows: rake angle 0°, clearance angle 7°, cutting edge angle 91°, and cutting edge inclination angle 0°.

A CNC lathe “Kingsbury MHP 50” with 18 kW spindle power and a maximum spindle speed of 4,500 rpm was used to perform the experiments. Figure 3 shows the experimental apparatus used in the process.
Int J Adv Manuf Technol (2008) 39:1101–1110 1103
Fig. 1 Flowchart of the learning and testing procedures of the proposed ANN model: ANN topology definition; data normalization; simulation with w^(1), w^(2), b^(1), b^(2); evaluation of the relative error plus the regularization term; GA operators (selection, crossover, mutation, replacement by similarity); upgrading of the ANN model until convergence; optimal solution w^(1), w^(2), b^(1), b^(2); prediction of the ANN outputs; optimal ANN validation; ANN testing
The plan of tests was developed without refrigeration and contemplates combinations of four values of cutting velocity and five values of feedrate. A constant depth of cut of 2.5 mm was considered. A Kistler® 9121 piezoelectric dynamometer with a charge amplifier (model 5019) was used to acquire the cutting forces and feed forces. Data acquisition was made through the charge amplifier and a computer using the appropriate software (Dynoware by Kistler®). The chip thickness after cutting was measured with a Mitutoyo® digital micrometer with a range of 0–25 mm and a resolution of 0.001 mm.

The process parameters and respective ranges were selected according to acquired industrial experience with polymeric composite materials. In the experimental tests, the following values were considered for the cutting parameters: cutting speed {80, 160, 320, 500} [m/min] and feedrate {0.05, 0.10, 0.15, 0.20, 0.30} [mm/rev]. These values were combined with two workpiece materials, (1) unreinforced PEEK and (2) glass-reinforced PEEK GF 30, and two insert tools, (1) K20 and (2) PCD, to obtain the experimental results for cutting force, feed force, and chip thickness after cutting. A total of 67 patterns was obtained from this experimental procedure, as presented in Table 2. Close to 80% of these patterns are used in the ANN learning procedure and the remaining ones in testing the achieved optimal ANN.
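The pattern construction described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the 0/1 coding of the two discrete parameters, the function names, and the deterministic split are assumptions.

```python
# Hypothetical encoding of one experimental pattern: the two discrete
# parameters (insert tool, workpiece material) are mapped to numeric
# levels so that each cutting test becomes a 4-element input vector.
INSERT = {"K20": 0.0, "PCD": 1.0}            # assumed 0/1 coding
MATERIAL = {"PEEK": 0.0, "PEEK GF 30": 1.0}  # assumed 0/1 coding

def encode_pattern(cutting_speed, feedrate, insert, material):
    """Build the ANN input vector for one orthogonal cutting test."""
    return [cutting_speed, feedrate, INSERT[insert], MATERIAL[material]]

def split_patterns(patterns, learn_fraction=0.8):
    """Close to 80% of the patterns go to learning, the rest to testing."""
    n_learn = round(learn_fraction * len(patterns))
    return patterns[:n_learn], patterns[n_learn:]
```

With 67 patterns and an 80% fraction, such a split leaves 13 patterns for testing, consistent with the 13 sample tests plotted later in Figs. 7, 8 and 9.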
Fig. 2 Topology of the ANN: input neurons (type of insert, cutting speed, feedrate, ...) linked by synapse weights w^(1) to hidden neurons with biases b_m^(1), linked by synapse weights w_ik^(2) to output neurons (cutting force, feed force, ...) with biases b^(2)
Each pattern, consisting of an input vector and an output vector, needs to be normalized to avoid error propagation during the learning process of the ANN. This is achieved using the following data normalization:

$$O_k^N = \frac{O^N_{max} - O^N_{min}}{O_{max} - O_{min}} \left( O_k - O_{min} \right) + O^N_{min} \qquad (1)$$

where $O_k$ is the real value of the variable before normalization, and $O_{min}$ and $O_{max}$ are the minimum and maximum values of $O_k$ in the data set to be normalized. The values are mapped onto $[O^N_{min}, O^N_{max}]$ such that $O^N_{min} \le O_k^N \le O^N_{max}$. Depending on whether the data are input or output, different predefined values of $O^N_{min}$ and $O^N_{max}$ can be used.

The output of the k-th neuron in the hidden layer, $O_k^{hid}$, is defined as

$$O_k^{hid} = \frac{1}{1 + \exp\left(-I_k^{hid}/T^{(1)}\right)} \qquad (2)$$

with

$$I_k^{hid} = \sum_{j=1}^{N^{inp}} w_{jk}^{(1)} O_j^{inp} + b_k^{(1)} \qquad (3)$$

where $N^{inp}$ is the number of elements in the input, $O_j^{inp}$ is the input data, $w_{jk}^{(1)}$ is the connection weight of the synapse between the j-th neuron in the input layer and the k-th neuron in the hidden layer, $b_k^{(1)}$ is the bias in the k-th neuron of the hidden layer, and $T^{(1)}$ is a scaling parameter.
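Equation (1) is ordinary min-max scaling. A minimal sketch in Python follows; the target interval [0.1, 0.9] is an assumed choice, since the paper only states that predefined values of O^N_min and O^N_max are used per variable.

```python
def normalize(o_k, o_min, o_max, on_min=0.1, on_max=0.9):
    """Eq. (1): map o_k from [o_min, o_max] onto [on_min, on_max]."""
    return (on_max - on_min) / (o_max - o_min) * (o_k - o_min) + on_min

def denormalize(o_n, o_min, o_max, on_min=0.1, on_max=0.9):
    """Inverse of Eq. (1): recover physical values from ANN outputs."""
    return (o_n - on_min) * (o_max - o_min) / (on_max - on_min) + o_min
```

For example, a cutting speed of 80 m/min in the range [80, 500] m/min maps to the lower bound 0.1 of the assumed normalized interval.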
Table 2 Experimental results: experiment number, cutting speed [m/min], feedrate [mm/rev], insert tool, workpiece material, chip thickness [mm], cutting force [N], feed force [N]
The value of the output neuron is calculated as

$$O_k^{out} = \frac{1}{1 + \exp\left(-I_k^{out}/T^{(2)}\right)} \qquad (4)$$

with

$$I_k^{out} = \sum_{i=1}^{N^{hid}} w_{ik}^{(2)} O_i^{hid} + b_k^{(2)} \qquad (5)$$

where $N^{hid}$ is the number of neurons in the hidden layer, $w_{ik}^{(2)}$ is the connection weight of the synapse between the i-th neuron of the hidden layer and the k-th neuron in the output layer, $b_k^{(2)}$ is the bias in the k-th neuron of the output layer, and $T^{(2)}$ is a scaling parameter for this layer.

During training, the computed output is compared with the measured output and the mean relative error is calculated as

$$E\left(\mathbf{w}^{(1)}, \mathbf{w}^{(2)}, \mathbf{b}^{(1)}, \mathbf{b}^{(2)}\right) = \frac{1}{N^{exp}} \sum_{m=1}^{N^{exp}} \left[ \frac{1}{N^{out}} \sum_{i=1}^{N^{out}} \frac{\left| O_i^{exp} - O_i^{out} \right|}{O_i^{exp}} \right]_m \qquad (6)$$

where $N^{out}$ is the number of neurons of the output layer, $N^{exp}$ is the number of experimental patterns, and $O_i^{out}$ and $O_i^{exp}$ are the normalized predicted and measured values, respectively. The error obtained from Eq. (6) is back-propagated into the artificial network: from output to input, the weights of the synapses and the biases are modified until the error falls within a prescribed value.

3 Learning based on genetic search

Genetic algorithms (GAs) have been used with increasing interest in a large variety of applications. They search the solution space by simulating the evolution of survival of the fittest according to Darwin’s theory. GAs use a structured exchange of data to explore all regions of the domain and lead some operators to exploit potential areas. Besides the genetic operators, basic concepts such as chromosome representation, generation of the initial population, stopping criteria, and the fitness function are important in GAs.

The use of GAs in ANN learning is a methodology applied by some authors, such as Mok et al. [12] and Nakhjavani and Ghoreishi [13], in materials processing simulation. In the present work, the objective is to explore the advantages of GAs in handling the discrete input parameters associated with the machining process, such as the type of workpiece material and the type of insert of the tool. This way, the learning process becomes an optimization process.

By means of training, the neural network models the underlying process of a certain mapping. In order to model the mapping, the network weights and biases of the ANN topology have to be found. There are two categories of training algorithms: supervised and unsupervised. In this work, supervised learning is used, and a set of data samples called a training set, for which the corresponding network outputs are known, is provided. A schematic representation of the ANN learning algorithm is given in Fig. 1.

3.1 Learning variables and encoding

The supervised learning process of the ANN based on a genetic algorithm uses the weights of the synapses and the biases of the neural nodes of the hidden and output layers as design variables. Thus, the learning variables are the weights of the synapses linking the input layer nodes to the hidden layer nodes, $w_{jk}^{(1)}$, the weights of the synapses linking the hidden layer nodes to the output layer nodes, $w_{ik}^{(2)}$, and the biases in the nodes of the hidden and output layers, $b_k^{(1)}$ and $b_k^{(2)}$, respectively.

A binary code is used to encode the domain values of the learning variables.
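The network dynamics of Eqs. (2)-(6) amount to a single-hidden-layer forward pass followed by a mean relative error. A minimal list-based sketch, with assumed scaling values T = 1 (the paper treats T^(1) and T^(2) as tunable):

```python
import math

def sigmoid(x, t):
    """Sigmoid activation with scaling parameter T, as in Eqs. (2) and (4)."""
    return 1.0 / (1.0 + math.exp(-x / t))

def forward(inp, w1, b1, w2, b2, t1=1.0, t2=1.0):
    """Forward pass: Eqs. (3), (2) for the hidden layer, then
    Eqs. (5), (4) for the output layer.  w1[j][k] links input j to
    hidden k; w2[i][k] links hidden i to output k."""
    hid = [sigmoid(sum(w1[j][k] * inp[j] for j in range(len(inp))) + b1[k], t1)
           for k in range(len(b1))]
    return [sigmoid(sum(w2[i][k] * hid[i] for i in range(len(hid))) + b2[k], t2)
            for k in range(len(b2))]

def mean_relative_error(predicted, measured):
    """Eq. (6): average over patterns of the mean relative deviation
    between normalized predicted and measured outputs."""
    total = 0.0
    for out, exp in zip(predicted, measured):
        total += sum(abs(e - o) / e for e, o in zip(exp, out)) / len(exp)
    return total / len(measured)
```

In the genetic learning that follows, this error is not back-propagated through gradients; it is simply evaluated per chromosome and fed into the fitness function.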
The number of digits of each variable can be different. The bounds of the interval domain of the learning variables influence the sensitivity of the sigmoid activation functions and must be controlled. Their influence is similar to that of the scaling parameters $T^{(1)}$ and $T^{(2)}$ at the hidden and output layers, respectively.

3.2 Fitness function

The proposed optimisation problem formulation is based on the minimisation of the mean relative error defined in Eq. (6). Thus, in the evolutionary search it is intended to maximise a global fitness function FIT, and the optimisation problem is defined as follows:

$$\text{Maximize } FIT = K - \left[ E\left(\mathbf{w}^{(1)}, \mathbf{w}^{(2)}, \mathbf{b}^{(1)}, \mathbf{b}^{(2)}\right) + B\left(\mathbf{b}^{(1)}, \mathbf{b}^{(2)}\right) \right] \qquad (7)$$

where $B(\mathbf{b}^{(1)}, \mathbf{b}^{(2)})$ is a regularization function associated with the mean quadratic biases of the hidden and output layers, defined as

$$B\left(\mathbf{b}^{(1)}, \mathbf{b}^{(2)}\right) = \frac{1}{N} \exp\left[ \left( B^{(1)} + B^{(2)} \right)^{1/2} \right] \qquad (8)$$

with

$$B^{(1)} = \sum_{m=1}^{N^{exp}} \left[ \frac{1}{N^{hid}} \sum_{k=1}^{N^{hid}} \left( b_k^{(1)} \right)^2 \right]_m \qquad (9)$$

and

$$B^{(2)} = \sum_{m=1}^{N^{exp}} \left[ \frac{1}{N^{out}} \sum_{k=1}^{N^{out}} \left( b_k^{(2)} \right)^2 \right]_m \qquad (10)$$

being $b_k^{(1)}$ and $b_k^{(2)}$ the biases in the nodes of the hidden and output layers, respectively. The regularization function is used to accelerate the convergence of the evolutionary process.

3.3 Genetic algorithm

Four main operators applied to the population support the genetic algorithm (GA): selection, crossover, implicit mutation, and replacement of similar individuals, as proposed by António et al. [14]. These operators implement an elitist strategy that always preserves a core of the best individuals of the population, which is transferred into the next generations. The offspring group formed by the crossover operator becomes part of the population of the next generation. To avoid being trapped in local minima, a chromosome set whose genes are generated randomly is introduced into the population. The genetic algorithm performs as follows:

Step 1: Initialization. The initial population is randomly generated.

Step 2: Selection. This operator chooses the part of the population that will be transferred into the next generation after ranking based on the fitness of the actual population. An elitist strategy, in which only the best chromosomes of the actual population pass into the next population, is adopted. The operator selects the progenitors: one from the best-fitted group (elite) and another from the least-fitted one. This selection is done randomly, with an equal probability for each individual. The whole population is then transferred to an intermediate step, where it joins the offspring determined by the crossover operator.

Step 3: Crossover. The offspring genetic material is obtained using a modification of the “parametrized uniform crossover” technique proposed by Spears and DeJong [15] and modified by António and Davim [16]. This is a multipoint combination technique applied to the binary strings of the selected chromosomes. The offspring gene is selected in a biased way, given a defined probability of choosing the gene from the elite chromosome.

Step 4: Replacement by similarity. A new ranking of the enlarged population according to individual fitness is implemented. Solutions with similar genetic properties are then eliminated and substituted by new, randomly generated individuals. Substitution and ranking updating are followed by elimination, corresponding to the deletion of the worst solutions. This operator simulates the exclusion of individuals with low fitness and the natural death of old individuals. At this point, the dimension of the population is smaller than the original one; the original population size is recovered after including a group of new solutions obtained from the mutation operator.

Step 5: Implicit mutation. To avoid being trapped in local minima, a chromosome group whose genes are generated randomly is introduced into the population. This operation is called implicit mutation. The number of individuals in implicit mutation is an important GA parameter to keep the diversity of the population in each generation. Experience indicates that the advised number is between 20% and 35% of the population size.

Step 6: Stopping criterion. The stopping criterion used in the evolutionary process is based on the relative variation of the mean fitness of a reference group during a fixed number of generations and on the feasibility of the corresponding solutions. The size of the reference group is predefined, as referred by António et al. [14].

The flowchart of the ANN learning and testing procedures coupled with the genetic algorithm is presented in Fig. 1. The operating conditions of the GA have been given above and follow the model detailed in António et al. [14].

… considering a number of eight neurons for the hidden layer that will be used in the next presentation and discussion.
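The fitness evaluation of Eqs. (7)-(10) and one GA generation of Sect. 3.3 can be sketched as below. This is an illustrative sketch under stated assumptions, not the authors' implementation: the values of K and N, the crossover bias, and the operator sizes are assumed, and the pattern average of Eqs. (9)-(10) is dropped because the biases do not depend on the pattern.

```python
import math
import random

def fitness(error, b1, b2, k_const=1.0, n=1):
    """Eq. (7): FIT = K - [E + B], with the bias regularization B of
    Eqs. (8)-(10).  k_const and n are assumed values."""
    B1 = sum(b * b for b in b1) / len(b1)   # Eq. (9), per-pattern term
    B2 = sum(b * b for b in b2) / len(b2)   # Eq. (10), per-pattern term
    B = math.exp(math.sqrt(B1 + B2)) / n    # Eq. (8)
    return k_const - (error + B)

def crossover(elite, other, p_elite=0.6):
    """Step 3: parametrized uniform crossover -- each offspring gene is
    taken from the elite progenitor with a biased probability (0.6 assumed)."""
    return "".join(e if random.random() < p_elite else o
                   for e, o in zip(elite, other))

def ga_generation(population, fitness_of, elite_size=3, n_mutants=3):
    """One generation with the four operators of Sect. 3.3 (sketch)."""
    ranked = sorted(population, key=fitness_of, reverse=True)
    elite, rest = ranked[:elite_size], ranked[elite_size:]
    # Steps 2-3: progenitors drawn from the elite and least-fitted groups
    offspring = [crossover(random.choice(elite), random.choice(rest))
                 for _ in range(len(population) // 2)]
    # Step 4: replacement by similarity -- duplicate chromosomes eliminated,
    # then the worst solutions are deleted by truncation below
    pool = sorted(dict.fromkeys(ranked + offspring),
                  key=fitness_of, reverse=True)
    # Step 5: implicit mutation -- random chromosomes restore the size
    n_bits = len(population[0])
    mutants = ["".join(random.choice("01") for _ in range(n_bits))
               for _ in range(n_mutants)]
    return pool[:len(population) - n_mutants] + mutants
```

Each chromosome is the binary string concatenating the coded weights and biases; `fitness_of` would decode it, run the forward pass over the learning patterns, and evaluate Eq. (7).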
Fig. 4 Mean relative errors in learning and testing processes as a function of the number of nodes of the hidden layer (relative error (%) versus number of hidden nodes, 4 to 12)

Fig. 5 Mean relative error and regularization term behaviours during the ANN learning process based on genetic search
Int J Adv Manuf Technol (2008) 39:1101–1110 1109
Fig. 6 Mean relative error differences ΔE(%) associated with the regularization term in the fitness function

Fig. 7 Testing process of the ANN with optimal topology: comparisons of experimental and simulated chip thickness, e′ (mm), over the 13 sample tests

Fig. 8 Testing process of the ANN with optimal topology: comparisons of experimental and simulated cutting force, Fc (N), over the 13 sample tests

Fig. 9 Testing process of the ANN with optimal topology: comparisons of experimental and simulated feed force, Fa (N), over the 13 sample tests