
Int J Adv Manuf Technol (2010) 51:965–971
DOI 10.1007/s00170-010-2668-5

ORIGINAL ARTICLE

Prediction of surface roughness in turning operations by computer vision using neural network trained by differential evolution algorithm

S. H. Yang, U. Natarajan, M. Sekar, S. Palani

Received: 26 December 2008 / Accepted: 12 April 2010 / Published online: 23 April 2010
© Springer-Verlag London Limited 2010

Abstract In recent years, the measurement of the surface roughness of a workpiece has played a vital role, since the roughness of a surface has a considerable influence on product quality and functional aspects. In this work, a differential evolution algorithm (DEA)-based artificial neural network (ANN) has been used for the prediction of surface roughness in turning operations. Cutting speed, feed rate, depth of cut, and the average gray level of the surface image of the workpiece, acquired by computer vision, were taken as the input parameters, and surface roughness as the output parameter. The results obtained from the DEA-based ANN model were compared with those of a backpropagation (BP)-based ANN. It is found that the error percentages are very close, and it is also observed that the convergence speed of the DEA-based ANN is higher than that of the BP-based ANN.

Keywords Turning · ANN · Backpropagation · DEA

S. H. Yang · U. Natarajan (*) · M. Sekar
School of Mechanical Engineering, Kyungpook National University, Daegu, South Korea
e-mail: nattu6963@yahoo.com

S. Palani
Mount Zion College of Engineering, Pudukkottai, Tamil Nadu, India

1 Introduction

In the present industrial scenario, the major concern is to produce quality products at a competitive price. Since direct contact methods (stylus instruments) of measuring surface roughness have limited flexibility in handling free-form parts, the indirect method using optical instruments has emerged. Besides, the measurement speed of the stylus is also slow [1–3]. Many researchers have developed models to predict surface roughness using computer vision systems. Lee and Tarng [1] developed a self-organizing adaptive learning algorithm called the polynomial network to map the actual surface roughness of a turned workpiece to the surface image features and the various cutting conditions. Ho et al. [2] used an adaptive neuro-fuzzy inference system model to predict the surface roughness of the workpiece. Kiran et al. [3] made an attempt to evaluate the surface roughness of workpiece surfaces using direct image processing, the light sectioning method, and a phase shifting approach. It is reported that the direct imaging approach is quick and easy to apply at the shop floor level. A few researchers evaluated surface roughness using image processing based on the monochromatic speckle correlation technique [4–7]. Shahabi and Ratnam [8] proposed an alternative method for the measurement of surface roughness in turning operations using the 2-D profile extracted from an edge image of the workpiece surface. They reported a 10% error in the average surface roughness value between the stylus method and the vision method. They also proposed machine vision methods [9, 10] to assess and monitor tool wear in turning operations. Zhongxiang et al. [11] evaluated three-dimensional surface roughness parameters based on digital image processing. Recently, there have been significant research efforts to apply evolutionary computational techniques for determining neural network weights [12–15]. Hence, a nontraditional optimization algorithm, the differential evolution algorithm (DEA), was used in artificial neural network (ANN) training as an attempt to predict the surface roughness of turned components. DEA is one of the recent population-based global optimization methods. It is widely used in various fields such as engineering, medicine,
and banking [16–20]. Some researchers have used DEA to train ANNs [21–29], in which the experimental data are small-scale problems. In this work, a new DEA was applied to train feed-forward multilayer perceptron neural networks (MLPNN) and compared with an ANN trained with the backpropagation (BP) method.

This paper is organized as follows: In Section 2, the DEA methodology is described. In Section 3, the neural network training using DEA is presented. In Section 4, the experimental setup for the measurement of surface finish is presented. Section 5 explains the implementation of the neural-network-trained DE. Section 6 deals with results and discussion, and Section 7 concludes the paper.

2 Differential evolution methodology

Differential evolution (DE) is a type of evolutionary algorithm developed by Rainer Storn and Kenneth Price [14–16] for optimization problems over a continuous domain. The prime idea of DE is to adapt the search during the evolutionary process. During the initial stage of evolution, the perturbations are large since parent individuals are far away from each other. As the evolutionary process matures, the population converges to a small region, and the perturbations adaptively become small. Hence, DE performs a global exploratory search during the early stages of the evolutionary process and local exploitation during the mature stage of the search.

In DE, a solution l in a generation is a multidimensional vector x^l_G = (x_1, ..., x_N)^T. A population P_{G=k} at generation G = k is a vector of M solutions (M > 4). The initial population, P_{G=0} = (x^1_{G=0}, ..., x^M_{G=0}), is initialized as

x^l_{i,G=0} = lower(x_i) + rand_i[0, 1] × (upper(x_i) − lower(x_i)),   l = 1, ..., M,   i = 1, 2, ..., N   (1)

where M is the population size, N is the solution's dimension, and each variable i in a solution vector l in the initial generation, G = 0, is initialized within its boundaries (lower(x_i), upper(x_i)). For each target vector x_{i,G=k}, i = 1, 2, 3, ..., M, a mutant vector v_{i,G=k} is generated according to the following equation:

v_{i,G=k} = x^{r3}_{i,G=k−1} + F × (x^{r1}_{i,G=k−1} − x^{r2}_{i,G=k−1})   (2)

where the random indices r1, r2, r3 ∈ {1, 2, ..., M}, and F is a scaling factor ∈ [0, 1] which controls the amplification of the differential variation (x^{r1}_{i,G=k−1} − x^{r2}_{i,G=k−1}) and represents the amount of perturbation added to the main parent. The values of each variable in the mutant vector are changed with some crossover probability (CR) to

x′_{i,G=k} = x^{r3}_{i,G=k−1} + F × (x^{r1}_{i,G=k−1} − x^{r2}_{i,G=k−1})   if (random[0, 1] ≤ CR ∨ i = i_rand)
x′_{i,G=k} = x_{i,G=k−1}   otherwise   (3)
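The mutation and crossover steps of Eqs. 2 and 3 (the DE/rand/1/bin strategy used later in this work) can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the function name and the use of Python's standard `random` module are my own choices:

```python
import random

def de_trial(target, pop, F, CR, rng=random):
    """Build one trial vector from a target vector using DE/rand/1/bin.

    target : list of floats (the current solution)
    pop    : list of solution vectors (the population, M > 4)
    F      : scaling factor in [0, 1]   (Eq. 2)
    CR     : crossover probability      (Eq. 3)
    """
    M, N = len(pop), len(target)
    # pick three distinct random indices r1, r2, r3, all different from the target
    candidates = [i for i in range(M) if pop[i] is not target]
    r1, r2, r3 = rng.sample(candidates, 3)
    # mutation (Eq. 2): perturb the base vector r3 with a scaled difference
    mutant = [pop[r3][i] + F * (pop[r1][i] - pop[r2][i]) for i in range(N)]
    # binomial crossover (Eq. 3): take each gene from the mutant with
    # probability CR; index i_rand is always taken so at least one gene changes
    i_rand = rng.randrange(N)
    return [mutant[i] if (rng.random() <= CR or i == i_rand) else target[i]
            for i in range(N)]
```

With CR = 0 the trial vector differs from the target in exactly one position (the forced i_rand), and with CR = 1 every gene is taken from the mutant, which matches the role of CR in Eq. 3.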

The new solution replaces the old one if it is better, and at least one of the variables should be changed. The latter is represented in the algorithm by randomly selecting a variable, i_rand ∈ (1, N). After crossover, if one or more of the variables in the new solution are outside their boundaries, the following repair rule is applied:

x′_{i,G=k} = (x^j_{i,G} + lower(x_i)) / 2   if x^j_{i,G+1} < lower(x_i)
x′_{i,G=k} = (x^j_{i,G} + upper(x_i)) / 2   if x^j_{i,G+1} > upper(x_i)
x′_{i,G=k} = x^j_{i,G+1}   otherwise   (4)

Price and Storn [23] attempted various strategies in DE based on the vector to be perturbed, the number of difference vectors considered for perturbation, and the type of crossover. However, strategy 7 (DE/rand/1/bin) is the most successful and most widely used strategy. In this work, strategy 7 ("Appendix") was attempted. The detailed algorithm (pseudocode) is available in the literature [18–20].

3 Neural network training using differential evolution algorithm

The differential evolution algorithm is a heuristic method for optimizing nonlinear and nondifferentiable continuous-space functions. Hence, it can be applied to global searches within the weight space of a typical neural network. In this work, the most popular feed-forward MLPNN is used. Training an MLPNN to recognize objects is typically realized by adopting an error correction strategy that adjusts the network weights through minimization of the learning error:

E = E(Y_0, Y)   (5)

where Y is the real output vector of the MLPNN and Y_0 is the target output vector; Y is a function of the synaptic weights W and the input values X. In the MLPNN, the input vector X and the target output vector Y_0 are known, and the synaptic weights in W are adapted to obtain appropriate functional mappings from the input X to the output Y_0.
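The error in Eq. 5 is left abstract in the paper; a common concrete choice is the mean squared error over a data set. Below is a minimal sketch of an MLP forward pass with the weights flattened into one vector, so that DE can search over them directly. The tanh activation and the flattening layout are my assumptions for illustration (with a bounded activation, targets such as Ra would first be scaled to the activation range):

```python
import math

def forward(weights, x, layers):
    """Evaluate a fully connected MLP whose weights are flattened in one list.

    layers : e.g. (4, 6, 6, 1), the topology reported in Section 6.
    Each layer consumes (n_in + 1) * n_out weights (the +1 is the bias).
    """
    pos, a = 0, list(x)
    for n_in, n_out in zip(layers, layers[1:]):
        nxt = []
        for j in range(n_out):
            w = weights[pos:pos + n_in + 1]
            pos += n_in + 1
            s = w[n_in] + sum(w[i] * a[i] for i in range(n_in))  # bias + dot product
            nxt.append(math.tanh(s))
        a = nxt
    return a

def error(weights, data, layers):
    """E = E(Y0, Y): mean squared error between targets Y0 and outputs Y (Eq. 5)."""
    return sum((y0 - forward(weights, x, layers)[0]) ** 2
               for x, y0 in data) / len(data)
```

For the 4-6-6-1 network used later, the flattened weight vector has (4+1)·6 + (6+1)·6 + (6+1)·1 = 79 components, which is the dimension N of the DE search space.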
Normally, the adaptation can be carried out by minimizing the network error function E, i.e., the network training procedure. The optimization goal is to minimize the objective function E by optimizing the values of the network weights:

W = (w_1, w_2, ..., w_D)   (6)

Differential evolution maintains a population of constant size M of real-valued vectors λ^i_G, where i (i = 1, 2, ..., M) is the index into the population and G (G = 1, 2, ..., G_max) is the generation to which the population belongs:

P_G = (λ^1_G, λ^2_G, ..., λ^M_G)   (7)

Based on the differential evolution methodology discussed in Section 2, each individual of the population is compared with its counterpart in the current population, and the vector with the lower objective function value wins a place in the next generation's population. As a result, all the individuals of the next generation are as good as or better than their counterparts in the current generation. A schematic flow chart for the neural network training using the differential evolution algorithm is shown in Fig. 1.

4 Experimental setup for the measurement of surface image of a workpiece

The surface image of a workpiece is grabbed using a high-resolution digital camera through a frame grabber, as shown in Fig. 2. The captured image is processed in the computer using Matlab software (version 7). The image is digitized into a rectangular array of intensity values. Each array element, called a "pixel", corresponds to the mean intensity in a small rectangular area of the original image. These values are referred to as the gray levels of the corresponding pixels. Each pixel corresponds to a gray intensity level. The arithmetic average of the gray level Ga can be expressed as:

Ga = (1/n) Σ_{i=1}^{n} |g_i|   (8)

where g_i is the gray level of the surface image and n is the total number of pixels. The photographic view of the Rapid I machine vision system used for the image acquisition and subsequent processing is shown in Fig. 3.

5 Implementation of neural-network-trained differential evolution

The experimental work was carried out on a CNC lathe under different cutting conditions to investigate the surface roughness. EN-24 steel was used as the workpiece material and tungsten carbide as the cutting tool. Twenty-seven experiments were conducted for various sets of cutting conditions, i.e., cutting speed, feed rate, and depth of cut, as shown in Table 1. The ranges of the cutting parameters are given below:

Cutting speed (V): 42–201 m/min
Feed rate (F): 0.05–0.33 mm/rev
Depth of cut (D): 0.5–2.5 mm

The surface roughness of the turned workpiece was measured using an SE-1100 portable surfcorder, with a sampling length of 8 mm and a measurement speed of 0.5 mm/s. The surface roughness Ra is the arithmetic average of the absolute value of the heights of roughness irregularities measured from the mean value:

Ra = (1/n) Σ_{i=1}^{n} |y_i|   (9)

where y_i is the height of roughness irregularities from the mean value and n is the number of sampling data. To implement the proposed methodology, initially, a population of size M is randomly generated for the neural network weights. The error between the target output vector and the real output vector of the MLPNN is evaluated using the experimental data sets. Then, the differential evolution method is applied to train the neural network. From the initial population, a target vector and a base vector (r3) are chosen. Thereafter, two vectors (r1, r2) are selected randomly from the same population, and the weighted difference vector (r1 − r2) is computed. Then, the weighted difference is added to the base vector. In order to increase the diversity of the perturbed vectors, crossover is performed. After the crossover, a trial vector as shown in Eq. 3 is formed. This trial vector is then compared with the target vector using a greedy criterion to decide the better one. In this manner, a new population is generated and evaluated until the solution converges.

6 Results and discussion

The experimental data sets shown in Table 1 were used to train the neural network with the differential evolution algorithm using a C program. The crossover constant (CR) and scaling factor (F) were taken (optimal values obtained after different trials) as 0.8 and 0.5, respectively. Neural networks with different topologies were trained, and the optimal structure (4-6-6-1) was found by a trial-and-error approach for an error convergence of 0.01. Similarly, another neural network model was trained with the scaled conjugate backpropagation algorithm, and the optimal structure (4-5-5-1) was found. The number of
Fig. 1 Schematic flow chart for the neural network training using differential evolution algorithm (initialize the population of weights; evaluate the error E = E(Y0, Y); apply differential evolution: target vector, base vector, and weighted difference vector are combined, crossover forms the trial vector; select the trial or target vector; generate the new population; increment the generation and repeat until the solution converges)

Fig. 2 Schematic diagram of computer vision system for surface roughness measurement (adapted from Lee and Tarng [1])

Fig. 3 The photographic view of Rapid I computer vision system used to capture surface image of turned components
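The training procedure of Fig. 1 can be sketched end to end as below. This is an illustrative Python reconstruction, not the authors' C implementation: the objective function is passed in generically (the network error E of Eq. 5 would be plugged in), CR = 0.8 and F = 0.5 follow the values reported in Section 6, and the fixed generation budget stands in for the paper's error-threshold stopping rule:

```python
import random

def de_train(error_fn, n_weights, M=20, F=0.5, CR=0.8,
             generations=200, bounds=(-1.0, 1.0), seed=42):
    """Minimize error_fn over a weight vector using DE/rand/1/bin (Fig. 1)."""
    rng = random.Random(seed)
    lo, hi = bounds
    # initialize the population of weight vectors within bounds (Eq. 1)
    pop = [[lo + rng.random() * (hi - lo) for _ in range(n_weights)]
           for _ in range(M)]
    errs = [error_fn(ind) for ind in pop]
    for _ in range(generations):
        for i in range(M):                    # each member serves as a target vector
            r1, r2, r3 = rng.sample([j for j in range(M) if j != i], 3)
            i_rand = rng.randrange(n_weights)
            trial = []
            for k in range(n_weights):
                if rng.random() <= CR or k == i_rand:            # crossover (Eq. 3)
                    v = pop[r3][k] + F * (pop[r1][k] - pop[r2][k])  # mutation (Eq. 2)
                    # repair rule (Eq. 4): pull out-of-bounds genes back inside
                    if v < lo:
                        v = (pop[i][k] + lo) / 2
                    elif v > hi:
                        v = (pop[i][k] + hi) / 2
                    trial.append(v)
                else:
                    trial.append(pop[i][k])
            e = error_fn(trial)
            if e <= errs[i]:                  # greedy selection of trial vs. target
                pop[i], errs[i] = trial, e
    best = min(range(M), key=errs.__getitem__)
    return pop[best], errs[best]
```

Because the greedy selection never accepts a worse vector, the best error in the population is non-increasing from generation to generation, which is the convergence behavior the flow chart relies on.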
Table 1 Experimental turning data sets for training the neural network model

Cutting speed (V), m/min   Feed rate (F), mm/rev   Depth of cut (D), mm   Average gray level   Surface roughness (Ra), μm
 42                        0.05                    0.5                    10.31                 6.713
132                        0.05                    0.5                     8.39                 0.882
200                        0.05                    0.5                     7.78                 0.664
 42                        0.05                    1.5                    10.33                 7.176
132                        0.05                    1.5                     8.41                 0.887
200                        0.05                    1.5                     7.8                  0.825
 42                        0.05                    2.5                    10.34                 8.54
132                        0.05                    2.5                     8.41                 2.30
200                        0.05                    2.5                     7.81                 2.265
 42                        0.16                    0.5                    28.52                 9.0
132                        0.16                    0.5                    23.21                 7.02
200                        0.16                    0.5                    21.54                 6.713
 42                        0.16                    1.5                    28.58                 9.20
132                        0.16                    1.5                    23.26                 8.70
200                        0.16                    1.5                    21.58                 7.273
 42                        0.16                    2.5                    28.61                 9.62
132                        0.16                    2.5                    23.28                 8.934
200                        0.16                    2.5                    21.6                  7.81
 42                        0.33                    0.5                    53.73                16.461
132                        0.33                    0.5                    43.72                12.41
200                        0.33                    0.5                    40.57                10.50
 42                        0.33                    1.5                    53.85                17.98
132                        0.33                    1.5                    43.82                13.121
200                        0.33                    1.5                    40.66                11.71
 42                        0.33                    2.5                    53.91                21.32
132                        0.33                    2.5                    43.87                17.631
200                        0.33                    2.5                    40.7                 12.93
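Both derived columns in Table 1 are arithmetic means of absolute values: the average gray level Ga over pixel gray levels (Eq. 8) and Ra over profile heights measured from the mean line (Eq. 9). A minimal sketch of the shared computation (the function names are my own):

```python
def arithmetic_average(values):
    """(1/n) * sum(|v_i|): Ga over gray levels (Eq. 8) or Ra over
    profile heights already referenced to the mean line (Eq. 9)."""
    return sum(abs(v) for v in values) / len(values)

def ra_from_profile(heights):
    """Ra expects deviations from the mean line, so center the raw profile first."""
    mean = sum(heights) / len(heights)
    return arithmetic_average([h - mean for h in heights])
```

In practice `values` for Ga would be the flattened pixel array of the captured surface image, and `heights` for Ra the sampled stylus profile over the 8-mm sampling length.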

iterations used in the DE-based ANN was 5,000, whereas in the BP-based ANN it was 10,000. The computation time for the BP-based ANN was 4.8 min, whereas for the DE-based ANN it was only 3 min, which is nearly 38% less than that of the conventional BP-based ANN. The neural network models trained by the differential evolution algorithm and by backpropagation were validated using the eight sets of testing data shown in Table 2. To evaluate the performance, the predicted surface roughness values were compared with the experimental values and summarized in Table 3, and the performance with respect to percentage error is shown in Fig. 4. The average absolute percentage
Table 2 Testing data for the validation of developed models

Test no.   Cutting speed (V), m/min   Feed rate (F), mm/rev   Depth of cut (D), mm   Gray level (Ga)   Surface roughness (Ra), μm
1          132                        0.33                    0.5                    43.72             12.41
2          200                        0.33                    0.5                    40.57             10.50
3           42                        0.33                    1.5                    53.85             17.98
4          132                        0.33                    1.5                    43.82             13.121
5          200                        0.33                    1.5                    40.66             11.71
6           42                        0.33                    2.5                    53.91             21.32
7          132                        0.33                    2.5                    43.87             17.631
8          200                        0.33                    2.5                    40.7              12.93
Table 3 Comparison of measured and predicted surface roughness values

Test no.   Measured Ra   BP-based ANN   DEA-based ANN   Abs. % error (BP)   Abs. % error (DEA)
1          12.41         12.44          12.525          0.24                0.921
2          10.50         10.525         10.398          0.231               0.965
3          17.98         17.869         17.894          0.617               0.478
4          13.121        13.085         13.191          0.27                0.53
5          11.71         11.625         11.76           0.72                0.425
6          21.32         21.389         21.109          0.32                0.989
7          17.631        17.714         17.546          0.47                0.481
8          12.93         12.985         12.9            0.42                0.203
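The summary figures quoted after Table 3 can be reproduced directly from its error columns; a quick check with the values transcribed from the table:

```python
# absolute percentage errors from Table 3, tests 1-8
bp_errors = [0.24, 0.231, 0.617, 0.27, 0.72, 0.32, 0.47, 0.42]
dea_errors = [0.921, 0.965, 0.478, 0.53, 0.425, 0.989, 0.481, 0.203]

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(bp_errors), 2))   # 0.41 for the BP-based ANN
print(round(mean(dea_errors), 2))  # 0.62 for the DEA-based ANN
```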

error is 0.62 for the DE-based ANN and 0.41 for the BP-based ANN. However, unlike the BP-based ANN, the DE-based ANN is simple and robust. The solution accuracy greatly depends on the experimental setup and the image measurement system configuration. It is difficult to ensure absolute flatness during the image acquisition of turned components. This drawback can be eliminated by using a shadow removal algorithm.

Fig. 4 Performance of DE-based ANN with BP-based ANN (percentage error for each test number)

7 Conclusion

In this work, a novel attempt has been made to predict the surface roughness of turned components using a neural network model trained by a nontraditional optimization technique. It is observed that the error percentage of the DEA-based ANN is close to that of the BP-based ANN, and it is also found that the convergence time for the DEA-based ANN is less than that of the BP-based ANN. The proposed DE-based ANN model is simple, fast, and robust at numerical optimization. It is a powerful population-based, direct search algorithm for globally optimizing functions with real-valued parameters. The solution accuracy may be further enhanced by using a high-resolution frame-grabber card in the machine vision system and with the use of a shadow removal algorithm.

Appendix

Price and Storn [23] suggested ten different working strategies of DE and some guidelines for applying these strategies to any given problem. The strategies vary based on the vector to be perturbed, the number of difference vectors considered for perturbation, and the type of crossover used. The following are the ten working strategies proposed by Price and Storn:

1. DE/best/1/exp
2. DE/rand/1/exp
3. DE/rand-to-best/1/exp
4. DE/best/2/exp
5. DE/rand/2/exp
6. DE/best/1/bin
7. DE/rand/1/bin
8. DE/rand-to-best/1/bin
9. DE/best/2/bin
10. DE/rand/2/bin

The general convention for the above strategies is DE/x/y/z, where DE represents differential evolution, x is a string denoting the vector to be perturbed, y is the number of difference vectors considered for perturbation of x, and z stands for the type of crossover used (exp, exponential; bin, binomial).

References

1. Lee B-Y, Tarng Y-S (2001) Surface roughness inspection by computer vision in turning operations. Int J Adv Manuf Technol 41:1251–1263
2. Ho SY et al (2002) Accurate modeling and prediction of surface roughness by computer vision in turning operations using an adaptive neuro-fuzzy inference system. Int J Mach Tools Manuf 42:1441–1446
3. Kiran MB et al (1998) Evaluation of surface roughness by vision system. Int J Mach Tools Manuf 38(5–6):685–690
4. Dhanasekaran B et al (2008) Evaluation of surface roughness based on monochromatic speckle correlation using image processing. Precis Eng 32:196–206
5. Yamaguchi I et al (2004) Measurement of surface roughness by speckle correlation. Soc Photo-Optical Instrum Eng 43(11):2753–2761
6. Persson U (1993) Measurement of surface roughness on rough machined surfaces using speckle correlation and image analysis. Wear 160:221–225
7. Persson U (1992) Real time measurement of surface roughness on ground surfaces using speckle contrast technique. Opt Laser Eng 17:61–67
8. Shahabi HH, Ratnam MM (2009) Non-contact roughness measurement of turned parts using machine vision. Int J Adv Manuf Technol. doi:10.1007/s00170-009-2101-0
9. Shahabi HH, Ratnam MM (2008) Assessment of flank wear and nose radius wear from workpiece roughness profile in turning operation using machine vision. Int J Adv Manuf Technol. doi:10.1007/s00170-008-1688-x
10. Shahabi HH, Ratnam MM (2008) In-cycle monitoring of tool nose wear and surface roughness of turned parts using machine vision. Int J Adv Manuf Technol 40:1148–1157. doi:10.1007/s00170-008-1430-8
11. Zhongxiang H et al (2008) Evaluation of three dimensional surface roughness parameters based on digital image processing. Int J Adv Manuf Technol 40:342–348. doi:10.1007/s00170-007-1357-5
12. Eberhart RC, Shi Y (1998) Comparison between genetic algorithms and particle swarm optimisation. The 7th Annual Conference on Evolutionary Programming, San Diego, USA
13. Rajasekaran S, Vijayalakshmi Pai GA (1996) Genetic algorithm based weight determination for back-propagation networks. Proceedings of Fourth International Conference on Advanced Computing, pp 73–79
14. Kennedy J, Eberhart RC (1995) Particle swarm optimization. Proceedings of IEEE International Conference on Neural Networks IV, pp 1942–1948
15. Du J-X et al (2007) Shape reconstruction based on neural networks trained by differential evolution algorithm. Neurocomputing 70:896–903
16. Magoulas GD, Plagianakos VP, Vrahatis MN (2004) Neural network-based colonoscopic diagnosis using on-line learning and differential evolution. Appl Soft Comput 4(1):369–379
17. dos Santos Coelho L et al (2010) Model-free adaptive control design using evolutionary-neural compensator. Expert Syst Appl 37(1):499–508
18. Chauhan N, Ravi V, Chandra DK (2009) Differential evolution trained wavelet neural networks: application to bankruptcy prediction in banks. Expert Syst Appl 36(4):7659–7665
19. dos Santos Coelho L, Guerra FA (2008) B-spline neural network design using improved differential evolution for identification of an experimental nonlinear process. Appl Soft Comput 8(4):1513–1522
20. Basturk A, Gunay E (2009) Efficient edge detection in digital images using a cellular neural network optimized by differential evolution algorithm. Expert Syst Appl 36(2):2645–2650
21. Ilonen J et al (2003) Differential evolution training algorithm for feed-forward neural networks. Neural Process Lett 17:93–105
22. Zelinka I, Lampinen J (1999) An evolutionary learning algorithm for neural networks. Fifth International Conference on Soft Computing, MENDEL'99, pp 410–414
23. Storn R, Price K (2005) Differential evolution—a practical approach to global optimization. Springer, Berlin
24. Storn R, Price K (1997) Differential evolution—a simple evolution strategy for fast optimization. Dr Dobb's J 22(4):18–24
25. Storn R, Price K (1995) Differential evolution—a simple and efficient adaptive scheme for global optimization over continuous spaces. Technical Report TR-95-012, International Computer Science Institute, Berkeley, CA, USA
26. Mayer DG et al (2005) Differential evolution—an easy and efficient evolutionary algorithm for model optimisation. Agric Syst 83:315–328
27. Coello CA et al (2007) Evolutionary algorithms for solving multi-objective problems, 2nd edn. Springer, Heidelberg
28. Abbass HA, Sarker R (2002) The Pareto differential evolution algorithm. Int J Artif Intell Tools 11(4):531–552
29. Babu BV, Angira R (2002) A differential evolution approach for global optimization of MINLP problems. Proceedings of 4th Asia-Pacific Conference on Simulated Evolution and Learning (SEAL'02), Singapore, paper no. 1033, vol 2, pp 880–884, November 18–22
