
Mechanical Systems and Signal Processing 88 (2017) 100–110


Neural network approach for automatic image analysis of cutting edge wear

T. Mikołajczyk (a), K. Nowicki (b), A. Kłodowski (c), D.Yu. Pimenov (d)

(a) Department of Production Engineering, UTP University of Science and Technology, Al. prof. S. Kaliskiego 7, Bydgoszcz 85-796, Poland
(b) Department of Computer Methods, UTP University of Science and Technology, Al. prof. S. Kaliskiego 7, Bydgoszcz 85-796, Poland
(c) Laboratory of Machine Design, Lappeenranta University of Technology, Skinnarilankatu 34, Lappeenranta 53850, Finland
(d) Department of Automated Mechanical Engineering, South Ural State University, Lenin Prosp. 76, Chelyabinsk 454080, Russia

ARTICLE INFO

Keywords: Edge wear; Flank wear; Tool; Image analysis; Neural networks

ABSTRACT

This study describes an image processing system based on an artificial neural network for estimating tool wear. A Single Category-Based Classifier (SCBC) neural network was used to process tool image data. We present a method for determining the rate of tool wear based on image analysis, and discuss the evaluation of errors. Using the proposed algorithm, we developed dedicated Neural Wear software in Visual Basic for analysis of the worn part of the cutting edge. An image of a worn edge was first used to determine the optimum settings of the Neural Wear software for automatically indicating the wear area; the result of the analysis is the number of pixels belonging to the worn area. Using these settings, we performed image analysis of edge wear for different working times and calculated the correlation between the number of pixels and the VB index. The results show a good correlation between the new method and the commonly used optically measured VB index, with a mean absolute relative error of 6.7% over the tool's entire life range. Automatic detection of cutting edge wear can be useful in many applications, for example in predicting tool life based on the current value of edge wear.

1. Introduction

Analysis of tool wear conditions during machining is essential for predicting tool life and scheduling cutting edge replacement
before damage and catastrophic failure occur. Untimely replacement of the worn cutting edge
increases cutting forces [1,2], which, in turn, contribute to poor roughness and dimensional accuracy. As noted in [3], tool failure
constitutes 20% of CNC machine downtime. [4] shows the relationship of the durability of a machining tool with the wear and tear of
single abrasive grains. In [5], empirical models of cutting parameters and flank wear are determined using fuzzy logic for predicting
the tool breakage (tool life). Tool wear prediction provides processing parameter settings to achieve high quality of the machined
parts. For example, [6] describes the online monitoring and measurement of tool wear during turning of stainless steel parts to
improve the processing accuracy. [7] shows that the cutting force is the main factor affecting the distribution of errors in the
machining shafts, along with thermal expansion and tool wear. In [8], it is shown that the prediction of the tool wear and surface
roughness of a treated surface plays an essential role in the machining industry for the correct planning and control of processing
parameters and optimizing cutting conditions. In [9], the impact of the conditions of face milling and tool wear on the flank surface
of the tool on the machined surface roughness is discussed. Flexible machining centers, currently popular in industry, are used for


Corresponding author.
E-mail addresses: tami@utp.edu.pl (T. Mikołajczyk), krzysztof.nowicki@utp.edu.pl (K. Nowicki), Adam.Klodowski@lut.fi (A. Kłodowski),
danil_u@rambler.ru (D.Y. Pimenov).

http://dx.doi.org/10.1016/j.ymssp.2016.11.026
Received 20 July 2016; Received in revised form 28 October 2016; Accepted 19 November 2016
0888-3270/ © 2016 Elsevier Ltd. All rights reserved.

Nomenclature

VB       Flank wear
VBnn     Flank wear obtained with Neural Wear software
dVB      Absolute difference between the measured wear and that obtained with Neural Wear software
S = {s1, s2, s3, …, sn}   SCBC (Single Category-Based Classifier) classification vector
W = {w1, w2, w3, …, wn}   Vector of weighting factors
wi       Corresponding weight
si       Signal to be classified
φi       Size comparison function
fi       Binary representation of the relationship between φi and the threshold h
h        Threshold
max(si)  Maximum brightness in the i-th column of pixels in the training rows
μi       Average brightness value of the i-th column of pixels in the training rows
k        Number of rows of pixels belonging to the training examples
F = {f1, f2, f3, …, fn}   Binary vector
m        Neighborhood radius
δwI      Increment value
η        Rate of neural network learning
Λ        Neighborhood function that takes a value between 0 and 1
γ, α     Rake and back angles
λ        Angle of the main cutting edge
kr       Major cutting edge angle
εr       Tool included angle
rε       Corner radius
V        Cutting speed for turning
f        Feed rate
ap       Depth of cut
T        Tool life

the production of product series of different sizes. This complicates the prediction of tool failure based solely on operation time.
Other studies have described two main approaches to tool wear prediction: the forecast approach and the direct approach [10–13].
The forecast approach is based on tool wear evaluation using indirect observations such as processing time, the material and tool
parameters or the machining parameters. For example, in [14], a model of regression analysis of the flank wear on the processing
time is proposed, taking into account the feed, depth and cutting speed of the face milling. In [15], models are proposed on the basis
of a multifactorial experiment to predict the tool wear, depending on the properties of the workpiece and the cutting parameters. In
[16], the wear pattern on the rear surface of the cutter, the cutting force components, and the cutting parameters are analyzed using
functional analysis methodology. [17] suggests determining the plowing force necessary to control the wear degree of the cutting tool
by extrapolation of the chip thickness to zero. In [18], a mathematical model is presented of the main drive power expended in the
face milling adjusted for tool wear. [19] focuses on tool wear prediction limitations when milling CGI 450 plates through the
simultaneous detection of the acceleration and the spindle drive current sensor signals. In [20], a model of tool
wear monitoring systems is developed with particular emphasis on the module for collecting and processing the vibration
acceleration signal for turning, and so on.
The direct approach focuses on tool measurement. Prognostic methods are based on a statistical representation of the original
data, and provide an opportunity to evaluate tool wear without removing the tool from the machine, a key advantage. However,
prognostic methods often require complex models describing the relationship between the measured parameters and the tool wear
coefficients [21,22]. In addition, the use of prognostic methods in old machines requires expensive sensor installation. Direct
methods are based on observations of the actual tool wear and may require removal of the tool and, thus, stops during production to
assess the status of the cutting edge. However, they do not require real tool wear modeling.
Dependencies between tool wear and the process parameters are often non-linear and very complex [1,14,23–25]. Moreover,
they often should be customized to a particular machine and the tool. Thus, a number of methods to deal with the modeling of the
relationship between the input (the process data) and output (the tool wear) data has been developed. Methods of signal processing
are common, regardless of the point of the input data entry. Among the methods of processing, multiple regression [26], artificial
neural networks [27–34], neuro-fuzzy techniques [35] or genetic algorithms [36] are usually used. The most common signals used to
describe the turning process in prognostic methods include cutting forces [36–40], acoustic emission [41–43], acceleration
[44], spindle torque variation [45], or spindle motor current [46]. Tool images as input data for evaluation of tool wear
are typically used in direct approaches [46–53]. Among the indirect tool wear monitoring methods, the chip geometry can be used to
predict tool wear [54]; in that particular case, a genetic algorithm was applied. An effective method for monitoring the grinding
wheel wear by analyzing vibration signals was proposed in [55].
In general, experimental studies often neglect the possibility of using image analysis to study tool wear because of the low
productivity of classical image processing techniques. The use of artificial intelligence in combination with effective computer
technology makes it possible to improve the situation in this area. Neural networks are one of the most common implementations of
artificial intelligence [56]. They allow one to exploit the natural redundancy of image data. Their parallel processing, ability to
adapt to new working conditions, low sensitivity to noise and resistance to damage make them a promising
instrument for cutting tool wear analysis using image recognition. The method of using image analysis to evaluate tool wear is under
constant development. A comprehensive review of tool wear monitoring using the machine vision system is given elsewhere [3,46–
53]. In [47], images (photos) are used as the source of input data for estimating the cutter wear. In [46], the unsupervised neural
network is introduced for prompt detection of tool failure during machining using multiple sensors. [48] describes the use of digital
image processing techniques to analyze images of worn cutting tools in order to assess the degree of their wear and, thus, the


remaining service life. In [49], digital images of the cutting edge are used with the contour signatures of the wear region and the
neural network. [50] presents optical (video) monitoring of tool wear development; the methodology employs an artificial
neural network (NN) for the automatic detection of tool wear. In [51], various features are used to describe the tool wear images, and
the tool states in the wear degrees are classified. [52] proposes to monitor tool wear development in machining using an optical
sensor, followed by computer processing of the digital image of the cutting tool. That article used a classifier that could detect
actual damage to the tool but not the degree of wear; hence it cannot be used to decide on tool replacement
before damage occurs. In [53], a method is described based on computer
vision and a statistical training system, which helps in the assessment of the level of the cutting inserts' wear in order to determine
the time to replace them. Using geometric descriptors allows one to split the tool image into low, medium and high wear categories
with a success probability of 98.63% [53]. However, the algorithm presented in [51–53] requires manual image segmentation to
achieve these results. Automatic image segmentation using statistical analysis and basic filtering was achieved in [21] with good
results. The system was only unable to classify low wear tools.
The edge elements contacting with the workpiece and the chip during cutting are subject to wear. The wear origin differs,
depending on the workpiece material properties and the cutting edge processing parameters. Among the most commonly used
parameters characterizing tool wear, the average width of the flank wear (VB) is one that is easy to measure. The tool friction contact
to the processed object leads to increased heat generation in the relief surface of the cutting edge, which in extreme conditions can
even cause combustion. Crater wear, on the other hand, can occur on the tool's front surface due to chip formation [57,58]. It is
therefore important to have a detection system for identifying the state of the cutting edge wear during production.
Thus, the above analysis of the literature shows promising systems for the automatic measurement of cutting edge wear based on image analysis.
However, many of them require manual image segmentation to achieve good results. It is important to minimize errors in image
recognition when evaluating tool wear, and to make the process as simple as possible. Moreover, the process should be open to
automation, with the possibility of integration into the machine's CNC system. Software for image analysis can then be
used to determine the degree of wear and to make decisions about tool replacement before any actual damage occurs.
The objective of this study is to develop a system for automated measurement of the cutting edge wear by analyzing its image. For
this purpose, application software had to be developed to identify the state of the used cutting edge, and to distinguish its worn
surface. We use a system based on a single category unsupervised classifier (SCBC) of neural networks [46] to carry out the image
analysis of the tool wear. The aim is to detect the wear indicator automatically, so that it can be used to predict the service life
of the cutting tool in automated mode. Such wear measurement can be carried out using both classical algorithms
and artificial neural networks [26–28].

2. Materials and methods

2.1. Research problem

Wear develops on the initially uniform flank (Fig. 1a) as characteristic fields of edge wear (Fig. 1b), which an experienced
researcher identifies subjectively in order to determine the VB index of the flank surface. Here, we present a solution for
identifying the state of the used cutting edge and separating its worn surface.

2.2. Single category-based classifier neural network

Unsupervised training of the neural network was utilized here. Detection of the cutting edge state was performed by classifying the
tool's image area (Fig. 2). The image was segmented into two zones: normal (i.e. points outside the tool contact area) and
active (i.e. points lying within the tool contact area). The classification was solely based on the normal category description. In other
words, the part of the image not belonging to the normal category was automatically assigned as the active category. This mode of the
neural network operation is called Single Category-Based Classifier (SCBC) [27].
Identifying the image zone categories in the network is performed by SCBC vector classification S={s1, s2, s3, …, sn}, containing
information about the brightness of a single row of pixels (Fig. 2). Each pixel of the image was then assigned one of the two possible

Fig. 1. View of cutting edge's flank surface of: a) new tool, b) used tool (work time T=16 min, flank wear VB=0.406 mm).


Fig. 2. Image parameters processing cycle.

categories.
The neural network is defined by the vector of weighting factors W = {w1, w2, w3, …, wn}. The classification is made by
measuring the similarity of the brightness of each point si to the prototype value wi, based on the size comparison function
φi, defined as:

φi = (si − wi)² / wi²        (1)
where si is the signal to be classified, and wi is the corresponding weight. The signal is classified as normal if φi is less than or
equal to the threshold h, as shown in Eq. (2).
fi = 0, if φi ≤ h
fi = 1, if φi > h        (2)

where fi is a binary representation of the relationship between φi and the threshold h. The threshold h is defined as:
h = (1/n) Σ_{i=1..n} [max(si) − μi]² / μi²        (3)

where: max(si ) is the maximum brightness in the i-th column of pixels in rows belonging to the group of examples from the training
set; and μi is the average brightness value of the i-th column of pixels in rows belonging to the group of training examples, defined as:
μi = (1/k) Σ_{j=1..k} sj,i        (4)

where: k is the number of rows of pixels belonging to the training examples.
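As an illustration of Eqs. (1)–(4), the training and classification steps can be sketched as follows. This is hypothetical Python, not the original Visual Basic implementation; the function and array names are ours:

```python
import numpy as np

def train_threshold(training_rows):
    """Eqs. (3)-(4): from the k training rows (a k x n array of pixel
    brightness), compute the per-column mean brightness mu and the
    scalar threshold h."""
    rows = np.asarray(training_rows, dtype=float)
    mu = rows.mean(axis=0)                      # Eq. (4): column-wise mean
    s_max = rows.max(axis=0)                    # max brightness per column
    h = np.mean((s_max - mu) ** 2 / mu ** 2)    # Eq. (3)
    return mu, h

def classify_row(s, w, h):
    """Eqs. (1)-(2): compare one row of brightness s against the
    weight (prototype) vector w; return the binary vector f
    (0 = 'normal', 1 = 'active')."""
    s = np.asarray(s, dtype=float)
    w = np.asarray(w, dtype=float)
    phi = (s - w) ** 2 / w ** 2                 # Eq. (1)
    return (phi > h).astype(int)                # Eq. (2)
```

For example, two training rows [[90, 110], [110, 90]] give mu = [100, 100] and h = 0.01, so a pixel of brightness 150 compared against a weight of 100 yields φ = 0.25 > h and is classified as active.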


After classifying the brightness of a single line of the image, each line is represented as a binary vector F={f1, f2, f3, … fn}. The
vector F is further processed using the voting algorithm. For each value of fi, the following condition is checked to determine whether
fi is similar to its neighborhood. If fi is in the minority of its neighbours, its value is updated according to Eq. (5).
fi = 0, if (1/(2m+1)) Σ_{j=i−m..i+m} fj ≤ 0.5
fi = 1, if (1/(2m+1)) Σ_{j=i−m..i+m} fj > 0.5        (5)

where: m is the neighborhood radius.


Eq. (5) determines if at least 50% of the points in the m radius surrounding the test pixels (in a single line), belong to the category
normal. If the condition is satisfied, the point is assigned the category “normal”; otherwise, it is considered as “active”. It is worth
noting that the changes are applied to the F vector only after complete vector processing. In a programming sense, this means that a
copy of the F vector is made before the voting; then it is used for evaluating Eq. (5) while the changes are applied to the original
vector. This allows biasing the votes by the previous voting results to be avoided.
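The voting of Eq. (5), including the frozen copy of F that keeps the votes unbiased, might look like this. This is an illustrative Python sketch; at the image borders the window is simply clipped, which is our own convention, as the text does not specify border handling:

```python
import numpy as np

def vote(f, m):
    """Eq. (5): majority voting over a neighborhood of radius m.
    Votes are always taken from a frozen copy of f, so that values
    already updated do not bias the decisions for later pixels."""
    f = np.asarray(f, dtype=int)
    f_copy = f.copy()          # frozen copy used for all the votes
    out = f.copy()
    n = len(f)
    for i in range(n):
        lo, hi = max(0, i - m), min(n, i + m + 1)  # clipped at borders
        frac = f_copy[lo:hi].mean()                # fraction of 'active' votes
        out[i] = 0 if frac <= 0.5 else 1
    return out
```

With m = 1, an isolated active pixel in [0, 1, 0, 0, 0] is outvoted and reset to normal, while an isolated normal pixel in [1, 1, 0, 1, 1] is absorbed into the active region.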
After the classification of all the measurement vectors F is complete, the weight vector W is updated to adapt it to the
changed lighting conditions (within a single image) and to errors associated with distortions or imperfections of the image
acquisition. The weight vector is conditioned according to the following procedure.
wi is updated using the initial value wI and the increment value δwI as described in Eq. (6), where the increment value is defined in
Eq. (7).


wi = wI + δwI (6)

δwI = ηΛ (sI − wI), for fI = 0
δwI = 0,            for fI ≠ 0        (7)
where η determines the rate of neural network learning, and Λ is the neighborhood function that takes a value between 0 and 1; 0
corresponds to noise absence, while 1 represents noisy data. In practice, Λ does not exceed 0.5.
The SCBC neural network supports noise suppression through contrast enhancement of the input values. Adaptation of the
weight vector leads to increased uniformity of classification between the image lines. Owing to the properties of the network, if the
majority of the measured values is classified as “normal”, then the weights corresponding to erroneous
measurements are adapted so that, for identical signals, the probability of classifying all the signals as “normal” increases. The
contrast enhancement procedure can be described using Eq. (8).
∀ i ∈ [I − m, I + m], i ≠ I:  wi = wi + δwi        (8)
In the learning algorithm (Eqs. (6) and (7)), I iterates from 1 to n in order to adapt all the weights wi. Each wi is updated based on
the neighboring values and its initial value, as shown in Eqs. (7) and (8).
The adaptive algorithm presented in Eqs. (6)–(8) allows the best possible weight vector to be continuously updated and maintained.
Ideally, the adaptation would be performed using all past measurement vectors; however, each analyzed image vector increases the
number of lines that need to be processed, so the algorithm would slow down with each pass. Therefore, only the first lines of the
image (the k training rows) are used for the neural network training.
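One possible reading of the adaptation step, Eqs. (6)–(8), is sketched below: the column classified as "normal" receives the full increment η(sI − wI), and its m neighbors on each side receive the increment attenuated by Λ. This interpretation and the names are ours; the defaults η = 0.1, Λ = 0.5 and m = 2 follow Section 2.3:

```python
import numpy as np

def adapt_weights(w, s, f, eta=0.1, lam=0.5, m=2):
    """Eqs. (6)-(8): move the weight vector w toward the measured
    brightness s wherever the pixel was classified 'normal' (f == 0),
    and propagate an attenuated update to the m neighbors on each side."""
    w = np.asarray(w, dtype=float).copy()
    s = np.asarray(s, dtype=float)
    n = len(w)
    for I in range(n):
        if f[I] != 0:
            continue                          # Eq. (7): no update if 'active'
        dw = eta * (s[I] - w[I])              # increment for the center column
        w[I] += dw                            # Eq. (6)
        for i in range(max(0, I - m), min(n, I + m + 1)):
            if i != I:
                w[i] += lam * dw              # Eq. (8): neighborhood update
    return w
```

For w = [100, 100, 100], s = [110, 100, 100] and f = [0, 1, 1], only the first column is adapted (by +1.0) and its neighbors receive +0.5 each.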

2.3. Image analysis

After completion of the experimental part of the study, the images were analyzed using the artificial neural network software
Neural Wear developed in Visual Basic environment for the purpose of this project. The number of pixels in the image representing
the worn part of the tool cutting edge was used as a measure for tool wear. The images were analyzed using the following algorithm
(see Fig. 3 for graphical representation):

• Determining the tool wear search area within the image. The number of rows in the training cycle was set to 50, the neighborhood
radius was set to 2, and the scanning direction used was top-to-bottom,
• Setting brightness threshold level, h, for each pixel column in the processing area using the relationship from Eq. (3),
• The weight vectors, w, are initialized with the maximum pixel grayscale levels in the corresponding lines,
• For all pixels in a single row, the comparative φi function is evaluated according to Eq. (1).
• Eq. (2) is evaluated,
• The weight updating is performed for those columns of pixels in which tool wear was not detected, using Eqs. (6) and (7). The
neural network learning rate coefficient, η, was assumed to be 0.1.
• Weight updating is performed in the columns adjacent to the column where tool wear has not been detected using Eq. (8),
assuming that the neighborhood function, Λ, has a value of 0.5. The neighboring radius is set to 2. This means that the weights of
the two columns adjacent on the left and right sides of the column where tool wear was not detected, are affected.
• Finally, each pixel is assigned to “normal” or “active” category using the voting procedure. A neighborhood radius of 2 was used in
this procedure. The pixel was assigned to the same category as the majority of the neighboring pixels.
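The steps above can be combined into a single per-image routine. The sketch below is an illustrative, self-contained Python reconstruction of the algorithm (the original Neural Wear software was written in Visual Basic); the names are ours, and the parameter defaults follow the values given in the text:

```python
import numpy as np

def count_wear_pixels(img, k=50, m=2, eta=0.1, lam=0.5):
    """Classify each pixel of a grayscale image (rows x columns) as
    'normal' (0) or 'active'/worn (1) and return the worn-pixel count.
    The first k rows are assumed to show unworn tool surface and are
    used to train the network."""
    img = np.asarray(img, dtype=float)
    train = img[:k]
    mu = train.mean(axis=0)
    h = np.mean((train.max(axis=0) - mu) ** 2 / mu ** 2)   # Eq. (3)
    w = train.max(axis=0).copy()    # weights start at max grayscale levels
    worn = 0
    for s in img[k:]:
        phi = (s - w) ** 2 / np.maximum(w, 1e-9) ** 2      # Eq. (1)
        f = (phi > h).astype(int)                          # Eq. (2)
        # weight update for 'normal' columns and their m neighbors
        for I in np.flatnonzero(f == 0):
            dw = eta * (s[I] - w[I])
            w[I] += dw                                     # Eqs. (6)-(7)
            for i in range(max(0, I - m), min(len(w), I + m + 1)):
                if i != I:
                    w[i] += lam * dw                       # Eq. (8)
        # majority voting, Eq. (5), on a frozen copy of f
        fc = f.copy()
        for i in range(len(f)):
            lo, hi = max(0, i - m), min(len(f), i + m + 1)
            f[i] = 1 if fc[lo:hi].mean() > 0.5 else 0
        worn += int(f.sum())
    return worn
```

On a synthetic 60×10 image that is uniformly bright except for a block of bright pixels below the training rows, the routine counts only the pixels of that block (after voting has trimmed the block edges).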

In addition to the algorithmic parameters, the program also enables the definition of parameters that extend the possibilities of the
algorithm (Fig. 4). These include the selection of the analysis area and the scanning direction of the image data, i.e. top-to-bottom
or the reverse.
The parameters of the Neural Wear software (Fig. 4) define the analysis area by x, y, width and height, where x
and y are the coordinates of the starting point. The width is related to the width of the wear trace, which is linked to the depth
of cut; since the purpose of the program is to determine the number of pixels used for calculating the value of VB, the analysis width is
set according to the width of the wear, while the height of the area is chosen to cover the view of the wear area.
To expedite the processing, the analysis window should be selected in proportion to the wear area. The number of pixels in the area of

Fig. 3. Single pixel analysis algorithm.


Fig. 4. Image processing parameter setup in the Neural Wear application.

Fig. 5. Measurement system data flow diagram.

learning was assumed to be 50, based on initial trials; this of course also depends on the image magnification of the wear area used.
The values of the other parameters for the test conditions were determined from the initial sample.

2.4. Experimental setup and measurement protocol

The cutting edge image was recorded using the camera type CCD-FS-5612p at 684×512 resolution (S-VHS PAL, without
overscan), digitized using the Aver card utilizing Aver compression module for the image compression (Fig. 5). An optical camera
zoom of 100× was used. Finally, the image was converted to a 256-level grayscale Windows bitmap (BMP) file. The image analysis
was performed on a PC. Self-developed software was used to simulate the operation of the SCBC neural network. For the turning
process experiment, a modified universal lathe SNB400 (Romania) was used, which was equipped with a variable spindle speed
control utilizing the TPC 60 inverter.
Experiments of machining were performed on the specimens made of C45 carbon steel in the normalized state. The workpiece
was a cylinder of 120 mm diameter. The chemical compositions of the selected materials are specified in Table 1.
All cutting parameters employed for the turning operations, and the characteristics of the used tool edges, are specified in
Table 2.
The cutting edge image was acquired before the turning process and digitized. Turning was carried out in runs of 1–2 min, depending
on the tool's total operation time: one-minute turning series were used for the first minutes of operation, two-minute sessions
between 4 and 12 min of the tool's life, and one-minute series again after 12 min.
After each work cycle, the cutting insert was removed from the holder and moved to the imaging station, where digitization of
flank and rake sides of the insert was performed. The resulting image files were then saved to the hard drive of the computer. At the
same time, the VB wear indicator value was read.
Measurements were carried out until the cutting edge-life indicator VB reached a value of 0.4 mm. Three cutting inserts were
tested consecutively using this method.

3. Results and discussion

The flank surface of the cutting edge was analyzed using Neural Wear software. The analysis was conducted in two stages. At
first, the image processing parameters were analyzed. Then, the software was used for automatic edge state assessment.

3.1. Determination of Neural Wear image processing parameters

To determine the image processing parameters, the worn edge image (after 16 min of machining) was used (Fig. 1b). The
learning cycle was set to 50 pixels, and the width and height of the area were both set to 200 pixels. Top-to-bottom and bottom-up
scanning directions were tested with an 8-pixel neighborhood (Fig. 6). The edge wear zone is marked in white, and the

Table 1
Chemical composition of the machined workpiece material (%).

Material | Microstructure   | C    | Mn   | Si   | Cr   | Ni   | S
C45      | Pearlite+Ferrite | 0.45 | 0.65 | 0.25 | 0.20 | 0.20 | 0.04

Table 2
Specifications of machining conditions.

Workpiece material:   C45
Machining operation:  longitudinal turning with S20 uncoated carbide

Cutting parameters:
Rake angle, γ:                      −6 deg
Back angle, α:                      6 deg
Angle of the main cutting edge, λ:  −6 deg
Major cutting edge angle, kr:       45 deg
Tool included angle, εr:            90 deg
Corner radius, rε:                  0.4 mm
Cutting speed for turning, V:       220 m/min
Feed rate, f:                       0.067 mm/rev
Depth of cut, ap:                   0.6 mm

Fig. 6. Example of worn region detection on the cutting edge using NeuralWear software with 8 pixel neighborhood, and different scanning directions: a) top-bottom,
b) bottom-up.

undamaged part of the edge is marked in black within the test area, in Fig. 6.
The results indicate that the software correctly recognizes the cutting edge when the training set contains it; however, the
background recognition is characterized by a significant uncertainty level. To mitigate the problem, the background was changed to a
solid black color. The results of scanning with neighborhood size of 2–14 pixels are presented in Fig. 7.
The influence of the neighborhood size on the detected worn edge area, for the images in Fig. 7, is summarized in Fig. 8.
Fig. 8 shows that as the neighborhood size increases from 2 to 14 pixels, the detected worn edge area decreases, because false
pixels are eliminated (Fig. 7a–f). The minimum value of 16,346 pixels is reached for a neighborhood size of eight pixels.
Based on this analysis, an eight-pixel neighborhood was used in the next stage
of analysis. Replacing the background color with black was another modification to the algorithm that was found to improve the
output quality.
We also analyzed the conditions for acquiring an image of edge wear. The developed algorithm accounts for possible errors through
the use of neighborhood analysis. The initial trials to determine the settings of the Neural Wear software made it
possible to define the most favorable operating conditions, i.e. those ensuring adequate measurement with the minimum
number of false pixels. In the tests conducted, no effects of surface preparation or edge blurring were found. Naturally, the
developed program cannot eliminate the impact of built-up edge, which only an observer can exclude from the
measurements. To demonstrate the effectiveness of the Neural Wear program, results of edge wear image analysis are also
presented for shorter run times.

3.2. Image-based tool's edge wear analysis

Analysis of the test set was performed using images of the cutting edge with the background color replaced by black. In the test,
an eight-pixel neighborhood and the top-to-bottom scanning direction were used. Examples of analyzed images of flank
wear for different usage times, before and after the analysis, are presented in Fig. 9.
In many cases, the Neural Wear software performed convincingly in determining the wear area (Fig. 9d –
6 min, Fig. 9h – 14 min), as in the example of Fig. 7d for 16 min. However, in some cases a number of pixels was omitted, especially


Fig. 7. Influence of neighborhood (ngh) size on the image segmentation results: a) ngh – 2 px, worn edge – 16711 px, b) ngh– 4 px, worn edge – 16528 px, c) ngh – 6
px, worn edge – 16371 px, d) ngh– 8 px, worn edge – 16346 px, e) ngh – 10 px, worn edge – 16360 px, f) ngh – 14 px, worn edge – 16349 px.

Fig. 8. Results of neighborhood size influence on the worn edge area.

for Fig. 9b – 2 min. The likely reason is sub-optimal lighting; in subsequent trials, better lighting of the flank wear will be
required. Relatively seldom did classified pixels not actually belong to the flank wear (Fig. 9f – 10 min and Fig. 9h – 14 min), and
their number was minimal.
The test results obtained for the first insert for the different usage times in the experiment are shown in
Table 3. With a constant width of the scanning area, the pixel count was proportional to the manually obtained VB
parameter, as can be seen in Table 3. To ease the comparison, both results are normalized and offset in the graph; the offset was
determined using a least-squares fit of the data. Using a proportionality coefficient relating the number of pixels of the wear area
and the width of the window, VBnn (Table 3) was calculated using the following relation:

VBnn = 0.005107 × (number of pixels / width)        (9)

The quotient of the number of pixels by the window width gives the average number of pixels proportional to the VB index.
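Eq. (9) reduces to a one-line conversion; for instance, the 16-min case of Table 3 (16,346 worn pixels in a 200 px wide window) reproduces VBnn = 0.417 mm. The Python function below is illustrative and its name is ours:

```python
def vb_from_pixels(num_pixels, width_px, coeff=0.005107):
    """Eq. (9): convert a worn-area pixel count to the VBnn index (mm).
    The coefficient 0.005107 is the calibration value reported in the text."""
    return coeff * num_pixels / width_px

# 16-min case from Table 3: 16,346 worn pixels, 200 px window width
print(round(vb_from_pixels(16346, 200), 3))  # -> 0.417
```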
The wear indices VB and VBnn (Table 3), the latter expressed from the number of pixels, are compared in Fig. 10. The average and
maximum absolute differences between the measured and computed data are 0.016 mm and 0.046 mm, respectively.
Larger differences between the manual measurements and the values calculated from the number of pixels appeared when there
were problems with the lighting of the wear area (Table 3 and Fig. 10: cases of 2 and 10 min). In one case, a higher value of VBnn was
obtained due to bad pixels (Table 3 and Fig. 10: case of 14 min). The maximum value of relative error was 20.6% for the case of
2 min (Table 3), while the average value of relative error was 6.7%.
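The quoted error figures can be reproduced directly from the relative errors listed in Table 3 (illustrative Python):

```python
# Relative errors (%) from Table 3, for T = 2..16 min
rel_err = [-20.6, -3.1, -2.6, 3.1, -10.2, -0.7, 10.2, 2.8]

mean_abs = sum(abs(e) for e in rel_err) / len(rel_err)
max_abs = max(abs(e) for e in rel_err)
print(f"mean |error| = {mean_abs:.1f}%, max |error| = {max_abs:.1f}%")
# -> mean |error| = 6.7%, max |error| = 20.6%
```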



Fig. 9. Example of edge images for values of usage time before and after analysis (neighborhood 8 pix, width=200 px): a) before analysis, time – 2 min, b) after
Neural Wear, time – 2 min, c) before analysis, time – 6 min, d) after Neural Wear, time – 6 min, e) before analysis, time – 10 min, f) after Neural Wear, time –
10 min, g) before analysis, time – 14 min, h) after Neural Wear, time – 14 min.

Fig. 10. Comparison of tool wear measured as VB and as the normalized image pixel count.

4. Conclusions

We have eliminated the requirement of manual image segmentation, which allowed us to move from recognizing only a couple of wear
categories to a continuous (linear) tool-life index.
However, it has to be noted that this particular neural network configuration is not guaranteed to meet the requirements
under all possible conditions. Among the variables that need to be taken into account, the lighting of the tool edge during image
acquisition needs to be standardized to achieve repeatable measurements and to eliminate lighting variance related to daylight


Table 3
Tool's cutting edge wear indicators resulting from optical measurements and wear image analysis.

Tool life,   Flank wear,   Wear area,          Flank wear,   Abs. diff.   Relative
T, min       VB, mm        number of pixels    VBnn, mm      dVB, mm      error, %
 2.00        0.224          6,963              0.178         −0.046       −20.6
 4.00        0.266         10,096              0.258         −0.008        −3.1
 6.00        0.287         10,945              0.279         −0.008        −2.6
 8.00        0.322         13,000              0.332          0.010         3.1
10.00        0.343         12,070              0.308         −0.035       −10.2
12.00        0.364         14,152              0.361         −0.003        −0.7
14.00        0.378         16,318              0.417          0.039        10.2
16.00        0.406         16,346              0.417          0.011         2.8

changes. We propose using only artificial light, so that the time of day at which the experiment is carried out has no effect on
the image brightness. It is also advisable to use several light sources instead of a single one with an adjustable position,
to reduce reflections on the object's edges. A good contrast between the background and the worn edge should simplify automatic
feature recognition and segmentation of the image. These improvements could allow the parameters of the neural network to be
determined automatically. In future work, the entire image, rather than a single image line, will be used to characterize cutting edge wear.
The results of our analyses show that our method of tool wear image analysis is also suitable for tools with low wear.
Our results show a good correlation between the new method and the commonly used optically measured VB index, with an
absolute mean relative error of 6.7% over the entire lifetime of the tools.
Increasing the size of the pixel neighborhood makes it possible to eliminate errors in the classification of pixels in the wear zone.
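One way to realize such a neighborhood check is a majority filter over the binary pixel classification produced by the network. The sketch below is our Python illustration under assumed data structures, not the Neural Wear implementation:

```python
def majority_filter(mask, radius=1):
    """Reclassify each pixel by a majority vote of its square neighborhood.

    `mask` is a 2-D list of 0/1 wear-classification labels; isolated
    misclassified pixels are flipped to the label of their surroundings.
    """
    rows, cols = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for r in range(rows):
        for c in range(cols):
            ones = total = 0
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        total += 1
                        ones += mask[rr][cc]
            out[r][c] = 1 if 2 * ones > total else 0
    return out
```

With radius=1 the vote covers a 3×3 window, i.e. the 8-pixel neighborhood of Fig. 9; a larger radius removes larger clusters of misclassified pixels at the cost of smoothing the wear-zone boundary.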
Automatic detection of cutting edge wear can be useful in many applications, for example when the tool user replaces the
worn portion of the edge that influences the surface roughness [59,60].

References

[1] J.M. Zhou, M. Andersson, J.E. Stahl, The monitoring of flank wear on the CBN tool in the hard turning process, Int. J. Adv. Manuf. Technol. 22 (2003) 697–702.
http://dx.doi.org/10.1007/s00170-003-1569-2.
[2] N. Ghosh, Y.B. Ravi, A. Patra, S. Mukhopadhyay, S. Paul, A.R. Mohanty, A.B. Chattopadhyay, Estimation of tool wear during CNC milling using neural network-
based sensor fusion, Mech. Syst. Signal Process. 21 (2007) 466–479. http://dx.doi.org/10.1016/j.ymssp.2005.10.010.
[3] S. Kurada, C. Bradley, A review of machine vision sensors for tool condition monitoring, Comput. Ind. 34 (1997) 55–72. http://dx.doi.org/10.1016/S0166-
3615(96)00075-9.
[4] Yu. Novoselov, S. Bratan, V. Bogutsky, Analysis of relation between grinding wheel wear and abrasive grains wear, Procedia Eng. 150 (2016) 809–814. http://
dx.doi.org/10.1016/j.proeng.2016.07.116.
[5] P. Kovač, D. Rodić, B. Savković, V. Pucovsky, M. Gostimirović, Modelling tool wear in avoiding the risk of breakage by using fuzzy logic [Modeliranje habanja
alata radi izbegavanja rizika od loma primenom fazi logike], Struct. Integr. Life 15 (2015) 103–109.
[6] T.-I. Liu, S.-D. Song, G. Liu, Z. Wu, Online monitoring and measurements of tool wear for precision turning of stainless steel parts, Int. J. Adv. Manuf. Technol.
65 (2013) 1397–1407. http://dx.doi.org/10.1007/s00170-012-4265-2.
[7] K. Fan, J. Yang, H. Jiang, W. Wang, X. Yao, Error prediction and clustering compensation on shaft machining, Int. J. Adv. Manuf. Technol. 58 (2012) 663–670.
http://dx.doi.org/10.1007/s00170-011-3421-4.
[8] S.M. Ali, N.R. Dhar, Tool wear and surface roughness prediction using an artificial neural network (ANN) in turning steel under minimum quantity lubrication
(MQL), World Acad. Sci. Eng. Technol. 62 (2010) 830–839.
[9] D.Yu Pimenov, Experimental research of face mill wear effect to flat surface roughness, J. Frict. Wear 35 (2014) 250–254. http://dx.doi.org/10.3103/
S1068366614030118.
[10] A.G. Rehorn, J. Jiang, P.E. Orban, State-of-the-art methods and results in tool condition monitoring: A review, Int. J. Adv. Manuf. Technol. 26 (2005) 693–710.
http://dx.doi.org/10.1007/s00170-004-2038-2.
[11] S. Pal, P.S. Heyns, B.H. Freyer, N.J. Theron, S.K. Pal, Tool wear monitoring and selection of optimum cutting conditions with progressive tool wear effect and
input uncertainties, J. Intell. Manuf. 22 (22) (2011) 491–504. http://dx.doi.org/10.1007/s10845-009-0310-x.
[12] C. Scheffer, H. Engelbrecht, P.S. Heyns, A comparative evaluation of neural networks and hidden Markov models for monitoring turning tool wear, Neural
Comput. Appl. 14 (2005) 325–336. http://dx.doi.org/10.1007/s00521-005-0469-9.
[13] B. Sick, On-line and indirect tool wear monitoring in turning with artificial neural networks: a review of more than a decade of research, Mech. Syst. Signal
Process. 16 (2002) 487–546. http://dx.doi.org/10.1006/mssp.2001.1460.
[14] D.Yu Pimenov, The effect of the rate flank wear teeth face mills on the processing, J. Frict. Wear 34 (2013) 156–159. http://dx.doi.org/10.3103/
S1068366613020104.
[15] D.V. Manoshin, Tool wear and surface quality in the turning of precision alloys, Russ. Eng. Res. 34 (2014) 539–541. http://dx.doi.org/10.3103/
S1068798X14080085.
[16] S. Jozić, B. Lela, D. Bajić, A new mathematical model for flank wear prediction using functional data analysis methodology, Adv. Mater. Sci. Eng. (2014). http://
dx.doi.org/10.1155/2014/138168.
[17] A. Dugin, A. Popov, Increasing the accuracy of the effect of processing materials and cutting tool wear on the ploughing force values, Manuf. Technol. 13 (2013)
169–173.
[18] D.Yu Pimenov, Mathematical modeling of power spent in face milling taking into consideration tool wear, J. Frict. Wear 36 (2015) 45–48. http://dx.doi.org/
10.3103/S1068366615010110.
[19] P. Stavropoulos, A. Papacharalampopoulos, E. Vasiliadis, G. Chryssolouris, Tool wear predictability estimation in milling based on multi-sensorial data, Int. J.
Adv. Manuf. Technol. 82 (2016) 509–521. http://dx.doi.org/10.1007/s00170-015-7317-6.
[20] A. Antić, G. Šimunović, T. Šarić, M. Milošević, M. Ficko, A model of tool wear monitoring system for turning [Model sustava za klasifikaciju trošenja alata pri
obradi tokarenjem], Teh. Vjesn. 20 (2013) 247–254.
[21] M. Sortino, Application of statistical filtering for optical detection of tool wear, Int. J. Mach. Tools Manuf. 43 (2003) 493–497. http://dx.doi.org/10.1016/
S0890-6955(02)00266-3.
[22] V.I. Guzeev, D.Yu Pimenov, Cutting force in face milling with tool wear, Russ. Eng. Res. 31 (2011) 989–993. http://dx.doi.org/10.3103/S1068798X11090139.


[23] S. Kalidass, P. Palanisamy, V. Muthukumaran, Prediction and optimisation of tool wear for end milling operation using artificial neural networks and simulated
annealing algorithm, Int. J. Mach. Mach. Mater. 14 (2013) 142–164. http://dx.doi.org/10.1504/IJMMM.2013.055734.
[24] J. Kundrák, A.P. Markopoulos, T. Makkai, Assessment of tool life and wear intensity of CBN tools in hard cutting, Key Eng. Mater. 686 (2016) 1–6. http://
dx.doi.org/10.4028/www.scientific.net/KEM.686.1.
[25] A. Attanasio, E. Ceretti, C. Giardini, Analytical models for tool wear prediction during AISI 1045 turning operations, Procedia CIRP 8 (2013) 218–223. http://
dx.doi.org/10.1016/j.procir.2013.06.092.
[26] J. Lin, D. Bhattacharyya, V. Kecman, Multiple regression and neural networks analyses in composites machining, Compos. Sci. Technol. 63 (2003) 539–548.
http://dx.doi.org/10.1016/S0266-3538(02)00232-4.
[27] G.C. Balan, A. Epureanu, The monitoring of the turning tool wear process using an artificial neural network. Part 2: the data processing and the use of artificial
neural network on monitoring of the tool wear, Proc. Inst. Mech. Eng. B: J. Eng. Manuf. 222 (2008) 1253–1262. http://dx.doi.org/10.1243/
09544054JEM1011.
[28] T. Özel, Y. Karpat, Predictive modelling of surface roughness and tool wear in hard turning using regression and neural networks, Int. J. Mach. Tools Manuf. 45
(2005) 467–479. http://dx.doi.org/10.1016/j.ijmachtools.2004.09.007.
[29] F. Cuš, U. Župerl, Real-time cutting tool condition monitoring in milling, Stroj. Vestn.-J. Mech. E 57 (2011) 142–150. http://dx.doi.org/10.5545/sv-
jme.2010.079.
[30] M.E. Nakai, P.R. Aguiar, H. Guillardi Jr., E.C. Bianchi, D.H. Spatti, D.M. D'Addona, Evaluation of neural models applied to the estimation of tool wear in the
grinding of advanced ceramics, Expert Syst. Appl. 42 (2015) 7026–7035. http://dx.doi.org/10.1016/j.eswa.2015.05.008.
[31] K. Patra, S.K. Pal, K. Bhattacharyya, Artificial neural network based prediction of drill flank wear from motor current signals, Appl. Soft Comput. 7 (2007)
929–935. http://dx.doi.org/10.1016/j.asoc.2006.06.001.
[32] D. D'Addona, T. Segreto, A. Simeone, R. Teti, ANN tool wear modelling in the machining of nickel superalloy industrial products, CIRP J. Manuf. Sci. Technol. 4
(2011) 33–37. http://dx.doi.org/10.1016/j.cirpj.2011.07.003.
[33] V.P. Legaev, L.K. Generalov, O.A. Galkovskii, Neural-network system for extremal control of the cutting precision, Russ. Eng. Res. 36 (2016) 565–570. http://
dx.doi.org/10.3103/S1068798X16070145.
[34] S. Achiche, M. Balazinski, L. Baron, K. Jemielniak, Tool wear monitoring using genetically-generated fuzzy knowledge bases, Eng. Appl. Artif. Intell. 15 (2002)
303–314. http://dx.doi.org/10.1016/S0952-1976(02)00071-4.
[35] A. Gajate, R. Haber, R.D. Toro, P. Vega, A. Bustillo, Tool wear monitoring using neuro-fuzzy techniques: A comparative study in a turning process, J. Intell.
Manuf. 23 (2012) 869–882. http://dx.doi.org/10.1007/s10845-010-0443-y.
[36] L.C. Lee, K.S. Lee, C.S. Gan, On the correlation between dynamic cutting force and tool wear, Int. J. Mach. Tools Manuf. 29 (1989) 295–303. http://dx.doi.org/
10.1016/0890-6955(89)90001-1.
[37] V.S. Sharma, S.K. Sharma, A.K. Sharma, Cutting tool wear estimation for turning, J. Intell. Manuf. 19 (2008) 99–108. http://dx.doi.org/10.1007/s10845-007-
0048-2.
[38] K. Bouacha, A. Terrab, Hard turning behavior improvement using NSGA-II and PSO-NN hybrid model, Int. J. Adv. Manuf. Technol. (2016). http://dx.doi.org/
10.1007/s00170-016-8479-6.
[39] X. Li, A. Djordjevich, P.K. Venuvinod, Current-sensor-based feed cutting force intelligent estimation and tool wear condition monitoring, IEEE Trans. Ind.
Electron. 47 (2000) 697–702. http://dx.doi.org/10.1109/41.847910.
[40] G.C. Balan, A. Epureanu, The monitoring of the turning tool wear process using an artificial neural network Part 1: The experimental set-up and experimental
results, Proc. Inst. Mech. Eng. B: J. Eng. Manuf. 222 (2008) 1241–1251. http://dx.doi.org/10.1243/09544054JEM1009.
[41] L.I. Burke, An unsupervised neural network approach to tool wear identification, IIE Trans. 25 (1993) 16–25. http://dx.doi.org/10.1080/07408179308964262.
[42] V.Ts Zoriktuev, Yu.A. Nikitin, A.S. Sidorov, Monitoring and prediction of cutting-tool wear, Russ. Eng. Res. 28 (2008) 88–91. http://dx.doi.org/10.1007/
s11980-008-1020-1.
[43] S.Y. Liang, D.A. Dornfeld, Tool wear detection using time series analysis of acoustic emission, J. Eng. Ind. 111 (1989) 199–205.
[44] G. Lim, Tool-wear monitoring in machine turning, J. Mater. Process. Technol. 51 (1995) 25–36. http://dx.doi.org/10.1016/0924-0136(94)01354-4.
[45] E. Govekar, I. Grabec, Self-organizing neural network application to drill wear classification, J. Eng. Ind. 116 (1994) 233–238.
[46] V. Jammu, K. Danai, S. Malkin, Unsupervised neural network for tool breakage detection in turning, CIRP Ann. Manuf. Technol 42 (1993) 67–70. http://
dx.doi.org/10.1016/S0007-8506(07)62393-2.
[47] T. Pfeifer, L. Wiegers, Reliable tool wear monitoring by optimized image and illumination control in machine vision, Measurement 28 (2000) 209–218. http://
dx.doi.org/10.1016/S0263-2241(00)00014-2.
[48] D. Kerr, J. Pengilley, R. Garwood, Assessment and visualisation of machine tool wear using computer vision, Int. J. Adv. Manuf. Technol. 28 (2006) 781–791.
http://dx.doi.org/10.1007/s00170-004-2420-0.
[49] E. Alegre, R. Alaiz-Rodríguez, J. Barreiro, J. Ruiz, Use of contour signatures and classification methods to optimize the tool life in metal machining, Estonian
J. Eng. 15 (2009) 3–12. http://dx.doi.org/10.3176/eng.2009.1.01.
[50] D.M. D'Addona, D. Matarazzo, A.M.M. Sharif Ullah, R. Teti, Tool wear control through cognitive paradigms, Procedia CIRP 33 (2015) 221–226. http://
dx.doi.org/10.1016/j.procir.2015.06.040.
[51] J. Barreiro, M. Castejón, E. Alegre, L.K. Hernández, Use of descriptors based on moments from digital images for tool wear monitoring, Int. J. Mach. Tools
Manuf. 48 (2008) 1005–1013. http://dx.doi.org/10.1016/j.ijmachtools.2008.01.005.
[52] D.M. D'Addona, R. Teti, Image data processing via neural networks for tool wear prediction, Procedia CIRP 12 (2013) 252–257. http://dx.doi.org/10.1016/
j.procir.2013.09.044.
[53] M. Castejón, E. Alegre, J. Barreiro, L.K. Hernández, On-line tool wear monitoring using geometric descriptors from digital images, Int. J. Mach. Tools Manuf. 47
(2007) 1847–1853. http://dx.doi.org/10.1016/j.ijmachtools.2007.04.001.
[54] M. Zadshakoyan, V. Pourmostaghimi, Cutting tool crater wear measurement in turning using chip geometry and genetic programming, Int. J. Appl.
Metaheuristic Comput. (IJAMC) 6 (2015) 47–60. http://dx.doi.org/10.4018/ijamc.2015010104.
[55] D.M. D'Addona, D. Matarazzo, P.R. De Aguiar, E.C. Bianchi, C.H.R. Martins, Neural Networks tool condition monitoring in single-point dressing operations,
Procedia CIRP 41 (2016) 431–436. http://dx.doi.org/10.1016/j.procir.2016.01.001.
[56] R. Tadeusiewicz, Sieci neuronowe [Neural Networks] (in Polish), Akad. Of. Wydaw. RM, 1993.
[57] A. Ghasempoor, J. Jeswiet, T.N. Moore, Real time implementation of on-line tool condition monitoring in turning, Int. J. Mach. Tools. Manuf. 39 (1999)
1883–1902. http://dx.doi.org/10.1016/S0890-6955(99)00035-8.
[58] C. Chungchoo, D. Saini, A computer algorithm for flank and crater wear estimation in CNC turning operations, Int. J. Mach. Tools. Manuf. 42 (2002)
1465–1477. http://dx.doi.org/10.1016/S0890-6955(02)00065-2.
[59] T. Mikolajczyk, L. Romanowski, Optimisation of single edge tools exploitation process, Appl. Mech. Mater. 332 (2013) 431–436. http://dx.doi.org/10.4028/
www.scientific.net/AMM.332.431.
[60] T. Mikolajczyk, Edge properties restoration in cutting with single edge tool, Acad. J. Manuf. Eng. 11 (2013) 72–77.

