
An improved fault diagnosis approach for FDM process
with acoustic emission
Jie Liu a,c, Youmin Hu b, Bo Wu b, Yan Wang c,∗

a School of Hydropower and Information Engineering, Huazhong University of Science and Technology, Wuhan, Hubei 430074, P. R. China
b School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei 430074, P. R. China
c Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA

Abstract

The reliability and performance of additive manufacturing (AM) machines affect
product quality and manufacturing cost. Developing effective health monitoring
and prognostics methods is critical to AM productivity, yet limited work has been
done on machine health monitoring. Recently, the application of the acoustic
emission (AE) sensor to fault diagnosis of the material extrusion or fused
deposition modeling process was demonstrated. One challenge in real-time process
monitoring is processing the large amount of data collected by high-fidelity
sensors for diagnostics and prognostics. In this paper, the efficiency of machine
state identification from AE data is significantly improved with a reduced feature
space dimension. In the proposed method, features extracted in both the time and
frequency domains are combined and then reduced with linear discriminant analysis.
An unsupervised density-based clustering method is applied to classify and
recognize different machine states of the extruder. Experimental results show that
the proposed approach can effectively identify machine states of the extruder even
within a much smaller feature space.

∗ Corresponding author
Email address: yan.wang@me.gatech.edu (Yan Wang)

Preprint submitted to Journal of Manufacturing Processes, August 31, 2018
Keywords: Additive manufacturing, Material extrusion, Fused deposition
modeling, Fault diagnosis, Process monitoring, Acoustic emission,
Dimensionality reduction, Clustering, Machine learning

1. Introduction

As a popular and low-cost additive manufacturing technique, material extrusion
or fused deposition modeling (FDM) is able to fabricate prototypes and print
parts with complex geometries. Thermoplastic materials, including nylon,
acrylonitrile butadiene styrene, polylactic acid, and others, can be used [1].
Several quality-related issues exist in FDM, such as surface roughness [2],
geometry deviation [3], shape shrinkage [4], and weak mechanical strength [5].
These quality issues limit the potential applications of FDM products. As the
core component of the FDM machine, the extruder is important to ensure the build
quality. For example, the feeding speed, extruding speed, and nozzle temperature
of the extruder affect the bead width, which in turn determines the quality of
extrusion. So far, the majority of commercial FDM machines are not equipped with
extruder condition monitoring systems [6]. Fault diagnosis on these machines
relies on human operators' visual inspection as well as the operators' individual
experience. Thus, product quality and consistency cannot be guaranteed. Material
waste because of delayed interruption and correction is significant. Serious
process failures may also cause costly machine breakdowns.
It is critical to develop automated condition monitoring and fault diagnosis
systems so that closed-loop control of the FDM process can be realized for
product quality assurance. Sensor-based fault diagnostics and prognostics have
been widely applied to other manufacturing equipment, where machine conditions
are monitored by analyzing signals acquired by sensors and extracting features to
identify machine states. The future states can also be predicted from statistical
machine learning models.
Limited research has been done on monitoring the FDM process and machine health.
Recently, acoustic emission (AE) was used to monitor the machine condition
[7, 8, 9, 10]. In other related efforts, optical cameras [11, 12] and an infrared
camera [13] were applied to monitor the extrusion process non-intrusively, whereas
a fiber Bragg grating sensor [14] was applied intrusively.
The AE sensor has several advantages and has been applied in traditional
manufacturing processes [15, 16, 17, 18, 19]. First, the AE sensor is sensitive to
mechanical system dynamics caused by changes of friction, force, vibration, or
structural defects. It contains rich information about machine states and their
changes. Second, its implementation is simple. The source of signals can be
directly from the machines themselves, and no external stimulation is needed.
Third, an AE monitoring system can be made non-intrusive, so modification of the
original equipment is not required.
To identify machine states from AE signals, the signals in waveform are first
processed by signal decomposition methods, such as wavelet analysis [20],
empirical mode decomposition [21], and variational mode decomposition [22].
Features are then extracted from the processed signals. Mappings between the
extracted features and the patterns to be recognized are established. The mappings
are then used to identify machine states when new signals arrive. The major
challenges of this pattern recognition process for real-time applications are the
large volume of data, because of the high sampling rates used by sensors, and the
high dimensionality of the feature space formed by the various information
extracted from the original signals, both of which lead to high computational
load. The sensitivity and robustness of identification depend on which features
are selected if the dimension of the feature space is to be reduced.
In our previous work on AE-based FDM machine monitoring, the original AE waveform
signals were simplified as AE hits, which significantly reduces the amount of data
to be processed. It was shown that state classification with a support vector
machine based on a single time-domain feature is effective [7]. The further
consideration of multiple time-domain features increases detection sensitivity. A
hidden Markov model based on signals further reduced by principal component
analysis (PCA) can improve efficiency [9].
In this paper, both time- and frequency-domain features from the AE hits are used
for state identification. The inclusion of frequency-domain information in the
feature vector prevents information loss and provides a more comprehensive and
accurate approach for state recognition. With the further increased dimension of
the feature space, more effective dimensionality reduction and classification
approaches are helpful. Here, linear discriminant analysis (LDA) [23] is applied
to customize the operator for dimensionality reduction. The customization is
according to the nature of the AE hits so that the sensitivity of feature
detection is maximized. After the dimension of the features is reduced, the
clustering by fast search and find of density peaks (CFSFDP) approach [24] is
taken for classification.
Standard linear dimension reduction techniques such as PCA reduce the dimension of
the feature space with a generic criterion of sample variance. Their use is not
optimal when the ultimate goal is classification, where the differentiation power
between classes after the data size reduction is important. In contrast, in the
LDA method, the linear transformation operator for dimension reduction is
customized to maximize the differentiation power between classes based on a
particular set of data. This approach can provide the optimum projection
directions for the purpose of classification. With the optimum transformation
operator, the data processing and feature identification based on LDA can be more
accurate than those based on traditional PCA.
In fault diagnosis for the FDM process, conventional supervised classification
methods, such as the hidden semi-Markov model [9] and support vector machines [7],
have been applied to identify the machine states. However, the system models need
to be trained based on previously determined features and states. Therefore,
supervised classification methods are limited in real-time system identification
when the knowledge about the system states is incomplete or limited. The premise
of applying such classification methods is that all machine states are known a
priori. In addition, the decisions of how to choose the training data sets and
training methods will affect the final identification results. As an alternative,
unsupervised clustering methods have been used to recognize and classify machine
states in manufacturing [25, 26, 27].
In unsupervised clustering analysis, cluster centers typically need to be
determined first. Then feature point regrouping and cluster updates are performed
iteratively. Different machine states thus can be identified by analyzing clusters
without a supervised training process. In this paper, a density-based clustering
method, the CFSFDP method, is used. Different from distance-based clustering
methods such as hierarchical and partitioning algorithms (e.g. k-means),
density-based clustering methods do not group data and update the clusters
iteratively, which can significantly save computational cost in real-time
applications. More importantly, density-based clustering algorithms, such as the
recently developed CFSFDP method, do not necessarily group data into spherical
clusters. Clusters with more generic topology can be generated. Therefore, some
inherent nonlinear relationships among data within a cluster can be preserved.
In summary, the efficiency of feature extraction as well as the effectiveness of
dimensionality reduction for state classification in existing research approaches
need to be improved. In this work, the CFSFDP method is applied to fault diagnosis
of the FDM process for the first time, where the centers of clusters are
efficiently identified. With the LDA-based dimension reduction and classification
methods, the overall capability for real-time fault diagnosis is improved.
The rest of the paper is organized as follows. In Section 2, the architecture of
the proposed fault diagnosis approach is introduced, including feature extraction,
hybrid feature space construction and reduction, and unsupervised clustering.
Experimental results for the FDM process are analyzed in Section 3 to demonstrate
the performance of the proposed approach, in comparison with other approaches.
Section 4 concludes the paper.

2. Methodology

The basic flow chart of the improved AE sensor-based fault diagnosis system is
shown in Figure 1. The AE sensor is used to collect the vibration signal of the
extruder. The acquired AE signals contain the information of different machine
states. The time- and frequency-domain features in the AE signal are then
extracted to form the hybrid feature vector. The high-dimensional hybrid feature
space is constructed to avoid information loss. The LDA method is applied to
reduce the dimension of the hybrid feature space and thus the computational cost
in the following classification step. Then the unsupervised density-based
clustering technique is used to identify the reduced hybrid features. Finally, the
machine states of the extruder in the FDM process are recognized and classified
for fault diagnosis.

Figure 1: Basic flow of the proposed health condition monitoring method.

Figure 2: Output voltage of an AE signal in the time domain.

2.1. Signal acquisition and feature extraction

The AE signals from the AE sensor are first processed and AE hits are counted.
An AE hit u(t) is illustrated in Fig. 2, where time-domain features including
amplitude, count, and duration are shown. Other typical features such as root
mean square (RMS), peak frequency (P-Freq), absolute energy (ABS-Energy),
and signal strength are also used [7, 8, 9].

Specifically, the amplitude is the peak voltage of the wave within an AE hit,
which is defined as

AE_A = 20 \log\left( \frac{U_{max}}{U_{ref}} \right),    (1)

where U_{max} is the peak voltage, U_{ref} is the reference voltage, and AE_A is
expressed on a decibel (dB) scale.
Counts are the numbers of pulses that surpass a predefined threshold. Duration is
the elapsed time from the first count to the last one in an AE hit, which is
described as

AE_D = t_2 - t_1,    (2)

where t_2 is the time of the last count, and t_1 is the time of the first count.
P-Freq is the frequency in the power spectrum with the maximum magnitude. Signal
strength is defined as

AE_{str} = \int_{t_1}^{t_2} |u(t)| \, dt,    (3)

where u(t) is the output voltage of the AE hit.


ABS-Energy is the absolute energy over the duration of an AE hit, which is
calculated by

AE_{ABS-Energy} = \alpha \int_{t_1}^{t_2} u(t)^2 \, dt,    (4)

where \alpha is inversely proportional to the resistance of the AE sensor.


RMS is a feature used to describe the strength of the AE signal in the time
domain, which is defined as

AE_{RMS} = \sqrt{ \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} u(t)^2 \, dt }.    (5)

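As an illustration of these definitions, the short Python sketch below computes the per-hit features from one sampled hit waveform. It is a minimal example rather than the authors' implementation: the waveform array u, sampling rate fs, count threshold, reference voltage, and constant alpha are assumed, illustrative inputs.

```python
import numpy as np

def ae_hit_features(u, fs, threshold, u_ref=1e-6, alpha=1.0):
    """Per-hit AE features of Section 2.1 (illustrative sketch, assumed inputs).

    u         : 1-D array, sampled output voltage u(t) of one AE hit (V)
    fs        : sampling rate (Hz)
    threshold : count threshold (V); u_ref and alpha are placeholder constants
    """
    dt = 1.0 / fs
    # Amplitude, Eq. (1): peak voltage expressed on a decibel scale.
    amplitude = 20.0 * np.log10(np.max(np.abs(u)) / u_ref)
    # Counts: number of rising crossings of the threshold within the hit.
    above = np.abs(u) > threshold
    counts = int(np.sum(above[1:] & ~above[:-1]) + above[0])
    # Duration, Eq. (2): time from the first count to the last one.
    idx = np.flatnonzero(above)
    duration = (idx[-1] - idx[0]) * dt if idx.size else 0.0
    # Signal strength, Eq. (3), and absolute energy, Eq. (4) (rectangle rule).
    strength = np.sum(np.abs(u)) * dt
    abs_energy = alpha * np.sum(u ** 2) * dt
    # RMS, Eq. (5), assuming the array spans the hit duration.
    rms = np.sqrt(np.mean(u ** 2))
    # P-Freq: frequency with the maximum power-spectrum magnitude.
    power = np.abs(np.fft.rfft(u)) ** 2
    p_freq = np.fft.rfftfreq(u.size, d=dt)[np.argmax(power)]
    return {"amplitude": amplitude, "counts": counts, "duration": duration,
            "strength": strength, "abs_energy": abs_energy,
            "rms": rms, "p_freq": p_freq}
```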
2.2. Hybrid feature space construction and reduction

With the above extracted features, a high-dimensional feature space is
constructed, where the feature vectors of means f_m and standard deviations
f_{std} are

f_m = [m_c, m_d, m_a, m_r, m_s, m_e, m_{pf}],
f_{std} = [std_c, std_d, std_a, std_r, std_s, std_e, std_{pf}],    (6)

where m_c, m_d, m_a, m_r, m_s, m_e, m_{pf} are the mean values of count, duration,
amplitude, RMS, signal strength, ABS energy, and P-Freq in the time interval.
Similarly, std_c, std_d, std_a, std_r, std_s, std_e, std_{pf} are the respective
standard deviations.
To reduce the dimension of the feature space for a classification problem, LDA is
applied. The goal of LDA is to find a linear transformation matrix W \in R^{D \times d}
so as to reduce the dimension from D to d and maximize the Fisher criterion

J(W) = \mathrm{tr}\left( \frac{W^T S_b W}{W^T S_w W} \right),    (7)

where

S_w = \sum_{i=1}^{c} \sum_{x_j \in C_i} (x_j - m_i)(x_j - m_i)^T    (8)

is the within-class scatter matrix, and each data vector x_j belongs to one of the
c classes C_i (i = 1, 2, ..., c). In addition, m_i is the centroid of the i-th
class, and

S_b = \sum_{i=1}^{c} p_i (m_i - m)(m_i - m)^T    (9)

is the between-class scatter matrix, with p_i as the number of feature points in
the i-th class, and m as the global centroid of all feature vectors.

LDA finds the optimum transformation operator W such that S_b is maximized and S_w
is minimized [23, 28]. Using the linear transformation operator W, the original
D-dimensional feature space is reduced into a d-dimensional one.
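For concreteness, a compact sketch of this projection, built directly from the scatter matrices in Eqs. (8) and (9), is given below. It is illustrative only: it assumes labeled feature vectors X with class labels y, and uses the standard eigenvector solution of S_w^{-1} S_b to maximize the Fisher criterion.

```python
import numpy as np

def lda_projection(X, y, d):
    """Fisher LDA sketch: a D-by-d operator W maximizing between-class versus
    within-class scatter (Eqs. 7-9), and the reduced features X @ W.
    X: (n, D) feature matrix; y: (n,) class labels; d: target dimension."""
    D = X.shape[1]
    m = X.mean(axis=0)                      # global centroid
    Sw = np.zeros((D, D))                   # within-class scatter, Eq. (8)
    Sb = np.zeros((D, D))                   # between-class scatter, Eq. (9)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - m, mc - m)
    # Optimal directions: leading eigenvectors of Sw^{-1} Sb (pinv for safety).
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    W = eigvecs[:, order[:d]].real
    return W, X @ W
```

For c classes, S_b has rank at most c - 1, so only the first c - 1 projection directions carry class-separation information.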

2.3. Unsupervised clustering

CFSFDP is a recently developed density-based clustering method [24], in which the
cluster centers are assumed to have a high local density and relatively large
distances to each other. The advantages of CFSFDP include efficiency and
flexibility. This density-based clustering method does not require iterative
updates of cluster centers. Clusters are formed regardless of their shapes and
geometric distances, so it is attractive for nonlinear problems.
For a data set with a total of n feature points, the local density \rho_i with
respect to the i-th point,

\rho_i = \sum_{j \neq i} \chi(d_{ij} - d_c),    (10)

can be used to recognize the neighbors, where

\chi(x) = \begin{cases} 1, & x < 0 \\ 0, & \text{otherwise} \end{cases}

is the indicator function, d_{ij} is the distance between the i-th and j-th data
points, and d_c is a cutoff distance, which is usually set manually at the
beginning. Based on Eq. (10), the local density associated with a point is the
number of points that are closer to it than the cutoff distance.
The minimum distance \delta_i between the i-th point and any other point with a
higher local density is obtained by

\delta_i = \min_{j: \rho_j > \rho_i} d_{ij}.    (11)

For the point that has the highest local density, its minimum distance is
calculated as \delta_i = \max_{j: \rho_j < \rho_i} d_{ij}. The cluster centers
will be selected as those points that have both large local densities and large
minimum distances.

In order to identify the cluster centers with large minimum distances and large
local densities, all feature points x_i(\rho_i, \delta_i) can be plotted on a
two-dimensional decision graph as functions of \rho_i and \delta_i. The feature
points with high local densities \rho_i and large minimum distances \delta_i, on
the top right corner of the graph, are selected as the cluster centers. Each
remaining feature point is then assigned to the same cluster as its nearest
neighbor with a higher density.
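The whole procedure can be condensed into a short sketch, shown below under assumed inputs (the reduced feature matrix X, a manually chosen cutoff distance dc, and the desired number of clusters). Centers are picked here by the product \rho_i \delta_i, matching the selection criterion used later in Section 3.4.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def cfsfdp(X, dc, n_clusters):
    """CFSFDP sketch: density peaks as cluster centers, one-pass assignment."""
    d = squareform(pdist(X))                   # pairwise distances d_ij
    n = len(X)
    rho = (d < dc).sum(axis=1) - 1             # local density, Eq. (10)
    delta = np.zeros(n)
    nearest_denser = np.zeros(n, dtype=int)
    order = np.argsort(rho)[::-1]              # points by decreasing density
    for rank, i in enumerate(order):
        if rank == 0:                          # highest-density point
            delta[i], nearest_denser[i] = d[i].max(), i
            continue
        denser = order[:rank]                  # points with higher density
        j = denser[np.argmin(d[i, denser])]
        delta[i], nearest_denser[i] = d[i, j], j    # Eq. (11)
    # Centers: large rho and large delta (largest rho * delta products here).
    centers = np.argsort(rho * delta)[::-1][:n_clusters]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_clusters)
    # Assign remaining points, in decreasing density, to the cluster of their
    # nearest higher-density neighbor (assumes the densest point is a center).
    for i in order:
        if labels[i] == -1:
            labels[i] = labels[nearest_denser[i]]
    return centers, labels
```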

3. Fault Diagnosis for FDM Extrusion Process

The dimensionality reduction and clustering methods are applied to the AE signals
collected from the FDM process. Here, the procedures are described. To demonstrate
the effectiveness, the results are also compared with those from other data
analysis methods.

3.1. Experimental setup


To apply the proposed fault diagnosis approach to the FDM extrusion process, a
series of printing jobs under different extruder conditions was conducted. A
heated filament extruder MK-1 of the Model E5 Engine (made by Hyrel3D) was used in
the experiments. The filament material is ABS.
The established condition monitoring platform consists of an AE sensor, a
preamplifier, a data acquisition card, and a desktop computer, as shown in Fig. 3.
A differential AE sensor made by the MISTRAS group was attached to the extruder
with vacuum grease, which is often used for better signal transmission. Compared
with a single-ended AE sensor, the background noise can be reduced by
approximately 2 dB. The differential AE signal was first amplified by a PAC 2/4/6
preamplifier and then processed by a PAC PCI-2 data acquisition (DAQ) card. The
specifications of the FDM condition monitoring platform are listed in Table 1.

Figure 3: AE sensor-based condition monitoring platform.

Table 1: Specification of the FDM condition monitoring platform.

  Type                                   Value
  Responding range of the AE sensor      100-900 kHz
  Gain of the amplifier                  40 dB
  Sampling rate of the DAQ               5 MHz
  Threshold value of the DAQ             58 dB
  Resolution of the ADC                  18 bit

Table 2: Number of recorded AE hits in each machine state.

  Machine state                 Number of AE hits
  Normal extruding              6041
  Semi-blocked                  5950
  Blocked                       6780
  Material loading/unloading    6177
  Run out of materials          4672

In the experiments, different machine states of the extruder, namely normal
extruding, semi-blocked, blocked, material loading/unloading, and run out of
materials, were generated intentionally [7]. For the states of normal extruding,
material loading/unloading, and run out of materials, the heating temperature of
the extruder was set to 230 °C. For the semi-blocked and blocked states, it was
180 and 130 °C, respectively. The recording time for each state was 10 seconds.
The recorded AE hits for the five states are listed in Table 2.

3.2. Feature extraction

Based on the recorded AE hits, the time- and frequency-domain features listed in
Eq. (6) are obtained. In order to reduce the computational cost in fault
diagnosis, segmental analysis is applied. A total of 100 segments, each with a
time interval of 0.1 second, were obtained from the original AE hits for each
state shown in Table 2. The numbers of AE hits within each segment for the
different machine states are approximately 60, 59, 67, 61, and 46 respectively.
Then, the means and standard deviations of the eight features in each segment are
calculated. The means and standard deviations of each segment for the different
machine states are shown in Fig. 4. Segments No. 1-100 correspond to the normal
extruding state. Segments No. 101-200, No. 201-300, No. 301-400, and No. 401-500
correspond to the material loading/unloading, run out of materials, blocked, and
semi-blocked states respectively. By analyzing the AE hits in each segment instead
of the raw signals, the computational cost for feature extraction can be reduced
significantly.
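This segmental construction can be sketched as follows. It assumes each AE hit has already been reduced to its per-hit feature values (Section 2.1) together with an arrival time; the 0.1 s interval is taken from the description above, while the variable names are illustrative.

```python
import numpy as np

def segment_features(hit_times, hit_features, t_total=10.0, n_segments=100):
    """Per-segment means and standard deviations, i.e. the hybrid vectors of
    Eq. (6). hit_times: (n_hits,) arrival times within a 10 s recording;
    hit_features: (n_hits, k) per-hit features. Returns (n_segments, 2k)."""
    edges = np.linspace(0.0, t_total, n_segments + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        seg = hit_features[(hit_times >= lo) & (hit_times < hi)]
        # Each 0.1 s segment holds roughly 46-67 hits here, so it is non-empty.
        rows.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
    return np.vstack(rows)
```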

3.3. Feature space dimension reduction

To reduce the computational load for classification, the LDA-based dimensionality
reduction method is applied to the constructed 16-dimensional feature space under
different machine states of the extruder. The LDA technique projects the
high-dimensional feature vectors f_m and f_{std} to lower-dimensional ones, which
ensures that the major variance is captured and, more importantly, that the
differentiation power to identify different states is maximized. Here, each of the
8-dimensional feature vectors containing the means f_m and the standard deviations
f_{std} is reduced to a vector with one mean value and one standard deviation
through the LDA.

The normalized values of the reduced two-dimensional feature vectors for means and
standard deviations are shown in Fig. 5. The feature points from the normal
extruding, semi-blocked, blocked, run-out-of-material, and loading/unloading
states are denoted by ellipses, rectangles, diamonds, up triangles, and down
triangles respectively.

The LDA tries to find a good linear projection direction that maintains the
clustering groups. From Fig. 5, it is seen that even with the dimension reduced to
just two, the feature data points are naturally clustered corresponding to the
different extruder states. From the low-dimensional features, the states of
run-out-of-material and loading/unloading can be differentiated easily.
Nevertheless, feature points are more mixed among the normal extruding,
semi-blocked, and blocked states. These three states are more similar to each
other; in the experiments, they were differentiated only by setting different
heating temperatures.

Figure 4: Interested features in different machine states: (a) count, (b) duration,
(c) amplitude, (d) RMS, (e) signal strength, (f) ABS energy, (g) Freq-C, (h) P-Freq.

Figure 5: Normalized feature distribution after the LDA reduction.

3.4. State classification

With the low-dimensional feature points, the unsupervised CFSFDP method was used
to recognize clusters and classify the machine states of the extruder. Before the
unsupervised classification, the cluster centers need to be selected. The
selection process is as follows. The local densities and minimum distances of all
500 feature points are obtained. The decision graph is plotted in Figure 6 based
on the calculated minimum distance \delta_i and local density \rho_i. The cluster
center selection criterion is that the centers should have both large local
densities and large minimum distances [24]. In other words, a center needs to be
representative of a cluster and at the same time distinguishable from other
centers. The products of the minimum distance \delta_i and local density \rho_i
for all feature points are thus calculated, and the feature points with the
maximum values of the product are selected as cluster centers. The five selected
cluster centers are feature points #34, #142, #289, #315, and #483, which indeed
were samples collected when the extruder was at the five respective states of
normal, semi-blocked, blocked, run-out-of-material, and loading/unloading. They
are respectively denoted by a circle, a rectangle, a diamond, an up triangle, and
a down triangle in Figure 6. Therefore, the five distinctive machine states are
identified via the selection of cluster centers in the CFSFDP method. Each
remaining feature point can then be assigned to the same cluster as its nearest
neighbor with a higher local density.

Figure 6: The corresponding decision graph with colored cluster centers.
Based on the five selected cluster centers, the remaining feature points were
assigned to the corresponding classes, as shown in Figure 7. Feature points that
are correctly classified into the five respective machine states are plotted as
black circles, red rectangles, green diamonds, blue up triangles, and cyan down
triangles. There are still some misclassified feature points, plotted as magenta
left triangles, which lie mainly in the regions where the normal, semi-blocked,
and blocked states overlap.

Figure 7: Two-dimensional nonclassical multidimensional scaling distribution.

The ability to identify the normal-extruding, semi-blocked, and blocked states
with reduced dimensions needs to be enhanced. In general, clustering results are
sensitive to the dimension of the data and the number of clusters. Here, a
sensitivity analysis is performed to check the clustering results when the
dimension of the feature space is reduced to 2, 4, 6, and 8 respectively with the
LDA, of which one half of the dimensions are mean values and the other half are
standard deviations. The corresponding classification results from the CFSFDP
method are shown in Table 3. It is seen that a higher-dimensional feature space
still does not help to improve the classification accuracy for the semi-blocked,
blocked, and run-out-of-material states. For example, with the four- and
six-dimensional feature spaces only the normal and loading/unloading states can be
identified. At the same time, the classification accuracy of the semi-blocked
state is only about 8 % in the eight-dimensional space. When no dimension
reduction is applied and the clustering and classification are applied to the
original 16-dimensional feature space, the classification accuracies for the five
machine states are 66%, 0%, 96%, 21%, and 93% respectively. The sensitivity
analysis of dimensions shows that, among all the dimensions tested, a reduction to
two dimensions has the best performance.

Table 3: Classification accuracy for five clustering centers using different
dimensions of the feature space.

  Feature space dimension   normal-extruding   semi-blocked   blocked   run-out-of-material   loading/unloading   All states
  2                          99.0 %            68.0 %          88.0 %   98.0 %                 98.0 %             90.2 %
  4                         100.0 %             4.0 %           0.0 %   16.0 %                 99.0 %             43.8 %
  6                         100.0 %             0.0 %           0.0 %   35.0 %                100.0 %             47.0 %
  8                         100.0 %             8.0 %         100.0 %   91.0 %                100.0 %             79.8 %

Increasing the dimensions does not improve the performance. This is mainly because
the LDA dimension reduction method is able to optimize the projection directions
according to the nature of the data sets. The results also indicate that there is
a certain level of correlation between these features.
Another sensitivity analysis is to check the results when the number of clusters
or states is four instead of five. Because the normal-extruding and semi-blocked
states are the most similar, differing only in extruding temperature, these two
states are merged into one. The clustering results with four states and different
feature space dimensions are shown in Table 4. With only four cluster centers
identified, the ability to correctly classify the blocked and run-out-of-material
states is enhanced when using the four- and six-dimensional feature spaces. The
run-out-of-material state is identified the most accurately in the two-dimensional
feature space.
It is seen that increasing the dimension of the feature space does not help to
improve the classification accuracy if there are five states to be identified. It
could help if the semi-blocked state is no longer of interest. With the
consideration of the computational cost involved in classification, the optimum
choice of the reduced dimension is considered to be two. The two-dimensional
reduced feature space is used in our further study.
Table 4: Classification accuracy for four clustering centers using different
dimensions of the feature space.

  Feature space dimension   normal-extruding & semi-blocked   blocked   run-out-of-material   loading/unloading   All states
  2                          98.0 %                            88.0 %   98.0 %                 98.0 %             96.0 %
  4                          98.0 %                            68.0 %   91.0 %                 99.0 %             90.8 %
  6                          99.5 %                           100.0 %   93.0 %                100.0 %             98.4 %
  8                         100.0 %                           100.0 %   91.0 %                100.0 %             98.2 %

3.5. Evaluation

To evaluate the effectiveness of the proposed machine state classification
approach, several traditional dimensionality reduction and clustering methods were
also applied to the same data sets. The results of the different methods are
compared below.

3.5.1. Comparison of clustering methods


In order to verify the effectiveness of CFSFDP clustering method, compari-
sons are made between the CFSFDP and other popular clustering methods in-
cluding distance based k-means and hierarchical PHA clustering, based on the
reduced features from LDA. The results of clustering accuracies and F1 scores are
compared in Table 5. The classification accuracy of the normal-extruding, semi-
blocked, blocked, run-out-of-material, and loading/unloading states as well as the
overall states are listed. The CFSFDP has the classification accuracy of 99.0% and
88.0% for the normal-extruding and blocked states respectively, which are similar
to the ones of the k-means and PHA clustering methods. Among the three met-
hods, the CFSFDP has the highest accuracies to recognize the other three states
(semi-blocked, run-out-of-material, loading/unloading) at 68%, 98%, and 98% re-

Table 5: Classification accuracy and corresponding F1 score (in parentheses) using
unsupervised methods.

  Unsupervised methods      normal-extruding    semi-blocked       blocked            run-out-of-material   loading/unloading
  CFSFDP                     99.0 % (82.16 %)   68.0 % (79.53 %)   88.0 % (92.15 %)   98.0 % (98.49 %)      98.0 % (98.99 %)
  K-means clustering [8]     99.0 % (82.16 %)   63.0 % (72.00 %)   88.0 % (91.19 %)   94.0 % (96.91 %)      97.0 % (98.48 %)
  PHA clustering [29]       100.0 % (57.64 %)    0.0 % (n/a)       88.0 % (92.15 %)   66.0 % (79.52 %)      95.0 % (97.44 %)

The CFSFDP also has the highest F1 scores for all states. Specifically, an F1
score of 79.53 % for the semi-blocked state is achieved using the CFSFDP. Thus,
the CFSFDP method is shown to be effective in identifying the machine states from
the AE hits, and at the same time computationally more efficient than
distance-based clustering.
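For reference, per-state scores of the kind reported in Tables 5-7 can be computed from the true and predicted segment labels as sketched below, where the per-state classification accuracy is interpreted as the recall of that state; this interpretation and the use of scikit-learn are assumptions for illustration, not a description of the authors' evaluation code.

```python
import numpy as np
from sklearn.metrics import f1_score, recall_score

def per_state_scores(y_true, y_pred, state_names):
    """Per-state accuracy (recall) and F1 score for labeled segments."""
    labels = np.arange(len(state_names))
    acc = recall_score(y_true, y_pred, labels=labels, average=None)
    f1 = f1_score(y_true, y_pred, labels=labels, average=None)
    return {name: {"accuracy": a, "f1": f}
            for name, a, f in zip(state_names, acc, f1)}
```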

3.5.2. Comparison of feature reduction methods


In order to verify the effectiveness of the feature dimension reduction,
experimental results from the proposed LDA-based method are compared with those
from several traditional feature reduction methods. Here, neighbourhood components
analysis (NCA) [30], locality preserving projection (LPP) [31], neighborhood
preserving embedding (NPE) [32], principal component analysis (PCA) [33], and
sparse filtering [34] are applied to the 16 mean and standard deviation features
and similarly result in one mean and one standard deviation dimension. The
normalized feature distributions using these feature reduction methods are shown
in Figure 8. Feature points from the five different machine states are also
plotted with different symbols. From the feature distributions in Figure 8, it is
seen that the different machine states cannot be easily classified.
Furthermore, these reduced features were further processed by the k-means
clustering method. The classification accuracies and F1 scores are compared in
Table 6. It can be found that the LDA-based method has higher classification
accuracies for all five machine states than the other traditional feature
reduction methods. The traditional feature reduction methods cannot recognize and
classify the semi-blocked, blocked, and run-out-of-material states; their highest
classification accuracies are only 56.0%, 60.0%, and 73.0% respectively. Among all
methods, the proposed LDA-based approach has the highest F1 scores for all states.
Its F1 scores are 72.00 %, 91.19 %, and 96.91 % for the semi-blocked, blocked, and
run-out-of-material states respectively, which are higher than those of most of
the other methods.
With classification as the purpose, the LDA method helps find a customized linear
projection operator based on the data so that the classes of feature points can be
differentiated easily after reduction. In contrast, the other reduction methods do
not keep classification as the goal. For instance, NCA intends to keep the
neighbor topology unchanged, LPP finds a transformation such that distances
between neighbors are preserved, NPE preserves the local manifold structures, PCA
maintains the global statistical information of the data, whereas sparse filtering
seeks to maximize the dispersal of the data after projection. In those methods,
the ease of differentiation between clusters after dimension reduction is not
taken into account. Compared to other popular linear dimensionality reduction
methods, the LDA used in this work has a better performance in processing the
features of interest in order to differentiate the extruder states, which provides
better classification results.
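Such a comparison is easy to reproduce by projecting the same hybrid features with both criteria, for instance with scikit-learn as sketched below; this is an illustrative snippet under assumed variable names, not the implementation used in this work.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def reduce_pca_vs_lda(X, y, d=2):
    """Reduce the same features with PCA (variance criterion) and LDA
    (class-separation criterion) for a side-by-side comparison.
    X: (n, 16) hybrid features; y: labels used only to fit the LDA projection."""
    X_pca = PCA(n_components=d).fit_transform(X)
    X_lda = LinearDiscriminantAnalysis(n_components=d).fit_transform(X, y)
    return X_pca, X_lda
```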

3.5.3. Comparison with supervised classification


As a comparison to the proposed unsupervised classification approach, some
traditional supervised classification methods, including the hidden Markov model
(HMM) [35], support vector machines (SVM) [36], a genetic algorithm-based back
propagation neural network (BPNN-GA) model [37], and a probabilistic neural
network (PNN) model [38], are also applied to classify the reduced features from
the five different machine states.

Figure 8: Different feature reduction methods: (a) NCA, (b) LPP, (c) NPE, (d) PCA,
(e) sparse filtering.

Table 6: Classification accuracy and F1 score (in parentheses) under different
feature reductions.

  Feature reduction       normal-extruding    semi-blocked       blocked            run-out-of-material   loading/unloading
  LDA [23]                 99.0 % (82.16 %)   63.0 % (72.00 %)   88.0 % (91.19 %)   94.0 % (96.91 %)      97.0 % (98.48 %)
  NCA [30]                 68.0 % (60.18 %)   43.0 % (38.57 %)   49.0 % (57.65 %)   64.0 % (71.51 %)      99.0 % (98.02 %)
  LPP [31]                 72.0 % (66.36 %)   56.0 % (48.70 %)   60.0 % (72.73 %)   73.0 % (76.44 %)      95.0 % (96.45 %)
  NPE [32]                 64.0 % (62.14 %)   47.0 % (42.34 %)   34.0 % (39.08 %)   49.0 % (49.75 %)      99.0 % (98.51 %)
  PCA [33]                 64.0 % (62.14 %)   48.0 % (43.44 %)   34.0 % (39.08 %)   50.0 % (50.51 %)      99.0 % (98.51 %)
  Sparse filtering [34]    68.0 % (53.54 %)   35.0 % (34.31 %)   49.0 % (60.49 %)   53.0 % (65.84 %)      86.0 % (78.54 %)

To select representative feature points that evenly cover the space, the
Kennard-Stone algorithm [39, 40] was applied to choose the training and testing
data sets from the different machine states. Here, 250 training samples were
chosen from the original 500 data points. The system models were trained and
updated with the extracted features. The final classification accuracies and F1
scores are shown in Table 7.
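The Kennard-Stone selection is essentially a maximin procedure: start from the two mutually farthest points and repeatedly add the point whose minimum distance to the already selected set is largest. The snippet below is an illustrative version of this idea, not the exact implementation of [39, 40].

```python
import numpy as np
from scipy.spatial.distance import cdist

def kennard_stone(X, n_select):
    """Pick n_select indices that evenly cover the feature space (sketch)."""
    d = cdist(X, X)
    selected = list(np.unravel_index(np.argmax(d), d.shape))   # farthest pair
    remaining = [i for i in range(len(X)) if i not in selected]
    while len(selected) < n_select:
        # Minimum distance from each remaining point to the selected set.
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining.pop(int(np.argmax(min_d))))
    return np.array(selected)
```

Here, kennard_stone(features, 250) would return the indices of the 250 training segments, with the remaining 250 used for testing.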
Similar to the proposed CFSFDP method in this paper, the supervised classification
methods also recognize and classify the normal-extruding, run-out-of-material, and
loading/unloading states with relatively high classification accuracies. At the
same time, the semi-blocked and blocked states still cannot be well identified
with the reduced features. Generally, the training results of the supervised
classification methods are better than those on the testing data sets. For
example, the testing results for the semi-blocked state using the HMM, SVM,
BPNN-GA, and PNN are 67.5%, 77.5%, 65.0%, and 67.5% respectively, which are worse
than the respective training results of 80.0%, 85.0%, 91.67%, and 93.33%.

Table 7: Classification accuracy and F1 score (in parentheses) using supervised methods.

  Supervised methods   Type       normal-extruding      semi-blocked        blocked             run-out-of-material    loading/unloading
  HMM [35]             Training    91.18 % (80.52 %)    80.00 % (83.48 %)   90.38 % (93.07 %)    97.10 % (97.81 %)     100.00 % (100.00 %)
                       Testing    100.00 % (88.00 %)    67.50 % (80.60 %)   87.50 % (92.31 %)   100.00 % (100.00 %)    100.00 % (100.00 %)
                       Total       97.00 % (85.46 %)    75.00 % (82.42 %)   89.00 % (92.71 %)    98.00 % (98.49 %)     100.00 % (100.00 %)
  SVM [36]             Training    82.35 % (73.68 %)    85.00 % (79.69 %)   73.08 % (83.52 %)    97.10 % (97.10 %)      91.43 % (95.52 %)
                       Testing     87.88 % (87.22 %)    77.50 % (68.13 %)   72.92 % (83.33 %)   100.00 % (100.00 %)    100.00 % (100.00 %)
                       Total       86.00 % (82.30 %)    82.00 % (74.89 %)   73.00 % (83.43 %)    98.00 % (98.00 %)      97.00 % (98.48 %)
  BPNN-GA [37]         Training    97.06 % (91.67 %)    91.67 % (92.44 %)   94.23 % (96.08 %)    98.55 % (99.27 %)     100.00 % (100.00 %)
                       Testing    100.00 % (91.03 %)    65.00 % (72.22 %)   87.50 % (92.31 %)   100.00 % (100.00 %)    100.00 % (100.00 %)
                       Total       99.00 % (91.24 %)    81.00 % (84.82 %)   91.00 % (94.30 %)    99.00 % (99.50 %)     100.00 % (100.00 %)
  PNN [38]             Training   100.00 % (94.44 %)    93.33 % (95.73 %)   96.15 % (97.09 %)   100.00 % (100.00 %)    100.00 % (100.00 %)
                       Testing    100.00 % (91.03 %)    67.50 % (75.00 %)   87.50 % (92.31 %)   100.00 % (100.00 %)    100.00 % (100.00 %)
                       Total      100.00 % (92.17 %)    83.00 % (87.83 %)   92.00 % (94.85 %)   100.00 % (100.00 %)    100.00 % (100.00 %)

The classification results using the proposed CFSFDP method are slightly better
than the testing results of some of the supervised methods. For example, the
classification accuracy of the blocked state using the CFSFDP is about 88.0%,
whereas the testing results using the HMM, SVM, BPNN-GA, and PNN are 87.50%,
72.92%, 87.5%, and 87.5% respectively. The F1 scores using the proposed
unsupervised method are also better than those of some of the supervised methods.
Specifically, an F1 score of 79.53 % for the semi-blocked state is achieved, which
is higher than those of the SVM, BPNN-GA, and PNN methods.
Generally, supervised classification methods have a better ability for system
identification than the unsupervised methods. Nevertheless, the model training
process causes higher computational costs. The training times of the BPNN-GA, PNN,
HMM, and SVM are 2.88 sec, 1.37 sec, 2.9 sec, and 1.93 sec, respectively. Thus,
trade-offs have to be made in real-time application scenarios where there are
requirements on computational time. In addition, the decisions of how to choose
the training data sets and training methods will affect the final identification
results.
Table 8: Classification accuracy and F1 score (in parentheses) without considering
the semi-blocked state, using different dimensions of the feature space.

  Feature space dimension   normal-extruding      blocked              run-out-of-material    loading/unloading
  2                         100.0 % (95.69 %)      94.0 % (96.91 %)    100.0 % (100.00 %)      97.0 % (98.48 %)
  4                         100.0 % (93.02 %)      85.0 % (91.89 %)    100.0 % (100.00 %)     100.0 % (100.00 %)
  6                         100.0 % (100.00 %)    100.0 % (100.00 %)   100.0 % (100.00 %)     100.0 % (100.00 %)

The proposed unsupervised approach can identify the different machine states
without a model training procedure, which reduces the computational burden when
processing high-dimensional, large-volume data sets for process monitoring and
fault diagnosis.

3.5.4. Discussion of results


The results in Tables 3 and 4 show that the semi-blocked state is difficult to
identify. Without considering the semi-blocked state, the proposed fault diagnosis
approach can effectively identify the four basic machine states of the extruder,
as shown in Table 8. The semi-blocked state is the transition state between the
normal and fully blocked states. Identifying the transition state is challenging
since it lies in the margin area between the two distinctive states and overlaps
with both.
The selected eight AE features cannot be relied upon to identify the transition
state. This is because these features are most likely not fully independent.
Therefore, increasing the dimension of the feature space based on the same data
set did not improve the accuracy of classification. In general, the performance of
classification will be affected by the reduced features, since information loss
can occur with the dimension reduction and not enough information may be retained
for target identification. However, in our case, the LDA method minimized the
information loss with customized projection operators. Nevertheless, the lack of
independent features means that there is not enough information in the original
features.
A potential approach to resolve this is to increase the number of independent
features such that the dimension of the feature space is effectively increased to
improve the identifiability. Possible solutions include using additional AE
sensors so that spatial distribution information is included for diagnostics.
Other sensing modes such as optical and thermal sensors can also be added so that
sensor fusion is applied.

4. Conclusions

In this paper, an improved AE sensor-based fault diagnosis approach for the FDM
process is proposed. AE hits under different extruder states are obtained, and the
time- and frequency-domain features of interest in signal segments are extracted.
The LDA-based feature dimension reduction is applied to reduce the feature space
to only two dimensions, so that the computational cost of classification and state
identification can be reduced. Further, the unsupervised CFSFDP method is used to
recognize and classify the machine states of the extruder. Experiments showed that
the density-based clustering is an efficient classification method. Through the
comparison studies with other linear dimension reduction techniques and supervised
classification methods, the proposed fault diagnosis approach is shown to provide
a reliable and efficient framework for real-time FDM machine monitoring,
particularly of the extruder's health.
The semi-blocked state, as the transition state, is difficult to identify with
only one AE sensor in the current work. In order to resolve this, potential
approaches, such as increasing the number of independent features and using
additional sensors, will be investigated in future work. Further, trade-offs
between efficiency and accuracy also need to be made in process monitoring
systems. High-dimensional feature information provides good differentiation power.
However, processing large volumes of such data sets is computationally demanding
for processors embedded on board for in-situ monitoring. Reduction of the feature
space improves the efficiency, yet the accuracy is compromised. Therefore, further
research on how to make good decisions on these trade-offs is needed.

Acknowledgements

The work was supported in part by the China Scholarship Council with a scholarship
(No. 201606160048), the National Natural Science Foundation of China (No.
51175208), and the U.S. National Science Foundation (CMMI 1547102). The authors
also thank Dr. Haixi Wu for help with the experimental data.

References

[1] A. R. T. Perez, D. A. Roberson, R. B. Wicker, Fracture surface analysis of 3D-printed tensile specimens of novel ABS-based materials, Journal of Failure Analysis and Prevention 14 (3) (2014) 343–353.

[2] Y. Jin, Y. Wan, B. Zhang, Z. Liu, Modeling of the chemical finishing process for polylactic acid parts in fused deposition modeling and investigation of its tensile properties, Journal of Materials Processing Technology 240 (2017) 233–239.

[3] W.-C. Lee, C.-C. Wei, S.-C. Chung, Development of a hybrid rapid prototyping system using low-cost fused deposition modeling and five-axis machining, Journal of Materials Processing Technology 214 (11) (2014) 2366–2374.

[4] A. Wang, S. Song, Q. Huang, F. Tsung, In-plane shape-deviation modeling and compensation for fused deposition modeling processes, IEEE Transactions on Automation Science and Engineering.

[5] J. Wang, H. Xie, Z. Weng, T. Senthil, L. Wu, A novel approach to improve mechanical properties of parts fabricated by fused deposition modeling, Materials & Design 105 (2016) 152–159.

[6] E. Pei, R. Ian Campbell, D. de Beer, Entry-level RP machines: how well can they cope with geometric complexity?, Assembly Automation 31 (2) (2011) 153–160.

[7] H. Wu, Y. Wang, Z. Yu, In situ monitoring of FDM machine condition via acoustic emission, The International Journal of Advanced Manufacturing Technology 84 (5-8) (2016) 1483–1495.

[8] H. Wu, Z. Yu, Y. Wang, A new approach for online monitoring of additive manufacturing based on acoustic emission, in: ASME 2016 11th International Manufacturing Science and Engineering Conference, American Society of Mechanical Engineers, 2016, pp. V003T08A013–V003T08A013.

[9] H. Wu, Z. Yu, Y. Wang, Real-time FDM machine condition monitoring and diagnosis based on acoustic emission and hidden semi-Markov model, The International Journal of Advanced Manufacturing Technology 90 (5-8) (2017) 2027–2036.

[10] I. T. Cummings, M. E. Bax, I. J. Fuller, A. J. Wachtor, J. D. Bernardin, A framework for additive manufacturing process monitoring & control, in: Topics in Modal Analysis & Testing, Volume 10, Springer, 2017, pp. 137–146.

[11] F. Baumann, D. Roller, Vision based error detection for 3D printing processes, in: MATEC Web of Conferences, Vol. 59, EDP Sciences, 2016.

[12] G. P. Greeff, M. Schilling, Closed loop control of slippage during filament transport in molten material extrusion, Additive Manufacturing 14 (2017) 31–38.

[13] R. B. Dinwiddie, L. J. Love, J. C. Rowe, Real-time process monitoring and temperature mapping of a 3D polymer printing process, in: SPIE Defense, Security, and Sensing, International Society for Optics and Photonics, 2013, pp. 87050L–87050L.

[14] C. Kousiatza, D. Karalekas, In-situ monitoring of strain and temperature distributions during fused deposition modeling process, Materials & Design 97 (2016) 400–406.

[15] S. Liang, D. Dornfeld, Tool wear detection using time series analysis of acoustic emission, Journal of Engineering for Industry (Trans. ASME) 111 (3) (1989) 199–205.

[16] S. H. Lee, D. A. Dornfeld, Precision laser deburring and acoustic emission feedback, Journal of Manufacturing Science and Engineering 123 (2) (2001) 356–364.

[17] S. Subramaniam, N. S, D. A. S, Acoustic emission-based monitoring approach for friction stir welding of aluminum alloy AA6063-T6 with different tool pin profiles, Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 227 (3) (2013) 407–416.

[18] B. Wang, Z. Liu, Acoustic emission signal analysis during chip formation process in high speed machining of 7050-T7451 aluminum alloy and Inconel 718 superalloy, Journal of Manufacturing Processes 27 (2017) 114–125.

[19] X. Li, A brief review: acoustic emission method for tool wear monitoring during turning, International Journal of Machine Tools and Manufacture 42 (2) (2002) 157–165.

[20] H. Sadegh, A. N. Mehdi, A. Mehdi, Classification of acoustic emission signals generated from journal bearing at different lubrication conditions based on wavelet analysis in combination with artificial neural network and genetic algorithm, Tribology International 95 (2016) 426–434.

[21] R. Li, D. He, Rotational machine health monitoring and fault detection using EMD-based acoustic emission feature quantification, IEEE Transactions on Instrumentation and Measurement 61 (4) (2012) 990–1001.

[22] Q. Xiao, J. Li, Z. Bai, J. Sun, N. Zhou, Z. Zeng, A small leak detection method based on VMD adaptive de-noising and ambiguity correlation classification intended for natural gas pipelines, Sensors 16 (12) (2016) 2116.

[23] R. A. Fisher, The use of multiple measurements in taxonomic problems, Annals of Eugenics 7 (2) (1936) 179–188.

[24] A. Rodriguez, A. Laio, Clustering by fast search and find of density peaks, Science 344 (6191) (2014) 1492–1496.

[25] A. Malhi, R. X. Gao, PCA-based feature selection scheme for machine defect classification, IEEE Transactions on Instrumentation and Measurement 53 (6) (2004) 1517–1525.

[26] C. Yiakopoulos, K. C. Gryllias, I. A. Antoniadis, Rolling element bearing fault detection in industrial environments based on a k-means clustering approach, Expert Systems with Applications 38 (3) (2011) 2888–2911.

[27] C. Aldrich, L. Auret, Unsupervised process monitoring and fault diagnosis with machine learning methods, Springer, 2013.

[28] X. Jin, M. Zhao, T. W. Chow, M. Pecht, Motor bearing fault diagnosis using trace ratio linear discriminant analysis, IEEE Transactions on Industrial Electronics 61 (5) (2014) 2441–2451.

[29] Y. Lu, Y. Wan, PHA: A fast potential-based hierarchical agglomerative clustering method, Pattern Recognition 46 (5) (2013) 1227–1239.

[30] J. Goldberger, G. E. Hinton, S. T. Roweis, R. R. Salakhutdinov, Neighbourhood components analysis, in: Advances in Neural Information Processing Systems, 2005, pp. 513–520.

[31] X. He, P. Niyogi, Locality preserving projections, in: Advances in Neural Information Processing Systems, 2004, pp. 153–160.

[32] X. He, D. Cai, S. Yan, H.-J. Zhang, Neighborhood preserving embedding, in: Tenth IEEE International Conference on Computer Vision (ICCV 2005), Vol. 2, IEEE, 2005, pp. 1208–1213.

[33] I. Jolliffe, Principal component analysis, Wiley Online Library, 2002.

[34] J. Ngiam, Z. Chen, S. A. Bhaskar, P. W. Koh, A. Y. Ng, Sparse filtering, in: Advances in Neural Information Processing Systems, 2011, pp. 1125–1133.

[35] L. Rabiner, B. Juang, An introduction to hidden Markov models, IEEE ASSP Magazine 3 (1) (1986) 4–16.

[36] A. M. Andrew, An introduction to support vector machines and other kernel-based learning methods by Nello Cristianini and John Shawe-Taylor, Cambridge University Press, Cambridge, 2000, xiii+189 pp., ISBN 0-521-78019-5.

[37] J. Liu, Y. Hu, B. Wu, C. Jin, A hybrid health condition monitoring method in milling operations, The International Journal of Advanced Manufacturing Technology 92 (5-8) (2017) 2069–2080.

[38] J. Liu, B. Wu, Y. Wang, Y. Hu, An integrated condition-monitoring method for a milling process using reduced decomposition features, Measurement Science and Technology 28 (8) (2017) 085101.

[39] M. Daszykowski, B. Walczak, D. Massart, Representative subset selection, Analytica Chimica Acta 468 (1) (2002) 91–103.

[40] R. W. Kennard, L. A. Stone, Computer aided design of experiments, Technometrics 11 (1) (1969) 137–148.
