Manufacturing Feature Recognition With A 2D Convolutional Neural

You might also like

Download as pdf or txt
Download as pdf or txt
You are on page 1of 22

CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57

Contents lists available at ScienceDirect

CIRP Journal of Manufacturing Science and Technology


journal homepage: www.elsevier.com/locate/cirpj

Manufacturing feature recognition with a 2D convolutional neural


network
Yang Shia , Yicha Zhangb , Ramy Harika,*
a
Department of Mechanical Engineering, College of Engineering and Computing, University of South Carolina, 1000 Catawba Street, Columbia, SC 29201,
United States
b
Mechanical Engineering and Design Department, Université de Bourgogne Franche-Comté, Université de Technologie de Belfort-Montbéliard, ICB UMR CNRS
6303, 90010 Belfort Cedex, France

A R T I C L E I N F O A B S T R A C T

Article history: Feature recognition is critical to connect CAX tools in automation via the extract of significant geometric
Available online 27 June 2020 information from CAD models. However, to extract meaningful geometric information is not easy. There
are still a couple of problems, such as lack of robustness, inability to learn, limited feature types, difficult
to deal with interacting features, etc. To fix these problems, a new feature recognition method based on
Keywords: 2D convolutional neural networks (CNNs) is proposed in this paper. Firstly, a novel feature representation
Manufacturing feature recognition
scheme based on heat kernel signature is developed. Then, the feature recognition problem is transferred
Artificial neural network
into a graph learning problem by using a percentage similarity clustering and node embedding technique.
Shape signature
Feature interaction
After that, CNN models for feature recognition are trained via the use of a large dataset of manufacturing
CAX feature models. The dataset includes ten different types of isolated features and fifteen pairs of
interacting features. Finally, a set of tests for method validation are conducted. The experimental results
indicate that the proposed approach not only performs well on recognizing isolated features, but also is
effective in handling interacting features. The state-of-the-art performance of interacting features
recognition has been improved.
© 2020 CIRP.

Introduction with inherent attributes related to design intent and manufactur-


ing functions ([20,21].
Feature is an important notion in CAM. It is a higher-level The development of feature recognition methods has been
description of geometric shapes. It characterizes the engineering active for more than two decades in both industry and academic
significance of designs in term of mathematical descriptions of communities [22–24]. Various feature recognition systems had
surface or volume, which facilitate the automation of CAX been developed, such as rule-based, graph-based, volume decom-
engineering activities. Feature recognition is a process that position, hint-based, artificial neural network (ANN) and hybrid
captures and refines expert knowledge of features for downstream approaches. However, there are still drawbacks of the existing
engineering activities. It can be considered as a necessary and methods that hinder its applications, such as lack of robustness,
fundamental component to integrate design and downstream inability to learn, limited class of features, and computational
applications such as engineering analysis [1–3] reverse engineer- complexity. The most critical one is the difficulty of recognizing
ing [4], optimization [5–7] design validation [8,9] manufacturing interacting features, where boundaries of predefined features
planning [10–15] or DFX consideration ([16,17]; [18]), etc. It is changed by feature interactions. To solve these problems, a novel
inevitable in CAX systems due to the fact that CAD model cannot be feature recognition method based on 2D CNN is proposed. Instead
used directly for these engineering analysis and decision-making of defining feature as a set of connected faces or volumes, a shape
systems. CAD models lack high-level description of geometrical signature, which can describe both the topology and geometry
and topological entities [19]. Therefore, the key actions occurring characteristics of 3D shapes [25], is introduced. It goes beyond
in feature recognition are translating the low-level geometric individual geometric entities, and transforms the whole part into a
entities from the CAD models into a set of appropriate “features” heat persistence map (HPM). After clustering, the HPM can be
simplified into attributed graph to reduce the discreteness and
computational complexity. It is not only informative and
* Corresponding author. comparable, but also has other advantages, e.g. generalization,
E-mail address: HARIK@mailbox.sc.edu (R. Harik). robustness, transformation and scale invariance. To be fed into 2D

https://doi.org/10.1016/j.cirpj.2020.04.001
1755-5817/© 2020 CIRP.
Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57 37

CNN, the graphs are converted into learnable 2D tensors by graph feature recognition method, introducing the concept of an
node embedding technique. Finally, a 2D CNN is built to classify attributed adjacency graph (AAG). AAG captures the concave/
unknown manufacturing features by using a collection of training convex relationships of the part’s adjacent faces, and analyze the
samples. Taking advantage of the robustness of both heat adjacency graph in order to decompose it into subgraphs as
persistence signature and machine learning, the state-of-the-art features. The feature searching can be realized by subgraph
performance for recognizing interacting features has been isomorphic matching.
improved. [39] extended AAG by adding curved surface node and
The proposed feature recognition method mainly consists of developed surface-based attributed adjacency graph (SAAG).
three steps: feature representation, graph learning and feature [40–42] developed the Multi-Attributed Adjacency Graph/Matrix
classification. Before presenting the details in Section 3, a (MAAG/MAAM) to overcome the limitations of AAG. To better
comprehensive literature review on feature recognition methods support interacting features, [43] extended AAG with more
is presented in Section 2 to show the objective and position of this attributes of edge and face, naming Extended Attributed Adjacency
work. Section 4 presents the method demonstration on solving Graph (EAAG). [44] improved EAAG by adding edge node type and
feature interaction problem. Section 5 summarizes the paper with quantitative attributes such as face normal vector, face angle, and
profound discussion and perspective on future work. edge length, and named it as holistic attribute adjacency graph
(HAAG). [45] extended the graph method to freeform features by
Related works dividing the shape into different regions based on curvature and
using these regions as graph nodes, and named it as Region
The main purpose of this section is to review how existing Adjacency Graph (RAG).
feature recognition methods solve the problems and what Numerous different graph representation schemes were
limitations still exist. This article focuses on five group of developed to enhance the representation capability of graphical
approaches that attracted the most extensive research interest models for more complex features. The information stored in
and the hybrid methods or systems, combinations derived from graphs gradually became more complete and less ambiguous,
the fives methods. nevertheless, at the cost of robustness and computation complex-
ity. It can be stated that graph-based approaches are quite effective
Rule-based approach in the domain of primitive features. However, the most significant
shortcoming of all graph-based methods is the incapability of
The rule-based approach was among the earliest to be handling topology variations, because variations change the arcs
investigated due to the granted advantages of expert system, such and nodes that are indispensable for defining the feature.
as [26],[27], [28], [29] and [30]. Features are generalized as Moreover, due to the combinatorial difficulties and exponential
templates consisting of characteristic patterns, but no explicit time complexity, the complex graph with a large number of nodes
representation scheme was defined. The recognizing process is takes expensive computational costs for graph matching process.
performed by using inference rules represented as If-Then manner. Consequently, graph-based approaches have difficulties to handle
If the predefined conditions are satisfied, then the corresponding real industrial parts with small-scale variations such as fillets and
region on a part is recognized as a feature. Nonetheless, the feature chamfers.
representation is ambiguous and the rule based reasoning is
difficult to deal with new features. Moreover, it is not realistic to Volume decomposition approach
define rules for complex and interacting features.
The volume decomposition approach identifies the removal
Hint-based approach (machining) volume of stock material from the solid model and
decomposes the volume into intermediate volumes first, and then
To deal with feature interactions and be more flexible with the the features are generated by combining the intermediate volumes
feature searching, hint-based methods were developed based on based on some rules. According to the way of decomposing,
the idea that incomplete representation can be searched for, so as volume decomposition methods can be generally divided into two
to indicate the existence of certain features. Hence, a hint is sub-groups: convex-hull decomposition and cell-based volume
defined as a minimal indispensable portion of a feature's decomposition.
boundary which must be present in the part even when features The convex-hull decomposition method was introduced by [46]
intersect. Hint-based approaches use a two-step procedure for to decompose non-convex objects into a sequence of convex
feature recognition: in the first step, hints are generated by volumes with arbitrary shapes called Alternating Sum of Volumes
extraction rules based on different ways, such as geometric and (ASV) Decomposition. To solve the non-convergence problem, [47]
topologic reasoning [19,31–33], feature taxonomies ([16,34], proposed the method of Alternative Sum of Volumes with
Abouel [35]), and combined probabilities of ranking potential Partitioning (ASVP). The convex-hull decomposition method is
feature hints [36,37]. In the second step, these hints are processed effective in finding delta volumes for polyhedral parts, but has
and directly matched by applying rules ([16], [18], [31]). However, difficulty in curved surface.
since the hints are predifined for each feature class, the The essential methodology of cell-based decomposition ap-
recognition algorithm is hardcoded, so the system extensibility proach is to decompose the volume or delta volume of an object
is sacrificed. In addition, it is difficult to determine the into minimal cells with a simple shape. The cells are then
characteristics of a feature for hint definition in case of complex recombined as a larger volume that can be removed by a single
and interacting features. machining operation. Finally, the combined volume is checked for
topological and geometrical characteristics and recognized as a
Graph-based approach machining feature. Sakurai and Chin [48] proposed a representa-
tive method to generate all the possible combination sets of the
The Graph-based approach is among the most-researched minimal convex cells decomposed by face extension. [49]
methods due to the inherent advantage of a graph’s structural introduced a new maximal volume decomposition method, which
similarity with B-Rep based solid models. The approach devel- decomposes the delta volume by recursively bisecting sub-
oped by [38] can be considered as the first formal graph-based volumes into two smaller volumes.
38 Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57

Cell-based decomposition approach is suitable for manufactur- into three types: 2D projections based ([50], Chuang 1999, [51]),
ing planning and NC coding. However, the volume feature graph-based [52–54], and face score vector [55–58].
representation is deficient with geometric and topological feature Feed-forward neural network with back-propagation is the
information, so it is not able to recognize the types of functional most often used ANN topology for feature recognition. Because of
feature and not suitable for other engineering applications. There simple structure, good learning and generalization capabilities for
are also some other disadvantages such as an expensive pattern association and pattern recognition problems, three-layer
computational load and the inability to guarantee the generation feed-forward neural network has been used as a classifier by many
of the features of interest. researchers to prove the feasibility of their methods, such as
([21,50–52,57,59]. To deal with large number of types of feature
Artificial neural network approach and complex interactions, [60] and [58] adopted a standard four-
layer FF neural network with two hidden layers due to its better
The major characteristic of ANN that makes it one of the most convergence results. [61] and [62] presented a network structure
promising feature recognition methods is its ability to derive by cascading a number of typical three-layer feed-forward back-
implicit patterns through training with examples. The other propagation neural networks to be more expandable for new
advantage that attracts interest for employing ANN in feature features.
recognition is the robustness for tolerating exceptions or incom- Although ANN-based feature recognition systems have certain
plete input patterns, so it can be possible to recognize the non- advantages over conventional methods, the limitations are
orthogonal interacting features. obvious: their feature representation schemes still have a degree
Building a perfect input representation scheme is the most of ambiguity and limited applicability, such that, only a limited
difficult part of ANN-based feature recognition method. range of features and feature intersections can be recognized; most
Manufacturing features are characterized by both topological of ANN-based systems use supervised learning, which requires a
and geometrical information derived from the CAD model, while retraining for including new feature class.
neural networks typically use numerical values as input. Therefore,
this raises the problem of how to convert a solid model to a suitable Hybrid approach
input representation for neural network, since simple numerical
representations are not always sufficient to represent solid models. Hint-based, graph-based, volume decomposition and neural
The reviewed input representation schemes can be generalized networks are four basic and promising approaches in feature

Fig. 1. Overview of the proposed approach.


Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57 39

recognition field. They all have their advantages and limitations. performing only simple arithmetic operations, they can derive
Therefore, it is evident that a hybrid system adopting selective many kinds of knowledge or discover regularities through training
characteristics of these approaches could provide a constructive with input patterns that are difficult to describe adequately with
and practical solution to overcome the shortcomings of existing knowledge-based systems. Therefore, the proposed feature
feature recognition systems. recognition algorithm is independent of the features library. It is
To amend the graph-based method’s incapability of interacting expandable to any secondary features without any need to rebuild
features, [43] combined graph-based and hint-based methods. the algorithm. The general flowchart of the proposed approach is
[63] and [64] developed a similar graph-hint hybrid system to depicted in Fig. 1. At first, the input CAD models are converted into
extract subgraphs as feature hints. [65] directly connected a graph- HPM. Then, the HPM is simplified into attributed graph by
based and a hint-based system to get a hybrid system, in which the similarity clustering, and 2D histograms later by node embedding.
isolated and interacting features are processed respectively. To The histograms are finally sent to a 2D CNN for the classification.
take advantage of the strong ability of generalization and The details of our feature representation scheme, the node
robustness, ([53,54], Ding 2004, tried to integrate ANN as feature embedding strategy, and the CNN model implementation will be
classifier with inputs generated from existing graph-based introduced in following subsections.
representation schemes. [66] used the ANN to convert the non-
orthogonal feature into a number of triangular blocks for Feature representation
combining into a virtual orthogonal feature, which can be
recognized by graph-based method in the next step. The first and most important step of employing ANN is to
Due to the remarkable volumetric representation and exhaus- convert a solid model into a suitable input representation for the
tive nature of decomposition, some researchers attempted to neural network, since simple numerical representation is not
integrate graph-based and volume decomposition approach for always sufficient to represent geometry and topology informa-
better solution of interacting features. The most common type of tion stored in CAD models. A satisfactory input representation
volume-graph hybrid approach uses maximal volume decomposi- should at least have the following basic requirements: (1) it
tion to extract machining features and graph matching approach to contains all the necessary information for identifying the
recognize features, such as [67–69]. The other example is ([21]b), patterns; (2) it must be unambiguous, each class having unique
where the graph-based approach was used to extract features from representation; (3) it is implementable for computation. In this
solid models and the volume decomposition approach was proposed method, Heat Kernel Signature (HKS) is adopted for
incorporated to generate delta volumes and multiple interpreta- pattern representation.
tions of machining operation sequences. HKS is a pointwise feature descriptor for representing the
Although hybrid feature recognition systems eliminate some point's local and global geometric properties. It is based on heat
drawbacks of original approaches, there still remains limitations kernel, which is a fundamental solution to the heat equation. The
like difficulties to extend feature class, limited range of feature eigen-decomposition of heat kernel is expressed as
interaction, and high computational cost. As discussed above, most
X
1
of the existing feature recognition approaches were based on Ht ðx; yÞ ¼ eli t fi ðxÞfi ðyÞ ð3:1Þ
identification of the geometrical entities composing a feature, i¼1
together with their interrelationships, against certain predefined
where li and fi are the eigenvalues and the corresponding
set of rules or templates. Their target features were usually
eigenfunctions of the Laplace-Beltrami operator D over M. It
classified by some high-level characteristics such as function,
should be noted that heat kernel can be successfully approximated
usage, and manufacturing methods. Each type of methods has
by only a few of the eigenfunctions fi .
more or less limitations in feature recognition, especially for
Using the heat kernel, one can derive the quantity of heat
complex interacting feature recognition. To make some progress in
received by a local area Dx  M around a point x at a given time
this direction, in this article, a new method to use a shape signature
instance t. If assume that the initial heat distribution was a Dirac
to represent features and recognize them by 2D convolutional
delta distribution dx ðyÞ, i.e. only a source point x has an infinite
neural network is proposed. The next section presents details of
heat value and the other points have zero heat, and that there is no
the proposed method.
loss of the net heat, the heat value Rx received by the local area Dx
can be expressed as:
Proposed 2D CNN method
Z
In comparison to knowledge-based systems, machine learning Rx ðtÞ ¼ 1  Ht ðx; yÞdy ð3:2Þ
algorithms do not implement any logical operations explicitly. By MDx

Fig. 2. Example of heat persistence map and similarity clustering.


40 Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57

It is defined that the heat persistence value as the incremental neutral output format of FreeCAD files selected for the recognizer is
value of Rx , until the time where heat drops lower than a threshold *.step (IOS STEP 10303). Then these B-Rep models need to be
value m. It can be computed as the integral of the heat value Rx : tessellated for HPM computation. In order to ensure the
correspondence between the CAD models’ B-Rep representation
Zti
and meshes points, the mesh models are created by a mesh
Rv ðxÞ ¼ Rx ðtÞdt ð3:3Þ
generator named Triangle [71] instead of using CAD software.
0 In the computation experiments, the mesh size was uniform for
The threshold m is determined by the mean of persistence every model to make sure the heat persistence values are most
values of the points set at the time ti , which is the reference time comparable. The computational time cost of HPM and storage cost
when the relative standard deviation of the collection of heat of models depend on the number of vertices in the mesh model. In
persistence value RX reaches a peak value. the initial experiments, the mesh size was set at 1 mm, the number
After getting the heat persistence value for every vertex in a of vertices in a single model roughly range from 20,000 to 50,000.
mesh model, a heat persistence map can be obtained as the feature The size of 1 mm gives a good mesh resolution and ensures that
representation. However, there are normally thousands of nodes in features with small dimension have enough number of clusters to
the map, so it is too large to serve as the input of ANN. Hence, the represent the shape, and that the computational cost is acceptable.
HPM is divided into subsets by grouping adjacent vertices with Technically, when the trained model is applied to parts with large
similar heat persistence values. The clustering algorithm starts dimensions, the mesh size can be proportionally increased to
from the node having the highest heat persistence value. Then, the guarantee a fast recognition.
nodes connected to this start node will be clustered with it if their
heat persistence values are higher than the similarity limit. Feature extraction
Otherwise, the nodes will be used as new start node for another
clustering process. Fig. 2 shows an example of applying this Manufacturing features are normally represented by a set of
similarity clustering strategy. One can clearly note from Fig. 3 that, connected faces, which can be generated by material removing
as similarity percentage limit increases, the shape pattern is process. Therefore, before getting the cluster graph as the input to
clearer, but at the cost of more clusters. the neural network, the features need to be extracted from the
substrate first. Although we introduced a novel feature extraction
Feature selection method based on HKS in [25], its disadvantage of computational
complexity hinders the application to a large amount of feature
As mentioned in section 2, there are numerous feature database. Considering all of the target features are primitive
classification schemes based on the specific needs and interest features, a hint-based feature extraction method is implemented
of the researchers, but manufacturing feature is the most common on the basis of Fu’s work [16].
objective. In this article, a set of ten primitive machining features Hint-based feature extraction method is an effective way to
by reference to ISO STEP-AP 224 for university are selected. identify surface features that are of interest. The concept of hint
However, as a matter of fact, the selection of feature classes is can be defined as a pattern in the part boundary that provides a
independent of building the recognition framework. Under the trace for searching the existence of certain features. Here, the
same framework, the feature family can be easily extended to some characteristics of edge loops are utilized as hint to identify form
secondary features, such as through pocket, T-slot, tapered hole features. An edge loop is a set of connected edges that forms the
and so on. boundary of a surface. As shown in Fig. 5, an external loop is the
Ideally, the training dataset should be collected from real exterior boundary, while the internal loop is inside of the face sand
industrial parts, but the machine learning method usually needs a surrounding a depression or protrusion.
large number of training samples to ensure the generality and As shown in Fig. 6, the features are extracted by the following
robustness, so these features are generated by scripts. As shown in steps:
Fig. 4, the negative features lay in a cubic raw stock with 100 mm
side length, and the positive features are placed on a plate with 10 1 Read the geometry and topology entities from the generated
mm thick, 100 mm length and width. The size and position of each STEP file, including vertices, edges, boundary loops and faces of
feature are randomly set in a specific range. It should be noted that the solid model.
the heat kernel signature is invariant to translation, rotation and 2 Identify the hierarchical and adjacent relationship between
scale. Therefore, the substrate is fixed to origin and all faces are geometry entities.
aligned with the principal axes, and the features are only located 3 Identify the face type (internal or external) and the loop type
on the top face of the substrate. (inner, outer and hybrid).
The solid models used for feature representation are first 4 Search the feature by using the inner loop or hybrid loop in
created by a CAD modeling software called FreeCAD [70]. And the external face as hints.

Fig. 3. Examples of similarity clustering using different percentages.


Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57 41

Fig. 4. A list of 10 isolated features selected in this research, and the range of their dimensions.

5 Extract the feature by collecting the faces that are directly or


indirectly connected to the hint loop.
6 Classify the feature into negative or positive by identifying the
convexity (concave, convex and hybrid) of its entrance loop.

After feature extraction, the boundary information of the


manufacturing features is obtained. By combining with the heat
persistence clusters, the manufacturing features are further
represented by only clusters and their corresponding heat
persistence values. To be processed subsequent algorithms, the
feature representation is formatted by graph structure. Let the
graph be G = (V, E), where V is a set of persistence clusters, E is the
corresponding connectivity between clusters. The cluster set V can
be described by N  F attributes vector X, where N is the number of
clusters and F is number of cluster attributes. As shown in Fig. 7, the
edges E are encoded by an N  N adjacency matrix A.

Graph learning
Fig. 5. Internal loop and external loop.

Graphs are a powerful data structure that extensively used to


model structured data within computer science and related fields.
In this article, the basic idea is to frame 3D shape feature
recognition as graph learning problem (Fig. 8). Specifically, it is to
build a machine learning framework that can learn a function to be
used for classifying unknown shape features using a collection of
sample graphs.
The key of graph learning is how to convert the discrete graph
structures into informative and learnable features that can support
effective classification. However, CNNs are typically designed for
input data structure with meaningful or consistent order. For
example, images are composed of pixels that are aligned by the
spatial coordinates. Nevertheless, graphs do not have such
underlying spatial structure and the spatial dependence features.
They and their adjacency matrices are irregular, no-Euclidean, not
associated with spatial position and the notion of Euclidean
distance. Hence, generalizing neural network to graphs is a
challenging problem.
In order to transform the graphs into ordered tensor
Fig. 6. Pseudocode of Feature Extraction. representations for CNN, the first step is determining the node
42 Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57

sequences to make sure that nodes with similar structural roles in fixed 2D grids, where each grid is associated with a number
the graphs are positioned similarly in their tensor representations. indicating how many nodes locate in the grid. In Fig. 10, there are
Then the tensors are assembled and normalized so that their 40 2D histograms of dome feature to visually demonstrate that the
structural information can be comparable. At last, the normalized features in the same class have similar layout of 2D histograms.
tensors can serve as receptive fields of a convolutional neural
network. 2D CNN configuration
In the proposed graph learning framework, the graph nodes are Technically, the 2D histograms can be input to any kind of
arranged in order of descending heat persistence value. Heat CNNs that are suitable for images. In this research, the
persistence value is an efficient node attribute that can be used for convolution architecture in [76] is adopted as an example,
the arrangement of node sequence. It is directly based on the because it is known to be able to provide good prediction results.
Gaussian curvature of a surface, and is also closely related to As illustrated in Fig. 11, it is basically a LeNet-5 [77] with four
diffusion maps and diffusion distances [72], which means it parallel convolutional-pooling blocks followed by a fully con-
represents both the shape and relative position of a vertex. nected layer with 128 units. The filter sizes of convolutional layers
Therefore, it provides a consistent pattern for the node sequences. are 3  3, 4  4, 5  5, and 6  6 respectively. The first layer
convolutional-pooling has 64 filters, and the second layer has 96
Graph node embedding filters. The max pooling filter is 2  2. A softmax layer was added
The graph learning problem is formulated as: let G = (V, E) be a at the end to output the prediction of feature classes. The ReLU
given cluster graph, it is undirected, unweighted without self- function was used for activation, and regularization is achieved by
loops. Each graph can be represented by a n  n adjacency matrix a dropout rate of 0.3.
A, where n is the number of nodes (clusters). It is binary and A standard 10-fold cross-validation was used and repeated each
symmetric. Moreover, a n2 column vector X is defined to denote fold 3 times in all of the experiments. The dataset was shuffled into
the node attributes: heat persistence value and quantity propor- a 90-10 split of the training-validation subsets for each fold. Batch
tion of vertices. X will be used as extra feature channel to be size was set to 32, and the learning rate and number of training
normalized and compressed with the node embedding. Finally, epochs was optimized by Adam on each fold. The learning model
each graph G in the training dataset has a corresponding label that was implemented in Python3.6 using Keras Library with tensor-
can be used for learning a model to predict the class of an unseen flow backend. All experiments were run on the same machine.
graph.
The goal of node embedding is to project A and X into low- Parameters
dimensional vectors that summarize the structure and feature First of all, experiments were conducted on datasets with
information contained in them. In the node embedding space, the different number of samples, and the validation accuracy was
Euclidean distance between two points should be proportional to improved as the number increased. Finally, the number 5,000 is
the similarity of graph structure role of the corresponding graph picked for each feature class to balance the good results and
nodes [73]. computational expense. Therefore, the entire dataset consists of 10
The problem of node embedding can be taken as a machine features and 50,000 models.
learning task independently. Fig. 9 illustrates the process of node Before giving the final results of feature recognition, the
embedding visually. The nodes are colored with respect to heat parameters of input pre-processing were tuned to find out their
persistence value. The node embedding vectors are learned by impacts on the classification results. When a target parameter was
Novd2vec [74], which is a neutral representation learning being tested, all other parameters shared the same setting. The
framework. It can learn a mapping of nodes as points to a low- performance comparison between different values of parameter
dimensional vector space. The basic idea of learning is to optimize were based on the validation accuracy of trained CNN model. In
this mapping by minimizing the information loss between addition, the tuning experiments are only employed on negative
geometric relationships in the learned space and the structure features for simplicity.
of the original graph [75]. After obtaining the embedding vectors, Similarity of clustering: the percentage similarity is used to
in order to get fixed length input vectors for downstream graph control the number of clusters in a single feature. The ideal
learning task, principal component analysis (PCA) is used to reduce condition is that the models in the same feature class have similar
the dimensions of the learned embedding. number of clusters, but the clustering process is stochastic and
The last step is to transform PCA node embedding into an the number of clusters varies. Therefore, a minimum similarity
image-like histogram. To mimic the schema of CNNs on images, and minimum number of clusters for the clustering process is set.
every two columns of the PCA node embedding to 2D plane are The minimum similarity ensures the features with large dimen-
projected and the projections are turned into a square and number sion can have a good resolution of clusters to represent the shape,

Fig. 7. Example of feature extraction and graph representation.


Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57 43

Fig. 8. Graph Learning Framework.

Fig. 9. Process of node embedding.


44 Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57

Fig. 10. Randomly picked 2D histograms of dome feature from the training dataset.

Fig. 11. 2D CNN architecture.


Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57 45

while the minimum cluster number guarantees the features with feature channel, so the maximum number of histograms is limited
small size having sufficient clusters for node embedding vectors. by the minimum number of graph nodes in the dataset.
If the clustering results do not satisfy the requirement of Meanwhile, since the node embedding vectors are sorted by
minimum cluster number, the similarity will be increased by a PCA, the information contained in the histograms is less and less.
small number and redo the clustering. 3 pairs of minimum Although providing more information to the CNN may help it to
similarity and cluster number: [85%, 10], [90%, 15], [95%, 20], are classify the input features, more input channels will increase the
tried. From the results listed in Fig. 12, it is easy to conclude that number of parameters in the neural network and make it more
more clusters get better results. However, it is worth to note that, difficult to learn. In addition, more input channels also increase the
because of uniform mesh and dimension variations, it needs a high computational cost of training process. Therefore, we conduct
similarity percentage to get enough clusters for features with small experiments to determine the number of input channels according
size. Therefore, if want more than 20 clusters for every model, to the validation accuracy.
there is a need to decrease the mesh size to have more vertices in It is worthy to be noted that, since the first channel is composed
the model. of heat persistence signature and the vertices number percentage,
Node2vec: the node2vec algorithm involves a number of it is the most informative one. As shown in Fig. 14, the curve does
parameters. The conclusion of parameter performance in [74] is not substantially go up, which means the first channel may have
examined. The number of features and the node’s neighborhood sufficient information for classification purpose. This also proves
parameters (number of walks r, walk length l, and neighborhood that HPM is a great feature representation. For the rest of channels,
size k) increasingly improve the performance. Therefore, the which represents the structure of cluster graphs, they are
recommended settings: d = 128, r = 10, l = 80, k = 10, are used. For beneficial to the classification, but also could deteriorate the
the in-out parameter p and the return parameter q, which controls learning ability of CNN. In general, experiment of 5 channels gave
the sampling strategy of random walk, different sets of [p, q]: [4, the best performance, but 2 channels also worked well and had less
0.25], [2, 0.5], [1, 1], [0.5, 2], [0.25, 4], are tested with all same other computation cost. Therefore, testing with different channels is
parameters. As shown in Fig. 13:, the parameter p and q have continued in subsequent experiments.
relatively low impacts on the results of validation accuracy. It can Histogram size: the size of histogram is decided by both of the
be speculated it was because the structure of cluster graphs is size of histogram bins and the numeric range of the node
small, simple and stable. Therefore, p = q = 1 is selected for the rest embedding vectors. Furthermore, the size of bins is determined by
of the experiments. how many intervals the numeric range are divided into. Therefore,
Number of input channels: the 2D histograms are produced by the size of bins can be used to control the size of histogram. For
slicing every 2 columns of the node embedding vectors as a single example, if the vector values range from -2.9 to 2.6, and every

Fig. 12. Validation accuracy with different sets of similarity requirements.


46 Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57

Fig. 13. Validation accuracy with different sets of in-out parameter p and the return parameter q.

single scale 1 is divided into 10 intervals, the histogram size is 55  slot and step. Because the HPM is invariant to rotation, and it
55, since j2:6  ð2:9Þj  10 ¼ 55. Intuitively, the smaller the bins, doesn’t take consideration of number of geometry entities.
the higher resolution of the histogram, and the more differentiable Therefore, a step feature also can be viewed as a V-slot feature.
the features are. However, smaller bins produce bigger receptive For positive features, it can find that the confusions are mainly
fields, such that the CNN has more input nodes and parameters, among pyramid, cone, dome. They are also similar shapes; all of
and the training could be more difficult and expensive. In order to them have decreasing cross-section area. The results discussed
select an appropriate bin size, we compare the validation accuracy above show that the proposed method is capable to recognize
of CNN with different bin sizes. Based on the results shown in shape features with high accuracy. To further validate the proposed
Fig. 15, it can be concluded that 1/10 is the best choice of bin size. method, method demonstration is presented in the following
sections via the recognition of interacting shape features, which
Results are quite difficult for existing feature recognition methods.
In the initial experiments, it is tried to train the CNN model
with a dataset including both of negative and positive features. Interacting feature recognition
However, as listed in Table 1, the result of ten features training
together was not good as separated. By analyzing the distribu- Feature interaction is the most critical issue in the development
tion of validation loss shown in Fig. 16, it can find the loss are of a new feature recognition method. It is the decisive factor that
mainly between boss and blind hole. Because they are basically whether feature recognition technology can be applied to real
the same shape with different volumetric space, so that their industrial components or not. As reviewed in section 2, for those
HPM patterns are very similar. This can be considered as a existing methods, feature interaction is still a tough problem. The
limitation of HPM. However, differentiating negative and underlying difficulty is that feature interactions destroyed the
positive features can be easily achieved by identifying the indispensable boundaries that characterize a feature, such as edges
concavity/convexity of their entrance loops in the feature and faces. However, for most feature recognition applications,
extraction step. Fig. 18 shows their convergence of loss function explicit and unique feature descriptions in topology and geometry
and accuracy with epochs. are necessary and intolerant to exceptions. Fig. 19 shows an
Two independent 10-fold trainings of negative and positive example of feature interaction between pocket feature and slot
features with the same parameter setting are experimented. As a feature. The figure explains how the topology of their face graph
preliminary research, the classification accuracy of the proposed representations changes by the feature interaction and decompo-
approach is very competitive. The distribution of validation loss is sition. The decomposed slot feature has exactly the same graph as
shown in Fig. 17. For negative features, the biggest loss is between the original pocket feature.
Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57 47

Fig. 14. Validation accuracy with different number of feature channels.

Cluster graph of interacting feature the same or different depths. Therefore, there could be eight types
of feature interaction. Taking example of slot and pocket feature,
As introduced above, the heat persistence method describes a Fig. 21 shows the corresponding cluster graphs of some feature
feature beyond the edges and faces. It is implicit and flexible. The interactions. To illustrate in an intuitive way, the clustering
distribution of clusters follows the shape variation. For example, on similarity is relatively low, so the number of clusters won’t be too
tip like shapes, the cluster wraps all the faces that compose the tip. much.
On bar shapes, the clusters are like rings that circle these After analyzing the feature interaction scenarios, it can be
indispensable faces. When small parts of edges and faces are noticed that the changes of HPM is mainly caused by the
missing, the clusters they belong to might be still exist. Moreover, convexity/concavity switch between new edges and the dis-
even when a face is split into multiple faces, they could still share appeared ones. For instance, when the interaction destroys part of
the same cluster node. As illustrated in Fig. 20, feature interaction a concave edge, replace by a convex one, two corners are created
may split a cluster into multiple ones, but the symmetry and between the concave and convex edges, and new clusters will be
gradualness of the entire cluster graph is constant. These patterns produced around the corner. In Fig. 22, the interactions of pocket
can be traced to compensate the missing of original boundaries. and slot features are taken as example to illustrate the effects of
Adding the robustness of ANNs to handle exceptions and interaction on HPM. In the demonstrations, the effects of
incomplete input patterns, it is promising to recognize the interactions based on faces are investigated. They can be
primitive features in the interactions. It has been known that summarized into five typical types of variations: (1) part of the
the difficulties of interacting feature recognition arise from the fact edge's concavity/convexity of is switched, (2) part of the edge/
that feature interaction lead to a vast body of heterogeneous, face is missing, (3) entire edge’s concavity/convexity of is
uncertain and inherently inconsistent topology and geometry switched, (4) face split and (5) hole in the face. As it can be
information, which limits the development of expert systems. For seen, in the first type, although there is not boundary change in
artificial intelligence, on the contrary, exponential complexity, the face, part of the edge's concavity/convexity of is switched, so
uncertainty, inconsistency, and interaction of various kinds of new clusters are created in the face. In the second type, part of the
knowledge are treated as inherent attributes of complex problems. edges is replaced by new edges with opposite concavity/
In the experiments, it assumes that the two interacting feature convexity. In the third type, entire of an edge is replaced by a
share the same entry face. The interacting relationship between new one with opposite concavity/convexity. For the fourth and
two negative features is classified into four types: touch, intersect, fifth type, four new corners are created in the face. It also should
cross and overlap. These four types of relationship can occur with be note that the clusters are dynamic, the numbers and
48 Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57

Fig. 15. Validation accuracy with different bin size.

Table 1
Average validation accuracy of 10-fold 3-repeats training on three different datasets.

Dataset Validation accuracy

Total Slot Step Pocket Through hole Blind hole Boss Pyramid Protrusion Cone Dome
All 98.88  0.18 99.67  0.46 99.48  0.60 99.61  0.28 99.14  0.82 94.11  1.74 98.28  0.59 99.87  0.21 99.32  0.36 99.77  0.21 99.51  0.69
Negative 99.82  0.18 99.98  0.06 99.97  0.07 99.83  0.18 99.70  0.43 99.62  0.29 – – – – -
Positive 99.40  0.14 – – – – – 98.34  0.66 99.82  0.26 99.63  0.38 99.74  0.27 99.48  0.61

Fig. 16. Distribution of validation loss for the 10-fold training with 10 feature classes.
Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57 49

Fig. 17. Distribution of validation loss for two independent 10-fod trainings on negative and positive features.

Fig. 18. Convergence of loss and accuracy for both learning and validation.

connectivity are decided by the clustering process. But their negative features are more common and diverse. All five positive
overall pattern is always according the shape. The colors are features have only one entrance face. The interactions of positive
marked relatively by their heat persistence values. Using these features are simpler than the negative ones. On the other hand,
examples of cluster graphs, it can visually prove that the HPM positive and negative features are similar shapes but with different
characterizes the feature not by predefined boundaries by but by volumetric space. In feature interactions, all primitive features
the whole shape. only have three types of defects: edge missing, hole in the face, and
feature split.
Feature selection In Fig. 23, all the 15 pairs of feature interaction used for
experiments are listed. The size and location of each primitive
In this article, only the interactions between two negative feature are selected randomly and independently in same range as
features are used to validate the proposed idea. On one hand, the isolated features. The models without interactions are removed to
50 Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57

Fig. 19. Feature interaction and decomposition illustrated by graph-based representations.

Fig. 20. Feature interaction and decomposition illustrated by the cluster graphs.

ensure enough samples of feature interaction. For each pair of bottom of a pocket feature, otherwise it is the bottom of a slot
features, there could be four types of relationship: touch, cross, feature.
intersect, and within. 4 Then the bottom surfaces are used as core to reconstruct the
primitive features. Angles between a bottom face and its
Feature extraction adjacent faces are determined by cross product of face normal
directions. If there is a concave edge, the corresponding adjacent
The hypothesis is that HPM make it possible to recognize face is added to the face collection of target feature.
incomplete features by artificial neural network. Therefore, before 5 The final step is to combine split features. Primitive features
the graph learning, it needs to extract and decompose the might be split if they cross each other. Instead of recognizing
interacting feature into primitive features first. In the following features separately, we combine them together as a single input.
steps, the geometric reasoning to decompose interacting features If the number of decomposed features from an interacting
is described. feature is larger than two, the features in same primitive class
are examined for combination. If their bottom surfaces are on
1 The first step is to extract the interacting feature entirely from the same plane, and they have one (step feature) or two (slot and
the raw stock following the steps in section 3.3. pocket feature) wall face on the same plane, they are combined a
2 The second step is to identify cylindrical faces from the face single decomposed feature.
collection of an interacting features. If it has more than one
entrance face, it is a through hole, otherwise it is a blind hole. If it
is a blind hole, the faces connected to the bottom of the Results
cylindrical face are combined as bottom surface of the blind
hole. The “bottom” of a cylindrical face is determined by the For the dataset of interacting features, a number of 5,000 is
distance between entrance face and the target face or edge. selected for each set of feature interaction. Hence, 150,000 samples
3 The next step is to identify bottom surface from the rest faces in are obtained totally of five decomposed features. Considering the
the collection. Then check the adjacent faces of the bottom large variations in the feature interactions, this dataset is not
surface, if it has two adjacent faces that are entrance face and extraordinary, but sufficient to validate the idea. In future works,
connected to each other, this bottom surface belongs to a step experiments will be conducted on larger dataset with more
feature. If it has no entrance face in the adjacent faces, it is the complicated interactions.
Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57 51

Fig. 21. Topology changes of feature interaction and decomposition illustrated by the cluster graph representations.

In the initial experiment, it attempted to classify the decom- shapes. Therefore, it is logical to conclude that the feature
posed features with the CNN model trained with isolated features. representation based on HPM is well capable of handling boundary
The average accuracy of entire interacting feature dataset is loss of interacting features. Furthermore, the performance of
52.36%. The individual accuracy of five decomposed features is slot interacting features recognition might be improved by the same
81.09%, step 33.27%, pocket 51.30%, through hole 62.10%, blind hole way as recognition of isolated features, such as finer mesh and
34.05%. From these numbers, it can conclude that the CNN model better CNN structure. Fig. 26 shows their convergence of loss
of isolated features is not able to identify the patterns of function and accuracy with epochs.
decomposed features. However, the results are still better than
randomly guessing, assumingly because some decomposed Case study
features have the similar HPM pattern as the isolated ones.
To recognize interacting features, there could be two options: In order to demonstrate the general applicability of the
(1) First attempts to classify all extracted features with CNN proposed recognition framework, it is applied on several example
models for isolated features. If the probability of classification is parts from NIST (National Institute of Standards and Technology)
not good enough, it will be classified again by another CNN model, design repository. The example parts are composed of slots, steps,
which is specifically trained by decomposed interacting features. through holes and blind holes. Most of these features interact with
(2) Using a single CNN model that trained for both interacting others and the rest few are isolated. To display the robustness of
features and isolated features to classify the extracted features. the proposed framework, these models are kept as they are, even
Ideally, the latter might be better choice, since the isolated features though they contain round edges and interactions of multiple
could serve as an intermedia among different types of feature features that are not covered by the training dataset. The STEP files
interactions. Therefore, two different training experiments are were downloaded and processed by the same framework as
employed. One is only using the 150,000 decomposed interacting introduced in previous sections. Then the decomposed features are
features, and the other is combined with 50,000 isolated features classified by the CNN models trained by the dataset including both
(10,000 for each feature). As listed in Table 2, the results of two interacting and isolated features.
datasets are almost the same. Therefore, it may conclude that the The results are shown in Fig. 27. The prototype system
number of decomposed interacting features that have the similar successfully recognized all isolated features and most of interact-
HPM pattern as isolated features is sufficient for learning ing features. The total classification accuracy is 96.88%(62/64). To
Fig. 24 and Fig. 25 demonstrate the distribution of validation be specific, in Fig. 27 (d), a step feature having three holes was not
loss for the corresponding training. Comparing with Fig. 17, it can correctly identified. For Fig. 27 (e), the blind hole interacting with
be observed that the distributions of wrong predictions for nine through holes was not successfully recognized. It should be
interacting features are similar to the loss distributions of isolated noted that, both of these two cases are interactions of three or
features’ training. The confusions are still mostly among similar more features, which are not covered in the training dataset. That
52 Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57

Fig. 22. The effect of feature interaction on HPM pattern.


Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57 53

Fig. 23. 15 pairs of feature interaction between negative features.

Table 2
Average validation accuracy of 10-fold 3-repeats training on dataset of decomposed interacting features and dataset of combining with isolated feature.

Dataset Validation accuracy

Total Slot Step Pocket Through hole Blind hole Boss Pyramid Protrusion Cone Dome
All 98.88  0.18 99.67  0.46 99.48  0.60 99.61  0.28 99.14  0.82 94.11  1.74 98.28  0.59 99.87  0.21 99.32  0.36 99.77  0.21 99.51  0.69
Negative 99.82  0.18 99.98  0.06 99.97  0.07 99.83  0.18 99.70  0.43 99.62  0.29 – – – – –
Positive 99.40  0.14 – – – – – 98.34  0.66 99.82  0.26 99.63  0.38 99.74  0.27 99.48  0.61

Fig. 24. Distribution of validation loss for the dataset of 150,000 decomposed interacting features.

means their classification are not guaranteed in the first place. be seen that the CNN trained by generated samples successfully
However, there are more cases of multi-feature interaction are recognize most features of real industrial components. The
correctly recognized. Although we cannot conclude that the performance of the proposed CNN-based feature recognition
trained CNN is able to recognize multi-feature interactions, it can methods is validated. In addition, these exceptions can be
54 Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57

Fig. 25. Distribution of validation loss for the dataset of combining decomposed interacting features and isolated features.

Fig. 26. Convergence of loss and accuracy for both learning and validation.

addressed by extending the range of considered interaction types interacting features without recovering the missing geometric
of multiple features. The proposed CNN-based feature recognition boundaries. The large scale of random sampled dataset verifies that
framework could easily accommodate new required interaction it has great robustness to incomplete feature representations. The
types by adding examples to the training dataset. excellent accuracy of recognizing decomposed features also make
In this section, the capability of the proposed feature the feature recognition framework outperform any existing
recognition framework verified to be able handle feature methods. For instance, in most of the existing approaches, the
interactions. It is the first one that successfully recognize geometric and topological variations especially the unseen ones
Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57 55

Fig. 27. Feature recognition on five complex parts from NIST design repository.

often lead to unsuccessful recognition of the test cases. Neverthe- Conclusion and future works
less, feature interaction is still a difficult problem. Since the feature
interactions may involve multiple features, the possible ways of The objective of this research was to use artificial intelligence
interaction increase factorially with adding more features to the technology to develop a feature recognition system that could
combination. Moreover, a large variety of feature interactions may overcome the drawbacks in this domain. First, a novel feature
need huge training cost. representation scheme based on heat kernel signature is
56 Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57

developed. Different from conventional representation schemes, it directly removing the vertices in mesh models instead of cutting by
goes beyond geometric entities, describes both the topology and another feature.
geometry characteristics of the shape by a concise HPM. Besides In summary, the future research goal is to build a more
informative and learnable, HPM also has advantages like robust- complete feature recognition system with a hierarchical structure
ness of topology and geometry variations, translation, rotation and to support multiple feature definitions. For instance, at the top
scale invariance. Using the technique of percentage similarity level, the features can be classified as primitive features as defined
clustering, the HPM is then simplified into cluster graph. in section 3. At the lower level, features in the same primitive class
Thereafter, the feature recognition problem is successfully framed can be identified based on the geometry variations, such as T-slot,
as a graph learning problem. In order to transform the cluster V-slot, dovetail slot, etc. Also, incorporating the freeform features
graphs into suitable input of CNN for effective classification, the is very attractive to this domain. Considering the strengths of HPM
node embedding method is used to project the cluster graphs and and CNN, it is believed that the proposed method can be applied to
their heat persistence values into low-dimensional tensors. Then the recognition of 3D real-life object. In addition, to optimize the
the tensors are assembled and normalized so that their structural algorithms and CNN structure, have more sophisticated training
information can be comparable. At last, the 2D histograms of datasets and more rigorous experiments, to continue to enhance
tensors serve as receptive fields of a convolutional neural network. the implementation to other restrictions of feature recognition,
A large dataset of CAD models including isolated and interacting and to extend the results and application to include more
features is generated randomly by scripts for training the CNN processing techniques and exploring for other possibilities in
models. The parameters involved in the preprocessing step were the CAD/CAM domain are future research intension of the authors.
determined by experiments. The details of feature extraction and
classification were presented. The results of recognizing isolated References
features have shown that the proposed method has better
[1] Gupta, S.K., 1994, Automated manufacturability analysis of machined parts..
performance than any existing ANN based approaches. The feature
[2] Regli, W.C., Gupta, S.K., Nau, D.S., 1994, Feature Recognition for Manufactur-
recognition framework offers the advantages of learning and ability Analysis..
generalization. It is independent of feature selection and could be [3] Shi, Y., Zhang, Y., Baek, S., De Backer, W., Harik, R., 2018, Manufacturability
extended to various features without any need to redesign the analysis for additive manufacturing using a novel feature recognition tech-
nique. Computer-Aided Design & Applications, 14:1–14. http://dx.doi.org/
algorithm. The results of recognizing interacting features indicate 10.1080/16864360.2018.1462574org/10.1080/16864360.2018.1462574.
that our feature representation scheme is effective in handling the [4] Xú, S., Anwer, N., Mehdi-Souzani, C., Harik, R., Qiao, L., 2016, STEP-NC based
boundary loss caused by feature interactions. The state-of-the-art reverse engineering of in-process model of NC simulation. The International
Journal of Advanced Manufacturing Technology, 86/9–12: 3267–3288.
performance of interacting features recognition has been im- [5] Allada, V., Anand, S., 1995, Feature-based modelling approaches for integrated
proved. manufacturing: state-of-the-art survey and future research directions. Inter-
Although the proposed feature recognition method shows great national Journal of Computer Integrated Manufacturing, 8/6: 411–440.
[6] Yildiz, A.R., Öztürk, N., Kaya, N., Öztürk, F., 2003, Integrated optimal topology
achievements in overcoming the drawbacks of existing methods, design and shape optimization using neural networks. Structural and Multi-
there are several limitations and issues that need further disciplinary Optimization, 25/4: 251–260.
investigation. Firstly, the heat kernel is invariant to rotation, so [7] Zhang, Y., Bernard, A., Gupta, R.K., Harik, R., 2016, Feature based building
orientation optimization for additive manufacturing. Rapid Prototyping
it cannot differentiate the features having the same shape but Journal, .
defined based on the direction, such as V-slot and step. Similarly, [8] Chandrasegaran, S.K., Ramani, K., Sriram, R.D., Horváth, I., Bernard, A., Harik, R.
since HPM only represents surfaces, it also has difficulties to make F., Gao, W., 2013, The evolution, challenges, and future of knowledge repre-
sentation in product design systems. Computer-Aided Design, 45/2: 204–228.
a distinction between features with the same shape but defined in
[9] Han, J., Kang, M., Choi, H., 2001, STEP-based feature recognition for
different volume spaces, just like pocket and protrusion. Therefore, manufacturing cost optimization. Computer-Aided Design, 33/9: 671–686.
sometimes there is a need of additional mechanisms to ensure [10] Harik, R., Capponi, V., Lombard, M., Ris, G., 2006, Enhanced functions support-
accurate recognition. For instance, the geometric reasoning (inner ing process planning for aircraft structural parts. Computational Engineering
in Systems Applications, 2:1259–1266.
loop) is adopted to differentiate negative and positive features. [11] Harik, R.F., Derigent, W.J.E., Ris, G., 2007, Spécifications de fonctions pour un
Secondly, HPM is computed based on the discretized mesh système d’aide à la génération automatique de gamme d’usinage: application
vertices. Therefore, in order to fully describe the surface, it requires aux pièces aéronautiques de structure, prototype logiciel dans le cadre du
projet RNTL USIQUICK [Université Henri Poincaré-Nancy I]. Computer-Aided
sufficient number of vertices. Since uniform mesh method is used Design and Applications, . http://dx.doi.org/10.3722/cadaps.2008.953-962.
to tessellate the solid model, if there is a small surface in the model, (Vol. 5, Issue 6).
either it can’t be well represented by HPM, or the mesh size has to [12] Harik, R.F., Derigent, W.J.E., Ris, G., 2008, Computer aided process planning in
aircraft manufacturing. Computer-Aided Design and Applications, . http://dx.
be very small. It has been proven that finer meshes has better doi.org/10.3722/cadaps.2008.953-962. (Vol. 5, Issue 6, pp. 953–962).
accuracy, but it takes more computational cost. The other possible [13] Lin, A.C., Lin, S.-Y., 1998, A volume decomposition approach to process planning
solution is to use ununiformed mesh and add cluster area to the for prismatic parts with depression and protrusion design features. Interna-
tional Journal of Computer Integrated Manufacturing, 11/6: 548–563.
node attributes. [14] Narang, R.V., 1996, An application-independent methodology of feature rec-
Thirdly, the design of convolutional neural networks is ognition with explicit representation of feature interaction. Journal of Intelli-
preliminary. It is a classical structure designed for recognizing gent Manufacturing, 7/6: 479–486.
[15] Wang, M.-T., Chamberlain, M.A., Joneja, A., Chang, T.-C., 1993, Manufacturing
hand-writing numbers. Therefore, it is possible to improve the
feature extraction and machined volume decomposition in a computer-inte-
performance of the feature recognition framework using the state- grated feature-based design and manufacturing planning environment. Com-
of-the-art machine learning algorithms. puters in Industry, 23/1–2: 75–86.
Fourthly, there is still large space for improving the solution of [16] Fu, M.W., Ong, S.-K., Lu, W.F., Lee, I.B.H., Nee, A.Y.C., 2003, An approach to
identify design and manufacturing features from a data exchanged part model.
the feature interaction problem. It has been proved that HPM is Computer-Aided Design, 35/11: 979–993.
capable to handle the boundary loss of feature interactions, but the [17] Harik, R.F., Sahmrani, N., 2010, DFMA+, a quantitative DFMA methodology.
CNN model is trained by simple interacting samples between two Computer-Aided Design and Applications, 7/5: 701–709.
[18] Abouel Nasr, E.S., Kamrani, A.K., 2006, A new methodology for extracting
negative features. Following this training method, in order to be manufacturing features from CAD system. Computers and Industrial Engineer-
more comprehensive, it needs to incorporate more complex ing, . http://dx.doi.org/10.1016/j.cie.2006.08.004.
interactions of multiple features, which could make the training [19] Dimov, S.S., Brousseau, E.B., Setchi, R., 2007, A hybrid method for feature
recognition in computer-aided design models. Proceedings of the Institution
cost unacceptable. However, since the HPM deals with the of Mechanical Engineers Part B: Journal of Engineering Manufacture, 221/1:
interactions at vertex level, it can build the training dataset by 79–96.
Y. Shi et al. / CIRP Journal of Manufacturing Science and Technology 30 (2020) 36–57 57

[20] Jones, T.J., Reidsema, C., Smith, A., 2006, Automated Feature Recognition [48] Sakurai, H., Chin, C.-W., 1994, Definition and recognition of volume features for
System for supporting conceptual engineering design. International Journal process planning.(Vol. 20, pp. 65–80) Elsevier.
of Knowledge-Based and Intelligent Engineering Systems, 10/6: 477–492. [49] Woo, Y., Sakurai, H., 2002, Recognition of maximal features by volume de-
[21] Wong, T.N., Lam, S.M., 2000, Automatic recognition of machining features from composition. CAD Computer Aided Design, . http://dx.doi.org/10.1016/S0010-
computer aided design part models. Proceedings of the Institution of Mechan- 4485(01)00080-X.
ical Engineers Part B: Journal of Engineering Manufacture, 214/6: 515–520. [50] Chen, Y.H., Lee, H.M., 1998, A neural network system for two-dimensional
[22] Han, J.H., Pratt, M., Regli, W.C., 2000, Manufacturing feature recognition from feature recognition. International Journal of Computer Integrated Manufactur-
solid models: A status report. IEEE Transactions on Robotics and Automation, . ing, 11/2: 111–117. http://dx.doi.org/10.1080/095119298130859.
http://dx.doi.org/10.1109/70.897789. [51] Jun, Y.T., Raja, V.H., 2002, Extracting geometric attributes directly from scanned
[23] Shi, Y., Zhang, Y., Xia, K., Harik, R., 2020, A critical review of feature recognition data sets for feature recognition. International Journal of Computer Integrated
techniques. Computer-Aided Design and Applications, 17/5: 861–899. http:// Manufacturing, 15/1: 50–61.
dx.doi.org/10.14733/cadaps.2020.861-899. [52] Ding, L., Yue, Y., 2004, Novel ANN-based feature recognition incorporating
[24] Verma, Arvind Kumar, Rajotia, S., 2010, A review of machining feature recog- design by features. Computers in Industry, 55/2: 197–222.
nition methodologies. International Journal of Computer Integrated [53] Nezis, K., Vosniakos, G., 1997, Recognizing 212D shape features using a neural
Manufacturing, 23/4: 353–368. network and heuristics. Computer-Aided Design, . http://dx.doi.org/10.1016/
[25] Harik, R., Shi, Y., Baek, S., 2017, Shape Terra: mechanical feature recognition S0010-4485(97)00003-1.
based on a persistent heat signature. Computer-Aided Design and Applica- [54] Prabhakar, S., Henderson, M.R., 1992, Automatic form-feature recognition
tions, 14/2: 206–218. http://dx.doi.org/10.1080/16864360.2016.1223433. using neural-network-based techniques on boundary representations of solid
[26] Hummel, K.E., 1989, Coupling rule-based and object-oriented programming models. Computer-Aided Design, . http://dx.doi.org/10.1016/0010-4485(92)
for the classification of machined features. NASA STI/Recon Technical Report 90064-H.
N, 90. [55] Hwang, J.-L., 1992, Applying the perceptron to three-dimensional feature
[27] Donaldson, I.A., Corney, J.R., 1993, Rule-based feature recognition for 2 5D recognition..
machined components. International Journal of Computer Integrated [56] Lankalapalli, K., Chatterjee, S., 1997, Feature recognition using ART2: a self-
Manufacturing, 6/1–2: 51–64. organizing neural network. Journal of Intelligent Manufacturing, 8/3: 203–214.
[28] Vosniakos, G.C., Davies, B.J., 1993, A shape feature recognition framework and [57] Marquez, M., Gill, R., White, A., 1999, Application of neural networks in feature
its application to holes in prismatic parts. The International Journal of Ad- recognition of mould reinforced plastic parts. Concurrent Engineering-Re-
vanced Manufacturing Technology, 8/6: 345–351. search and Applications, . http://dx.doi.org/10.1177/1063293X9900700204.
[29] Nasr, E.S., Abouel Khan, A.A., Alahmari, A.M., Hussein, H.M.A., 2014, A feature [58] Sunil, V.B., Pande, S.S., 2009, Automatic recognition of machining features
recognition system using geometric reasoning. Procedia CIRP, 18:238–243. using artificial neural networks. The International Journal of Advanced
http://dx.doi.org/10.1016/j.procir.2014.06.138. Manufacturing Technology, 41/9–10: 932–947. http://dx.doi.org/10.1007/
[30] Chan, A.K.W., Case, K., 1994, Process planning by recognizing and learning s00170-008-1536-z.
machining features. International Journal of Computer Integrated Manufactur- [59] Li, W.D., Ong, S.K., Nee, A.Y.C., 2000, Recognition of overlapping machining
ing, 7/2: 77–99. http://dx.doi.org/10.1080/09511929408944597. features based on hybrid artificial intelligent techniques. Journal of Engineer-
[31] Bhandarkar, M.P., Nagi, R., 2000, STEP-based feature extraction from STEP ing Manufacture, 214/8: 739–744.
geometry for Agile Manufacturing. Computers in Industry, 41/1: 3–24. http:// [60] Öztürk, N., Öztürk, F., 2001, Neural network based non-standard feature
dx.doi.org/10.1016/S0166-3615(99)00040-8. recognition to integrate CAD and CAM. Computers in Industry, . http://dx.
[32] Gao, J., Zheng, D.T., Gindy, N., 2004, Extraction of machining features for CAD/ doi.org/10.1016/S0166-3615(01)00090-2.
CAM integration. The International Journal of Advanced Manufacturing Tech- [61] Wu, M.C., Jen, S.R., 1996, A neural network approach to the classification of 3D
nology, 24/7–8: 573–581. prismatic parts. The International Journal of Advanced Manufacturing Tech-
[33] Verma, A.K., Rajotia, S., 2008, A hint-based machining feature recognition nology, 11/5: 325–335.
system for 2.5D parts. International Journal of Production Research, . http://dx. [62] Chuang, J., Wang, P., Wu, M., 1999, Automatic classification of block-shaped
doi.org/10.1080/00207540600919373. parts based on their 2D projections. Computers and Industrial Engineering, .
[34] Xu, X., Hinduja, S., 1998, Recognition of rough machining features in 212D [63] Rahmani, Keyvan, Arezoo, B., 2006, Boundary analysis and geometric comple-
components. Computer-Aided Design, 30/7: 503–516. tion for recognition of interacting machining features. CAD Computer Aided
[35] Nasr, Emad S. Abouel, Kamrani, A.K., 2006, A new methodology for extracting Design, 38/8: 845–856. http://dx.doi.org/10.1016/j.cad.2006.04.015.
manufacturing features from CAD system. Computers & Industrial Engineer- [64] Rahmani, K., Arezoo, B., 2007, A hybrid hint-based and graph-based framework
ing, 51/3: 389–415. for recognition of interacting milling features. Computers in Industry, . http://
[36] Vandenbrande, J.H., Requicha, A.A.G., 1993, Spatial reasoning for the automatic dx.doi.org/10.1016/j.compind.2006.07.001.
recognition of machinable features in solid models. IEEE Transactions on [65] Verma, Arvind Kumar, Rajotia, S., 2009, A hybrid machining Feature Recogni-
Pattern Analysis and Machine Intelligence, 15/12: 1269–1285. tion system. International Journal of Manufacturing Research, 4/3: 343–361.
[37] Han, J., Requicha, A.A.G., 1997, Integration of feature based design and feature [66] Lam, S.M., Wong, T.N., 2000, Recognition of machining features-a hybrid
recognition. Computer-Aided Design, 29/5: 393–403. approach. International Journal of Production Research, 38/17: 4301–4316.
[38] Joshi, S., Chang, T.C., 1988, Graph-based heuristics for recognition of machined [67] Tseng, Y.J., Joshi, S.B., 1994, Recognizing multiple interpretations of interacting
features from a 3D solid model. Computer-Aided Design, 20/2: 58–66. http:// machining features. Computer-Aided Design, 26/9: 667–688. http://dx.doi.
dx.doi.org/10.1016/0010-4485(88)90050-4. org/10.1016/0010-4485(94)90018-3.
[39] Laakko, T., Mäntylä, M., 1993, Feature modelling by incremental feature [68] Sakurai, H., Dave, P., 1996, Volume decomposition and feature recognition, part
recognition. Computer-Aided Design, . http://dx.doi.org/10.1016/0010-4485 II: curved objects. Computer-Aided Design, 28/6–7: 519–537. http://dx.doi.org/
(93)90079-4. 10.1016/0010-4485(95)00067-4.
[40] Venuvinod, P.K., Yuen, C.F., Merchant, M.E., 1994, Efficient Automated Geo- [69] Rameshbabu, V., Shunmugam, M.S., 2009, Hybrid feature recognition method
metric Feature Recognition through Feature Coding. CIRP Annals - Manufactur- for setup planning from STEP AP-203. Robotics and Computer-Integrated
ing Technology, . http://dx.doi.org/10.1016/S0007-8506(07)62242-2. Manufacturing, 25/2: 393–408.
[41] Venuvinod, Patri K, Wong, S.Y., 1995, A graph-based expert system approach to [70] Riegel, Juergen, Mayer, Werner, van Havre, Y., 2018, FreeCAD (0.18).https://
geometric feature recognition. Journal of Intelligent Manufacturing, 6/3: 155– www.freecadweb.org.
162. [71] Shewchuk, J.R., 1996, Triangle: Engineering a 2D quality mesh generator and
[42] Yuen, C.F., Venuvinod, P.K., 1999, Geometric feature recognition: Coping with Delaunay triangulator.Lin MM.C.C, Manocha DD., (Eds.) Applied Computation-
the complexity and infinite variety of features. International Journal of Com- al Geometry: Towards Geometric Engineering, 1148. Springer-Verlag. p.
puter Integrated Manufacturing, . http://dx.doi.org/10.1080/ pp.203–222.
095119299130173. [72] Sun, J., Ovsjanikov, M., Guibas, L., 2009, A concise and provably informative
[43] Gao, S., Shah, J.J., 1998, Automatic recognition of interacting machining fea- multi-scale signature based on heat diffusion. Eurographics Symposium on
tures based on minimal condition subgraph. CAD Computer Aided Design, . Geometry Processing, 28/5: 1383–1392. http://dx.doi.org/10.1111/j.1467-
http://dx.doi.org/10.1016/S0010-4485(98)00033-5. 8659.2009.01515.x.
[44] Li, Y.G., Ding, Y.F., Mou, W.P., Guo, H., 2010, Feature recognition technology for [73] Tixier, A.J., 2018, Notes on Deep Learning for NLP.1–20.
aircraft structural parts based on a holistic attribute adjacency graph. Proceed- [74] Grover, A., Leskovec, J., 2016, node2vec: Scalable Feature Learning for Net-
ings of the Institution of Mechanical Engineers Part B: Journal of Engineering works.. http://dx.doi.org/10.1145/2939672.2939754.
Manufacture, 224/2: 271–278. [75] Hamilton, W.L., Ying, R., Leskovec, J., 2017, Representation learning on graphs:
[45] Cai, N., Bendjebla, S., Lavernhe, S., Mehdi-Souzani, C., Anwer, N., 2018, Freeform Methods and applications. ArXiv Preprint, . ArXiv:1709.05584.
Machining Feature Recognition with Manufacturability Analysis. Procedia [76] Tixier, A.J.-P., Nikolentzos, G., Meladianos, P., Vazirgiannis, M., 2018, Graph
CIRP, . http://dx.doi.org/10.1016/j.procir.2018.03.261. Classification with 2D Convolutional Neural Networks..
[46] Woo, T.C., 1982, Feature extraction by volume decomposition. Conference on [77] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., 1998, Gradient-based learning
CAD/CAM Technology in Mechanical Engineering, 76–94. applied to document recognition. Proceedings of the IEEE, . http://dx.doi.org/
[47] Kim, Y.S., Wilde, D.J., 1992, A convergent convex decomposition of polyhedral 10.1109/5.726791.
objects. Journal of Mechanical Design, 114/3: 468–476.

You might also like