
Available online at www.sciencedirect.com

ScienceDirect

Procedia Manufacturing 53 (2021) 359–367

www.elsevier.com/locate/procedia
49th SME North American Manufacturing Research Conference, NAMRC 49, Ohio, USA
A digital twin strategy for major failure detection in fused deposition modeling processes
Christopher M. Henson*, Nathan I. Decker, Qiang Huang

Daniel J. Epstein Department of Industrial and Systems Engineering, Andrus Gerontology Center, 3715 McClintock Ave GER 240, Los Angeles, CA 90089, United States of America

* Corresponding author. Tel.: +1-213-740-4893; fax: +1-213-740-1120. E-mail address: cmhenson@usc.edu

Abstract
Part distortion during additive manufacturing (AM) may lead to catastrophic failure and significant waste of resources. Existing work often focuses on identification and detection of individual root causes such as melt pool geometries or extruder clogging to prevent part failures. Since the end-effect of major print failures can be the result of multiple error sources (including unknowns), relying on detection of individual root causes may misclassify some failed prints as successful. Instead, detecting end-effects or part distortion could provide early warning of major failures regardless of potential error sources. Distortion detection, however, currently involves computationally expensive simulation and analysis of sensing data. One promising solution is to adopt a digital twin strategy to quickly compare model predictions to features extracted from in situ sensing data. This study extends the digital twin strategy to major distortion detection by developing (1) a multi-view optical sensing system for movable print beds and (2) failure detection methods that analyze multi-view part images layer by layer. Since the digital twins of actual prints at specific layers are generated offline, the delay in determining whether a significant enough quality departure has occurred to justify termination of the print can be reduced. In the experimental evaluation of this approach on an FDM machine with a moving print bed, failure was rapidly detected in two of the three test prints, while in the remaining print, failure was successfully detected after a short delay.

© 2021 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the Scientific Committee of the NAMRI/SME
Keywords: Shape deformation, in situ optical sensing, multi-view, failure detection, cyber-physical system

1. Introduction

Additive manufacturing (AM) has drastically reduced the lead time and cost required to prototype and iterate a pre-production design. Industries from healthcare [1–5] to aerospace [6–10] have made use of AM to great effect. The highly iterative prototyping phase [11] of product design, which used to take weeks or months, can now be done in a significantly shorter period of time. One challenge still faced in AM, though, is the catastrophic failure of prints, which leads to significant waste of materials and potential damage to machines. Fast detection and early warning of major print failures has been of great interest [12].

A significant amount of work in the AM literature seeks to improve final part quality and therefore reduce material waste. A disproportionate amount of this effort has been focused on higher-cost metal processes where returns have the potential to offer greater value. Kleszczynski et al. [13], for instance, summarized possible process errors including recoater damage, powder contamination, low energy input, and incorrect slice data. They proposed measures to detect such process breakdowns in powder bed fusion (PBF) machines using a monochrome charge-coupled device (CCD) camera. One example of this is their proposed detection of recoater damage through the collection of powder bed images following recoating, looking for ridges parallel to the recoater path indicating damage to the recoater mechanism. In another study by the same group, Kleszczynski et al. [14] proposed a high resolution imaging approach for detecting recoater jams in PBF systems caused by super-elevated regions of the part colliding with the machine's recoating mechanism. They also investigated the use of an accelerometer to measure vibrations during the powder deposition process in order to

2351-9789 © 2021 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the Scientific Committee of the NAMRI/SME
10.1016/j.promfg.2021.06.039

determine if a process failure had occurred. Grasso et al. [15] proposed a K-means clustering approach for automatic detection of missing material defects, most common in complex geometries, using in-situ thermal-image data. They were able to successfully demonstrate this through the identification of zones that exhibit a low cooling rate indicative of overheating. Khanzadeh et al. [16] developed a methodology for detecting changes in process parameters through tensor decomposition of thermal image streams. Though these present an interesting case study, they are more instructive in their overarching application than in their implementation due to the significant differences between PBF processes and fused deposition modelling (FDM).

A separate body of work exists studying the detection of a variety of defects in consumer grade 3D printers. Many newer machines, including the Prusa MK3S, have been outfitted by the manufacturer with sensors which can detect a clogged extruder and/or temperature sensors for automated tracking of print bed/extruder temperature. However, these systems are incapable of detecting any defects which occur while the printer is operating nominally. One common example of this is material slumping due to a lack of support and difficult overhangs. Liu et al. [17] devised a method of error detection involving two digital microscopes which are mounted to observe newly deposited material to monitor for a change in the process parameters through the detection of surface defects. Both these approaches focus on the material just after it has been deposited; any defect which may occur later in the print could, therefore, remain undetected. An example of such a defect is the shrinkage or warping that the material undergoes over the course of time following deposition, a defect their approach would fail to detect. Baumann et al. [18] developed a methodology for capturing and segmenting images of a given part during the orienting process to catch detachment, missing material flow, deformed object, and surface errors. Their approach achieved a respectable detection rate of 60-80% but had a similarly high false positive rate of 60-80%. Acoustic emissions have been demonstrated as an effective way to characterize failure modes [19], and this capability has been successfully adapted for detection of those failure modes in real time [20]. Their methodology, however, is geared towards the detection of defects which are accompanied by a detectable change in sound, which narrows the scope of what can be detected. For example, geometric deformation due to insufficient support structure would likely not be detectable by analysing acoustic emissions. Straub [21] proposed a multicamera approach to detecting significant departures in quality stemming from dry printing (when filament has been exhausted) and early job termination. This approach is instructive as it uses a multi-sensor approach but is limited to the detection of large amounts of missing material that result when the printer runs out of filament. An augmented reality-based method has also been presented as a useful tool for error detection by Ceruti et al. [22], whereby a digital version of the printed part can be rendered at various stages of completion and superimposed over the actual part being printed. Through the use of Speeded Up Robust Features [23], discrepancies between the digital model and the manufactured part can be detected. This approach is very interesting as it presents an application of a digital twin for the purpose of defect detection; however, the detection of a defect still requires a user to monitor and take action to cancel the print.

Finally, one promising methodology for detecting print errors on a delta configuration FDM 3D printer was proposed by Nuchitprasitchai et al. [24]. Their approach first generated simulated images as a ground truth against which recorded images could be compared. As the print progressed, they captured images of the part with a single camera, which were then processed to segment regions of the captured image that contained the printed part vs. the background. Regions of the image containing the part were then aligned and compared with the simulated ground truth images. Errors were detected when the two images deviated from each other significantly. One disadvantage of this proposed hardware setup is that it was designed to be compatible with a delta configuration (6-axis 3D printer) in which the printed part remains completely stationary during the print. The vast majority of FDM printers use a cartesian coordinate system layout (3-axis), and thus move the part being printed throughout the process. This significantly complicates efforts to capture and monitor the progress of a print, as the part being printed becomes a moving target. A system that can account for and overcome this movement during the print is needed in order for this approach to generalize and apply to the majority of FDM printers. An additional feature that is needed is the ability to monitor multiple views of a print simultaneously, as this greatly improves the effectiveness of the system overall. Finally, it would be desirable to automate the cancelation of a print once a failure has been detected.

There are varying limitations that can be found in the literature, including the inability to detect small errors occurring during nominal operation, a limited focus on only one error mode, lack of in situ detection, and an inability to handle a moving print bed. The approach we propose in this paper is capable of online failure detection under nominal operating conditions of a 3D printer with a moving print bed. Our proposed approach makes use of a Prusa MK3S, three HD webcams which are situated surrounding the print bed, and finally a computer running MATLAB to perform all the necessary analysis (we also make use of MeshMixer for modifying STL files).

In this paper we will introduce the developed methodology, which consists of three components: digital twin image simulation, in-situ data collection, and finally image comparison for failure detection. We will discuss the results of our study in detecting the catastrophic failures which occurred in the three selected prints. We will conclude with a summary of our work and a brief discussion of future research we hope to conduct based on this research.

2. Methodology

Any failure during printing that will result in a deformation of the part is eventually propagated to the part, and evident in that part. We propose to evaluate the success or failure of a given print, therefore, based on how the printed object compares to the corresponding design object. By repeating this evaluation process throughout the course of the print we can monitor for defects and terminate the print upon detecting a departure from the design.
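The layer-by-layer evaluation loop just described can be sketched as follows. This is an illustrative Python outline, not the authors' MATLAB implementation; the function names (`capture_views`, `count_events`) are placeholders for the components developed in the remainder of Section 2, and the 50-event, two-of-three-view thresholds are the values given later in Section 2.3.

```python
# Sketch of the proposed monitoring loop (hypothetical function names).
# Ground-truth images are prerendered offline; each completed layer
# triggers capture, segmentation, and comparison against the twin.

def monitor_print(ground_truth, capture_views, count_events,
                  n_layers, event_limit=50, min_views=2):
    """Return the layer at which failure is declared, or None.

    ground_truth[layer] -> list of 3 binary images (prerendered offline)
    capture_views(layer) -> list of 3 binary images (in-situ, segmented)
    count_events(sim, meas) -> number of differing pixels ("events")
    """
    for layer in range(n_layers):
        measured = capture_views(layer)
        # A view "votes" for failure when its event count passes the limit.
        votes = sum(
            count_events(sim, meas) >= event_limit
            for sim, meas in zip(ground_truth[layer], measured)
        )
        if votes >= min_views:
            return layer  # caller would send a terminate command here
    return None
```

Because the digital twin images are generated before the print starts, the per-layer work inside the loop reduces to capture, thresholding, and a matrix comparison.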

This approach consists of three major components which can be described as follows: 1) the generation of a ground-truth expectation in the form of 2D images which simulates the part as it would be seen by our three cameras (see Figure 1 for an image of the camera setup) at a given stage of completion, depending on what layer has been completed. 2) The identification of which pixels from a captured image belong to the printed part and which pixels belong to the static background. And 3) comparing the resulting matrices from parts (1) and (2) in order to determine if and when a defect occurs.

Fig. 1. Prusa MK3S and accompanying sensors.

With this functionality in place, a print can be monitored in real time for departures in quality. Figure 2 shows how this process can be used to make decisions regarding the cancelation of a print. Further, we propose a method by which the G-code of the print is modified in order to effectively address collection issues caused by the motion of the part during the print.

Fig. 2. Methodology Flow Chart.

2.1 Simulating Digital Twin Images

The goal of this procedure is to generate a ground-truth value that an optical measurement of a 3D printed part can be compared against at various stages of the printing process. Our approach differs from [24] in that the ground truth images are automatically generated by the integrated capture/monitoring software using the pinhole model of a camera. The first stage of the proposed process is prerendering, which is done before the print is initiated. In this stage, the 3D object is rendered based on the known position and orientation of the three cameras monitoring the print bed.

First, the 3D shape as defined by a triangular mesh stored as an STL file is re-meshed so as to generate a dense set of points that are roughly evenly spaced across the surface. This can be performed using one of many open-source software packages. The triangular mesh is specifically re-meshed so that each triangle has an approximate edge length of 0.5 mm; this is done to ensure an even distribution of vertices about the surface of the mesh. The vertices from the triangular mesh are then stored as a matrix

$$\mathbf{V} = \begin{bmatrix} x_1 & y_1 & z_1 \\ \vdots & \vdots & \vdots \\ x_N & y_N & z_N \end{bmatrix} \quad (1)$$

where v_n = V[n, :] constitutes a single point in ℝ³ on the mesh. The STL file of an example part re-meshed in MeshMixer according to this process is shown in Figure 3, with vertices highlighted in white. It should be noted that in order to best mesh the parts according to the given constraints, the vertices of the mesh are not arranged in a grid or linear fashion.

Once the mesh is processed, it is analysed in order to determine which vertices will be visible to each of the three cameras. It should also be noted that the process of determining which vertices are visible to the camera is not strictly necessary for the purpose of detecting defects in the images; however, it is essential for associating points in pixel space with points in cartesian 3-space. This step could be removed to lower the computational burden at the cost of doing further analysis of the collected data. The pinhole model of a camera is utilized here, which makes the simplifying assumption that all light entering into a camera converges to a single point defined for a given camera.
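The visibility analysis above, which the paper carries out with the Möller-Trumbore ray-triangle intersection algorithm, can be sketched in a few lines. The Python version below is illustrative only (the authors use a cited MATLAB implementation); it treats a vertex as visible to a camera when the segment joining them crosses no mesh triangle strictly between its endpoints.

```python
import numpy as np

def moller_trumbore(orig, dest, tri, eps=1e-9):
    """Return the segment parameter t in (eps, 1-eps) where the segment
    orig->dest crosses triangle tri = (p0, p1, p2), or None if it
    misses (or only touches an endpoint)."""
    d = dest - orig
    p0, p1, p2 = tri
    e1, e2 = p1 - p0, p2 - p0
    h = np.cross(d, e2)
    a = e1 @ h
    if abs(a) < eps:            # segment parallel to triangle plane
        return None
    f = 1.0 / a
    s = orig - p0
    u = f * (s @ h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * (d @ q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * (e2 @ q)
    # strict interior of the segment -> an occluding intersection
    return t if eps < t < 1.0 - eps else None

def visible(point, camera, triangles):
    """A point v_n is visible to pinhole c_k when the segment v_n-c_k
    meets no mesh triangle between the endpoints (the eps guard skips
    triangles containing the point itself)."""
    return all(moller_trumbore(point, camera, tri) is None
               for tri in triangles)
```

Running `visible` over every row of V for each camera yields the three visible-point sets used in the next step.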

In this work, three separate cameras c₁, c₂, and c₃ are used. A point vₙ that is visible to a given camera c_k is one for which the line segment connecting the point to the pinhole location does not intersect the shape's mesh at any location between the two endpoints.

Fig. 3. Example of a re-meshed STL file.

The Möller-Trumbore algorithm is a quick computational method for evaluating this condition. It is proposed in [25] and implemented by [26]. This algorithm was utilized to create three sets V_{c₁}, V_{c₂}, and V_{c₃} containing all of the points that are theoretically visible to each camera. For the camera perspective shown in Figure 4, application of the algorithm generates the points shown in Figure 5, which is shown at a slightly different angle to clarify which points are included, and which are not.

Fig. 4. Sample camera perspective.

Fig. 5. Visible points identified by the Möller-Trumbore algorithm.

A set of visible points is generated for each camera in the system. Next, each of these sets is rendered in order to produce a 2D image. The first step in this process is to project each 3D point in a given set into a 2D point falling on an image plane simulating the cameras. This is done according to the pinhole model illustrated in Figure 6.

Fig. 6. Pinhole model of a camera.

A point on the triangular mesh vₙ is first modified into homogeneous coordinates so that:

$$\tilde{\mathbf{v}}_n = [x_n, y_n, z_n, 1]^T \quad (2)$$

The objective is to transform this point into an equivalent point in 2D space on the image plane shown in Figure 6,

$$\tilde{\mathbf{v}}'_n = [u_n, w_n, 1]^T \quad (3)$$

which is done according to the relation

$$s\,\tilde{\mathbf{v}}'_n = \mathbf{K}\,[\,\mathbf{R}\,|\,\mathbf{t}\,]\,\tilde{\mathbf{v}}_n \quad (4)$$

where s is an optional constant used for scaling, and K is a matrix containing parameters describing the camera's focal lengths in each dimension as well as the relative positioning of the image plane,

$$\mathbf{K} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & w_0 \\ 0 & 0 & 1 \end{bmatrix} \quad (5)$$

and [R | t] is a standard joint rotation-translation matrix describing how the vertices of the object have been rotated/translated in relation to the camera.
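The projection of Eqs. (2)-(5), together with the point-to-pixel mapping given later in this section (Eqs. (9)-(10)), amounts to a few lines of linear algebra. The numpy sketch below is illustrative, with made-up camera parameters; the paper performs this step in MATLAB via the cited implementations.

```python
import numpy as np

def project_points(V, K, R, t):
    """Eq. (4): s * v'_n = K [R | t] v_n for every row of the N x 3
    vertex matrix V. Returns the N x 2 image-plane points (u, w)."""
    Vh = np.hstack([V, np.ones((len(V), 1))])   # Eq. (2), homogeneous
    P = K @ np.hstack([R, t.reshape(3, 1)])     # 3 x 4 projection matrix
    proj = (P @ Vh.T).T
    return proj[:, :2] / proj[:, 2:3]           # divide out s, Eq. (3)

def to_pixels(uw, p_w, p_u, b):
    """Eqs. (9)-(10): map 2D points (u, w) to integer pixel indices
    (i, j), where b is pixels per unit distance and (p_w, p_u) is the
    pixel location of the (u, w) origin."""
    u, w = uw[:, 0], uw[:, 1]
    i = np.floor(p_w - w * b + 0.5).astype(int)
    j = np.floor(p_u + u * b + 0.5).astype(int)
    return i, j
```

With K from Eq. (5) and [R | t] from Eq. (6), stamping a small block of ones around each (i, j) (Eq. (11)) then yields the simulated binary image S.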
describing how the vertices of the object have been

$$[\,\mathbf{R}\,|\,\mathbf{t}\,] = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \quad (6)$$

This process can be completed quickly using the method utilized in [27] and implemented in [28]. It results in three separate sets of 2D points that correspond to each set of vertices as seen by each camera: V′_{c₁}, V′_{c₂}, and V′_{c₃}. For example, the set of N_{c₁} 2D vertices visible to Camera 1 can be expressed as

$$\mathbf{V}'_{c_1} = \begin{bmatrix} x_1 & y_1 \\ \vdots & \vdots \\ x_{N_{c_1}} & y_{N_{c_1}} \end{bmatrix} \quad (7)$$

An example of the resulting set of 2D vertices is shown in Figure 7. Once this is established, it is necessary to turn each 2D point set into a set of black and white binary image matrices for use in analysis. Because we want to monitor the status of the 3D print as it is being constructed, it is necessary to construct a set of ground truth images for each camera depicting what the object should look like after each layer is printed. If L layers are printed, then there should be L simulated images in each set. Because the part will be progressively constructed from the ground up, each image will contain a greater amount of the printed object. One simplifying factor in this effort is the fact that the height of each layer is known, allowing for the determination of which points will be visible after a given layer l.

Fig. 7. Rendered 2D points simulating a camera's view.

The points that are printed, and thus theoretically visible after a given layer, are:

$$\{\, \mathbf{v}_n : \mathbf{v}_n[3] < l \cdot h \,\} \quad (8)$$

where h is the printer's layer height setting. Once the set of visible and printed points for each camera and each layer is determined, the simulated image matrix can be generated according to the following procedure. First, a matrix of zeros is created with the same dimensions (I, J) as the measured images. Then, functions are defined to translate a given point v′_a = [u_a, w_a]^T in 2D space to a pixel in matrix space (i_a, j_a). These could be, for example:

$$i_a = \left\lfloor p_w - w_a \cdot b + \tfrac{1}{2} \right\rfloor \quad (9)$$

$$j_a = \left\lfloor p_u + u_a \cdot b + \tfrac{1}{2} \right\rfloor \quad (10)$$

where (p_w, p_u) is the location in pixel space of the origin of the u and w coordinate system, and b is a scaling factor with dimensions of pixels per unit distance. Then for each pixel coordinate that is obtained from the 2D point cloud, the set of pixel points

$$\{\, (i, j) : (i, j) \in [i_a - 0.25 \cdot b,\; i_a + 0.25 \cdot b] \times [j_a - 0.25 \cdot b,\; j_a + 0.25 \cdot b] \,\} \quad (11)$$

are made equal to 1 in the simulated image matrix S. Figure 8 illustrates the result of this process. It should be noted that while the difference between the CAD file and the collected in-situ print shape prior to cooling is on the order of 100 µm, we are interested in deformations which are on the order of 1 mm or greater, taking into consideration the size and geometry of the parts. For our study, deformation due to thermal expansion was not necessary to consider, but could be necessary for higher resolution cameras or for different parts.

Fig. 8. Ground-truth binary-image following completion of the 91st layer.

2.2. In-Situ Data Collection

In order to demonstrate the proposed methodology, an experimental system was developed using three 1080p webcams with manual focus and a Prusa MK3S 3D printer. Each of the webcams was placed around the print bed and positioned using a custom 3D printed mounting system. This setup is illustrated in Figure 1. Each of these three cameras is linked to MATLAB for live capture and analysis using the MATLAB Support Package for USB Webcams. One challenge common to FDM printers that utilize a cartesian coordinate system (as opposed to the far less common delta configuration, which is what was used in [24]) is that the object is often moving as it is printed. The direction of motion in the case of the Prusa MK3S (used for this study) is along the length (y-axis). This poses a major issue for image acquisition and analysis due to the need of precisely tracking position and orientation relative to the cameras. One possible approach to solving this problem is the modification of the printer's G-code in order to move the printed part to a known location with an unobstructed view.
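A G-code post-processor of this kind can be sketched as follows. This is a hypothetical Python example, not the authors' script: the `;LAYER_CHANGE` marker is what PrusaSlicer emits between layers, and `M118` (serial echo) and `G4` (dwell) are standard Marlin commands, but the capture coordinate, feed rate, and dwell time are invented placeholders.

```python
def inject_capture_moves(gcode_lines, capture_y=200.0, dwell_ms=500):
    """Insert a capture pause after every layer change.

    Assumes the slicer marks layer boundaries with ';LAYER_CHANGE'
    comments and that the firmware echoes M118 text over USB, which
    the monitoring software watches for. All values are placeholders.
    """
    out = []
    for line in gcode_lines:
        out.append(line)
        if line.strip().startswith(';LAYER_CHANGE'):
            out += [
                f'G1 Y{capture_y} F6000 ; move bed to camera position',
                'M118 CAPTURE_READY    ; notify monitoring software',
                f'G4 P{dwell_ms}       ; dwell while images are taken',
            ]
    return out
```

The subsequent layer's own movement commands return the bed to the print path, so no explicit move-back is inserted in this sketch.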

In our experimental setup, this motion was introduced into the G-code following the completion of each layer. The G-code was further modified to pass a message over USB to the software controlling image acquisition that would indicate when the part had reached the required location. Once this message is received by the software, an image is captured by each of the cameras, stored, processed, and analyzed. It is important to note that this can only be done using a printer that supports open source hardware control. Some closed systems, such as MakerBot printers, do not allow for this level of custom access. After image acquisition is complete, the remainder of the print can continue. This process is illustrated in Figure 9. The top part of the figure shows the part as it is being printed, while the bottom section illustrates how the part is moved into the view of the cameras after a layer is completed.

Fig. 9. Diagram of print capture setup (Top: during print, Bottom: during capture period).

A further advantage of this approach is that cameras can be placed significantly closer to the known location, as they don't have to encompass the entirety of the possible locations the moving part will visit. This significantly increases the number of pixels on target, thus increasing the sensitivity of the method.

Once photographs are taken, it is necessary to segment out the background so only those pixels which belong to the part can be identified. Figure 10 shows what the raw images captured by each of the cameras look like for a given layer. In order to make the segmentation more consistent, each of these images is automatically cropped according to the extents of the ground truth images, leaving enough room around the part in case of defects.

Fig. 10. Top: Raw image captures following completion of the 91st layer. Bottom: Cropped images for minimization of part misidentification.

Simply collecting and cropping the images, however, is not enough for a comparison with the ground-truth binary-images which have been created prior to the print and data collection. These cropped images must also be converted to binary classifications in order to do a proper comparison. The images are collected and stored as .jpg files; this means that each image is made up of a 3D matrix with dimensions (image height (I) × image width (J) × 3). The three slices correspond to the red, green, and blue color matrices respectively, meaning each pixel in the image is described by three values in the range 0 to 255 to describe color.

Because the filament color is dark grey and the background is white, the difference between the background pixel color and the part pixel color will not disproportionately favour any of the individual RGB matrices. Therefore, a sum of the three matrices accentuates what is a relatively small difference in each individual color matrix, creating a more noticeable gap in values between the part and the static background:

$$M_{ij} = \sum_{k=1}^{3} RGB_{ijk} \quad (12)$$

The result of the sum is a 2D matrix M with dimensions (image height (I) × image width (J)), with each pixel containing a value from 0 to 765. The threshold range used to segment these images was identified by taking a series of pixels known to belong to the part and another series of pixels known to belong to the background. This resulted in two relatively narrow ranges of values: the larger values (550~765) belong to the static background while smaller values (0~300) belong to the part itself. The threshold value of t = 400 was chosen to reduce the chance of a false positive or negative that may result from any inconsistency in lighting over the course of a given print. The threshold value (t) will not need to be updated so long as filament color, background, and lighting are not altered.

Finally, converting the matrix M into binary classifications can be accomplished using the thresholding value that has been selected empirically based on the color of the filament used:

$$M_{ij} = \begin{cases} 0 & M_{ij} > t \\ 1 & M_{ij} \le t \end{cases}, \quad t = 400 \quad (13)$$
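The channel sum of Eq. (12) and the threshold of Eq. (13) reduce to two array operations. The numpy sketch below is illustrative (the paper performs this step in MATLAB); it returns the thresholded matrix rather than overwriting M in place.

```python
import numpy as np

def segment_part(rgb, t=400):
    """Binarize a captured frame.

    rgb: H x W x 3 uint8 image. Summing the channels (Eq. 12) gives
    values in 0..765; the dark-grey part sums to roughly 0-300 and the
    white background to roughly 550-765, so thresholding at t = 400
    (Eq. 13) labels part pixels 1 and background pixels 0.
    """
    M = rgb.astype(np.int32).sum(axis=2)   # Eq. (12)
    return (M <= t).astype(np.uint8)       # Eq. (13)
```

As noted in the text, t only needs re-tuning if the filament color, background, or lighting changes.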

The final result of this process can be seen in Figure 11. One advantage of using a simplistic approach like this, as opposed to a more complicated algorithm, is that the results are computationally cheap to acquire and can be obtained directly following the collection of the images. This means that the comparison with the ground-truth images, which were created prior to the beginning of the print, can be done almost immediately following the initial image capture.

Fig. 11. Cropped binary image following completion of the 91st layer.

2.3 Image Comparison for Failure Detection

Once the ground-truth images are created and the print begins, the completion of each new layer brings a set of three images, which are cropped and then segmented such that both the set of ground-truth images and the set of in situ images match in dimension and format. Each binary ground-truth image can be compared to its binary in situ counterpart for a given layer (Figures 7 and 11 show these images, respectively). Because each of these images is a matrix of 1s and 0s, it is possible to identify where one matrix differs from the other by subtracting one from the other to create a comparison matrix C. Since S represents the matrix which was created from the ground-truth image and M represents the matrix which was obtained from the cameras during the print,

    C_ij = M_ij − S_ij        (14)

    C_ij = {  1,  if pixel captures the part when it should not
              0,  if pixel value is correctly present or correctly absent
             −1,  if pixel does not capture the part when it should

If C_ij = 1, this is considered an event. The result of this comparison can be seen in Figure 12. In order for a failure to be declared at a given layer, a criterion should be established based on the prevalence of events. For this demonstration, failure is detected when there are at least 50 event detections across at least two of the three images. We chose 50 event detections empirically based on the resolution of the cameras and the geometry of the parts we decided to test. Once failure has been detected, the printer can be sent a command to terminate the print. For reference, Figure 13 shows what the cropped unsegmented images look like; in other words, what the part actually looked like after the 100th layer.

Fig. 12. Part #1. Top: Ground-truth image, 100th layer. Middle: In-situ image, 100th layer. Bottom: Comparison image, 100th layer.

Fig. 13. Cropped unsegmented image following completion of the 100th layer.

3. Results

In order to evaluate the effectiveness of the proposed approach in determining whether a substantial defect has occurred, a test was developed in which our methodology was applied to three separate parts. For each of the three prints, we used this approach to classify each set of layer images as pass or fail. In addition, each print was monitored by a user who determined on what layer each print failed. Part #1 is the half arch which has served to illustrate this approach. Part #2 and Part #3 can be seen in Figure 14 along with Part #1. Part #1 contains 107 layers, while Parts #2 and #3 each contain 106 layers.

Fig. 14. Left: Part #1. Middle: Part #2. Right: Part #3.

We ignore the first 20 layers due to the part skirt, which is included by the printer for quality purposes but results in false event detections, as the skirt is not part of the design. The part skirt is removable in the Prusa G-code settings, so in the future we will be able to consider all layers without the possibility of an event detection being due to anything except a part failure. The results of this experiment can be seen in Table 1.

The lone misclassification occurred with the classification of the 93rd layer of Part 1, which should have been classified as a failure. However, the required criteria were not met for that failure detection. Instead, the print was classified as a failure one layer later, on layer 94. Figures 15 and 16 show the 100th-layer ground-truth images, in-situ images, and comparison images of Parts #2 and #3, respectively.
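The per-layer decision rule described above, which computes C = M − S for each view, counts C_ij = 1 events, and declares failure when enough events appear in enough views, can be sketched as follows. This is an illustration rather than the authors' code; the function names and the list-of-lists matrix format are our assumptions, and the event thresholds are lowered in the usage example purely for demonstration.

```python
def count_events(m, s):
    """Count events for one view: positions where C_ij = M_ij - S_ij = 1,
    i.e. the camera sees part material where the ground truth has none."""
    return sum(1 for m_row, s_row in zip(m, s)
                 for m_ij, s_ij in zip(m_row, s_row) if m_ij - s_ij == 1)

def layer_failed(in_situ_views, ground_truth_views, min_events=50, min_views=2):
    """Declare failure when at least `min_events` events occur in at least
    `min_views` of the camera views (50 events in 2 of 3 views here)."""
    views_over = sum(1 for m, s in zip(in_situ_views, ground_truth_views)
                     if count_events(m, s) >= min_events)
    return views_over >= min_views

# Tiny 2x2 single-view example with the thresholds lowered for illustration:
M = [[1, 0], [1, 1]]   # in-situ binary image (one view)
S = [[0, 0], [1, 0]]   # ground-truth binary image
print(count_events(M, S))                                  # 2
print(layer_failed([M], [S], min_events=2, min_views=1))   # True
```

Note that only C_ij = 1 (extra material seen by the camera) counts as an event under this rule; C_ij = −1 positions are identified by the comparison but do not contribute to the failure criterion.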
Table 1: User layer classification vs. auto layer classification performed using the proposed approach.

                Classification Accuracy
        Class.   User Class.   Auto Class.
Part 1  Pass     21-92         21-93
        Fail     93-107        94-107
Part 2  Pass     21-93         21-93
        Fail     94-106        94-106
Part 3  Pass     21-93         21-93
        Fail     94-106        94-106

Fig. 15. Part #2. Top: Captured images, 100th layer. Top-Middle: Ground-truth image, 100th layer. Bottom-Middle: In-situ image, 100th layer. Bottom: Comparison image, 100th layer.

Fig. 16. Part #3. Top: Captured images, 100th layer. Top-Middle: Ground-truth image, 100th layer. Bottom-Middle: In-situ image, 100th layer. Bottom: Comparison image, 100th layer.

4. Conclusion

This study develops a multi-camera sensing system and approach for the detection of catastrophic failures during printing. This approach builds upon the digital twin concept by comparing in-situ images of the print bed with images generated from the design mesh file. We focus on optical information, which is accessible and easy to collect, for the purpose of quickly detecting significant departures in quality. The methodology presented begins with the pre-print simulation of ground-truth images, followed by processing of images of the print which are captured during the printing process, and finally the real-time comparison of the ground-truth images with the captured images to determine whether the print should be terminated. Preliminary accuracy results suggest that this approach is very promising, although more testing is needed for a more thorough assessment. We were able to demonstrate a system which can monitor a print from multiple view angles as well as accommodate a process where the object being monitored is moving within the viewing space. One significant limitation of this approach, however, has to do with part concavity: any part geometry which is hidden from camera view cannot be monitored using this approach, potentially leading to undetected deformations of the part. We plan to address this issue in subsequent work.

Future research will focus on investigating more complicated geometry as well as robust classification of catastrophic failures. Additionally, we plan to explore more generalized methods of identifying part failure which can be implemented regardless of part geometry or sensor fidelity.

Acknowledgements

This research was supported by Honeywell Aerospace through a CESMII grant and by a graduate research fellowship from the Rose Hills Foundation. We particularly thank Greg Colvin, Raj Bharadwaj, Pat Hinke, Artie Dins, and other engineers at Honeywell for their feedback, comments, suggestions, and support.
