2020 IEEE International Conference on Robotics and Automation (ICRA)
31 May - 31 August, 2020. Paris, France

Development of a Robotic System for Automated Decaking of 3D-Printed Parts

Huy Nguyen¹,²*, Nicholas Adrian¹,², Joyce Lim Xin Yan¹,², Jonathan M. Salfity¹,³, William Allen¹,³ and Quang-Cuong Pham¹,²

¹ HP-NTU Digital Manufacturing Corporate Lab, Nanyang Technological University, Singapore.
² School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore.
³ HP Labs, HP Inc.
* Corresponding author. Email: huy.nguyendinh09@gmail.com

Abstract—With the rapid rise of 3D-printing as a competitive mass manufacturing method, manual “decaking” – i.e. removing the residual powder that sticks to a 3D-printed part – has become a significant bottleneck. Here, we introduce, for the first time to our knowledge, a robotic system for automated decaking of 3D-printed parts. Combining Deep Learning for 3D perception, smart mechanical design, motion planning, and force control for industrial robots, we developed a system that can automatically decake parts in a fast and efficient way. Through a series of decaking experiments performed on parts printed by a Multi Jet Fusion printer, we demonstrated the feasibility of robotic decaking for 3D-printing-based mass manufacturing.

Index Terms—deep learning, manipulation, system design, 3D-printing, decaking

I. INTRODUCTION

With the rapid rise of 3D-printing as a competitive mass manufacturing method, the automated processing of 3D-printed parts has become a critical and urgent need. Post-processing includes handling, polishing, painting, and assembling the parts that have been previously 3D-printed. In 3D-printing processes that involve a powder bed, such as binder jet [1], [2], Selective Laser Sintering, Selective Laser Melting [3] or HP Multi Jet Fusion (MJF) [4], a crucial post-processing step is decaking. After printing, some non-bound/non-melted/non-sintered/non-fused powder usually sticks to the 3D-printed part, forming together a “cake”. Decaking consists of removing that residual powder from the part, which can then be fed to downstream post-processing stages.

Figure 1. Top: An operator removing powder from a 3D-printed part (manual decaking). Bottom: Our proposed robotic system for automated decaking. A video of the actual robotic decaking process can be found with the accompanying material of this paper.

Decaking is currently done mostly by hand: human operators remove the residual powder manually with brushes, as shown in Figure 1, top. Recent 3D-printing technologies, such as HP MJF, enable printing hundreds, or even thousands, of parts in a single batch, making manual decaking a major bottleneck in 3D-printing-based mass manufacturing processes. There are some automated approaches to decaking, using for instance tumblers [5], but such approaches are impractical if the powder is moderately sticky or if the parts are fragile.

Here, we introduce, for the first time to our knowledge, a robotic system for automated decaking of 3D-printed parts, see Figure 1, bottom. Specifically, our system performs the following steps:
1) Pick a caked part from the origin container with a suction cup;
2) Clean the underside of the caked part by rubbing it on a brush;
3) Flip the caked part;
4) Clean the other side of the caked part;
5) Place the cleaned part into the destination container.
Note that these steps are very general and can be applied to other 3D-printing processes and to most parts that are nearly flat.

Step 1 is essentially a bin-picking task, which has recently drawn significant attention from the community. However, the task at hand presents unique difficulties: (i) the caked parts contain unpredictable amounts of residual powder, overlap each other, and are mostly white-colored (the color of the powder), making part detection and localization particularly challenging; (ii) the residual powder and the parts have different physical properties, making caked part manipulation, especially by a position-controlled industrial robot¹, difficult. We address these challenges by leveraging respectively (i) recent advances in Deep Learning for 2D/3D vision; and (ii) smart mechanical design and force control.

Steps 2, 3 and 4 are specific to the decaking problem at hand. A particular challenge is the control of the contacts between the industrial robot, the parts, and the brushing system. We address this challenge by emphasizing smart mechanical design whenever possible (cleaning station, flipping station), and by using force control [6] to perform compliant actions when necessary. Our pipeline can be applied to all parts with nearly flat shapes, such as shoe insoles², which, to date, are popular in 3D-printing-based mass manufacturing processes.

The remainder of the paper is organized as follows. In Section II, we review related work in robotic cleaning, bin-picking, 3D perception, and force control. In Section III, we present our hardware system design. In Section IV, we describe our software stack. In Section V, we evaluate the system performance in a number of actual decaking experiments, and demonstrate the feasibility of automated decaking. Finally, in Section VI, we conclude and sketch some directions for future research.

¹ Most robots currently used in the industry are position-controlled, that is, they achieve highly accurate control in position and velocity, at the expense of poor, or no, control in force and torque. Yet, force or compliant control is crucial for contact tasks, such as part manipulation and decaking. A number of compliant robots have been developed in recent years but, compared to existing industrial robots, they are still significantly more expensive, less robust and more difficult to maintain.
² The authors have been granted permission by Footwork Podiatry Laboratory to use their shoe insole design and images in this paper.

II. RELATED WORK

Recent years have seen a growing interest in robotic surface/part cleaning. Advances in perception, control, planning and learning have been applied to the problem [7]–[10]. Most research used robot manipulators to clean particles, powder or erase marker ink [7], [9]–[11]; some considered mobile robots to vacuum dust, or objects lying on the floor, such as papers and clips [12], [13]. In fact, factors like variation in type of dust, shape of object, material, and environmental constraints require different approaches and techniques to tackle the problem. In this work, we are interested in the problem of cleaning 3D-printed parts which are randomly placed in a bin, each part covered by an unknown amount of residual or loose powder. Hence, we confine our review of prior work as follows.

As mentioned previously, the first step of our task is essentially a bin-picking task, which is one of the classical problems in robotics and has been investigated by many research groups, e.g. [14]–[16]. In these works, simplified conditions are often exploited, e.g. parts with simple geometric primitives, parts with holes to ease grasping, or parts with discriminative features.

Since the Amazon Picking Challenge 2015, a variety of approaches to the more general bin-picking problem have been proposed [17]–[19]. In [20], the authors discussed observations and lessons drawn from a survey conducted among the participating teams in the challenge. Based on these lessons, we identified 3D perception, which recognizes objects and determines their 3D poses in a working space, as a key component for building a robust bin-picking system. Historically, most approaches to instance object recognition are based on local descriptors around points of interest in images [21]. While these features are invariant to rotation, scale, and partial occlusions, they are not very robust to illumination changes or partial deformations of the object. Furthermore, since they depend on interest points and local features, they only work well for textured objects. Recent advances in machine learning using convolutional neural networks (CNNs) have rapidly improved object detection and segmentation results [22]–[25]. Our detection module is based on a Mask R-CNN model, which was originally open-sourced by Matterport Inc. and implemented in TensorFlow and Keras [23]. The Mask R-CNN model is an extension of the Faster R-CNN [22] model, which achieved rapid object classification and bounding-box regression. Mask R-CNN adopts its two-stage procedure from Faster R-CNN. The first stage, called the Region Proposal Network (RPN), proposes candidate regions in which there might be an object in the input image. The second stage, in parallel, performs classification, refines the bounding box, and generates a segmentation mask of the object on each proposal region. Both stages are connected to a backbone structure.

Another important conclusion is that system integration and development are fundamental challenges in building task-specific, robust integrated systems. It is desirable to study integrated solutions which include all component technologies, such as 3D object pose estimation, control, motion planning, grasping, etc. It is also highlighted that good mechanical designs can sidestep challenging problems in control, motion planning and object manipulation. In line with this idea, our approach combines 3D perception using Deep Learning, smart mechanical design, motion planning, and force control for industrial robots, to develop a system that automatically decakes parts in a fast and efficient way.

In addition, another challenging problem that is unique to our decaking task is force control. To achieve compliant motion control, we take a position-controlled manipulator as a baseline system and make the necessary modifications by following the pipeline of [26], [27], chosen for its robustness. This enables safe and sufficient cleaning operations without the risk of damaging the parts. For more specific surveys on force control, interested readers can refer to [28], [29].
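As a concrete illustration of the transfer-learning setup detailed in Section IV-B, the sketch below shows how the open-source Matterport implementation is typically configured for a single new part class. It is illustrative only: it assumes the mrcnn package and COCO weights on disk, dataset_train/dataset_val must be prepared by the user as mrcnn.utils.Dataset subclasses from the labelled part images, and the hyperparameter values are placeholders, not the exact training setup used in this work.

# Sketch: fine-tuning Matterport Mask R-CNN for one new part class.
# Config values are illustrative assumptions, not the authors' settings.
from mrcnn.config import Config
from mrcnn import model as modellib

class PartConfig(Config):
    NAME = "printed_part"
    NUM_CLASSES = 1 + 1              # background + one 3D-printed part class
    IMAGES_PER_GPU = 1
    STEPS_PER_EPOCH = 100
    DETECTION_MIN_CONFIDENCE = 0.95  # detections below 95% are rejected

config = PartConfig()
model = modellib.MaskRCNN(mode="training", config=config, model_dir="./logs")
# Start from COCO-pretrained weights; re-learn only the class-specific heads.
model.load_weights("mask_rcnn_coco.h5", by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])
# dataset_train / dataset_val: ~75-100 labelled images of caked parts.
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE, epochs=30, layers="heads")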
III. HARDWARE SYSTEM OVERVIEW

We designed our system keeping in mind a scalable solution that could eventually tackle the problem of post-processing 3D-printed parts in a real scenario, at an advantageous cost. The robotic platform used in this work is, therefore, characterized by cost-efficient, off-the-shelf components combined with a classical position-controlled industrial manipulator. In particular, the main components of our platform are:
• 1 Denso VS060: six-axis industrial manipulator;
• 1 ATI Gamma Force-Torque (F/T) sensor;
• 1 Ensenso N35-802-16-BL 3D camera;
• 1 suction system powered by a Karcher NT 70/2 vacuum machine;
• 1 cleaning station;
• 1 flipping station.
All control and computations are done on a computer with an Intel Xeon E5-2630v3 and 64 GB of RAM.

Figure 2. Our hardware system design.
A. Suction system

For grasping the parts, we decided to use a suction cup. This choice was motivated by the versatility and performance of suction-based systems for bin-picking [20], and by the fact that most recovered powders are recyclable. Our suction system is designed to generate both high vacuum and a high air flow rate. This is to provide sufficient force to lift parts and to maintain a firm hold of parts during brushing. Air flow is guided from a suction cup on the tip of the end effector through a flexible hose into the vacuum machine. The vacuum itself is generated by a 2400 W vacuum cleaner. For binary on/off control, we customize a 12 V control output from the DENSO RC8 controller.
B. Camera

Robust perception is one of the key components of the system. We manually optimized the selection of the camera and its location to maximize the view angles; avoid occlusions from the robot arm, the end-effector, and the cleaning, picking and dropping stations; avoid collisions with objects and the environment; and ensure the safety of the device. For the sake of robustness, our system employs an Ensenso N35-802-16-BL 3D camera and combines the use of both the 2D grayscale image and depth information.

C. Cleaning station

To clean the parts, a brush rack is installed at the bottom of the cleaning station. We also integrate a dust management system to collect excess powder removed during brushing. Throughout the cleaning process, a blowing fan is automatically activated to direct the powder stream frontward into the vacuum suction hose, see Figure 3.

Figure 3. Cleaning station, comprising a fan, a brush rack, and a vacuum outlet.

D. Flipping station

To change the part orientation, we implement a passive flipping station (i.e. no actuator is required to perform flipping). As illustrated in Figure 4, the part is dropped from the top of the station and moves along the guiding sliders. Upon reaching the bottom of the slider, the part is flipped and is ready to be picked up by the robot.

Figure 4. Flipping station.

This design is simple but works well with relatively flat parts such as shoe insoles, flat boxes, etc. The next improvement of the station will accommodate general part geometries and provide more reorientation options.

IV. SOFTWARE SYSTEM OVERVIEW

Our software system is composed of a state machine and a series of modules. The communication between the different hardware and software components is based on the Robot Operating System (ROS) [30].
Figure 5. The system is composed of a state machine and a series of modules to perform perception and different types of action. [State machine diagram: Initialization → Part segmentation and localization → Picking → Cleaning → “Both sides cleaned?” (N: Flipping, then back to Picking; Y: Stowing) → “Caked parts remaining?” (Y: back to Part segmentation and localization; N: End).]

Figure 6. Example result of the object detection module based on Mask R-CNN (left panel: Mask R-CNN detections; right panel: pointcloud extraction). The estimated bounding boxes and part segmentations are depicted in different colors and labelled with the identification proposal and the confidence. We reject detections with confidence lower than 95%.

A. State machine

The state machine is responsible for selecting the appropriate module to execute at each point in time, see Figure 5. The solid arrows show the transitions to the next states, depending on the results of the current states. The state machine has access to all essential information of the system, including the types, poses, geometries, cleanliness, etc. of all objects detected in the scene. Each module can query this information to realize its behavior. As a result, this design is general and can be adapted to many more types of 3D-printed parts. For example, the system can choose and execute a suitable cleaning strategy based on the type of part, its pose, and its prioritized list of actions; and decide whether to reattempt with a different strategy or skip that part based on the outcome. In our implementation, the modules were fine-tuned for each type of object based on previous experiments to obtain a high success rate. It is desirable to have a systematic approach to optimize the algorithm which decides the best cleaning strategy to use for each type of part (see Section V-C).

The state machine is also responsible for selecting the most feasible part to be decaked next. Since the parts are often cluttered or stacked, we generate the cleaning sequence by prioritizing parts that have large top surfaces and are on top of the clutter. This makes suction easier and de-stacks parts in cluttered configurations.
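A minimal sketch of the resulting control flow is given below. The module names, part attributes and return values are hypothetical stand-ins; the actual modules run as ROS nodes with access to the shared scene information.

# Sketch of the state-machine dispatch loop of Figure 5 (hypothetical names).
def run_decaking_cycle():
    while True:
        parts = segment_and_localize()           # perception module
        if not parts:                            # "Caked parts remaining?" -> N
            break                                # -> End
        # Prioritize parts with large top surfaces lying on top of the clutter.
        part = max(parts, key=lambda p: p.exposed_top_area)
        pick(part)                               # suction-down primitive
        clean(part)                              # brush the underside
        flip(part)                               # passive flipping station
        part = relocalize_flipped(part)          # re-detect after the drop
        pick(part)
        clean(part)                              # brush the other side
        stow(part)                               # place into destination bin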
B. Perception

The perception task is to identify and localize visible objects in the working space. This is a challenging problem, due to heavy occlusions (i.e. self-occlusion and occlusion by powder) and the poor contrast between the freshly printed objects and their background. Our approach is to separate the perception into two stages: object detection and segmentation, and 3D pose estimation. In the first stage, we utilize a state-of-the-art deep-learning network to perform instance detection and segmentation. The second stage extracts the 3D points of each object using the segmentation mask and estimates the object pose. This pipeline allows us to exploit the strength of deep learning in processing 2D images, and depth information in estimating object poses. We believe such a combination of depth and RGB to be essential for a robust perception solution.

First, a deep neural network based on Mask R-CNN classifies the objects in the RGB image and performs instance segmentation, which provides pixel-wise object classification. The developers provide a Mask R-CNN model pre-trained on large image classification datasets (the Microsoft COCO dataset [31]). Hence, we applied transfer learning to the pre-trained model to save training time, by training the network to classify a new object class. In our experiments, this network requires 75-100 labelled images of real occurrences of typical 3D-printed parts. The result was a very high detection rate for all parts present in the bin, as shown in Figure 6. On average, our network has a recall of 0.967 and a precision of 0.975.

Second, pose estimation of the parts is done by estimating the bounding boxes and computing the centroids of the segmented pointclouds. The pointcloud of each object is refined (i.e. statistical outlier removal, normal smoothing, etc.) and used to verify whether the object can be picked by suction (i.e. exposed surfaces must be larger than the suction cup area). The 3D information of the bin is also used later during motion planning for collision avoidance. We included heuristics to increase the robustness of the estimations by specifying physical constraints inside the bin, e.g. parts must lie within the bin's walls and cannot be floating within the bin.

We also note that the Ensenso camera cannot capture both 2D and 3D images simultaneously, as the high-intensity IR light tends to blind the 2D camera. This module is, therefore, also in charge of turning the high-intensity IR strobe on and off to prevent it from affecting the 2D images.
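The following sketch illustrates this second stage on one segmented pointcloud. The use of Open3D and the numeric thresholds are our assumptions; the paper does not name the pointcloud library used.

# Sketch: refine a segmented pointcloud, then derive a pick pose from its
# bounding box and centroid (Open3D assumed; thresholds are placeholders).
import numpy as np
import open3d as o3d

def estimate_pick_pose(points_xyz, min_suction_area_m2=4e-4):
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_xyz))
    # Statistical outlier removal, as mentioned in the text.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    obb = pcd.get_oriented_bounding_box()        # bounding-box estimate
    centroid = np.asarray(pcd.points).mean(axis=0)
    # Crude suction check: the exposed (top) face must exceed the cup area.
    extent = np.sort(obb.extent)                 # smallest extent ~ thickness
    exposed_area = extent[1] * extent[2]
    return centroid, obb, exposed_area >= min_suction_area_m2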
C. Motion primitives

Some modules, such as “Picking” and “Cleaning”, are themselves composed of a series of motion primitives.

Picking (suction-down) motion primitives: These primitives are useful for picking parts with exposed, nearly flat surfaces. We limit our implementation to top-down directions. The process is as follows:
1) The robot moves the suction cup above the centroid of the part and lowers the cup. Compliant force control is enabled and tells the robot when to stop the moving-down motion.
2) Check whether the height at which the suction cup was stopped matches the expected height (from the pointcloud, with some tolerances).
3) The suction cup lifts, and the system constantly checks the force-torque sensor to determine whether there is any collision (which often results in large drag forces).
Using suction is typically superior for parts with nearly flat and wide surfaces. However, even when the suction cup cannot form a perfect seal with porous parts, the high air flow often maintains sufficient force on the part to lift it. Moreover, the suction cup is versatile and can be replaced to fit many types of object.
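A sketch of this primitive is given below. Every interface (robot, ft_sensor, suction) and every threshold is hypothetical, standing in for the actual Denso VS060 / ATI Gamma stack driven over ROS.

# Illustrative sketch of the suction-down picking primitive (steps 1-3).
CONTACT_FORCE_N = 5.0    # descend until this normal force is felt
HEIGHT_TOL_M = 0.01      # tolerance vs. pointcloud-predicted height
DRAG_FORCE_N = 15.0      # lateral force indicating a collision while lifting

def suction_pick(robot, ft_sensor, suction, centroid, expected_z):
    robot.move_above(centroid)                     # step 1: approach from top
    z_stop = robot.descend_until_force(ft_sensor, CONTACT_FORCE_N)
    if abs(z_stop - expected_z) > HEIGHT_TOL_M:    # step 2: height check
        return False                               # wrong part or bad estimate
    suction.on()
    robot.lift()                                   # step 3: lift and monitor
    if ft_sensor.lateral_force() > DRAG_FORCE_N:   # large drag => collision
        robot.stop()
        return False
    return True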
Cleaning motion primitives: These primitives remove residual powder and debris from the printed parts. Our current implementation of these primitives is limited to parts that are nearly flat (e.g. shoe insoles).
First, the robot positions the part above the brush rack in the cleaning station. Then, the compliant force control mode is enabled to move the robot until contact with the brushes. Next, the cleaning trajectories are performed. To maintain constant contact between the brushes and the part, we apply a hybrid position/force control scheme: force is regulated in the direction normal to the brushes' surfaces, while position is regulated in the tangential directions. The force thresholds are determined through a number of trial-and-error experiments. The cleaning trajectories are planned following two patterns: spiral and rectircle (see Figure 7). While the spiral motion is well-suited for cleaning nearly flat surfaces, the rectircle motion aids with removing powder in concave areas.

Figure 7. We use a combination of spiral and rectircle paths for cleaning motions. An example of a spiral path is shown in red. The yellow dot denotes the centroid of the part at the beginning of the motion. We modify the spiral paths so that they continue to circle around this yellow dot after reaching a maximum radius. An example of a rectircle path is shown in blue. The rectircle path's parameters include its width, height, and direction in the XY plane.
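For illustration, the modified spiral of Figure 7 can be generated as follows; the parameter values are placeholders, not the tuned ones, and the vertical coordinate is left to the hybrid force controller described above.

# Sketch: Archimedean spiral about the part centroid that keeps circling
# at the maximum radius once it is reached (XY waypoints only).
import numpy as np

def spiral_waypoints(center_xy, r_max=0.05, growth=0.002,
                     step=np.pi / 18, n_points=400):
    theta = np.arange(n_points) * step
    r = np.minimum(growth * theta, r_max)   # grow, then circle at r_max
    x = center_xy[0] + r * np.cos(theta)
    y = center_xy[1] + r * np.sin(theta)
    return np.stack([x, y], axis=1)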
D. Motion planning and control

All collision-free movements are planned online using the Bi-directional Rapidly-exploring Random Trees (RRT) algorithm [32] available in OpenRAVE [33].
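For illustration, such an online planning query follows OpenRAVE's standard Python interface roughly as below; the environment file and goal configuration are placeholders for our actual cell model.

# Sketch: a collision-free joint-space motion planned with OpenRAVE's
# default bi-directional RRT.
from openravepy import Environment, interfaces

env = Environment()
env.Load("decaking_cell.env.xml")        # placeholder scene description
robot = env.GetRobots()[0]
basemanip = interfaces.BaseManipulation(robot)  # wraps the BiRRT planner
# Plan and execute a motion to a goal configuration (e.g. the pre-pick
# pose above a detected part).
goal_config = [0.0, 0.4, 1.2, 0.0, 1.0, 0.0]
basemanip.MoveManipulator(goal=goal_config)
robot.WaitForController(0)               # block until the motion finishes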
V. SYSTEM PERFORMANCE AND DISCUSSION

A. Experimental setup

The parts considered in the experiments were 3D-printed shoe insoles. At the current stage of development, we assumed all parts to be unpacked from the printer powder bed and placed randomly in the container bin. The task of the system was to remove powder from the parts, which naturally consisted of several key steps, as follows:
1) Initial localization of the parts in the container bin,
2) Picking a part out of the container bin,
3) Removing residual powder from the underside of the part,
4) Flipping the part,
5) Re-localizing parts in the container bin and in the flipping collection area,
6) Picking the flipped part,
7) Removing residual powder from the other side of the part,
8) Placing the part into the collection bin and continuing with any remaining parts (Steps 2-8).

B. Performance

The system performed the cleaning task on 10 freshly 3D-printed shoe insoles. Two aspects of the proposed system were experimentally investigated. First, we evaluated the cleaning quality by weighing the parts before and after cleaning. Second, we reported the actual running time of the proposed system in a realistic setting. As the baseline for comparison, we also considered the same task performed by skilled human operators. The results of this experiment are summarized in Table I.

As expected, the performance of the system yielded promising results for a first prototype. Two test runs were conducted and successfully executed by the proposed system. We noted that a perfectly cleaned printed shoe insole weighed 22.4 ± 2 g on average. The proposed system could remove powder from a dirty part in 50.1 ± 10.9 s on average, and parts weighed 37.6 ± 6.4 g on average after cleaning, which means that 42.0% ± 24.4% of the excess powder was removed from the parts. In comparison, a skilled human operator could clean 72.0% ± 12.1% of the excess powder (parts weighed 29.8 ± 3.2 g after cleaning, in 41.2 ± 1.9 s on average). Although our system's cleaning quality was 1.7 times lower than that of a human operator, this was an expected outcome, as our robot system is meant to work significantly longer shifts. It nevertheless raises the question of how task efficiency could be further improved.
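As a sanity check, the removal percentages follow directly from the mean masses reported here and in Table I (spreads ignored):

# Removed share of excess powder = (before - after) / (before - clean).
CLEAN = 22.4                       # mass of a perfectly cleaned insole (g)

def removed_fraction(before, after):
    return (before - after) / (before - CLEAN)

print(removed_fraction(48.6, 37.6))  # our system:        0.4198... ~ 42.0%
print(removed_fraction(48.8, 29.8))  # human, no limit:   0.7196... ~ 72.0%
print(removed_fraction(48.9, 34.2))  # human, 20 s brush: 0.5547... ~ 55.5%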
Table I
PERFORMANCE OF THE PROPOSED SYSTEM, AS COMPARED TO THAT OF SKILLED HUMAN OPERATORS.

Avg per part       | Our system  | Human (no time-limit) | Human (20s brushing)
Mass Before (g)    | 48.6 ± 10.9 | 48.8 ± 7.8            | 48.9 ± 8.0
Mass After (g)     | 37.6 ± 6.4  | 29.8 ± 3.2            | 34.2 ± 4.5
Cycle Time (s)     | 50.1 ± 2.1  | 41.2 ± 1.9            | 21.2 ± 1.1
Brushing Time (s)  | 20          | 40                    | 20

C. Discussion

We observed that our system performed actual brushing actions during only 40% of the execution time. In comparison, humans spent more than 95% of the execution time on brushing. This was attributed to humans' superior skills in sensing and dexterous manipulation. However, when we limited the human operators' brushing time to 20 s, the cleaning quality was reduced: parts weighed 34.2 ± 4.5 g on average after cleaning (55.5% ± 17.9% of the excess powder removed). This suggests that one could improve cleaning quality by prolonging the brushing duration and by upgrading the cleaning station (e.g. spinning brushes to speed up the cleaning action, compliant brushes to clean concave areas, etc.).
We also noted that humans produced more consistent cleaning results (i.e. parts cleaned by human operators had a much smaller weight variation than parts cleaned by our system). This was due to the fact that human operators actively adjusted their cleaning motions based on visual feedback. Hence, incorporating a cleanliness evaluation module into the current system would improve the cleaning quality. For this purpose, another 3D camera could be used to observe the object after each cleaning session. The module would retrieve a list of all dirty locations on the object surfaces. This information would allow the system to plan better cleaning motions and could also be utilized to validate cleaning results.
Moreover, though this experiment featured a batch of nearly identical 3D-printed shoe insoles, our ultimate goal is to extend this robotic system to various types of 3D-printed parts. For example, in many cases, a batch of 3D-printed parts may contain a number of different parts. The cleaning routines, hence, would have to be altered to cater to the different geometries of each part. To achieve that, the state machine can be encoded with specific cleaning instructions for each type of part. Much in the same way human operators change cleaning routines for different parts, our robotic system may recognize different parts, look up previously defined handling instructions, perform the cleanliness evaluation, and execute accordingly.
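For instance, such per-part instructions could be encoded as a simple lookup that the state machine consults after detection; the part classes, primitive names and parameter values below are all hypothetical.

# Sketch: per-part-type cleaning instructions for the state machine.
CLEANING_STRATEGIES = {
    "insole":   [("spiral",    {"r_max": 0.05}),
                 ("rectircle", {"width": 0.08, "height": 0.04})],
    "flat_box": [("rectircle", {"width": 0.12, "height": 0.06})],
}

def plan_cleaning(part_class):
    # The detected class (from Mask R-CNN) selects the primitive sequence.
    return CLEANING_STRATEGIES.get(part_class, CLEANING_STRATEGIES["insole"])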
To investigate the computational cost of our approach, Figure 8 provides the average duration of each action used when performing the task. We note that our robot ran at 50% of its maximum speed and that all motions were planned online. Hence, the system performance could be further enhanced by optimizing these modules (e.g. off-line planning for tight-space motions, working-space optimization, etc.). Moreover, our perception module was running on a CPU; better computing hardware would thus improve the perception speed.

Figure 8. Average time-line representation of all actions used for the cleaning task.

Furthermore, we are also working toward improving our end-effector design. The current system is only capable of performing suction-down motion primitives. Though it can accommodate a very large tilting angle, our system does not have enough maneuverability to perform side picking. A possible approach would be to design a more generic suction gripper to perform the task. Another solution would be to add an assisting motion primitive, Toppling, whose goal is not to pick, but to change the configuration of an object that cannot be picked by the other primitives. It would be called in particular when the exposed face of an object is too narrow; in this scenario it may be possible to topple the object into a new configuration that allows it to be picked by suction. Finally, the air flow can be measured using a pitot tube inside the suction cup/hose. This is used to detect whether a part was successfully grasped, or lost during arm motions.

VI. CONCLUSION

In this paper, we have presented an automated robotic system for decaking 3D-printed parts. We have combined Deep Learning for 3D perception, smart mechanical design, motion planning and force control for industrial robots to develop a system that can perform the decaking task in a fast and efficient manner. Through a series of experiments performed on parts printed by an HP Multi Jet Fusion printer, we demonstrated the feasibility of automating such tasks. The achievements of this first prototype are promising and lay the groundwork for multiple possible extensions: to other types of parts, and to other powder-based 3D-printing processes (SLM, SLS, binder jet, etc.). In future work, we will focus on validating the current system further and expanding it to more general scenarios, e.g. incorporating a cleanliness evaluation module, optimizing task efficiency, and adapting the system to work on more general parts.

ACKNOWLEDGMENT

This research was conducted in collaboration with HP Inc. and partially supported by the Singapore Government through the Industry Alignment Fund – Industry Collaboration Projects Grant.
REFERENCES

[1] E. M. Sachs, J. S. Haggerty, M. J. Cima, and P. A. Williams, “Three-dimensional printing techniques,” Apr. 20 1993, US Patent 5,204,055.
[2] Y. Bai and C. B. Williams, “An exploration of binder jetting of copper,” Rapid Prototyping Journal, vol. 21, no. 2, pp. 177–185, 2015.
[3] E. O. Olakanmi, R. Cochrane, and K. Dalgarno, “A review on selective laser sintering/melting (SLS/SLM) of aluminium alloy powders: Processing, microstructure, and properties,” Progress in Materials Science, vol. 74, pp. 401–477, 2015.
[4] hp.com. (2019) HP multi jet fusion technology. [Online]. Available: https://www8.hp.com/sg/en/printers/3d-printers/products/multi-jet-technology.html
[5] A. Ju, A. Fitzhugh, J. Jun, and M. Baker, “Improving aesthetics through post-processing for 3D printed parts,” Electronic Imaging, vol. 2019, no. 6, pp. 480–1, 2019.
[6] G. Zeng and A. Hemami, “An overview of robot force control,” Robotica, vol. 15, no. 5, pp. 473–482, 1997.
[7] F. Sato, T. Nishii, J. Takahashi, Y. Yoshida, M. Mitsuhashi, and D. Nenchev, “Experimental evaluation of a trajectory/force tracking controller for a humanoid robot cleaning a vertical surface,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011, pp. 3179–3184.
[8] F. Nagata, T. Hase, Z. Haga, M. Omoto, and K. Watanabe, “CAD/CAM-based position/force controller for a mold polishing robot,” Mechatronics, vol. 17, no. 4-5, pp. 207–216, 2007.
[9] J. Hess, G. D. Tipaldi, and W. Burgard, “Null space optimization for effective coverage of 3D surfaces using redundant manipulators,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012, pp. 1923–1928.
[10] C. Eppner, J. Sturm, M. Bennewitz, C. Stachniss, and W. Burgard, “Imitation learning with generalized task descriptions,” in International Conference on Robotics and Automation. IEEE, 2009, pp. 3968–3974.
[11] J. Hess, M. Beinhofer, and W. Burgard, “A probabilistic approach to high-confidence cleaning guarantees for low-cost cleaning robots,” in International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 5600–5605.
[12] G. Lawitzky, “A navigation system for cleaning robots,” Autonomous Robots, vol. 9, no. 3, pp. 255–260, 2000.
[13] J. L. Jones, “Robots at the tipping point: the road to iRobot Roomba,” IEEE Robotics & Automation Magazine, vol. 13, no. 1, pp. 76–78, 2006.
[14] B. Drost, M. Ulrich, N. Navab, and S. Ilic, “Model globally, match locally: Efficient and robust 3D object recognition,” in Computer Society Conference on Computer Vision and Pattern Recognition. IEEE, 2010, pp. 998–1005.
[15] D. Buchholz, D. Kubus, I. Weidauer, A. Scholz, and F. M. Wahl, “Combining visual and inertial features for efficient grasping and bin-picking,” in International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 875–882.
[16] A. Pretto, S. Tonello, and E. Menegatti, “Flexible 3D localization of planar objects for industrial bin-picking with monocamera vision system,” in International Conference on Automation Science and Engineering (CASE). IEEE, 2013, pp. 168–175.
[17] C. Eppner, S. Höfer, R. Jonschkowski, R. Martín-Martín, A. Sieverling, V. Wall, and O. Brock, “Lessons from the Amazon Picking Challenge: Four aspects of building robotic systems,” in Robotics: Science and Systems, 2016.
[18] C. Hernandez, M. Bharatheesha et al., “Team Delft's robot winner of the Amazon Picking Challenge 2016,” in Robot World Cup. Springer, 2016, pp. 613–624.
[19] K.-T. Yu, N. Fazeli, N. Chavan-Dafle, O. Taylor, E. Donlon, G. D. Lankenau, and A. Rodriguez, “A summary of team MIT's approach to the Amazon Picking Challenge 2015,” arXiv preprint arXiv:1604.03639, 2016.
[20] N. Correll, K. E. Bekris, D. Berenson, O. Brock, A. Causo, K. Hauser, K. Okada, A. Rodriguez, J. M. Romano, and P. R. Wurman, “Lessons from the Amazon Picking Challenge,” arXiv preprint arXiv:1601.05484, vol. 3, 2016.
[21] D. Lowe, “Object recognition from local scale-invariant features,” in International Conference on Computer Vision, vol. 99, no. 2. IEEE, 1999, pp. 1150–1157.
[22] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems, 2015, pp. 91–99.
[23] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2961–2969.
[24] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” in European Conference on Computer Vision. Springer, 2016, pp. 21–37.
[25] J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7263–7271.
[26] F. Suárez-Ruiz, X. Zhou, and Q.-C. Pham, “Can robots assemble an IKEA chair?” Science Robotics, vol. 3, no. 17, p. eaat6385, 2018.
[27] C. Ott, R. Mukherjee, and Y. Nakamura, “Unified impedance and admittance control,” in International Conference on Robotics and Automation. IEEE, 2010, pp. 554–561.
[28] T. Lefebvre, J. Xiao, H. Bruyninckx, and G. De Gersem, “Active compliant motion: a survey,” Advanced Robotics, vol. 19, no. 5, pp. 479–499, 2005.
[29] A. Calanca, R. Muradore, and P. Fiorini, “A review of algorithms for compliant control of stiff and fixed-compliance robots,” IEEE/ASME Transactions on Mechatronics, vol. 21, no. 2, pp. 613–624, 2015.
[30] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, “ROS: an open-source Robot Operating System,” in ICRA Workshop on Open Source Software, vol. 3, no. 3.2. Kobe, Japan, 2009, p. 5.
[31] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” in European Conference on Computer Vision. Springer, 2014, pp. 740–755.
[32] J. J. Kuffner Jr and S. M. LaValle, “RRT-connect: An efficient approach to single-query path planning,” in International Conference on Robotics and Automation, vol. 2. IEEE, 2000.
[33] R. Diankov, “Automated construction of robotic manipulation programs,” Ph.D. dissertation, Carnegie Mellon University, 2010.
