

Proceedings of ASME's 14th Manufacturing Science and Engineering Conference
MSEC 2019
June 10-June 14, 2019, Erie, PA, USA

MSEC2019-3027

TOWARDS REMOTE TELEOPERATION OF A SEMI-AUTONOMOUS MOBILE MANIPULATOR SYSTEM IN MACHINE TENDING TASKS

Vivek Annem1, Pradeep Rajendran1, Shantanu Thakar1, Satyandra K. Gupta1*


1 Center for Advanced Manufacturing, University of Southern California, Los Angeles, CA-90089

ABSTRACT
Increasing the level of automation in material handling tasks in small volume production operations can improve human productivity and overall manufacturing system performance. In this paper, we present a teleoperated mobile manipulator system that can be used for tending machines and transporting parts in manufacturing applications. The remotely located human operator can interact with the semi-autonomous mobile manipulator by giving it high level instructions. We have incorporated several sensors on the system to ensure safe teleoperation, where the operator gives only high level motion goals to the mobile manipulator, such as waypoints for the mobile base motion and interactive marker poses for the manipulator motion. The point clouds from multiple depth cameras are used for mapping the environment. The robot plans autonomous motions between the given waypoints, ensuring that the resulting motions are collision-free. We have conducted case studies with two different types of parts to be extracted from a 3D printer. The system was tested by multiple users, who were successful in completing the tasks in a reasonable amount of time using our interface.

1 INTRODUCTION
Material handling is highly automated in large production volume applications. Automated conveyors, automated guided vehicles, and machine loading/unloading robots are commonly used in high volume production lines [1]. This helps in ensuring high throughput rates and reducing the possibility of human errors. It also eliminates the need for humans to perform ergonomically challenging tasks. On the other hand, small volume production operations require a significant amount of manual material handling. Many manufacturing processing operations, such as CNC machining and 3D printing, are highly automated even in small production volume applications; however, humans still have to perform manual loading and unloading of machines and transfer workpieces between machines. The high variety of parts has made it very challenging to automate material handling in small production volume applications. Ensuring safe automated operation requires sophisticated sensing to ensure that machines and parts do not get damaged. Complete material handling automation is therefore not likely to be cost effective in the near term in small production volume applications. We need to figure out how to increase human productivity in material handling operations by utilizing automation.

We take inspiration from recent advances in warehouse automation. Warehouses handle a wide variety of products and need to maintain high throughput rates. The main tasks at a warehouse can be divided into three categories: (1) transportation of goods from shelves to packing stations (or from unloading stations to shelves), (2) inspection of goods, and (3) manipulation tasks to perform packing and unpacking. Recent efforts in warehouse automation have largely focused on using mobile robots for transportation of shelves [2–4]. Robots bring shelves containing goods to humans, and humans retrieve items from the shelves or deposit items onto them. This division of labor between humans and robots enables the overall system to exploit the strengths of both. Humans are able to perform challenging inspection and manipulation tasks, and they do not have to waste time traveling to the shelves. Since humans do not need to travel to the storage area, the storage area can be made more space efficient. This enables the overall system to operate efficiently and increases the productivity of human operators.
∗ Address all correspondence to this author. Email: guptask@usc.edu
The advent of mobile manipulators is enabling new ways to perform material handling operations in small production volume applications. Safety will be a very important criterion for deploying mobile manipulators in such applications. The mobile base should not collide with machines and other vehicles during the transport operation. During machine loading/unloading operations, the robot manipulator has to come in contact with machines; it should handle the part carefully and not damage it. Fully autonomous mobile manipulators like the one in [5] are capable of performing the entire task that we use our system to perform. However, to perform these tasks, the robot has to navigate through expansive and changing environments. A fully autonomous system must account for every small detail during the mapping of the environment, as the scope for error is small due to the cost of the machines and equipment involved. This might not be possible because of the constantly changing environments. Moreover, the parts to be handled may not be easy to detect and grasp autonomously. Hence, it is imperative that a human be in the loop while tasks are being performed by the robot. Therefore, we believe that a teleoperated semi-autonomous mobile manipulator system will be a good way to ensure task safety and increase human productivity. In this system, humans will not perform tedious and ergonomically challenging tasks with the robot, such as commanding individual robot joint motions or remote-controlled mobile robot motions. Instead, remotely located human operators will be able to provide high level instructions to enable the mobile manipulator to perform the material handling task safely. In addition, humans will be able to verify the robot motion before it is executed for interactions with expensive machines. This will result in safe operations as well as increased productivity. Furthermore, a semi-autonomous remote operation system will allow a human operator to manage multiple machines simultaneously.

In this paper, we will focus on machine loading/unloading tasks performed by a teleoperated semi-autonomous system. To perform this task, we need a number of functional capabilities. The first capability is focused on low level planning. We do not want human operators to teleoperate individual joints. Therefore, the robot needs to be able to generate collision-free trajectories of the mobile base and robot arm for the motion specified by the human. The second capability requires the robot to be able to perform sensing at an adequate rate and resolution to ensure that it will not collide with objects in the workspace. The third capability requires the robot to be able to update the remotely located human operator about the workspace state and the task progress. This enables the human operator to select the appropriate motion goals. The final capability requires the robot to be able to get task input from the human operator. The human should be able to provide motion goals and task constraints using an easy-to-use interface. This capability should ensure that there is no ambiguity in the command issued by the human to the robot. This paper describes our approach for realizing these four capabilities and preliminary experiments to demonstrate the technical feasibility of our approach.

We have compared our system with the approach presented in [6], where the human operator has complete control of the manipulator using a joystick that maps to the end-effector velocities. We have implemented a similar system, but with teleoperation using a keyboard instead. Essentially, we want to compare our remotely teleoperated semi-autonomous system with a complete end-effector control teleoperation system for ease of use and safety. We call this model the fully teleoperated system. Moreover, we have a similar remote operation mode for the mobile base as well, where the user has full control of the robot forward and turning velocities. For both the mobile base and the manipulator, we provide a live video stream and a 3D map of the environment to the user. The details of the fully teleoperated system are presented in Sec. 6.

2 RELATED WORK
Robots have been used in factory and warehouse settings for transportation of objects and parts, like the KIVA system [2–4]. Mobile manipulators have been used for tasks like assembly and production in manufacturing settings [7–10]. Their operation modes can be either autonomous [10–13] or teleoperated [14–16].

Mobile manipulators like the Little Helper [17] have been developed to make automation flexible in industrial settings. These systems carry various sensors like laser scanners, ultrasonic sensors, Kinects, and wheel encoders. With these sensors, the robots can navigate autonomously, avoiding obstacles and humans. Researchers have also assessed the feasibility and use of multiple such mobile manipulators in industrial production facilities [18]. Various warehouse robots have been developed for automating order picking for various kinds of products. In particular, a dual arm robot has been developed in [19] for transporting large parts like cartons. Such robots add to the flexibility of the operations that can be automated.

Our focus in this work is a teleoperated semi-autonomous system for the entire mobile manipulator that allows intuitive motion planning of the mobile robot and manipulator motions. For teleoperation of mobile robots [20], previous work includes using virtual reality for simulating motion [21] and haptic feedback during actuation [22].

Teleoperation of high-DOF systems presents challenges, as there are many different ways such a system can be actuated. Controlling each joint to position the end-effector is not intuitive. Positioning and actuation using a GUI has been presented in [23], where an interface is presented to perform manipulation tasks in simulation followed by the real robot execution. The challenge with using a similar technology with mobile manipulators is that the scene changes as the mobile robot moves. Hence, mapping obstacles in the scene becomes necessary for the manipulator motion.



Moreover, the final configuration of the arm is not visualized in [23], which is important for a mobile manipulator to avoid collisions.

In this work, we use a 6-DOF positioning ring and arrow representation like the one in RViz [24] or Robot Web Tools ros3djs [25] for simulating grasping of the part. The marker can be moved freely in space and also rotated along the Euler angles, as shown in its applications for home assistance [26, 27]. Moreover, we make use of MoveIt! [28] along with OMPL [29] for determining a collision-free trajectory of the manipulator.

The motion of the mobile base is planned by the user by looking at a video stream, as presented in [30]. We have integrated a laser scanner and a point cloud from multiple Kinects to map the environment and build a map using the RTAB-Map package [31]. This gives us the flexibility to teleoperate the mobile robot while looking at the live feed of the camera for picking up a part.

3 THE MOBILE MANIPULATOR SYSTEM
The mobile manipulator system is composed of a differentially driven mobile robot, a 6-DOF manipulator arm, and a two-fingered gripper. The system comprises multiple sensors for the localization and motion planning of the mobile base as well as the manipulator.

3.1 Hardware
Figure 1 shows the Agile Dexterous Autonomous Mobile Manipulation System (ADAMMS). Figure 2 shows ADAMMS performing a manipulation task. Although the system can be used for autonomous operations, in this work we present only the teleoperated semi-autonomous mode of the robot. The components of ADAMMS are described hereon.

1. Mobile Manipulation System: The system consists of 3 main robotic hardware components: the manipulator, the gripper, and the mobile base.
(a) Manipulator: A UR5 robot from Universal Robots is used as the manipulator. The structure of the UR5 is such that the first three joints, namely the base, the shoulder, and the elbow joints, provide the large motions for the manipulator, whereas the three joints at the end provide wrist-like motions. This makes the UR5 an ideal robot for pick-up operations, because the end-effector orientations can be fine-tuned by motions of just the last few joints.
(b) Mobile Base: We use a SuperMegaBot from InspectorBots as our mobile base. The SuperMegaBot is built with 4 high torque motors which help it move robustly at speeds as high as 12 mph. Two Roboteq motor controllers, each connected to 2 motors, make the base a highly responsive mobile system. This configuration enables us to teleoperate the robot at high speeds. Moreover, the payload carrying capacity of the SuperMegaBot is more than 100 kg, which enables us to mount the UR5, its controller, batteries, the computer, as well as all the sensors without significantly affecting its motion.
2. Sensors For Localization: ADAMMS has a sensor suite comprising a laser scanner, a high-speed camera, multiple Kinects, and wheel encoders. This allows us to implement high speed localization (10-20 FPS) of the mobile base with an accuracy of about 3-5 cm. We use 4 AMT 10-V encoders to perform wheel odometry. The high resolution 2048 PPR (Pulses Per Revolution) encoders provide accurate wheel odometry data to help localize the system.
We also use a FLIR Blackfly S camera to perform ArUco marker based pose estimation (a minimal sketch of this follows the list). This camera is used for its high FPS image stream over USB 3.1. The camera is pointed at the ceiling, and ArUco (Augmented Reality) [32] markers are placed across the ceiling. We use the ArUco library to perform pose estimation of the mobile base and produce a second source of odometry for localization.
We also use a Microsoft Kinect 360 and a SICK LMS 151 LIDAR to perform visual odometry. The Microsoft Kinect 360 provides RGB-D data of the surroundings at 30 FPS. We use the RTAB-Map [31] library to process the data and output visual odometry. As shown in Fig. 3, all the sources of odometry are linked to the Kalman filter to perform localization.
3. Sensors To Augment User Experience Of Teleoperation: A good teleoperation system needs an additional sensor system to smooth the user experience and provide as many visual inputs as necessary to perform the desired tasks.
We use a Microsoft Kinect 360 to make use of its RGB-D data. An interface is built in which the real-time image stream is interleaved in a 3D augmented reality scene so that the user can perform the desired manipulation task remotely and comfortably. The Kinect's 2D image stream is also displayed in the GUI to provide a real-time view of the surroundings to the operator throughout the task.
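To make the ceiling-marker localization in item 2 concrete, the following is a minimal sketch of ArUco-based pose estimation using OpenCV's aruco module. The dictionary choice, camera intrinsics, and marker size are illustrative assumptions rather than our actual calibration, and the exact function names vary slightly across OpenCV versions.

```python
import cv2
import numpy as np

# Illustrative calibration values; a real system uses the Blackfly's
# calibrated intrinsics and the measured marker size.
K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])
DIST = np.zeros(5)        # assume negligible lens distortion
MARKER_SIZE = 0.15        # marker edge length in meters (assumed)

ARUCO_DICT = cv2.aruco.Dictionary_get(cv2.aruco.DICT_5X5_100)
PARAMS = cv2.aruco.DetectorParameters_create()

def marker_pose(frame):
    """Return (marker id, rvec, tvec) for the first ceiling marker seen
    by the upward-pointing camera, or None if no marker is visible."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT,
                                              parameters=PARAMS)
    if ids is None:
        return None
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE, K, DIST)
    # Inverting this camera-to-marker transform and composing it with the
    # surveyed pose of the marker on the ceiling gives the base pose,
    # which is then fused with the other odometry sources (Fig. 3).
    return ids[0][0], rvecs[0], tvecs[0]
```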
3.2 Software
For the operation of the system, we extensively use ROS (Robot Operating System) [24]. ROS enables us to use multiple hardware components together and keep them synchronized. Moreover, a virtual environment taking data from the real world is generated, so that we can take actions in the virtual world and check the motions of the system without moving the real robots.

We have built a Graphical User Interface (GUI) to operate ADAMMS. Figure 5 shows the overview of all the buttons of the GUI. Figure 6 shows the complete view of the GUI. It consists of a live feed from the Kinects on board the robot and a virtual map of the environment for the mobile base and the manipulator.



FIGURE 1: Components of the mobile manipulator system developed in our lab

Detailed usage of the GUI is explained in Sec. 4. The mobile base and manipulator planners individually plan the low-level motion and control.

The software infrastructure is presented in Fig. 4. The base planner takes in the base motion goal and the scene information, and generates a trajectory. The specifics of the trajectory planner are explained in Sec. 5. The trajectory is implemented by the base controller by sending the respective command velocity goals to the motor controller of the mobile base. The manipulator planner makes use of the scene information from the Kinects and the joint state information from the manipulator, and generates a trajectory to the corresponding motion goal given by the user.

4 GRAPHICAL USER INTERFACE
We have designed an intuitive GUI for the teleoperated semi-autonomous system to let the user perform high-level tasks easily. To make sure of that and to make the system flexible, we have incorporated the following utilities in it. The following is a list of steps to perform generic mobile manipulation tasks.

1. Select the desired motion goal for the mobile base in the available map
2. Initiate trajectory generation for the mobile base
3. Verify the system generated trajectory for the mobile base
4. Set the desired view in the augmented reality scene window
5. Select the desired motion goal for the manipulator
6. Initiate trajectory generation for the manipulator
7. Verify the system generated simulated manipulator motion
8. Execute the robot motion
9. Perform the desired grasping action

All these operations are completed in the virtual environment first and then executed on the real robot. The system is built to perform certain manipulation tasks on the factory floor, which can be guided by the operator at a remote location. The GUI presents a very convenient interface for the user to navigate and perform different tasks on the mobile manipulator system.

The application performs 2 types of tasks: (1) the Mobile Base Motion task and (2) the Manipulator Motion task. Both tasks have 2 windows and task-specific buttons.

4.1 Mobile Base Motion Mode
This task has 2 windows and 1 button.

1. Real-time World View: This window shows a real-time video stream of the system in motion from Kinect A (denoted in Fig. 1). With the help of this, the user can always know where things are and compare it with the map in the second window before providing a motion goal for the mobile base.
2. Virtual Map View: This window shows a virtual map in RVIZ which contains a simulated model of the system. The localization data from the sensor system is used to situate the model in the correct location on the map.



FIGURE 3: The data streaming from each of the sensors is given as input to an Extended Kalman Filter (EKF) to obtain the best pose estimate of the mobile base

FIGURE 4: The software architecture of the system in terms of the ROS node diagram

FIGURE 2: ADAMMS performing part extraction from a 3D printer


FIGURE 5: The GUI and the corresponding hardware component motion connected to the buttons on the GUI

To provide a motion goal to the mobile base, the user can click on the 2DNavGoal button, select a desired point on the map, and orient the arrow in the desired direction. Using the localization and trajectory planner system, the mobile manipulator system follows a planner-generated path, which can be seen in this window. An example of this is shown in Fig. 8. The user can always double-check the motion by comparing it with the Real-time World View window, and if the planned path is not desired, the user can click the Stop button from Base Actions (denoted in Fig. 5). This gives a high level of flexibility to the user.
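Under the hood, the 2DNavGoal button does the same thing as RViz's standard 2D Nav Goal tool: it publishes the selected pose as a geometry_msgs/PoseStamped message on the /move_base_simple/goal topic. A minimal rospy sketch follows; the frame name and coordinates are illustrative.

```python
import rospy
from geometry_msgs.msg import PoseStamped
from tf.transformations import quaternion_from_euler

rospy.init_node("nav_goal_sender")
pub = rospy.Publisher("/move_base_simple/goal", PoseStamped, queue_size=1)
rospy.sleep(0.5)  # let the publisher register with subscribers

goal = PoseStamped()
goal.header.frame_id = "map"          # goals are expressed in the map frame
goal.header.stamp = rospy.Time.now()
goal.pose.position.x = 2.0            # illustrative map coordinates (m)
goal.pose.position.y = 1.5
# Heading of the arrow the user drags in the Virtual Map View.
qx, qy, qz, qw = quaternion_from_euler(0.0, 0.0, 1.57)
goal.pose.orientation.x = qx
goal.pose.orientation.y = qy
goal.pose.orientation.z = qz
goal.pose.orientation.w = qw
pub.publish(goal)
```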
4.2 Manipulator Motion Mode
This mode has 2 windows and 7 buttons.

1. Real-time World View: This window shows a real-time video stream of the UR5 manipulator. The image stream comes from Kinect B (denoted in Fig. 1), placed such that the user can see the UR5 arm and the surroundings in which the user is performing the manipulation tasks. The user can use this as a reference when providing motion goals for the manipulator.



FIGURE 6: The GUI during the operation of the mobile base mode for mobile base trajectory generation

2. Augmented Manipulation View: This window shows a virtual environment in RVIZ which contains the simulated model of the system, on which a real-time depth cloud image of the current robot surroundings is projected. An example of this is shown in Fig. 7. The depth cloud image comes from Kinect B, placed such that the user can make use of the depth data of the surroundings effectively. The user can navigate in the 3D environment and understand the surroundings to perform the desired manipulation action conveniently. To provide a motion goal to the manipulator, the user can reach the desired view, click on the interactive marker at the end-effector of the simulated arm, and move it to the desired location. The user can then click the Plan button (denoted in Fig. 5) to see the simulated action of the robot. If the simulated motion is not desired, the user can change the motion goal, or even move the base to a better location or orient the system in a better way such that the manipulation task is easier for the arm. The user can also simulate the grasp to check if the grasp is possible by clicking the Close button (denoted in Fig. 5) in the Simulated Gripper Actions. Once the desired plan is simulated, the user can click the Execute button (denoted in Fig. 5) to execute the motion. The Close button (denoted in Fig. 5) in the Real Gripper Actions can then be used to perform the grasp action on the part. The user can always double-check the motion by comparing it to the Real-time World View window, and if the planned path is not desired, the user can click on the Stop button. This gives additional flexibility to the user.

5 TRAJECTORY PLANNING
The trajectory planning and motion generation for the mobile manipulator is implemented in three stages; the first and the third stages are for the mobile base. In these stages, only the mobile base is teleoperated semi-autonomously, while the manipulator stays in a configuration such that its footprint is within the footprint of the mobile base. Here, we assume that the mapping of the environment is done beforehand and the map is available for localization of the mobile base. The second stage is for the trajectory planning and motion generation for the manipulator arm. This stage is implemented after the mobile base is stationed near the machine that needs tending.

5.1 Mobile Base Trajectory Generation
The planning for the mobile base is based on the Dynamic Window Approach (DWA) [33] algorithm. The algorithm initially samples discretely in the robot's control space, which in our case is the velocity of the right and the left wheels. For each sampled velocity, a forward simulation is performed from the mobile base's current state to predict the state (pose (x, y, φ)) after a short period of time. For each of these forward simulated trajectories, a score is evaluated based on the proximity to obstacles, the distance to the goal, and the proximity to the speed limits. Trajectories which result in collision or violation of speed limits are discarded.



FIGURE 7: The GUI during the operation of the manipulator mode for manipulator motion generation

FIGURE 8: [Left] Waypoint being given to the mobile base planner. [Right] Corresponding path generated
Further, the trajectory which receives the highest score is implemented by sending the associated velocity as the target velocity to the mobile base controller. This approach is repeated until the mobile base reaches the commanded waypoint or the goal location. During the execution of the trajectory, the localization of the mobile base is done by performing sensor fusion using the EKF. The collision avoidance scheme utilizes the Flexible Collision Library (FCL) [34], which uses mesh-to-mesh intersections to determine the distance to the obstacles. The cost function for collision avoidance in DWA uses this distance to compute a collision score.
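The sample-simulate-score loop described above can be condensed into a short sketch. The horizon, score weights, and track width below are illustrative assumptions rather than the tuned values used on ADAMMS, and the clearance function stands in for the FCL distance query.

```python
import math

def simulate(pose, v_l, v_r, dt=1.5, wheel_base=0.6):
    """Forward-simulate differential-drive kinematics for dt seconds.
    pose = (x, y, phi); wheel_base is an assumed track width."""
    x, y, phi = pose
    v = 0.5 * (v_l + v_r)              # linear velocity
    w = (v_r - v_l) / wheel_base       # angular velocity
    return (x + v * math.cos(phi) * dt,
            y + v * math.sin(phi) * dt,
            phi + w * dt)

def dwa_step(pose, goal, clearance, samples, v_max=1.0):
    """Pick the best (v_l, v_r) pair from sampled wheel velocities.
    clearance(x, y) returns distance to the nearest obstacle (e.g. FCL)."""
    best, best_score = None, -float("inf")
    for v_l, v_r in samples:
        if abs(v_l) > v_max or abs(v_r) > v_max:
            continue                    # violates speed limits: discard
        x, y, phi = simulate(pose, v_l, v_r)
        d_obs = clearance(x, y)
        if d_obs <= 0.0:
            continue                    # predicted collision: discard
        d_goal = math.hypot(goal[0] - x, goal[1] - y)
        # Weighted score: favor clearance, goal progress, and speed.
        score = 1.0 * d_obs - 2.0 * d_goal + 0.5 * (v_l + v_r)
        if score > best_score:
            best, best_score = (v_l, v_r), score
    return best  # target wheel velocities for the base controller
```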
FIGURE 9: The 6-DOF marker for the manipulator planning for commanding the next waypoint for the gripper

This planning and control are implemented using the move_base package from the ROS framework. The user provides waypoints on the map of the environment that is available. The DWA planner is implemented between each waypoint, and the mobile base can thus reach the machine to load or unload. The user can make use of the video stream from Kinect A, which provides the user cues on where to place the next waypoint on the map. This helps in avoiding dynamic obstacles. Moreover, the user can stop the mobile base while in motion between two waypoints if he sees an unforeseen dynamic obstacle in front of the robot. This method is suitable for a warehouse or a factory-like setting, where the environment is unstructured and changing. Figure 8 shows the waypoints being commanded on the map, and the corresponding path for the mobile base.
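Programmatically, stepping through such a waypoint list maps onto the standard move_base action interface. A minimal sketch follows; the waypoints are illustrative, and cancelling the goal corresponds to the Stop button in our GUI.

```python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("waypoint_client")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

# Illustrative waypoints (x, y, heading quaternion z/w) in the map frame.
waypoints = [(1.0, 0.5, 0.0, 1.0), (3.5, 2.0, 0.707, 0.707)]
for x, y, qz, qw in waypoints:
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.z = qz
    goal.target_pose.pose.orientation.w = qw
    client.send_goal(goal)
    client.wait_for_result()  # block until this waypoint is reached
    # client.cancel_goal() would abort between waypoints, which is what
    # the Stop button does when a dynamic obstacle appears.
```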



The low level control of the mobile robot is implemented as a PI control of the velocity of each wheel using encoder feedback. This controller is implemented by the Roboteq controller, as mentioned in Sec. 3.1. The target velocity for each wheel is set by the DWA planner.
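For reference, the per-wheel loop has roughly the following shape. This is a generic PI sketch with illustrative gains, not the firmware that actually runs on the Roboteq controller.

```python
class WheelPI:
    """Minimal PI velocity loop for one wheel; gains are illustrative."""
    def __init__(self, kp=0.8, ki=0.2, out_limit=1.0):
        self.kp, self.ki = kp, ki
        self.integral = 0.0
        self.out_limit = out_limit

    def update(self, target_vel, encoder_vel, dt):
        error = target_vel - encoder_vel       # encoder feedback
        self.integral += error * dt
        u = self.kp * error + self.ki * self.integral
        # Clamp output and stop integrating when saturated (anti-windup).
        if abs(u) > self.out_limit:
            u = max(-self.out_limit, min(self.out_limit, u))
            self.integral -= error * dt
        return u  # motor command; target_vel comes from the DWA planner
```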
5.2 Manipulator Trajectory Generation
Once the mobile base reaches the machine which it needs to tend, the manipulator motion can be implemented. The scene for the manipulator is generated using the point cloud from Kinect B. The 6-DOF marker that can be controlled by the user is moved so as to give waypoints for the end-effector, i.e., the gripper, as shown in Fig. 9. The waypoints are executed till the final pick-up pose for the gripper is commanded. The manipulator then positions itself at the grasping configuration. The gripper is then closed using the buttons in the GUI, and the part is picked up.
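The ring-and-arrow marker of Fig. 9 can be served with the ROS interactive_markers package. The sketch below creates one rotation ring and one translation arrow per axis, which together give the full 6 DOF; the frame name and marker scale are assumptions.

```python
import rospy
from interactive_markers.interactive_marker_server import InteractiveMarkerServer
from visualization_msgs.msg import InteractiveMarker, InteractiveMarkerControl

def on_move(feedback):
    # feedback.pose is the commanded end-effector waypoint.
    rospy.loginfo("Gripper waypoint: %s", feedback.pose)

rospy.init_node("gripper_marker")
server = InteractiveMarkerServer("gripper_waypoints")

marker = InteractiveMarker()
marker.header.frame_id = "base_link"   # assumed manipulator base frame
marker.name = "gripper_goal"
marker.scale = 0.2

# One ring (rotate) and one arrow (translate) per axis gives 6 DOF.
for axis, (x, y, z) in [("x", (1, 0, 0)), ("y", (0, 1, 0)), ("z", (0, 0, 1))]:
    for mode in (InteractiveMarkerControl.ROTATE_AXIS,
                 InteractiveMarkerControl.MOVE_AXIS):
        ctrl = InteractiveMarkerControl()
        ctrl.name = ("rotate_" if mode == InteractiveMarkerControl.ROTATE_AXIS
                     else "move_") + axis
        ctrl.orientation.w = 1.0   # RViz normalizes the axis orientation
        ctrl.orientation.x, ctrl.orientation.y, ctrl.orientation.z = x, y, z
        ctrl.interaction_mode = mode
        marker.controls.append(ctrl)

server.insert(marker, on_move)
server.applyChanges()
rospy.spin()
```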
The planner for the manipulator is implemented using OMPL [29] from the manipulator planning framework MoveIt! [28]. The point cloud rendered by Kinect B results in the recording of the obstacles, and the planner finds a path through these obstacles. OMPL offers multiple motion planners for manipulators. In our experience, testing with the available 3D printers and a CNC machine, we noted that the RRT-Connect algorithm [35] works best for this purpose. This planner takes about 1-3 seconds to compute a plan in the cluttered environments in which we plan to operate. The virtual motion of the manipulator is first implemented and checked for collisions by the user. This gives the confidence to execute the motion on the real manipulator, which is thereupon executed. This process is followed until the pick-up operation is completed and the manipulator footprint is once again inside the mobile base footprint. The UR5 manipulator motion is executed using the low level joint motor control in each joint by the inbuilt controllers of the robot.
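This plan-verify-execute cycle maps naturally onto the moveit_commander Python API. The following is a minimal sketch assuming a move group named "manipulator" and an illustrative target pose; the planner id string selects OMPL's RRT-Connect in a standard MoveIt configuration.

```python
import sys
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
group = moveit_commander.MoveGroupCommander("manipulator")  # assumed name
group.set_planner_id("RRTConnectkConfigDefault")  # OMPL RRT-Connect
group.set_planning_time(3.0)  # in line with the 1-3 s budget noted above

# Goal pose taken from the 6-DOF interactive marker (values illustrative).
target = Pose()
target.position.x, target.position.y, target.position.z = 0.4, 0.1, 0.3
target.orientation.w = 1.0
group.set_pose_target(target)

plan = group.plan()  # collision-checked against the Kinect B point cloud
# Older MoveIt releases return a RobotTrajectory; newer ones return a
# (success, trajectory, planning_time, error_code) tuple.
if isinstance(plan, tuple):
    plan = plan[1]
# The trajectory is animated in RViz first; Execute is pressed only after
# the user has visually verified that the simulated motion is acceptable.
if plan.joint_trajectory.points:
    group.execute(plan, wait=True)
group.stop()
group.clear_pose_targets()
```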
After the manipulator motion for part pick-up is completed, the third stage starts for the motion of the mobile base to transport the part to the final location.

FIGURE 10: An example of a set of waypoints given for the base motion components of the task

6 EXPERIMENTS AND RESULTS
To demonstrate the full capabilities of our system, we have conducted user tests of a generalized mobile manipulation task, namely a pick and place task, using our semi-autonomous teleoperated system. The task has 3 components. The first component involves navigating the system from an initial location to reach a 3D printer. After reaching the destination, the user needs to extract the part printed in the 3D printer using the manipulator, which is the second component. The user then needs to navigate the system back to the initial location, which is the third and final component of the task. The entire task is performed using only the teleoperated semi-autonomous system. Figure 10 shows the map of the environment and the intermediate motion goals given by the user to the mobile base. The top row of images of Fig. 11 shows the system reaching those locations in the virtual environment. The bottom row of images of Fig. 11 shows the Kinect A view using which the user can monitor the system. After reaching the 3D printer, the user is tasked with picking up the printed part from the printer. Figures 12 and 13 show the user providing intermediate motion goals to the robotic arm for picking up different parts. After picking up the part, the user then navigates back to the initial location.

Figures 11, 12 and 13 demonstrate the first major functional capability of the system, which is low-level planning. The user is only tasked with providing high level goals to both the mobile base and the manipulator. The map and the simulated model of the system represent sensing, which is the second functional capability of the system. The sensor suite used ensures the system does not malfunction at any point during the usage of the system. The model also updates the user with the current status of the system and the surroundings, thereby demonstrating the third major functional capability of the system. The GUI and virtual environment provide an intuitive interface for the user to interact with the system to the fullest, which demonstrates the final major functional capability of the system.

We have also implemented a fully teleoperated system for the same task and present the results in Tab. 1. As opposed to the teleoperated semi-autonomous system, the fully teleoperated



FIGURE 11: [Top] Virtual map views of a set of waypoints during the mobile base motion component of the task. [Bottom] Real-time world views of the corresponding virtual map view

FIGURE 12: [Top] Augmented manipulation views of a set of waypoints during the manipulation motion component of the task for a cylindrical part. [Bottom] Real-time world views of the corresponding virtual map view

system needs the user to provide input in a lot more detail. For the first and third components of the task, the user provides command velocities to the base using a set of keys on the keyboard. The Real-time World View, Virtual Map View, and Augmented Manipulation View, discussed in Sec. 4, are used for visualization of the system. The velocity limits set are the same as in the teleoperated semi-autonomous system. For the second component of the task, the user sends a velocity command in one of the three axes at a time using a different set of keys on the keyboard. Similarly, the user can also change the orientation of the end-effector by providing angular motions using another set of keys. This method is used to provide the most amount of control for teleoperation in a simple way for the user.
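The base portion of the fully teleoperated mode amounts to mapping raw keypresses to velocity commands. A minimal sketch follows; the key bindings, speeds, and topic name are illustrative assumptions, not the exact ones used in our study.

```python
import sys, termios, tty
import rospy
from geometry_msgs.msg import Twist

# Assumed bindings: forward/backward in m/s, turning in rad/s.
BINDINGS = {"w": (0.3, 0.0), "s": (-0.3, 0.0),
            "a": (0.0, 0.5), "d": (0.0, -0.5)}

def get_key():
    """Read one raw keypress from stdin."""
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        return sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)

rospy.init_node("keyboard_base_teleop")
pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
while not rospy.is_shutdown():
    key = get_key()
    if key == "q":
        break
    v, w = BINDINGS.get(key, (0.0, 0.0))  # any other key stops the base
    cmd = Twist()
    cmd.linear.x = v
    cmd.angular.z = w
    pub.publish(cmd)
```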

In order to quantify the ease of use of the system, the times taken by the users to perform the task in both modes were noted. We asked 5 users, who had not interacted with the system before, to perform the tasks, and we gave a demonstration of the usage of both systems. Table 1 shows the average time taken by the users to complete the three components of the task in both systems. The average total time taken for task completion is 10:06 minutes with the teleoperated semi-autonomous system and 11:49 minutes with the fully teleoperated system.

TABLE 1: Comparison of average time taken by 5 users for different components of the task using the teleoperated semi-autonomous system and the fully teleoperated system (times in mm:ss)

Components of the task                                      Teleoperated Semi-Autonomous   Fully Teleoperated
1. Going from the initial location to the 3D printer        03:15                          03:53
2. Picking up the part                                      03:43                          04:25
3. Going from the 3D printer back to the initial location   03:08                          03:31



FIGURE 13: [Top] Augmented manipulation views of a set of waypoints during the manipulation motion component of the task for a cuboidal part. [Bottom] Real-time world views of the corresponding virtual map view

In the first and third components, the users of the teleoperated semi-autonomous system interacted with the system by only providing a point on the map and the direction for the system to face, whereas, when using the fully teleoperated system, the users had to make sure to avoid obstacles and use all the cameras to get to the destination. In the second component of the task, the users of the fully teleoperated system averaged 1.5 emergency stoppages due to either singular configurations or joint limits. Due to these reasons, users of the teleoperated semi-autonomous system took 1:43 minutes less for task completion.

After each task completion, the users were asked to fill out the NASA TLX [36] form to measure the workload of the task. The results of the survey are shown in Tab. 2.

TABLE 2: NASA TLX Results

Type of workload   Teleoperated Semi-Autonomous   Fully Teleoperated
Mental Demand      1.67                           3.11
Physical Demand    0.67                           0.89
Temporal Demand    2.17                           2.89
Performance        1.34                           1.34
Effort             1.67                           2.0
Frustration        2.13                           1.55
Total Workload     10.5                           11.77

Mental workload shows a significant difference between the two systems. The users noted that the mental demand in the fully teleoperated system is higher due to the additional input and attention required to prevent damage to the system and its surroundings. Moreover, at certain places, due to the limited number of cameras, it is not possible to guarantee that the motion commands result in collision free paths. We observe that the Frustration score in Tab. 2 is higher for the teleoperated semi-autonomous system as compared to the fully teleoperated system. The reason we identified, after speaking with the users, was that giving waypoints to the end-effector of the manipulator can result in target joint configurations that are very different from the current configuration. This on a few occasions results in large swinging motions of the manipulator. Even for small changes in end-effector pose, the required manipulator motions might be large. This adds to the frustration occasionally. However, this issue does not arise in the fully teleoperated system, where the user moves the end-effector wherever desired. This freedom is, however, limited by several other constraints therein.

During the experiments using the fully teleoperated system, we observed several constraints which our teleoperated semi-autonomous system overcame. One constraint was the joint limit constraint, which is inherently handled by the motion planner in our approach. Moreover, there were several configurations in which the robot was unable to move in the desired direction due to proximity to singularities. Furthermore, collision avoidance is inherently handled in the planners, as the point cloud of the environment is available to the planners.

7 CONCLUSIONS
In this paper, we have shown many functional capabilities of the mobile manipulation system that we developed. The first capability is focused on low level planning and results in human operators giving high level commands for motion. The robot generates collision-free trajectories of the mobile base and robot arm for the motion specified by the human. The second capability that we have implemented is that the robot avoids obstacles. We have ensured this by integrating state-of-the-art motion planners into the system. The third capability that we have presented shows that the user is given live video feedback, hence enabling him/her to select the appropriate motion goals. The final capability is that the user is able to provide motion goals and task constraints using an easy-to-use interface. Moreover, we have ensured that there is no ambiguity in the command issued by the human to the robot. We have conducted user tests of the system which have shown



that the proposed model of a teleoperated semi-autonomous mobile manipulation system is faster and requires a lower mental demand to operate compared to a fully teleoperated version of the same mobile manipulation system.

In the future, we plan to have a customized planner and marker which results in less user frustration when it comes to giving a desired joint configuration. We also plan to integrate grasp location and pose suggestions, where the user is given options on how to approach the part and where to grasp it with what orientation. This will greatly enhance the usability of the system and decrease the operation time significantly.

The future work also includes making use of advanced technology such as a Virtual Reality headset for the teleoperation, which gives the user a better sense of the virtual environment and makes the system more intuitive. We also plan to integrate a haptic feedback sensor on the gripper, which gives feedback to the user on how well the part has been grasped. Moreover, we plan to integrate proximity sensors on the mobile base, which will result in more feedback to the user, especially in tight spaces. This will further enhance the ease of operation as well as make the system safer.

REFERENCES
[1] Alexander Helleboogh, Tom Holvoet, and Yolande Berbers. Testing AGVs in dynamic warehouse environments. In International Workshop on Environments for Multi-Agent Systems, pages 270–290. Springer, 2005.
[2] Peter R Wurman, Raffaello D'Andrea, and Mick Mountz. Coordinating hundreds of cooperative, autonomous vehicles in warehouses. AI Magazine, 29(1):9, 2008.
[3] John Enright and Peter R Wurman. Optimization and coordinated autonomy in mobile fulfillment systems. In Automated Action Planning for Autonomous Mobile Robots, pages 33–38, 2011.
[4] Raffaello D'Andrea and Peter Wurman. Future challenges of coordinating hundreds of autonomous vehicles in distribution facilities. In Technologies for Practical Robot Applications, 2008. TePRA 2008. IEEE International Conference on, pages 80–83. IEEE, 2008.
[5] Jan Carius, Martin Wermelinger, Balasubramanian Rajasekaran, Kai Holtmann, and Marco Hutter. Deployment of an autonomous mobile manipulator at MBZIRC. Journal of Field Robotics, 35(8):1342–1357, 2018.
[6] Bjørn Heber Skumsnes. Teleoperation of mobile robot manipulators. Master's thesis, Institutt for teknisk kybernetikk, 2012.
[7] Brad Hamner, Seth Koterba, Jane Shi, Reid Simmons, and Sanjiv Singh. An autonomous mobile manipulator for assembly tasks. Autonomous Robots, 28(1):131, 2010.
[8] Simon Bøgh, Mads Hvilshøj, Morten Kristiansen, and Ole Madsen. Autonomous industrial mobile manipulation (AIMM): from research to industry. In Proceedings of the 42nd International Symposium on Robotics, pages 1–9, 2011.
[9] Mads Hvilshøj, Simon Bøgh, Oluf Skov Nielsen, and Ole Madsen. Autonomous industrial mobile manipulation (AIMM): past, present and future. Industrial Robot: An International Journal, 39(2):120–135, 2012.
[10] Ole Madsen, Simon Bøgh, Casper Schou, Rasmus Andersen, Jens Skov Damgaard, Mikkel Rath Pedersen, and Volker Krüger. Integration of mobile manipulators in an industrial production. Industrial Robot: An International Journal, 42:11–18, 2015.
[11] Máximo A Roa, Dmitry Berenson, and Wes Huang. Mobile manipulation: toward smart manufacturing [TC spotlight]. IEEE Robotics & Automation Magazine, 22(4):14–15, 2015.
[12] S. Thakar, L. Fang, B. C. Shah, and S. K. Gupta. Towards time-optimal trajectory planning for pick-and-transport operation with a mobile manipulator. In IEEE International Conference on Automation Science and Engineering (CASE), Munich, Germany, Aug 2018.
[13] Shantanu Thakar, Pradeep Rajendran, Vivek Annem, Ariyan Kabir, and Satyandra Gupta. Accounting for part pose estimation uncertainties during trajectory generation for part pick-up using mobile manipulators. In IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 2019.
[14] Amaren P Das, S. K. Saha, D. N. Badodkar, and S. Bhasin. Design of a teleoperated mobile manipulator for inspection of cyclotron vault. In Machines, Mechanism and Robotics, pages 529–540. Springer, 2018.
[15] Kai-Tai Song, Sin-Yi Jiang, and Ming-Han Lin. Interactive teleoperation of a mobile manipulator using a shared-control approach. IEEE Transactions on Human-Machine Systems, 46(6):834–845, 2016.
[16] ChangSu Ha, Sangyul Park, Jongbeom Her, Inyoung Jang, Yongseok Lee, Gun Rae Cho, Hyoung Il Son, and Dongjun Lee. Whole-body multi-modal semi-autonomous teleoperation of mobile manipulator systems. In Robotics and Automation (ICRA), 2015 IEEE International Conference on, pages 164–170. IEEE, 2015.
[17] Mads Hvilshøj and Simon Bøgh. Little Helper: an autonomous industrial mobile manipulator concept. International Journal of Advanced Robotic Systems, 8(2):15, 2011.
[18] Simon Bøgh, Casper Schou, Thomas Rühr, Yevgen Kogan, Andreas Dömel, Manuel Brucker, Christof Eberst, Riccardo Tornese, Christoph Sprunk, Gian Diego Tipaldi, et al. Integration and assessment of multiple mobile manipulators in a real-world industrial production facility. In ISR/Robotik 41st International Symposium on Robotics, pages 1–8. VDE, 2014.
[19] N. Kimura, K. Ito, T. Fuji, K. Fujimoto, K. Esaki, F. Beniyama, and T. Moriya. Mobile dual-arm robot for automated order picking system in warehouse containing various kinds of products. In IEEE/SICE International Symposium on System Integration (SII), pages 332–338, Dec 2015.



[20] F. Monteiro, P. Rocha, P. Menezes, A. Silva, and J. Dias. Teleoperating a mobile robot. In IEEE/ISIE, 1997.
[21] Jarosław Jankowski and Andrzej Grabowski. Usability evaluation of VR interface for mobile robot teleoperation. International Journal of Human-Computer Interaction, 31(12):882–889, 2015.
[22] Ondrej Linda and Milos Manic. Self-organizing fuzzy haptic teleoperation of mobile robot using sparse sonar data. IEEE Transactions on Industrial Electronics, 58(8):3187–3195, 2011.
[23] David Kent, Carl Saldanha, and Sonia Chernova. A comparison of remote robot teleoperation interfaces for general object manipulation. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, pages 371–379. ACM, 2017.
[24] Morgan Quigley, Ken Conley, Brian Gerkey, Josh Faust, Tully Foote, Jeremy Leibs, Rob Wheeler, and Andrew Y Ng. ROS: an open-source Robot Operating System. In ICRA Workshop on Open Source Software, volume 3, page 5. Kobe, Japan, 2009.
[25] Russell Toris, Julius Kammerl, David V Lu, Jihoon Lee, Odest Chadwicke Jenkins, Sarah Osentoski, Mitchell Wills, and Sonia Chernova. Robot web tools: Efficient messaging for cloud robotics. In IROS, pages 4530–4537, 2015.
[26] Matei Ciocarlie, Kaijen Hsiao, Adam Leeper, and David Gossow. Mobile manipulation through an assistive home robot. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5313–5320. IEEE, 2012.
[27] Tiffany L Chen, Matei Ciocarlie, Steve Cousins, Phillip M Grice, Kelsey Hawkins, Kaijen Hsiao, Charles C Kemp, Chih-Hung King, Daniel A Lazewatsky, Hai Nguyen, et al. Robots for humanity: A case study in assistive mobile manipulation. 2013.
[28] Sachin Chitta, Ioan Sucan, and Steve Cousins. MoveIt! [ROS topics]. IEEE Robotics & Automation Magazine, 19(1):18–19, 2012.
[29] Ioan A. Şucan, Mark Moll, and Lydia E. Kavraki. The Open Motion Planning Library. IEEE Robotics & Automation Magazine, 19(4):72–82, December 2012. http://ompl.kavrakilab.org.
[30] Dawei Wang, Jianqiang Yi, Dongbin Zhao, and Guosheng Yang. Teleoperation system of the internet-based omni-directional mobile robot with a mounted manipulator. In IEEE International Conference on Mechatronics and Automation (ICMA), pages 1799–1804, 2007.
[31] Mathieu Labbé and François Michaud. RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation. Journal of Field Robotics.
[32] Sergio Garrido-Jurado, Rafael Muñoz-Salinas, Francisco José Madrid-Cuevas, and Manuel Jesús Marín-Jiménez. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition, 47(6):2280–2292, 2014.
[33] Dieter Fox, Wolfram Burgard, and Sebastian Thrun. The dynamic window approach to collision avoidance. IEEE Robotics & Automation Magazine, 4(1):23–33, 1997.
[34] Jia Pan, Sachin Chitta, and Dinesh Manocha. FCL: A general purpose library for collision and proximity queries. In Robotics and Automation (ICRA), 2012 IEEE International Conference on, pages 3859–3866. IEEE, 2012.
[35] James J Kuffner and Steven M LaValle. RRT-Connect: An efficient approach to single-query path planning. In Robotics and Automation, 2000. Proceedings. ICRA'00. IEEE International Conference on, volume 2, pages 995–1001. IEEE, 2000.
[36] Sandra G Hart and Lowell E Staveland. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology, volume 52, pages 139–183. Elsevier, 1988.
