Draft: Towards Remote Teleoperation of a Mobile Manipulator System in Machine Loading and Unloading Tasks
4 authors, including Satyandra K. Gupta, University of Southern California.
Uploaded by Shantanu Thakar on 04 February 2020.
MSEC2019-3027
of a live feed from the Kinects on board the robot and a virtual map of the environment for the mobile base and the manipulator. Detailed usage of the GUI is explained in Sec. 4. The mobile base and manipulator planners individually plan for the low-level motion and control.

The software infrastructure is presented in Fig. 4. The base planner takes in the base motion goal and the scene information, and generates a trajectory. The specifics of the trajectory planner are explained in Sec. 5. The trajectory is implemented by the base controller by sending the respective command velocity goals to the motor controller of the mobile base. The manipulator planner makes use of the scene information from the Kinects and the joint state information from the manipulator, and generates a trajectory to the corresponding motion goal given by the user.

5. Select desired motion goal for the manipulator
6. Initiate trajectory generation for the manipulator
7. Verify the system-generated simulated manipulator motion
8. Execute robot motion
9. Perform desired grasping action

All these operations are completed in the virtual environment first and then executed on the real robot. The system is built to perform certain manipulation tasks on the factory floor, guided by an operator at a remote location. The GUI presents a convenient interface for the user to navigate and perform different tasks on the mobile manipulator system.

The application performs two types of tasks: (1) the Mobile Base Motion task, and (2) the Manipulator Motion task. Both tasks have two windows and task-specific buttons.
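The planner data flow described above (motion goal + scene in, trajectory out, command velocities to the motor controller) can be sketched as a minimal pipeline. The class names, message types, and the straight-line placeholder planner below are illustrative assumptions, not the authors' actual software interfaces:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical message types standing in for the actual ROS messages.
@dataclass
class Scene:
    obstacles: List[Tuple[float, float]]  # 2D obstacle positions

@dataclass
class Trajectory:
    waypoints: List[Tuple[float, float]]  # (x, y) poses for the base

class BasePlanner:
    """Takes a base motion goal and scene information, returns a trajectory."""
    def plan(self, start, goal, scene: Scene) -> Trajectory:
        # Placeholder: straight-line interpolation; the real base planner
        # (Sec. 5) uses DWA and the occupancy map.
        n = 10
        pts = [(start[0] + (goal[0] - start[0]) * i / n,
                start[1] + (goal[1] - start[1]) * i / n)
               for i in range(n + 1)]
        return Trajectory(pts)

class BaseController:
    """Turns a trajectory into command velocities for the motor controller."""
    def follow(self, traj: Trajectory, dt: float = 0.1):
        cmds = []
        for (x0, y0), (x1, y1) in zip(traj.waypoints, traj.waypoints[1:]):
            cmds.append(((x1 - x0) / dt, (y1 - y0) / dt))  # (vx, vy)
        return cmds
```

The manipulator planner would follow the same pattern, consuming Kinect scene data and joint states instead of a 2D map.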
model of the system on which a real-time depth cloud image of the current robot is projected. An example of this is shown in Fig. 7. The depth cloud image comes from Kinect B, placed such that the user can make effective use of the depth data of the surroundings. The user can navigate in the 3D environment and understand the surroundings to perform the desired manipulation action conveniently. To provide a motion goal to the manipulator, the user can reach the desired view, click on the interactive marker at the end-effector of the simulated arm, and move it to the desired location. The user can then click the Plan button (denoted in Fig. 5) to see the simulated action of the robot. If the simulated motion is not desired, the user can change the motion goal, move the base to a better location, or orient the system in a better way such that the manipulation task is easier for the arm. The user can also simulate the grasp, to check whether it is possible, by clicking the Close button (denoted in Fig. 5) in the Simulated Gripper Actions. Once the desired plan is simulated, the user can click the Execute button (denoted in Fig. 5) to execute the motion. The Close button (denoted in Fig. 5) in the Real Gripper Actions can be used to perform the grasp action on the part. The user can always double-check the motion by comparing it to the real-time world view window, and if the planned path is not desired, the user can click the Stop button. This gives additional flexibility to the user.

5 TRAJECTORY PLANNING
The trajectory planning and motion generation for the mobile manipulator is implemented in three stages; the first and third stages are for the mobile base. In these stages, only the mobile base is teleoperated semi-autonomously, while the manipulator stays in a configuration such that its footprint is within the footprint of the mobile base. Here, we assume that the mapping of the environment is done beforehand and the map is available for localization of the mobile base. The second stage is the trajectory planning and motion generation for the manipulator arm. This stage is implemented after the mobile base is stationed near the machine that needs tending.

5.1 Mobile Base Trajectory Generation
The planning for the mobile base is based on the Dynamic Window Approach (DWA) [33] algorithm. The algorithm initially samples discretely in the robot's control space, which in our case is the velocity of the right and the left wheels. For each sampled velocity, a forward simulation is performed from the mobile base's current state to predict the state (pose (x, y, φ)) after a short period of time. For each of these forward-simulated trajectories, a score is evaluated based on the proximity to obstacles, the distance to the goal, and the proximity to the speed limits. Trajectories which result in collision or violation of speed limits are discarded. Further, the trajectory which receives the high-
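The DWA loop just described can be sketched compactly: sample wheel velocities, forward-simulate a differential-drive pose, discard colliding samples, and score the rest. The scoring weights, sample counts, and robot geometry below are illustrative assumptions, not the paper's tuned values:

```python
import math

def dwa_step(pose, goal, obstacles,
             v_max=0.5, w_half=0.2, n_samples=5,
             horizon=1.0, dt=0.1, robot_radius=0.3):
    """One DWA iteration: sample left/right wheel speeds, forward-simulate,
    score each rollout, and return the best (v_l, v_r), or None if every
    sample collides. Weights and parameters are illustrative."""
    best, best_score = None, -math.inf
    speeds = [v_max * i / (n_samples - 1) for i in range(n_samples)]
    for v_l in speeds:
        for v_r in speeds:
            x, y, phi = pose  # forward-simulate differential-drive kinematics
            ok = True
            for _ in range(int(horizon / dt)):
                v = (v_l + v_r) / 2.0              # linear velocity
                w = (v_r - v_l) / (2.0 * w_half)   # angular velocity
                x += v * math.cos(phi) * dt
                y += v * math.sin(phi) * dt
                phi += w * dt
                # discard trajectories that collide with an obstacle
                if any(math.hypot(x - ox, y - oy) < robot_radius
                       for ox, oy in obstacles):
                    ok = False
                    break
            if not ok:
                continue
            # score: obstacle clearance, goal distance, forward speed
            clearance = min((math.hypot(x - ox, y - oy)
                             for ox, oy in obstacles), default=1e9)
            goal_dist = math.hypot(goal[0] - x, goal[1] - y)
            speed = (v_l + v_r) / 2.0
            score = 1.0 * clearance - 2.0 * goal_dist + 0.5 * speed
            if score > best_score:
                best, best_score = (v_l, v_r), score
    return best
```

With the goal straight ahead and obstacles far away, the highest-scoring sample is full speed on both wheels; with an obstacle directly in the robot's path, every rollout is discarded.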
FIGURE 12: [Top] Augmented manipulation views of a set of waypoints during the manipulation motion component of the task for a cylindrical part. [Bottom] Real-time world views of the corresponding virtual map view.
system needs the user to provide input in much more detail. For the first and third components of the task, the user provides command velocities to the base using a set of keys on the keyboard. The Real-time World View, Virtual Map View, and Augmented Manipulation View, discussed in Sec. 4, are used for visualization of the system. The velocity limits are set the same as in the teleoperated semi-autonomous system. For the second component of the task, the user sends a velocity command along one of the three axes at a time using a different set of keys on the keyboard. Similarly, the user can also change the orientation of the end-effector by providing angular motions using another set of keys. This method is used to provide the most control for teleoperation in a simple way for the user.

TABLE 1: Comparison of average time taken by 5 users for different components of the task using the teleoperated semi-autonomous system and the fully teleoperated system

                                        Average time taken for task
                                        completion (mm:ss)
Components of the task                  Teleoperated Semi-    Fully Teleoperated
                                        Autonomous system     system
1. Going from the initial location      03:15                 03:53
   to the 3D printer
2. Picking up the part                  03:43                 04:25
3. Going from the 3D printer back       03:08                 03:31
   to the initial location
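The per-component averages in Tab. 1 can be cross-checked by summing the mm:ss entries; a small helper, using the values from the table:

```python
def add_mmss(times):
    """Sum a list of 'mm:ss' strings and return the total as 'mm:ss'."""
    total = sum(int(m) * 60 + int(s)
                for m, s in (t.split(":") for t in times))
    return f"{total // 60:02d}:{total % 60:02d}"

# Component times from Tab. 1 (semi-autonomous vs. fully teleoperated)
semi = ["03:15", "03:43", "03:08"]
full = ["03:53", "04:25", "03:31"]

print(add_mmss(semi))  # 10:06
print(add_mmss(full))  # 11:49
```

The sums reproduce the totals reported in the text (10:06 and 11:49), and their difference, 1:43, matches the reported time advantage of the semi-autonomous mode.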
In order to quantify the ease of use of the system, the times taken by the users to perform the task in both modes were noted. We asked 5 users, who had not interacted with the system before, to perform the tasks. We gave a demonstration of usage in both systems. Tab. 1 shows the average time taken by the users to complete the three components of the task in both systems. The average total time taken for task completion is 10:06 minutes in the teleoperated semi-autonomous system and 11:49 minutes in the fully teleoperated system.

In the first and third components, the users using the teleoperated semi-autonomous system interacted with the system by only providing a point on the map and the direction for the system to face, whereas, when using the fully teleoperated system, the users had to make sure to avoid obstacles and use all the cameras to get to the destination. In the second component of the task, the users of the fully teleoperated system averaged 1.5 emergency stoppages due to either singular configurations or joint limits. Due to these reasons, users took 1:43 minutes less for task completion when using the teleoperated semi-autonomous system.

After each task completion, the users were asked to fill out the NASA TLX [36] form to measure the workload of the task. The results of the survey are shown in Tab. 2. Mental workload shows a significant difference between the two systems. The users noted that the mental demand in the fully teleoperated system is higher due to the additional input and attention required to prevent damage to the system and its surroundings. Moreover, at certain places, due to the limited number of cameras, it is not possible to guarantee that the motion commands result in collision-free paths. We observe that the Frustration score in Tab. 2 is higher for the teleoperated semi-autonomous system as compared to the fully teleoperated system. The reason we identified after speaking with the users was that giving waypoints to the end-effector of the manipulator can result in target joint configurations very different from the current configuration. This, on a few occasions, results in large swinging motions of the manipulator. Even for small changes in end-effector pose, the required manipulator motions might be large. This occasionally adds to the frustration. However, this issue does not arise in the fully teleoperated system, where the user moves the end-effector wherever desired. This freedom is, however, limited by several other constraints therein.

TABLE 2: NASA TLX Results

Type of workload     Teleoperated Semi-     Fully Teleoperated
                     Autonomous system      system
Mental Demand        1.67                   3.11
Physical Demand      0.67                   0.89
Temporal Demand      2.17                   2.89
Performance          1.34                   1.34
Effort               1.67                   2.0
Frustration          2.13                   1.55
Total Workload       10.5                   11.77

During the experiments using the fully teleoperated system, we observed several constraints which our teleoperated semi-autonomous system overcame. One constraint was the joint limit constraint, which is inherently handled by the motion planner in our approach. Moreover, there were several configurations in which the robot was unable to move in the desired direction due to proximity to singularities. Furthermore, collision avoidance is inherently handled in the planners, as the point cloud of the environment is available to them.

7 CONCLUSIONS
In this paper, we have shown many functional capabilities of the mobile manipulation system that we developed. The first capability is focused on low-level planning: human operators give high-level commands for motion, and the robot generates collision-free trajectories of the mobile base and robot arm for the motion specified by the human. The second capability that we have implemented is that the robot avoids obstacles. We have ensured this by integrating state-of-the-art motion planners into the system. The third capability that we have presented is that the user is given live video feedback, enabling him/her to select appropriate motion goals. The final capability is that the user is able to provide motion goals and task constraints using an easy-to-use interface. Moreover, we have ensured that there is no ambiguity in the command issued by the human to the robot. We have conducted user tests of the system which have shown