Robotics and Autonomous Systems 159 (2023) 104311


Human–robot handover with prior-to-pass soft/rigid object classification via tactile glove

Ayan Mazhitov, Togzhan Syrymova, Zhanat Kappassov, Matteo Rubagotti*
Department of Robotics and Mechatronics, School of Engineering and Digital Sciences, Nazarbayev University, 53 Kabanbay Batyr Ave, Astana, 010000, Kazakhstan

* Corresponding author. E-mail address: matteo.rubagotti@nu.edu.kz (M. Rubagotti).

Article info

Article history: Received 3 February 2022; Received in revised form 7 October 2022; Accepted 6 November 2022; Available online 13 November 2022.

Keywords: Tactile sensors; Human–robot handover; Human–robot interaction; Deep learning

Abstract

Human–robot handovers constitute a challenging and fundamental aspect of physical human–robot interaction. This paper describes the design and implementation of a human–robot handover pipeline in the case in which both soft and rigid objects are passed by the human to the robot. These objects require different profiles of grasping torques by the robot hand fingers, so as to avoid damaging them. As a viable solution to this problem, a tactile glove worn by the human is used to provide real-time information to a deep neural network, which classifies each object as soft or rigid in the pre-handover phase: this information is passed to the robot, which applies the grasping torque profile suitable for the specific type of object. The proposed method is designed and validated based on experiments with eight human participants and 24 objects. The outcomes of these experiments, regarding classification accuracy, force and torque profiles, and evaluation of the subjective experiences via questionnaires, are described and discussed.

https://doi.org/10.1016/j.robot.2022.104311
© 2022 Elsevier B.V. All rights reserved.

1. Introduction

A handover, which can be described as a sequence of coordinated joint actions aimed at directly transferring an object from a passer to a receiver [1], is a fundamental tool for several human–human collaboration tasks. While humans intuitively perform handovers multiple times every day, replacing either passer or receiver with an autonomous robot (which would open up tremendous opportunities in the field of robotic assistance) is a challenging task and an object of current research [2]; indeed, no human–robot handover (HRH) solutions have been proposed so far that can match the simplicity and performance of human–human handovers (HHH). HRH (which is the object of this paper, see Fig. 1) is a complex task, as it requires triggering the handover, continuously estimating the human hand pose, deriving the location and time of object transfer, generating a suitable robot motion, and managing the physical interaction during grasping. After integrating these system components, one has to achieve a sufficiently high success rate (i.e., the passed objects should rarely be damaged), while at the same time generating a motion that is perceived by the human operator as safe, fluent, and comfortable [2].

In the HRH literature, the robot grasping is typically executed without accounting for the rigidity of each handled object [3,4]. The focus of [3,4], further analyzed in Section 2, was on determining the best grasping configuration on the object (using mostly rigid bodies) rather than on the possible issues deriving from differences in deformability. However, a closer analysis of the possible issues deriving from the physical interaction of robots with soft objects has been conducted in recent years [5,6]. Indeed, especially when deformable objects are involved, the ability to recognize a soft object is a key feature in HRH. If this is not done and the wrong grasping force is applied, some objects could be damaged; this is shown in Fig. 2, which highlights the consequences of soft/rigid misclassification in our approach, with consequent application of an excessively small grasping force for rigid objects (which can slip), and of an excessively large grasping force for soft objects (which can be deformed).

A wide range of sensory inputs allows humans to distinguish deformable and fragile objects from rigid ones. Accordingly, humans adapt their grasping force to avoid damage and, at the same time, slippage. Therefore, in order to design a robot assistant that can pass or receive soft objects as well as rigid ones, one should equip the system with sensors that can detect the intrinsic properties of the objects, as stated in [7]. A possibility would be to use tactile sensors on the robot gripper, as done in [8,9] outside the HRH field. The use of such an approach within HRH would require adding a phase in which the human holds the object while the robot explores it, before the actual handover happens. This would lead to an increased execution time of the HRH task, and to an increased risk of damaging deformable objects during the exploration phase.

Fig. 1. Implemented experimental setup for human–robot handover with prior-to-pass soft/rigid object classification. The passer is a participant wearing a tactile
glove, while the receiver is a Franka Emika robot arm integrated with an Allegro robot hand.

Considering that the human has to grasp the object before passing it to the robot, and that the ability of humans to handle different objects without damaging them is much more developed than in robots, our idea is to perform the soft/rigid classification at the moment when the human grasps the object. In order to make this possible, we propose to attach tactile sensors to a fabric-made glove worn by the operator (giver), which (via a deep-learning-based classification algorithm) informs the robot (receiver) about the softness of the object. In turn, the robot gripper will use two different grasping forces to prevent deformation of soft objects and slippage of rigid objects (cf. [10]). The same tactile glove allows us to detect when the human releases the object, so as to determine when the robot can start moving again to place it at a given location. Should the proposed method for soft/rigid object classification be used within a more realistic task, for instance in a manufacturing scenario, then the features of the glove could be employed for other purposes, such as object slippage detection from the human hand (see, e.g., [11]) or texture recognition (see, e.g., [12]).

The main contribution of our paper is to answer the following two related research questions:

Is it possible to execute HRH with different robot grasping forces for soft and rigid objects, without previous knowledge of the object characteristics? If yes, with what success rate?

Indeed, the existing handover approaches in the HRH field were never specifically designed to provide a different grasping force for soft and rigid objects, as is done in our work. More in detail, the proposed HRH pipeline presents the following features which, to the best of the authors' knowledge, were never proposed in the literature, and thus constitute innovative contributions: (i) use of a wearable tactile glove to classify rigid/deformable objects prior to handover; (ii) adaptation of the robot grasping force for HRH based on the above-mentioned classification; (iii) use of the tactile glove to determine the end of the passing phase.

The remainder of this paper is organized as follows. After a brief state-of-the-art review in Section 2, our HRH pipeline with prior-to-pass object classification phase is presented in Section 3. Details of the experimental platform are given in Section 4. Section 5 describes the training and testing of the deep neural network used for soft/rigid object classification, while the experimental evaluation of our pipeline with different human participants is described and discussed in Section 6. Finally, conclusions are drawn in Section 7.

All experimental protocols described in this paper were approved by the Nazarbayev University Institutional Research Ethics Committee, and written informed consent was obtained from all participants.

2. State of the art

This section provides an overview of the state of the art in methods for the design of the pre-handover phase and for tactile object detection, as these two aspects are central for the development of our method. As claimed in [1], to properly design the behavior of the robot in an HRH task, it is important to understand how people act in corresponding HHH activities. A detailed description of how HHH has been analyzed to provide guidance in the design of HRH systems (in terms of motion fluency, modulation of grip forces, vision and tactile sensor abilities, and design of robot arm controllers) is out of the scope of this paper, but the interested reader is referred to [1,13–17] for a more in-depth discussion on this topic.

2.1. The pre-handover phase

The handover phase in HRH is always preceded by a so-called pre-handover phase, which includes a signal of intention by the operator to start the handover action, the human hand pose estimation, and the derivation of the place and time of object transfer.

Two main approaches are available for pre-handover. The online approach aims at tracking the human hand (or object) position and estimating the transfer location, using either a vision system [3,4], wearable sensing devices [18], or IMU sensors [19,20]. Instead, the offline approach requires the presence of robotic touch sensors, which detect the time instant when the human places the object at a predefined location within the robot palm [21,22].

In recent years, researchers have included object classification in the pre-handover phase (e.g., [3,4]), using RGB-D cameras together with deep learning (DL) techniques, so as to detect an object within the human hand. These vision sensors have been used to select the grasping points with the highest likelihood of success via Generative Grasping Convolutional Neural Networks [3] and 6-DOF GraspNet [4]. In these works, in which both rigid and soft objects were used, the grasping was executed by applying the same gripper forces to all objects, as their deformability was not crucial for the considered case studies. However, if such a classification were needed (for example due to the presence of fragile objects), vision sensors could not be used to estimate deformability; instead, the best type of sensors for this task are tactile sensors [5].

Fig. 2. Successful and failed handovers in the implemented experimental setup. Soft object (plastic cup) detected correctly (a) and misclassified (b). Rigid object
(bottle) detected correctly (c) and misclassified (d). When misclassified, the plastic cup was deformed and damaged when grasped by the robot hand, while the
bottle slipped from it.

2.2. Tactile gloves in deformable object classification

In our proposed approach, in order to be able to obtain information from tactile sensors (see, e.g., [22–24]), the latter are placed on a custom-made glove built in our laboratory, which is worn by the human operator. In general, the aim of the tactile glove is to provide information, via its sensors, on different properties of the grasped objects, such as softness, texture, mass and shape. A tactile glove can incorporate more than one sensing modality, and be stretchable and flexible, so as to precisely cover the curved surface of the human hand [25,26].

In the robotics literature, tactile gloves have been used for providing a tactile image of a contact area or for tracking human limb motion [27]. For example, the authors of [16] used a motion capture data glove (CyberGlove) with force sensitive resistors (TekScan patches) to monitor human intentions during handovers. Tactile gloves were also used in [28] to identify an object and estimate its weight, independently of the type of applied grasp; this was achieved by acquiring information from pressure sensors on the glove via DL, feeding a tactile image as a frame to a convolutional neural network (CNN). The authors of [29] proposed a tactile glove (MemGlove) incorporating resistive and fluidic sensing to estimate the stiffness of an object via DL; the method was tested to recognize the stiffness of six geometrically identical deformable objects (3D-printed materials, EcoFlex silicones, and foams) using the same hand position and grasping force for all objects.

To the best of our knowledge, the use of tactile gloves for binary classification of soft and rigid objects during HRH (which is a contribution of our approach) has never been proposed in the literature.

3. The proposed HRH pipeline

Fig. 3 illustrates the concise block diagram of the proposed HRH pipeline, with the yellow envelope indicating the phases that specifically rely on our contributions. These consist of the following: (a) in the pre-handover phase, the soft/rigid object classification via tactile glove; (b) in the handover phase, the use of this classification to modulate the grasping force, and the use of tactile glove information to end the passing phase. The following subsections describe each stage of the block diagram in Fig. 3.

3.1. Object classification

The pre-handover phase starts with the supervised-learning-based classification of the object grasped by the giver, in which our classifier recognizes whether the object picked up by the human is soft or rigid. In order to train this classifier, a dataset has to be generated: several participants wear the tactile glove and grasp objects, which are already labeled as soft/rigid, and raw signals from the tactile glove are recorded for each object; then, a deep neural network is trained based on these data (one-dimensional arrays representing the signals) and labels, to output the object class (soft/rigid).

3.2. Pose estimation

After the object is classified, the next stage consists of determining the handover location, which is not fixed a priori, but changes based on the human hand position (i.e., we apply an online method, as described in Section 2). Indeed, in real-world applications with possibly dynamic environments, it is desirable not to fix the handover location a priori [2], and, thus, this has to be estimated. A possible approach, used in our work, is to incorporate a marker into the tactile glove, so as to estimate its location using a camera system (e.g., a visual motion tracking system made of multiple cameras detecting a simple visual marker, or an RGB-D camera detecting an ArUco marker). The time evolution of this location, defined in a fixed reference frame centered at the base of the manipulator, can be used to estimate the hand speed. Using this information, it is possible to detect when the hand stops: the corresponding coordinates are then passed to the motion planning software.

3.3. Trajectory calculation

Once the handover location has been determined based on the human hand position, the manipulator has to plan a trajectory from its current configuration to a point close to the hand position (according to a given offset), with a suitable final orientation. The robot motion has to be planned in real time, and thus the planning routine should be executed as fast as possible to obtain a smooth handover process. At the same time, the planned motion has to be smooth and to avoid possible obstacles in the robot workspace. To achieve all of this, one possibility is to use state-of-the-art sampling-based motion planning algorithms, such as rapidly-exploring random trees (RRT) [30].
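As an illustration of the stop-detection rule of Section 3.2, the sketch below estimates the hand speed from successive marker positions by finite differences and reports when the hand can be considered stationary. It is a minimal reconstruction with our own naming and window size, not the authors' code; the 10 mm/s default anticipates the threshold value reported in Section 6.

```python
import numpy as np

def hand_has_stopped(positions, timestamps, speed_threshold=0.01, window=5):
    """Return True when the wrist marker is considered stationary.

    positions:       list/array of (x, y, z) marker positions [m],
                     expressed in the fixed frame at the manipulator base
    timestamps:      matching camera timestamps [s]
    speed_threshold: speed below which the hand counts as stopped
                     (0.01 m/s = 10 mm/s, cf. Section 6)
    window:          number of recent samples used for the estimate
    """
    if len(positions) < window + 1:
        return False  # not enough samples yet
    p = np.asarray(positions[-(window + 1):], dtype=float)
    t = np.asarray(timestamps[-(window + 1):], dtype=float)
    # Finite-difference speeds over the window, averaged to reject jitter
    speeds = np.linalg.norm(np.diff(p, axis=0), axis=1) / np.diff(t)
    return np.mean(speeds) < speed_threshold
```

When this predicate fires, the latest marker coordinates would be handed to the motion planner as the goal of the trajectory-calculation stage of Section 3.3.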

Fig. 3. Block diagram of the proposed HRH pipeline. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

3.4. Handover

When the robot hand has reached the object transfer location, the grasping phase begins. As mentioned in [13], the physical transfer occurs when the receiver makes initial contact with the object. In our case, we make the assumption that the passer is responsible for the initial contact, by inserting the object into the robot's hand. This event is detected by monitoring the derivative of the sum of the torques of the manipulator: if its value is higher than an empirically derived threshold c1, then the algorithm can conclude that contact has been made, and the robot can grasp the object. It is crucial to stress that not all robotic manipulators are well equipped to detect the presence of external loads when these cause only slight perturbations of the joint torques. If this were an issue, a possibility would be to avoid monitoring joint torques, and instead use an image processing pipeline to identify whether the object is within the robotic hand, and consequently start grasping it. Alternatively, one could use a tactile sensor (e.g., an FSR) attached to the robot hand for the same purpose.

The actual grasping by the robot is then executed by limiting the maximum torque of each joint of its hand, choosing the correct threshold based on the above-mentioned soft/rigid classification result.

The tactile glove is also used to detect when the participant releases the object. According to [13], the human can release the object only after the robot hand has achieved a stable grasp: in our framework, this is determined by the human, who has to assess when the object has been safely grasped by the robot. At this point, the human simply releases the object, and this event is detected via the tactile sensors on the glove. More precisely, the variation of the total pressure applied by the human operator to the grasped object is estimated by summing the derivatives of the values of the signals of all pressure sensors on the glove: when the absolute value of this sum is above a threshold c2, also determined experimentally, the retreat stage is initiated. The derivative of the sum of the pressures is used rather than the sum itself, in order not to be affected by possible biases that often occur in pressure sensor readings.

3.5. Retreat

During this last phase, the robot calculates the trajectory to return to its default configuration by means of a motion planning algorithm, as done during the stage described in Section 3.3. After that, the robot moves its end effector above a box and releases the object: two boxes are present, for soft and rigid objects respectively.

4. Experimental setup

The experimental platform used for the evaluation of HRH with the proposed prior-to-grasp classification mainly consists of two separate systems, in addition to the passed objects: the robot arm with its hand and related vision sensors, and the tactile glove (see Fig. 1). In the following, we describe the hardware and software components of the experimental setup of our case study. The proposed pipeline is not dependent on these components and, with suitable technical modifications, can be applied using a different robot manipulator/hand, a different vision system, or a different tactile glove.

4.1. Robot and vision sensors

As manipulator, we used a torque-controlled Franka Emika Panda, placed on a metal table. This robot is particularly suitable for safe physical human–robot interaction, due to its ability to stop whenever an unwanted contact is detected by its torque sensors. The arm controller was interfaced with a computer (Z4 HP workstation with 32 GB DDR4, Intel Core i9, NVIDIA RTX2080Ti, and a Linux operating system with a real-time Ubuntu kernel) via a licensed Franka Control Interface (FCI) with a 1 kHz sampling rate. We used the MoveIt plugin with the RRT-Connect motion planning library to steer the end effector to its goal configuration. Using the MoveIt plugin, we obtained the Cartesian path between two points and estimated a collision-free trajectory of the robot motion.

As end effector, we used a 16 degrees-of-freedom (DoF) torque-controlled robot hand (Allegro hand, Wonik Robotics), interfaced with the above-mentioned computer via CAN bus and a PEAK PCAN-USB adapter at 333 Hz. The hand weighs 1.08 kg and can grasp objects up to 5 kg in mass. As, without end effector, the maximum payload of the robot arm would be 3 kg, the objects used in our experiments could have a mass of up to 1.92 kg. The robot hand employed the Envelop Grasp algorithm, which was used to wrap each object with all four fingers.

An Intel RealSense D435 RGB-D camera, which streams at 60 fps with a 940 × 522 resolution, was mounted apart from the robot arm to acquire an isometric view of both human operator and robot (Fig. 4). The camera was fixed on a cubic structure with a side length of 1 m, made of Bosch Rexroth aluminium profiles. All the components of the platform communicated via the Robot Operating System (ROS) with the Melodic distribution. The camera detects an ArUco marker placed on the human wrist (Fig. 4) to track its position and to determine the handover location (as explained in Section 3.2 for the general case).

4.2. Tactile glove

A fabric-based wearable tactile glove, designed and fabricated in our laboratory, was interfaced with the above-mentioned computer via ROS. The glove is capable of measuring pressure distribution and mechanical vibrations at the points of contact.
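Before moving to the details of the glove, the two event-detection rules of Section 3.4 can be made concrete with a short sketch. This is our own illustration under assumed naming, not the implemented code; the empirical threshold values used in the experiments are given in Section 6.1.

```python
import numpy as np

def contact_detected(torque_sum_prev, torque_sum_now, dt, c1):
    """Passer-initiated contact (Section 3.4): the derivative of the sum
    of the manipulator joint torques exceeds the empirical threshold c1."""
    return (torque_sum_now - torque_sum_prev) / dt > c1

def release_detected(pressures_prev, pressures_now, dt, c2):
    """Human release (Section 3.4): the absolute value of the summed
    derivatives of all glove pressure signals exceeds c2. Differentiating
    before summing makes the trigger insensitive to sensor bias."""
    rates = (np.asarray(pressures_now) - np.asarray(pressures_prev)) / dt
    return abs(np.sum(rates)) > c2
```

The first trigger starts the robot grasp; the second one starts the retreat stage of Section 3.5.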

Fig. 4. Human hand pose estimation: (a) camera view with the detected human hand via ArUco marker and (b) visualized frames (in ROS RViz). The green cube represents the human hand pose with respect to the base frame of the manipulator. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
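As a rough illustration of the marker detection shown in Fig. 4, the snippet below locates an ArUco marker with OpenCV. It is a sketch under the classic (pre-4.7) cv2.aruco API, with an assumed dictionary and marker size, not the authors' implementation; the camera intrinsics K and distortion coefficients dist are assumed to come from calibration.

```python
import cv2

def locate_wrist_marker(frame, K, dist, marker_length=0.05):
    """Return the position [m] of the first detected ArUco marker in the
    camera frame, or None if no marker is visible. marker_length is the
    printed marker side length in meters (0.05 m is an assumed value)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is None:
        return None
    # Pose of the first detected marker relative to the camera
    _, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, K, dist)
    return tvecs[0].ravel()
```

In the actual setup, this camera-frame position would still be transformed into the reference frame at the manipulator base (e.g., via tf in ROS) before being used as in Section 3.2.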

4.2.1. Pressure distribution

The pressure sensing module is composed of 27 pressure sensors distributed over the palm and fingers (Fig. 1). These sensors consist of five arrays of barometers molded within silicone (TakkStrip2, RightHand Labs). In turn, each TakkStrip2 array consists of three separate barometers and one solid 3 × 1 array of barometers. Single barometers were each placed on the distal, intermediate, and proximal phalanx of each finger. Four solid 3 × 1 arrays of barometers were mounted in the middle of the palm, at the location of the metacarpal bones. The data acquisition from all 27 sensors was synchronized and sampled at 100 Hz with a multiplexer (TakkFast interface board, I2C protocol) connected to the robot computer.

4.2.2. Mechanical vibrations

The vibration sensing module of the tactile glove consists of two MEMS accelerometers (Analog Devices, ADXL303) with analog outputs, attached onto the fingertips of the thumb and index finger. The circuitry around the sensors was designed to provide a physical bandwidth of 1 kHz. For data acquisition, we used an ARM Cortex M4 based STM32F3Discovery microcontroller board and two custom-made analog-to-digital converter modules (Analog Devices, AD7685). These were connected with the microcontroller via a serial bus in a cascaded configuration [11], while the microcontroller was connected with the computer through its sound card, which can sample the input data stream at a rate of up to 8 kHz.

4.3. Objects

To preserve the variety of objects used in daily life, we selected 12 deformable objects and 12 stiff objects of different size, geometry and mass (the latter up to 1.92 kg to meet the payload limitations of the robot arm). These are shown in Fig. 5. We used 18 out of the 24 objects to generate the training dataset; the remaining six objects (whose names are shown in red in Fig. 5) were used to test the approach on arbitrary (unknown) objects during HRH.

5. DL for tactile object classification

In this section, we provide a description for our case study of the training and testing of DL models for soft/rigid object classification, already described from a general perspective in Section 3.1.

5.1. Experimental procedure

To obtain data, experiments were conducted with four healthy participants, all students at Nazarbayev University in the 20–24 age range, referred to as Group A. Each participant grasped the same 18 objects (9 rigid and 9 soft, with names written in black in Fig. 5), which will be simply indicated as old objects. Each object was grasped ten times using two different grasping modes: five times using the prismatic 4-finger-thumb grasp (Fig. 6(a)) and five times using the power grasp (Fig. 6(b)). In both cases, the object was grasped with all fingertips: the difference was that the palm was in contact with the object when using the power grasp, and detached from it when the prismatic 4-finger-thumb grasp was employed. For a detailed description of these modes, the reader is referred to [31,32]. These types of grasps are very common, especially when dealing with object shapes that are approximately spherical or cylindrical, as are the objects in our case study. The participants were instructed to grasp each object at a specific location, making sure that all relevant pressure sensors on the tactile glove were in contact with the object for the relevant time interval. In total, there were 360 trials (9 objects × 4 people × 10 grasps) per class.

The output signals of the pressure sensors on the tactile glove were recorded for 4.5 s in each experimental trial using the rosbag tool in ROS with a sampling frequency of 100 Hz (both values were chosen via trial and error in preliminary experiments). To limit the complexity of the DL models, only 17 out of the available 27 barometers were used; indeed, the sensors on the intermediate and proximal phalanges did not seem to provide useful information for classification. A possible reason for this is that our participants would always use their fingertips and (possibly) their palm for grasping, and thus the corresponding sensors would almost always be in contact with the objects. On the other hand, the sensors on the intermediate and proximal phalanges could be in contact or not, and this was mainly determined by the object shape (in particular for large rigid objects): thus, the output of these sensors would be heavily influenced by the object shape rather than by its stiffness. As a result, each experimental trial (one participant grasping one object) resulted in a one-dimensional array of size 7650 (17 × 450). After all data were recorded, the data were normalized on a [0, 1] scale.

To test the DL models described in the following on data from different distributions, we invited four new participants, who were also 20–24 years old Nazarbayev University students, referred to in the remainder of the paper as Group B. They were asked to grasp all the objects shown in Fig. 5 with names written in red (referred to as new objects) five times, in the same manner as described for the participants in Group A.
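A minimal sketch of the per-trial feature construction described above: 17 pressure channels sampled at 100 Hz for 4.5 s are flattened into a 7650-element vector and scaled to [0, 1]. The paper does not state whether normalization was computed per trial or over the whole dataset; a per-trial version with our own naming is shown.

```python
import numpy as np

def make_feature_vector(trial):
    """trial: (17, 450) array - 17 pressure channels x 4.5 s at 100 Hz.
    Returns a flat (7650,) vector min-max normalized to [0, 1]."""
    x = np.asarray(trial, dtype=np.float32).reshape(-1)  # 17 * 450 = 7650
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo + 1e-8)  # small epsilon avoids division by zero
```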

Fig. 5. List of objects used in the experiments. The objects with red names are those that were not used in the collection of the dataset. (For interpretation of the
references to color in this figure legend, the reader is referred to the web version of this article.)

5.2. DL models and classification accuracy

The obtained training dataset ("Group A - old objects") was split into two separate subsets, for training and development (dev). The training subset was used to train the parameters of the DL models described in the remainder of this section, while the dev subset was used for selecting the hyperparameters that resulted in the highest accuracy. Finally, the best-performing models were tested using the test set ("Group B - new objects").

As DL models, we deployed a feed-forward neural network (FNN) and a CNN to solve the binary classification problem. Both the FNN and CNN models, with an input layer size of 7650, were implemented using the PyTorch library. The FNN model consists of two hidden layers, each incorporating 512 units with a ReLU activation function. In the output layer, a softmax activation function is instead used to calculate the probability distribution over the predicted output classes. The CNN model consists of three convolutional layers followed by two fully-connected layers. The convolutional layers incorporate four filters with a kernel size of 1 × 8 and a stride of 4. The first fully-connected layer consists of 512 units followed by a ReLU activation function, while the output layer operates a projection with a softmax activation function. Both models were trained using the Adam algorithm with a learning rate α = 10⁻⁵, regularized via dropout (with drop rate equal to 0.3). The models were trained for 10⁴ epochs and tested on the dev subset after each epoch. A single Tesla V100-SXM3-32 GB GPU on an Nvidia DGX machine was used to train all models. As mentioned at the beginning of this subsection, different hyperparameters were tested before determining the ones with the best performance: for example, for the CNN we varied the kernel size from 1 × 8 to 1 × 16, the size of the first fully-connected layer from 128 to 1024 units, and the drop rate from 0.0 to 0.4.

Fig. 6. Prismatic 4-finger-thumb grasp and power grasp.

Table 1
Sensing modality performance in deformable object classification. V = vibration sensors; P = pressure sensors; CNN = convolutional neural network; FNN = feed-forward neural network.

ID   Model   Input feature   Dev set   Test set
1    CNN     V               77.5%     58.8%
             P + V           93.5%     66.7%
             P               97.7%     75.5%
2    FNN     P               93.5%     62.2%

The results of our training are shown in Table 1, in which one can see that the CNN outperformed the FNN in accuracy both on the dev set (97.7% vs. 93.5%) and on the test set (75.5% vs. 62.2%). As a consequence, we decided to use the trained CNN model for the HRH experiments.

5.3. Sensing modalities

As described in Section 2.2, the tactile glove incorporates both mechanical vibration and pressure sensors, while the description of the DL models in the previous part of this section only referred to pressure sensors. This is due to the fact that, in contrast to our previous work, in which both pressure and mechanical vibration sensors were placed on a robot hand for granular object classification [33], we did not expect the use of vibration sensors to increase the classification accuracy. Indeed, squeeze-induced vibrations are produced by the collision of small particles within a granular object when the latter is squeezed, but our HRH task was predominantly executed with smooth/uniform objects. In order to test this assumption, a CNN model similar to the one described above, but with inputs given by vibration sensors only or by both pressure and vibration sensors, was built. As expected, the performance in the case of pressure sensors only was still superior, as can be seen in Table 1.
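The CNN of Section 5.2 can be reconstructed from its textual description as in the PyTorch sketch below (three 1-D convolutional layers with four filters each, kernel size 1 × 8, stride 4; a 512-unit fully-connected layer; softmax output; Adam with α = 10⁻⁵ and dropout rate 0.3). Details the paper leaves open, such as the exact placement of activations and dropout, are our assumptions.

```python
import torch
import torch.nn as nn

class GloveCNN(nn.Module):
    """1-D CNN for soft/rigid classification of (1, 7650) glove signals."""
    def __init__(self, n_classes=2, drop_rate=0.3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(4, 4, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(4, 4, kernel_size=8, stride=4), nn.ReLU(),
        )
        # A 7650-sample input leaves 118 time steps per filter after the
        # three convolutions: 4 * 118 = 472 features.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(4 * 118, 512), nn.ReLU(),
            nn.Dropout(drop_rate),
            nn.Linear(512, n_classes),  # softmax is applied inside the loss
        )

    def forward(self, x):  # x: (batch, 1, 7650)
        return self.classifier(self.features(x))

model = GloveCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
loss_fn = nn.CrossEntropyLoss()  # log-softmax + negative log-likelihood
```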

Table 2
Percentage of objects that either slipped or got deformed throughout experimental runs with various torque thresholds.

Case \ Torque         0.1 N m   0.2 N m   0.3 N m   0.4 N m   0.5 N m   0.6 N m   0.7 N m
Slippage - Rigid      88.89%    77.78%    28.89%    15.56%    4.44%     4.44%     4.44%
Deformation - Rigid   0.00%     0.00%     0.00%     0.00%     0.00%     0.00%     0.00%
Slippage - Soft       82.22%    37.78%    11.11%    0.00%     0.00%     0.00%     0.00%
Deformation - Soft    0.00%     0.00%     0.00%     24.44%    53.33%    66.67%    77.78%

Fig. 7. Time evolution of torques of each robot hand joint. The red dashed lines indicate the thresholds on the allowed maximum torque. The pink dashed line
indicates the average torque of all robot hand joints. (For interpretation of the references to color in this figure legend, the reader is referred to the web version
of this article.)

6. Experimental results on HRH

The HRH experiments involved 8 healthy participants, divided into Group A and Group B and grasping both old and new objects; however, this time a complete HRH experiment was executed. The experimental procedure for each participant and each object consisted of three phases.

In the first phase, the participant grasped one of the 24 objects without any specific instructions on the contact location or on paying attention to keep the tactile sensors in touch with the object. This was done in order to assess the performance of the classifier outside the training data distribution, so as to obtain a more realistic HRH scenario. The grasping was repeated three times: once using the 4-finger-thumb grasp, once using the power grasp (both already used for CNN training and testing), and finally with a grasp without specific instructions, which was then used to bring the object towards the robot. The soft/rigid classification via CNN was carried out for all three grasps, and the object class was then determined via hard voting.

In the second phase, the participant moved the object towards the manipulator and stopped near the handover location: the stopping time of the human hand was set when the measured speed of the marker was below the experimentally-derived threshold of 10 mm/s. After that, the robot planned its trajectory via RRT and reached its designated handover pose.

The third phase consisted of the actual handover, carried out based on the threshold values detailed in the following.

6.1. Threshold determination

The handover phase was initiated when the sum of the robot joint torques was above the threshold c1 described in Section 3.4, experimentally set to 1.5 N m.

During handover, upper bounds of 0.5 N m and 0.3 N m were imposed on the maximum torque of each robot hand joint, for objects classified as rigid and soft, respectively. These thresholds were determined empirically, by carrying out numerous experimental trials. Each of the nine objects of both classes was grasped and moved to the corresponding box using different torque threshold values, ranging from 0.1 to 0.7 N m, repeating the procedure five times for each object/threshold combination. The results are illustrated in Table 2, in which we report the percentage of experiments in which rigid or soft objects were either dropped or deformed (see also Fig. 2). To determine the threshold for each object type, we chose to minimize the sum of the percentages for which, applying that torque, the object was either dropped or deformed. As a result, the best threshold for soft objects was clearly 0.3 N m, which never caused deformation and led to slippage in 11.11% of the cases. As for the threshold for rigid objects, all values from 0.5 N m and above led to no deformation, and slippage in 4.44% of the cases. The value 0.5 N m was thus chosen considering what would happen in case of misclassification: if a soft object were grasped with this torque value, it would be deformed 53.33% of the times, which is a lower percentage compared to those associated with higher torque thresholds.

As for the obtained percentage of deformation or slippage occurrence in the case of correct soft/rigid classification and consequent application of the correct torque threshold, we can notice that deformation never occurred, while slippage occurred in 11.11% of the cases for soft objects and in 4.44% of the cases for rigid objects. The 4.44% for slippage of rigid objects corresponds to 2 trials out of 45. It is crucial to consider that the same three objects - the beam, the level, and the ball - were involved. Therefore, the slippage can most probably be explained by their small size and inconvenient shape. For instance, the dimensions of the level are 5.5 × 4 × 1.5 cm, while the size of the palm of the robot hand is 9.5 × 14 cm. As a consequence, it is quite complicated to grasp this object, and sometimes unstable grasps might occur. On the other hand, the value of 11.11% obtained for soft objects is less critical, as soft objects rarely get damaged when they fall, and might instead get damaged when deformed. The influence of these thresholds can be seen in Fig. 7, which depicts the time evolution of all the robot hand joint torques and of their average value when grasping a rigid (left) and a soft (right) object.

The robot was allowed to start the retreat phase when the absolute value of the time derivative of the sum of the pressure values from the glove sensors was above the threshold c2, set equal to 90 kPa (Fig. 8). This threshold was derived empirically from numerous experimental trials in which two participants released all 18 objects ten times each. The chosen value provides a good trade-off between the "false release trigger" and the "not activated trigger" scenarios for all objects and different participants. Lowering the threshold would increase the "false release trigger" cases and, as a result, reduce the safety of the HRH.

After that, the participant waited until the end of the robot retreat phase and then initiated the first phase for a new object by grasping it.
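The threshold-selection rule described above (for each class, minimize the sum of the slippage and deformation percentages over the tested torques) can be written compactly. The sketch below reproduces the Table 2 data and is our own illustration, not the authors' code.

```python
import numpy as np

torques      = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7])  # N m (Table 2)
slip_soft    = np.array([82.22, 37.78, 11.11,  0.00,  0.00,  0.00,  0.00])
deform_soft  = np.array([ 0.00,  0.00,  0.00, 24.44, 53.33, 66.67, 77.78])
slip_rigid   = np.array([88.89, 77.78, 28.89, 15.56,  4.44,  4.44,  4.44])
deform_rigid = np.zeros(7)

# Pick the torque minimizing slippage + deformation for each class.
best_soft  = torques[np.argmin(slip_soft + deform_soft)]    # -> 0.3 N m
best_rigid = torques[np.argmin(slip_rigid + deform_rigid)]  # -> 0.5 N m
```

Note that np.argmin returns the first minimizer, so the tie among 0.5, 0.6, and 0.7 N m for rigid objects resolves to 0.5 N m, which is also the choice that limits damage if a soft object is misclassified as rigid.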

Fig. 8. The maximum absolute sum of the change in pressure values over all
sensors during release (in blue) and before release (in yellow) for all objects,
numbered as in Fig. 5. The straight blue line represents the chosen threshold of
90 kPa. (For interpretation of the references to color in this figure legend, the
reader is referred to the web version of this article.)

The experimental procedure was repeated for all objects. Statistical data regarding classification results (Section 6.2) and interaction forces (Section 6.3) were recorded to provide an objective evaluation of the efficacy of the proposed HRH method. After repeating the experimental procedure for all objects, each participant filled in a questionnaire to provide a subjective evaluation of the HRH experience (Section 6.4). For a better understanding of the experimental procedure, a video is provided with this paper as supplementary material.

6.2. Pre-handover object classification

The outcome of the pre-handover soft/rigid object classification during the HRH experiments is shown in Table 3.

Table 3
Accuracy of object classification in the pre-handover phase for trained (Group A) and untrained (Group B) participants grasping objects from the training set ("old objects") and new objects.

           Old objects   New objects
Group A    84.7%         83.3%
Group B    73.6%         70.8%

We first analyze the results for the participants in Group A while grasping the old objects: although these participants and objects were used for training the DL models, we would not have expected the same accuracy as for the dev set in Table 1, because a whole HRH experiment was run in this case (which could distract the subjects from the actual grasping action) and no specific instruction was provided on contact location or on keeping the tactile sensors in contact with the object, as mentioned above. As expected, one can immediately notice that the accuracy obtained in Table 1 when testing the same CNN used in HRH on the dev set is higher than the accuracy shown in Table 3 for the combination "Group A - old objects" (97.7% vs. 84.7%).

We also asked the participants of Group A to run the full HRH experiments with the new objects: as a result, the loss of accuracy was quite limited (1.4%) compared to the "Group A - old objects" case. This implies that the range of objects used for CNN training was wide enough to also include relevant characteristics of the new objects.

Finally, in the last row of Table 3 it can be observed that the presence of the participants of Group B reduced the accuracy for old and new objects by about 11.1% and 12.5%, respectively. This can be due to the fact that the participants in Group A probably did not cover all possible grasping "styles" (e.g., maximum force, grasping speed, etc.) of the participants in Group B. Therefore, this result could possibly be improved by increasing the size of Group A. The obtained accuracy for "Group B - new objects" was lower than that obtained for the test set of Table 1 (70.8% vs. 75.5%): this was also expected, for the same reasons reported for the "Group A - old objects" case.

6.3. Quantitative results on collaboration fluency

Fig. 9. Time span of a normal experiment. The top graph is a plot of the force of a human hand (red) and a robot hand (black) versus time. The bottom graph is a plot of the sum of the torques of the Franka robot. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

The time evolution of some relevant variables during a typical HRH is illustrated in Fig. 9, in which the dashed vertical blue line represents the time instant when the marker is detected by the camera, while the dashed vertical gray lines delimit the time interval in which both human and robot are in contact with the object. The upper plot shows the evolution of the total force applied by the human (in red) and robot (in black) hands to the object, normalized in the [0, 1] range. It can be noticed that the robot hand applies a constant total force when holding the object. The human instead applies a gradually increasing force, and tends to decrease it while moving the object towards the robot. The force applied by the human hand is typically increased during the handover, which is in agreement with the observations about HHH in [15], according to which excess grip forces are usually applied to avoid slippage. Then, the force applied by the human hand rapidly decreases towards zero when the object is released. In the lower plot, one can see the evolution of the sum of the robot joint torques. The robot starts moving after about 8 s, after having planned its motion via RRT. The spike in the total torque of the robot arm represents the contact of the object with the robot hand, which triggers the robot grasping. After remaining approximately constant during the grasping phase, the robot torque starts varying again during retreat.

For safe and comfortable HRH, and to avoid damaging either the object or the robot, or injuring the human participant, the robot needs to wait until the participant releases the object. For this reason, it is important to use the tactile glove to determine the end of the passing phase. To show the human reaction when this protocol is not followed, we programmed the robot to retract the object before the participant released it, and the area framed within the gray box in the upper plot of Fig. 9 illustrates the time evolution of the human and robot grasping forces in this case. In the time interval when the handover happens, we can observe an increase in the human's grasping force (red line) when the robot pulls the object away: this confirms some of the findings described in [17], according to which people tend to rapidly increase grip forces to ensure stability of the grasped object in the event of an unpredictable load–force disturbance.

Fig. 10. Responses of the participants to the questionnaire: experienced participants are indicated by circles, and inexperienced participants by crosses.

6.4. Qualitative results

To assess how the HRH experiments were subjectively experienced by each participant, we asked them to fill in a questionnaire at the end of the experimental procedure. The four questions, which were either taken verbatim or adapted from those listed in [2, Sec. VI], are reported in Fig. 10.

Our interpretation of trustworthiness was the belief that the robot teammate would complete the handover, and would not damage or drop the object. However, some participants might have also considered an interpretation of trustworthiness related to safety, i.e., the participants would trust the robot not to hurt them during handover. The idea of smoothness of the handover was interpreted by us as the absence of unexpected movements of the robot hand that might have led to failure; based on this interpretation, smoothness would have been negatively impacted if the robot had pulled the object too late or while the human was still holding it, or if the robot had damaged or dropped the object, possibly due to wrongly classifying it. We assume that users interpreted it in the same way, as the employed terminology was quite intuitive. The concept of aggressiveness can be related both to the overall robot motion (e.g., sudden accelerations of the robot arm) and to the hand only, thus being related to the absence of handover smoothness. Finally, the idea of fluency of the collaboration was related to all three previous concepts, also noticing that fluency and smoothness are often used as synonyms in human–robot interaction papers. The employed questions are rather standard in this type of experiment, but we understand that differences in their interpretation by the participants are to be expected. Our aim was to determine the overall perception of the HRH experiment without too much focus on single aspects.

From the results in Fig. 10, it can be observed that the robot was perceived as trustworthy and its motion as not aggressive. Also, the handover process was generally perceived as smooth, and the participants felt they were working fluently with the robot as a team.

As for the background of the eight participants, all Nazarbayev University students from different science/engineering majors, only four of them had previous experience in working with robots. In Fig. 10, experienced participants are indicated by circles, and inexperienced participants by crosses. Overall, there was no strong evidence that previous experience would lead to a better or worse evaluation of the different questions. This can also be related to the fact that more experience might, on the one hand, make participants more relaxed (thus providing a positive evaluation of their experience) and, on the other hand, make them evaluate the robot's performance more critically.

We also asked the participants to provide any free comments and suggestions after filling in the questionnaire. Several participants highlighted the use of the ArUco marker as a possible source of discomfort. Indeed, the marker located on the participant's wrist must always be in the correct orientation and not occluded by clothes or by the object, so that it can be detected by the camera. To improve this aspect in future research, one possible solution is to use methods that do not make use of ArUco markers, such as those described in [3,34].

7. Conclusions and outlook

This paper presented an HRH pipeline which includes the use of a wearable tactile glove to classify rigid/deformable objects prior to handover, the consequent adaptation of the robot grasping force, and the use of the tactile glove to determine the end of the passing phase. The obtained experimental results on eight human participants and 24 objects showed that the proposed approach has a strong potential for real-world HRH scenarios, especially when grasping soft or rigid objects with the wrong force of the robot hand fingers could lead to damaging them. Therefore, the answer to the research question "Is it possible to execute HRH with different robot grasping force for soft and rigid objects, without previous knowledge of the object characteristics?" is definitely yes. Regarding the second research question on success rate, the classification accuracy via CNN using the pressure sensors on the glove varied from 84.7% with Group A participants (precisely instructed on how to grasp the objects also used for training) down to 70.8% with Group B participants (grasping objects not used for training, and without precise grasping instructions). Finally, the questionnaire results showed a generally positive perception of the HRH process.

Many directions can be identified for future work. A first possibility is to improve user comfort by using methods without ArUco markers, as mentioned in Section 6.4. Also, the described HRH framework can be tested within more complex human–robot interaction tasks, for example studying how human subjects adapt their motion when the robot modifies its grasping orientation based on the passed object. A third direction can be that of continuously adapting the grasping torque of the robot hand to the stiffness of the object, rather than using only two threshold values. A fourth possibility would consist of extending the proposed classification problem to the case in which participants are not instructed to use any particular type of grasp.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.

Acknowledgments

The first two authors contributed equally. This work was funded by Nazarbayev University under Collaborative Research Project no. 091019CRP2118 and by the Ministry of Education and Science of the Republic of Kazakhstan under grant no. AP09058050.

Appendix A. Supplementary data

Supplementary material related to this article can be found online at https://doi.org/10.1016/j.robot.2022.104311.

References

[1] F. Döhring, H. Müller, M. Joch, Grip-force modulation in human-to-human object handovers: effects of sensory and kinematic manipulations, Sci. Rep. 10 (22381) (2020).
[2] V. Ortenzi, A. Cosgun, T. Pardi, W.P. Chan, E. Croft, D. Kulić, Object handovers: A review for robotics, IEEE Trans. Robot. 37 (6) (2021) 1855–1873.
[3] P. Rosenberger, A. Cosgun, R. Newbury, J. Kwan, V. Ortenzi, P. Corke, M. Grafinger, Object-independent human-to-robot handovers using real time robotic vision, IEEE Robot. Autom. Lett. 6 (1) (2020) 17–23.
[4] W. Yang, C. Paxton, A. Mousavian, Y.-W. Chao, M. Cakmak, D. Fox, Reactive human-to-robot handovers of arbitrary objects, in: Proceedings of the IEEE International Conference on Robotics and Automation, ICRA, 2021, pp. 3118–3124.
[5] J. Sanchez, J.A. Corrales, B.C. Bouzgarrou, Y. Mezouar, Robotic manipulation and sensing of deformable objects in domestic and industrial applications: a survey, Int. J. Robot. Res. 37 (7) (2018) 688–716.
[6] A. Billard, D. Kragic, Trends and challenges in robot manipulation, Science 364 (6446) (2019).
[7] S. Park, D. Hwang, Softness-adaptive pinch-grasp strategy using fingertip tactile information of robot hand, IEEE Robot. Autom. Lett. 6 (2021) 6370–6377.
[8] A. Drimus, G. Kootstra, A. Bilberg, D. Kragic, Design of a flexible tactile sensor for classification of rigid and deformable objects, Robot. Auton. Syst. 62 (1) (2014) 3–15.
[9] A. Mazhitov, A. Adilkhanov, Y. Massalim, Z. Kappassov, H.A. Varol, Deformable object recognition using proprioceptive and exteroceptive tactile sensing, in: Proceedings of the IEEE/SICE International Symposium on System Integration, SII, 2019, pp. 734–739.
[10] M. Kaboli, K. Yao, G. Cheng, Tactile-based manipulation of deformable objects with dynamic center of mass, in: Proceedings of the IEEE/RAS International Conference on Humanoid Robots (Humanoids), 2016, pp. 752–757.
[11] Y. Massalim, Z. Kappassov, Array of accelerometers as a dynamic vibro-tactile sensing for assessing the slipping noise, in: Proceedings of the IEEE/SICE International Symposium on System Integration, SII, 2019, pp. 438–443.
[12] P. Dallaire, P. Giguère, D. Émond, B. Chaib-draa, Autonomous tactile perception: A combined improved sensing and Bayesian nonparametric approach, Robot. Auton. Syst. 62 (4) (2014) 422–435.
[13] A. Mason, C. MacKenzie, Grip forces when passing an object to a partner, Exp. Brain Res. 163 (2005) 173–187.
[14] K. Strabala, M.K. Lee, A. Dragan, J. Forlizzi, S.S. Srinivasa, M. Cakmak, V. Micelli, Toward seamless human-robot handovers, J. Human-Robot Interact. 2 (1) (2013) 112–132.
[15] W.P. Chan, C.A. Parker, H.M.V. der Loos, E.A. Croft, A human-inspired object handover controller, Int. J. Robot. Res. 32 (8) (2013) 971–983.
[16] J.R. Medina, F. Duvallet, M. Karnam, A. Billard, A human-inspired controller for fluid human-robot handovers, in: Proceedings of the IEEE/RAS International Conference on Humanoid Robots (Humanoids), 2016, pp. 324–331.
[17] M. Controzzi, H. Singh, F. Cini, T. Cecchini, A. Wing, C. Cipriani, Humans adjust their grip force when passing an object according to the observed speed of the partner's reaching out movement, Exp. Brain Res. 236 (2018).
[18] W. Wang, R. Li, Z.M. Diekel, Y. Chen, Z. Zhang, Y. Jia, Controlling object hand-over in human–robot collaboration via natural wearable sensing, IEEE Trans. Hum.-Mach. Syst. 49 (1) (2019) 59–71.
[19] M. Bianchi, G. Averta, E. Battaglia, C. Rosales, M. Bonilla, A. Tondo, M. Poggiani, G. Santaera, S. Ciotti, M.G. Catalano, A. Bicchi, Touch-based grasp primitives for soft hands: Applications to human-to-robot handover tasks and beyond, in: Proceedings of the IEEE International Conference on Robotics and Automation, ICRA, 2018, pp. 7794–7801.
[20] L. Peternel, W. Kim, J. Babič, A. Ajoudani, Towards ergonomic control of human-robot co-manipulation and handover, in: Proceedings of the IEEE-RAS International Conference on Humanoid Robotics (Humanoids), 2017, pp. 55–60.
[21] A. Edsinger, C.C. Kemp, Human-robot interaction for cooperative manipulation: Handing objects to one another, in: Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2007, pp. 1167–1172.
[22] J. Konstantinova, S. Krivić, A. Stilli, J. Piater, K. Althoefer, Autonomous object handover using wrist tactile information, in: Proceedings of the Annual Conference "Towards Autonomous Robotic Systems", TAROS, 2017, pp. 450–463.
[23] A. Ajoudani, A.M. Zanchettin, S. Ivaldi, A. Albu-Schäffer, K. Kosuge, O. Khatib, Progress and prospects of the human–robot collaboration, Auton. Robots 42 (5) (2018) 957–975.
[24] A.G. Eguíluz, I. Rañó, S.A. Coleman, T.M. McGinnity, Reliable object handover through tactile force sensing and effort control in the Shadow Robot hand, in: Proceedings of the International Conference on Robotics and Automation, ICRA, 2017, pp. 372–377.
[25] G.H. Büscher, R. Kõiva, C. Schürmann, R. Haschke, H.J. Ritter, Flexible and stretchable fabric-based tactile sensor, Robot. Auton. Syst. 63 (3) (2015) 244–252.
[26] W. Dong, L. Yang, G. Fortino, Stretchable human machine interface based on smart glove embedded with PDMS-CB strain sensors, IEEE Sens. J. 20 (14) (2020) 8073–8081.
[27] A. Oleinikov, B. Abibullaev, M. Folgheraiter, On the classification of electromyography signals to control a four degree-of-freedom prosthetic device, in: Proceedings of the International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC, 2020, pp. 686–689.
[28] S. Sundaram, P. Kellnhofer, Y. Li, J.Y. Zhu, A. Torralba, W. Matusik, Learning the signatures of the human grasp using a scalable tactile glove, Nature 569 (7758) (2019) 698–702.
[29] J. Hughes, A. Spielberg, M. Chounlakone, G. Chang, W. Matusik, D. Rus, A simple, inexpensive, wearable glove with hybrid resistive-pressure sensors for computational sensing, proprioception, and task identification, Adv. Intell. Syst. 2 (6) (2020) 2000002.
[30] J.J. Kuffner, S.M. LaValle, RRT-connect: An efficient approach to single-query path planning, in: Proceedings of the IEEE International Conference on Robotics and Automation, ICRA, 2000, pp. 995–1001.
[31] M.R. Cutkosky, On grasp choice, grasp models, and the design of hands for manufacturing tasks, IEEE Trans. Robot. Autom. 5 (3) (1989) 269–279.
[32] F. Gonzalez, F. Gosselin, W. Bachta, Analysis of hand contact areas and interaction capabilities during manipulation and exploration, IEEE Trans. Haptics 7 (4) (2014) 415–429.
[33] T. Syrymova, Y. Massalim, Y. Khassanov, Z. Kappassov, Vibro-tactile foreign body detection in granular objects based on squeeze-induced mechanical vibrations, in: Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM, 2020, pp. 175–180.
[34] M.K. Pan, V. Skjervøy, W.P. Chan, M. Inaba, E.A. Croft, Automated detection of handovers using kinematic features, Int. J. Robot. Res. 36 (5–7) (2017) 721–738.

Ayan Mazhitov is currently a M.Sc. student in Robotics at Nazarbayev University, Astana, Kazakhstan, and a research assistant in the Robot Control and Learning (RCL) Lab. He received the B.Sc. in Robotics and Mechatronics from Nazarbayev University in 2019. From 2017 to 2019, he worked as a research assistant in several robot manipulation projects. His research interests are in reinforcement learning and robot control.

Togzhan Syrymova is currently a M.Sc. student in Robotics at Nazarbayev University, Astana, Kazakhstan, and a research assistant in the Robot Control and Learning (RCL) Lab. She received the B.Sc. in Robotics and Mechatronics from Nazarbayev University. From 2018 to 2019, she worked as a research assistant in several embedded systems projects. Her research interests are in tactile sensing, medical robotics, and machine learning.
Zhanat Kappassov received the Specialist Degree in Radioengineering from Tomsk State University of Control Systems and Radioelectronics (TUSUR), Tomsk, Russia, in 2011. Afterwards, he worked in the Industrial Technology Research Institute (ITRI), Taiwan. He received his Ph.D. in Robotics from the Institute of Intelligent Systems and Robotics (ISIR), Sorbonne University (formerly Université Pierre et Marie Curie), Paris, France, in 2017. Since 2020, he has been an Assistant Professor of Robotics at Nazarbayev University, Astana, Kazakhstan. His current research interests focus on tactile sensing for robot physical interaction and dexterous manipulation. He regularly serves as reviewer for journals and conferences in the field of robotics. He was awarded the TOYOTA award during the ISER2016 conference, Tokyo, Japan. His Ph.D. thesis was nominated as the Best Ph.D. Thesis in 2017 by the GDR Robotique association in France.

Matteo Rubagotti received the Ph.D. degree in Electronics, Computer Science, and Electrical Engineering from the University of Pavia, Pavia, Italy, in 2010. Since 2018, he has been an Associate Professor of Robotics and Mechatronics at Nazarbayev University, Astana, Kazakhstan. Previously to his current post, he was a Lecturer in Control Engineering at the University of Leicester, Leicester, UK, and, before that, a Postdoctoral Fellow at the University of Trento, Trento, Italy, and at IMT Institute for Advanced Studies, Lucca, Italy. His research interests are in control systems and robotics, including physical human–robot interaction. He has co-authored more than 60 technical papers in international journals and conferences. Dr. Rubagotti is IEEE Senior Member, and is currently Subject Editor of the International Journal of Robust and Nonlinear Control. He is also member of the conference editorial boards of the IEEE Control System Society and of the European Control Association.