
Auton Robot (2014) 36:309–330

DOI 10.1007/s10514-013-9355-y

Stable grasping under pose uncertainty using tactile feedback


Hao Dang · Peter K. Allen

Received: 16 August 2012 / Accepted: 9 July 2013 / Published online: 2 August 2013
© Springer Science+Business Media New York 2013

H. Dang (✉) · P. K. Allen
Computer Science Department, Columbia University, New York, NY 10027, USA
e-mail: dang@cs.columbia.edu
P. K. Allen
e-mail: allen@cs.columbia.edu

Abstract  This paper deals with the problem of stable grasping under pose uncertainty. Our method utilizes tactile sensing data to estimate grasp stability and make necessary hand adjustments after an initial grasp is established. We first discuss a learning approach to estimating grasp stability based on tactile sensing data. This estimator can be used as an indicator to the stability of the current grasp during a grasping procedure. We then present a tactile experience based hand adjustment algorithm to synthesize a hand adjustment and optimize the hand pose to achieve a stable grasp. Experiments show that our method improves the grasping performance under pose uncertainty.

Keywords  Grasping · Uncertainty · Robustness · Tactile sensing

1 Introduction

Robust grasping is one of the most important capabilities a robot is expected to have. Successful robotic grasping establishes the first step for a robot to physically interact with its environment and accomplish other higher level object manipulation tasks.

To enable robotic grasping, one of the existing approaches is to decompose a grasping procedure into two main stages: planning and execution (Popovic et al. 2010; Saxena et al. 2008; Berenson and Srinivasa 2008; Goldfeder and Allen 2011). The planning stage is usually done in simulation with the 3D information extracted from a perception system. A stable grasp parameterized by the hand posture and hand-object relative pose is then synthesized. In the execution stage, the planned grasp is sent to a path planner to generate a collision-free trajectory and the robot moves along the newly generated trajectory to the grasping pose. These methods usually use geometrical models of the objects to be grasped in the planning stage. However, since grasp planning is done in a simulation world which is not an exact model of the actual workspace due to imperfect perception and robot calibration, the executed grasps can end up unstable and these methods are sensitive to pose uncertainty. Figure 1 shows two examples where stable grasps are not achieved by simply going through the planning and execution stages. Due to pose uncertainty, an executed grasp can end up perturbing the object, pushing away the object, or even knocking the object off. None of these situations are preferable for a robotic grasping task.

Another approach is to treat grasping as a control problem where a set of control laws are applied to adjust the hand to achieve some preferred contact configuration on the object, e.g., antipodal grasps (Jia 2000; Platt 2007; Wang et al. 2007). These methods usually utilize the actual sensing data from force, torque, or tactile sensors; so they do not require any specific hand-object relative pose and are more robust under pose uncertainty. Methods along this direction are also usually object model free. Since the control laws are relatively computationally inexpensive, these methods run fast. However, a major issue is that these methods either ignore the hand kinematics or assume relatively simple hand designs, such as parallel jaw grippers and their simple variants. So, it is difficult to extend these methods to complex hand designs which have more dexterity in object manipulation tasks.

Both approaches have their own benefits as well as disadvantages. In our work, we attempt to unify the power of the two different categories of grasping methods by developing a grasping pipeline starting in a planning-based style but with a closed-loop hand adjustment procedure after grasp execution.
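To make the structure of this closed loop concrete before it is detailed in Fig. 2 and Sects. 3-4, the sketch below shows one way such a pipeline could be driven. It is a minimal illustration, not the authors' implementation: every component passed in (execute, read_tactile, is_stable, synthesize_adjustment, apply_adjustment) is a hypothetical placeholder for the corresponding module described later in the paper.

    def grasp_with_adjustment(plan, execute, read_tactile, is_stable,
                              synthesize_adjustment, apply_adjustment,
                              max_adjustments=3):
        """Run one grasp attempt with post-execution tactile adjustment."""
        execute(plan)                                  # open-loop execution of the planned grasp
        for _ in range(max_adjustments):
            tactile, joints = read_tactile()           # contact data from the tactile pads
            if is_stable(tactile, joints):             # grasp stability estimator (Sect. 3)
                return True                            # stable: proceed to lift the object
            adj = synthesize_adjustment(tactile, joints)  # hand adjustment (Sect. 4)
            apply_adjustment(adj)                      # guarded re-grasp (Sect. 4.6)
        return False

The cap on re-grasp attempts is an assumption of this sketch; the paper itself only requires that adjustment repeats until the stability estimator is satisfied.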


Fig. 1  Execution of planned-to-be stable grasps. a, d Two grasps planned in simulation that are stable. b, e Snapshots of successful execution of the two planned grasps, which were able to lift up the object stably. c, f Two failure cases of execution due to pose error, where the executed grasps were not able to lift up the object

Specifically, we use a planning-based approach to establish an initial grasping pose on a known object and then switch to a control-styled method to adjust the hand locally on the object and optimize the grasp configuration. The intuition behind this design is two-fold. First, vision is capable of providing global geometrical information which is accurate enough to generate an initial grasping pose. Second, when the hand is at the planned grasping pose around the object, vision may have difficulty in extracting accurate information concerning hand-object relative pose considering potential occlusion and long distance. At this point of time, tactile sensing data can start to play an important role in recovering the direct interaction between the object and the hand in real time. They provide important information concerning contact configuration, which determines the wrench distribution of the grasp and thus the stability of the grasp. Tactile sensing data can also indicate the relative pose between the hand and the object, e.g., (Platt et al. 2011; Pezzementi et al. 2011; Petrovskaya et al. 2006). Using tactile sensing data, it is probable to make necessary hand adjustments to achieve stable grasping poses. Figure 2 outlines components of our grasping pipeline. Initially, a grasp is applied using modules perception and grasp planning and execution, which form a conventional planning-based grasping pipeline. Once the initial grasp is established, the stability of the grasp is estimated by the grasp stability estimation procedure. If the grasp is classified unstable, a hand adjustment will then be synthesized and applied in the hand adjustment procedure. In the following sections, we will discuss the components of Fig. 2 in detail. Part of this work has been published in (Dang and Allen 2012).

2 Previous work

2.1 Planning-based grasping

Planning-based grasping pipelines are probably the most widely used framework in the robotics community (Saxena et al. 2008; Popovic et al. 2010; Berenson and Srinivasa 2008; Goldfeder and Allen 2011). Some planning-based algorithms require the object geometry to be known as a prior. Ciocarlie et al. proposed the Eigengrasp idea for grasp planning using an articulated hand and an object model (Ciocarlie and Allen 2009; Ciocarlie et al. 2007a). This method effectively reduces the dimension of the search space for stable grasps and results in a faster search process to find force-closure grasps. Berenson and Srinivasa (2008) proposed a method to generate collision-free force-closure grasps for dexterous hands in cluttered environments. Przybylski et al. (2011) introduced a method to use medial axis transform as an object representation for grasp planning. Roa et al. (2012) designed an algorithm to compute power grasps for hands with kinematic structure similar to human hands. Miller et al. (2003) used shape primitives to represent objects and plan stable grasps with different rules associated with shape primitives. Goldfeder et al. (2007) extended this idea and proposed a grasp planning method using a shape decomposition tree of an object. Along the same direction, Huebner and Kragic (2008) approximated 3D objects with box primitives and planned grasps using the box-based shape approximation.


Fig. 2  A grasping pipeline with a regular planning-based grasp execution procedure and a post-execution grasp adjustment procedure including grasp stability estimation and hand adjustment. A typical planning-based grasping pipeline usually contains only the first two components, perception and grasp planning and execution. Two thresholds t_1, t_2, 0 < t_1 < t_2, were used to evaluate the closeness between two grasps. We will discuss them in Sect. 5.2.2

When full object geometry cannot be obtained in advance, another group of algorithms can be used for grasp planning, which require only partial knowledge of the object geometry. Saxena et al. (2007, 2008) used synthesized image data to train a classifier to predict grasping points based on image features such as edges, textures, and colors. Similarly, Bohg and Kragic (2010) used shape context feature from synthesized 2D images to learn grasping points. In addition to 2D images, some variant methods along similar directions exploit 3D range data to generate grasp candidates (Le et al. 2010; Jiang et al. 2011; Klingbeil et al. 2011; Rao et al. 2010). Goldfeder et al. (2009a,b) built a database of grasps on different shapes and developed a grasping pipeline that utilizes partial data to register range data into shapes in the database and synthesize grasp candidates. Popovic et al. (2010) proposed a method to execute stable grasping on unknown objects based on co-planarity and color information. A similar approach is taken by Kootstra et al. (2012), who use edge and texture information from the scene to generate grasp candidates. Boularias et al. (2011) used Markov Random Fields to learn grasping points for similar objects. El-Khoury and Sahbani used Gaussian curvature as an indicator of separation points to segment point clouds and approximated the segments with super-quadratic primitive shapes. A neural network was then trained to learn to select appropriate segments for grasping (El-Khoury and Sahbani 2010). Geidenstam et al. (2009) approximated 3D shapes with bounding boxes and trained a neural network to learn stable grasps based on the box representation. Horowitz and Burdick (2012) considered grasp and manipulation planning together as a trajectory optimization problem to solve.

2.2 Control-styled grasping

Antipodal grasps and their variants are a type of grasps many control-styled methods try to achieve. Jia (2000) used tactile sensing to locate the contacts while rolling the fingers on an object and apply a grasp when two antipodal contacts are achieved. López-Coronado et al. (2002) applied a neural model to learn a mapping from tactile sensing data to motor control and used this mapping to center an object with respect to a parallel jaw gripper during grasping. Hsiao et al. (2010) developed a reactive algorithm based on tactile and force sensor data to locally adjust the pose of a parallel jaw gripper and grasp objects. Wang et al. (2007) proposed a control algorithm which uses the force and torque information to drive the search process for stable grasps. Also on achieving antipodal grasps but with a multi-fingered hand, Bierbaum and Rambow (2009) proposed a method to generate antipodal grasp affordances based on reconstructed faces of an object through tactile exploration. Platt (2007) proposed a method to learn grasping strategies based on contact relative motions and examined this idea in 2D planar grasping scenarios with a Robonaut hand.

In addition to antipodal grasps, Coelho introduced a controller which considers contact position and normal feedback to synthesize contact configurations for statically stable grasps that involve k contacts (Coelho and Grupen 1997). Mishra and Mishra (1994) analyzed grasping with a 2 or 3 fingered hand and developed a reactive algorithm to achieve ideal contact configurations on an object.


2.3 Grasping under uncertainty

There has been previous work on robust grasping under uncertainty. In order to plan stable grasps which display more robustness under uncertainties, Berenson et al. (2009) used the task space regions (TSR) framework to represent pose uncertainty for planning grasp candidates that are most possible to succeed. Brook et al. (2011) analyzed uncertainty in both object identity and object pose for planning the best grasping pose. Stulp et al. (2011) designed a framework to generate robust motion primitives by sampling the actual pose of the object from a distribution that represents the state estimation uncertainty. Similarly, Weisz and Allen (2012) proposed a new quality metric to measure the robustness of a grasp under object pose uncertainty. Kim et al. (2012) considered dynamic movements of the object being manipulated during grasp planning to generate optimal grasp candidates. Along the same direction, Dogar and Srinivasa (2011) analyzed push-grasping to deal with environmental uncertainties in cluttered scenes.

In addition to dealing with uncertainty in the grasp planning stage, researchers have been considering grasping as a reactive procedure and using tactile sensing as sensory feedback to improve grasping performance in the execution stage. Platt et al. (2010) proposed three variations on null-space grasp control which combine multiple grasp objectives to improve a grasp in unstructured environments. Felip et al. (2013) proposed a paradigm for modeling and executing reactive manipulation actions, which makes knowledge transfer to different embodiments possible while retaining the reactive capabilities of each embodiment. Bekiroglu et al. (2011) used HMM to estimate grasp stability from a series of tactile data. Based on this work, Laaksonen et al. (2012) proposed a framework to use on-line sensory information to refine object pose and modify the grasp accordingly. Hsiao et al. (2011) used tactile sensing data to estimate hand-object relative pose for synthesizing the next hand trajectory so that a specific grasp can be achieved. Morales et al. (2007) used tactile data to cope with uncertainty for the execution of a manipulation task. Zhang and Trinkle (2012) utilized both vision and tactile data to improve object tracking for grasping. Hebert et al. (2012) combined tactile data with other sensory data to provide object tracking for both grasping and manipulation under uncertainty. Nikandrova et al. (2012) proposed a probabilistic framework to use on-line sensory information for grasp planning. Jiang and Smith (2012) introduced seashell effect pre-touch sensing to use proximity sensing data for grasp control and surface reconstruction. With a reconstructed surface, a general grasp planning algorithm could take place to generate stable grasp hypotheses for execution.

In our work, we also consider grasping as a reactive procedure as illustrated in Fig. 2. We train our grasp stability estimator using a set of simulated stable grasps. This approach is similar to the previous work by Bekiroglu et al. (2011), while we use a different feature which focuses on encoding the distribution of grasp contacts. We exploit a simulation technique to generate a set of stable grasps from which hand adjustments can be synthesized. This approach differentiates us from previous work by Platt et al. (2010) where control rules are explicitly formulated. This is also different from the previous work by Hsiao et al. (2011) and Laaksonen et al. (2012) where hand adjustments are calculated based on analyzing the pose error of the object via tactile feedback. In the hand adjustment procedure of our pipeline, we try to avoid introducing disturbance to the object during a grasping process. This consideration comes from a different perspective compared to the work by Dogar and Srinivasa (2011), which analyzed the push action that intentionally moves the object into the palm to form a stable caging grasp.

3 Grasp stability estimation

We now describe the grasp stability estimation procedure of Fig. 2. When a grasp is established on an object, tactile sensors capture the valid contacts of the current grasp. This information provides us a way to infer the stability of the grasp. This section discusses our learning method that uses tactile sensing data to predict the stability of a grasp.

3.1 Extract tactile contacts

When visual information is not available, tactile feedback from the hand is crucial in object grasping and manipulation tasks. It gives us information about the object's local geometry which is difficult to obtain through vision alone. Tactile sensors play an important role in representing the contacts between the surface of the hand and the object that are touching each other. Tactile feedback from tactile sensors indicates which sensor cells have contacts with the object and which do not. It also provides intensity values that represent the forces sensed at these activated sensors. With the angle values for the joints of the hand, we can also use forward kinematics to determine both the location and the orientation of each sensor cell. So, we can utilize the tactile feedback to approximate the contact locations and orientations.

To represent the location and the orientation of a sensor cell, we want to use a coordinate system that is local to the hand and is consistent across different grasps. We choose the coordinate system attached to the palm as the reference coordinate system. Given a set of n joint angles of a grasp, J = [j_1, j_2, ..., j_n], we write out the location and the orientation of the k-th sensor cell on the i-th link in a homogeneous transformation matrix as follows:


T^{sensor_ik}_{palm}(J) = T^{link_i}_{palm}(J) · T^{sensor_k}_{link_i}    (1)

where T^{link_i}_{palm} denotes the transformation between the link i and the palm; it is determined by the joint values J and the hand kinematics for each grasp. T^{sensor_k}_{link_i} is the transformation between the i-th link and the k-th sensor cell on this link; it is determined by the sensor cell configuration and is a constant for every grasp. In the end, we can rewrite the matrix T^{sensor_ik}_{palm} in the form of c_i = <p ∈ R³, o ∈ S³>, where p specifies the 3-D position and o is a quaternion to specify the orientation.

Using the location and the orientation of each sensor cell that is activated due to a contact, we can determine the configuration of the contacts involved in a grasp. It is worth noting that there is error in representing the actual contact locations and orientations using this method, because each sensor cell has finite dimensions and any contacts residing within the same sensor cell will be indistinguishable.

Fig. 3  Cluster centers of contacts overlayed on a Barrett hand, which is a widely used robotic hand. Spheres are located at the centers of each cluster. The clusters contain 199,835 contacts collected from a training set of 24,640 grasps, which will be discussed in Sect. 5.1.2

3.2 Compute grasp features: a bag-of-words model

Bag-of-words models are widely used in the field of natural language processing (NLP) (Harris 1970). They are also known as bag-of-features models in the field of computer vision. In the field of NLP, bag-of-words models use a dictionary to represent a document without considering the order of the appearance of the words in the document. In the field of computer vision, an image is treated similarly as a document, where the visual features of an image take the role of the words in a document.

By the same analogy, we can transfer this idea to the context of robotic grasping. A grasp contains a set of contacts just as a document consists of a number of words. If we treat a grasp as a document and a contact as a word, it is reasonable to use a bag-of-words model to describe a grasp in a similar way as a bag-of-words model does a document.

3.2.1 A contact dictionary

In order to use the bag-of-words approach, we need to build a contact dictionary which represents the space of the potential contacts. We write it mathematically as a set of contacts Ĉ = [ĉ_1, ĉ_2, ..., ĉ_p]. It is impractical, if not impossible, to collect all the possible contacts that can appear in a grasp. Thus, we need a reasonable discretization of the space within the hand's coordinate system. Considering the kinematics of a robotic hand, we see there are some regions within the hand's local coordinate system that have larger potential than other regions for a contact to appear. Thus, using a set of representative contacts from these regions as a dictionary, we can enjoy both the statistically sound capability of representation of the contact space and a low dimensionality of the dictionary, which determines the dimensionality of the features of a grasp.

In Sect. 5.1.2, we will describe how we learn a set of representative contacts using a clustering algorithm from a set of simulated grasps on commonly seen objects. Figure 3 illustrates an exemplar contact dictionary. The representative contacts in the contact dictionary are the cluster centers overlayed on a Barrett hand, which is a widely used robotic hand. The spread angle between finger 1 and 2 of the Barrett hand is set manually solely for giving a better idea of the hand's work space. The centers of the clusters outline the reaching space of each finger. In Fig. 3a, the contact spaces of finger 1 and 2 of the Barrett hand display nice symmetry. This agrees with the symmetric mechanical design of the two fingers.

3.2.2 Grasp feature vectors

The set of cluster centers models the space of the contacts on the fingers and the palm in a highly discretized dimension. With this set of cluster centers, we use the distribution of the contacts among the cluster centers as feature vectors for grasps.

Given a contact dictionary which has p cluster centers Ĉ = [ĉ_1, ĉ_2, ..., ĉ_p] and a grasp G which consists of q contacts C_G = [c_1, c_2, ..., c_q], we calculate the distribution vector of the contacts of grasp G with respect to Ĉ as follows:

D(C_G, Ĉ) = Σ_{i=1}^{q} H(c_i, Ĉ) · f_{c_i} / S_{c_i}    (2)

where f_{c_i} is the force value sensed at the tactile sensor cell corresponding to contact c_i, S_{c_i} is the total amount of forces sensed from all the sensor cells on the sensor pad which contact c_i is on, and H(c_i, Ĉ) = [h_1, h_2, ..., h_p] is a p-dimensional vector that stores the similarity values between contact c_i and each cluster center in Ĉ. It is computed as:

h_i = exp(−‖c_i − ĉ_i‖² / σ²)    (3)

where c_i and ĉ_i are both 3-dimensional vectors storing the contact locations and σ is a parameter set manually. h_i measures the similarity between two contact locations. For a contact that is far from a cluster center, the corresponding Euclidean distance is large, resulting in a small similarity value. The parameter σ controls the rate at which the similarity values decrease as the distances go up.

Distribution vectors D(C_G, Ĉ) are built from the summation of the distributions of different numbers of contacts. Thus, we normalize the distribution vector by scaling it down using the number of sensor pads which have tactile contacts. Mathematically, for a grasp G, the normalized distribution vector is calculated as:

D̂_i = D_i / |P_G|    (4)

where D_i denotes the i-th element of the vector D(C_G, Ĉ) and |P_G| is the number of sensor pads with tactile contacts. This rescaled distribution vector is the final feature vector we use to describe a grasp in our work.

3.3 Learning grasp stability

We take a supervised learning approach to the problem of grasp stability estimation. Section 5.1 describes how in practice we obtain a training set of grasps, label each training sample according to standard grasp quality metrics, and train an SVM classifier. Theoretically, a training set of grasps contains N instance-label pairs (x_i, y_i), i = 1, ..., N, where x_i ∈ R^p is a grasp feature vector as we discussed in the previous section, y_i ∈ {−1, 1} is a label specifying whether this grasp is stable (1) or unstable (−1), and N is the number of training samples. Given this training data, an SVM can be trained and stored in advance. Using a trained SVM, we can classify a grasp and predict its stability. More details about SVM can be found in previous work by Cortes and Vapnik (1995) and Suykens et al. (2010).

Fig. 4  Hand adjustment with tactile experience, an example that illustrates the progression of hand adjustment. Initially, the grasp (left column) barely touches one side of the bottle and the finger surface does not align well with the surface of the bottle. After two hand adjustments, the final grasp (right column) has opposing contacts and the finger surface aligns with the surface of the bottle

4 Hand adjustment

We now describe the hand adjustment procedure of Fig. 2. The question this section attempts to answer is: Given a grasp, which is classified as unstable by the grasp stability estimator, what hand adjustments should the robot make to achieve a stable grasp? Figure 4 is one example of hand adjustment on a Snapple bottle to illustrate how hand adjustments help achieve stable grasps. The hand starts at a grasp where the hand barely touches one side of the Snapple bottle, thus failing to establish opposing contacts. In addition, the palm is not aligned with the vertical direction of the Snapple bottle, resulting in contact surfaces with very limited area. Using a vision system at a distance, this situation is difficult to detect since the pose offset is subtle. However, with tactile sensing, the relative hand pose difference is captured well. After two steps of hand adjustment, the grasp is adjusted such that the contacts are opposing each other and the contact surface is increased.

A hand adjustment specifies the changes to the current grasp. It consists of changes in hand location, orientation, and the selected degrees of freedom (DOF) to control.¹ We can write it compactly as

Adj = <p, o, s>    (5)

where p ∈ R³ is a 3-D vector specifying the new hand position in the current hand coordinate system, o ∈ S³ is the new hand orientation in the current hand coordinate system represented as a quaternion, and s ∈ R^|S_dof| is a vector storing value changes for the set of selected DOFs, S_dof, which we want to control in a hand adjustment.

¹ Usually, a robot hand contains several DOFs, but we only want to control a subset of these DOFs during a hand adjustment procedure. For example, for the Barrett hand, we would only like to control its spread angle during a hand adjustment procedure. The DOFs of finger flexion will be controlled during hand closing.
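Putting Eqs. (2)-(4) together, the grasp feature of Sect. 3.2 can be sketched as below. This is a minimal NumPy illustration, not the authors' code: the argument layout is a hypothetical convention, σ = 36.45 is the value chosen later in Sect. 5.1.5, and the resulting p-dimensional vector (p = 64 in the experiments) is what the SVM of Sect. 3.3 consumes.

    import numpy as np

    def grasp_feature(contact_pos, contact_force, pad_force, pad_ids, centers, sigma=36.45):
        """Bag-of-words grasp feature following Eqs. (2)-(4).

        contact_pos   : (q, 3) contact locations in the palm frame (via Eq. 1)
        contact_force : (q,)   force at the sensor cell of each contact (f_ci)
        pad_force     : dict   pad id -> total force sensed on that pad (S_ci)
        pad_ids       : (q,)   pad id of each contact
        centers       : (p, 3) contact dictionary, i.e. cluster centers (Sect. 3.2.1)
        """
        D = np.zeros(len(centers))
        for pos, f, pad in zip(contact_pos, contact_force, pad_ids):
            d2 = np.sum((centers - pos) ** 2, axis=1)
            H = np.exp(-d2 / sigma ** 2)        # Eq. (3): similarity to each dictionary entry
            D += H * (f / pad_force[pad])       # Eq. (2): force-weighted soft assignment
        return D / len(set(pad_ids))            # Eq. (4): normalize by number of active pads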

As illustrated in Fig. 2, to achieve reasonable hand adjustments, we first compute a tactile experience database which consists of a set of stable grasps and use these grasps as a reference to synthesize a hand adjustment. The tactile contacts extracted using forward kinematics and tactile sensor readings are used in querying the tactile experience database for stable grasps with similar tactile contacts. If the stable grasps with similar tactile contacts are successfully retrieved, hand adjustment parameters are synthesized and sent to control the hand to make local movements. If there is no similar tactile experience in the database, the local surfaces of the object at contact are reconstructed by moving the hand around to collect tactile contacts on the surface, and stable grasps are planned based on the reconstructed local geometry.

4.1 Tactile experience database

A tactile experience database consists of stable grasps and their corresponding tactile contacts. It provides precomputed knowledge about the potential tactile contacts a stable grasp should contain. A grasp, G, in the tactile experience database can be considered as G = {P, J, T, C, L} where

– P = <p, o>, p ∈ R³, o ∈ S³ specifies the hand pose in the object coordinate system, including the position and orientation of the hand. The orientation is represented using quaternions.
– J = {j_1, j_2, ..., j_N}, j_i ∈ R records the N joint angles of the grasp. As an example, for a Barrett hand, we can choose N = 7 and record the 7 joint values.
– T = {t_1, t_2, ..., t_K}, t_i ∈ R is the K tactile sensor readings. As an example, for a Barrett hand, there are 24 tactile sensors on each fingertip and the palm. Since it has three fingers and one palm, K = 96.
– C = {c_1, c_2, ..., c_M}, c_i = <p_i, o_i>, p_i ∈ R³, o_i ∈ S³ is the set of tactile contacts, indicating the locations, p_i, and the orientations, o_i, of the M activated tactile sensors.
– L = {G_i^l | G_i^l = {Adj, J, T, C}} is the local tactile experience which stores related information for perturbed grasps within the neighborhood of grasp G. Local experience can be used to better locate a grasp within the neighborhood of the corresponding stable grasp based on which the local experience is generated. Adj stores the inverse of the perturbation from the stable grasp to a perturbed grasp. Using this transformation Adj, we can adjust a perturbed grasp to achieve the corresponding stable one.

Section 5.2.2 describes how we compute this database on commonly grasped objects.

4.2 Query for stable grasps with similar tactile contacts

Once the set of tactile contacts are extracted from an actual grasp using forward kinematics, we query the tactile experience database for stable grasps that share similar tactile contacts. To this end, we define a distance function which measures the similarity between two grasps G_1 and G_2. This distance function considers both the tactile contact configuration and the hand posture between two grasps. In our work, we only use the location of a contact in the distance metric. The distance metric can be expressed as

dist(G_1, G_2) = (1/2) · Σ_{m=1}^{N_1} min_n ‖c^1_m − c^2_n‖ + (1/2) · Σ_{m=1}^{N_2} min_n ‖c^2_m − c^1_n‖ + α · ‖j^1_s − j^2_s‖    (6)

where c^i_m is the m-th contact of the grasp i, N_i is the number of contacts of grasp i, and j^i_s is the joint values for the selected DOFs of the grasp i. α is a scaling factor for the Euclidean distance between the joint angles of the selected DOFs. The first two parts of the right side of the equation measure the Euclidean distance between the two sets of contacts in terms of their positions. The third part measures the difference between the joint angles for the selected DOFs. We also apply this function to measure the distance between a local tactile experience entry G_i^l and a grasp G using dist(G_i^l, G), where the values of G_1 in Eq. 6 come from the grasp of G_i^l.

With this distance function, we query the tactile experience database for k nearest neighbors of an actual grasp using its tactile contacts. We also use this distance metric to decide whether there is any similar experience found in the database and whether an actual grasp is close enough to a stable grasp in the database, which correspond to the two decision diamonds in the hand adjustment procedure of Fig. 2. We describe the distance thresholds for both decision diamonds in Sect. 5.2.2.

4.3 Compute hand adjustment from experience

All the k nearest neighbors are stable grasps and they share similar tactile contacts with the actual grasp. In this case, it is reasonable to assume that the local geometry is similar where the contacts are established. Although the actual grasp shares similar tactile contacts with stable grasps, it is not close enough to be a stable one. However, it is possible that this grasp is away from a stable grasp by a small offset transformation. The goal of this step is to synthesize this offset transformation and generate a hand adjustment to optimize the grasp towards a stable one.
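A minimal sketch of the distance function of Eq. (6) and of the nearest-neighbor query built on it is given below. It assumes NumPy and a database stored as a list of dicts with hypothetical keys 'contacts' (contact locations in the palm frame) and 'spread' (selected-DOF values); the default α and k = 5 neighbors are illustrative choices, not the paper's tuned settings.

    import numpy as np

    def grasp_distance(c1, c2, j1, j2, alpha=1.0):
        """Grasp-to-grasp distance of Eq. (6).

        c1, c2 : (N1, 3) and (N2, 3) contact locations of the two grasps
        j1, j2 : joint values of the selected DOFs (e.g. the Barrett spread angle)
        """
        d = np.linalg.norm(c1[:, None, :] - c2[None, :, :], axis=2)   # pairwise distances
        contact_term = 0.5 * d.min(axis=1).sum() + 0.5 * d.min(axis=0).sum()
        joint_term = alpha * np.linalg.norm(np.atleast_1d(j1) - np.atleast_1d(j2))
        return contact_term + joint_term

    def k_nearest_stable_grasps(query, database, k=5):
        """Return the k database entries with the most similar tactile contacts."""
        dists = [grasp_distance(query["contacts"], g["contacts"],
                                query["spread"], g["spread"]) for g in database]
        order = np.argsort(dists)[:k]
        return [database[i] for i in order], [dists[i] for i in order]

The same function, compared against the thresholds t_1 and t_2 of Fig. 2, decides whether any similar experience exists at all and whether the current grasp is already close enough to a stable one.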

Algorithm 1: Compute a hand adjustment
Input: A robotic grasp Gx, and a tactile experience database which contains a set of grasps and their tactile contacts
Output: A hand adjustment Adj = <p, o, s>
1  Locate k nearest neighbors to Gx, List* = {G1, ..., Gk} according to dist(Gi, Gx)
2  reference_dist = [ ]
3  experience_database = [ ]
4  foreach Gi ∈ List* do
5      Obtain local tactile experience of Gi, local_experience
6      Rank local_experience based on dist(Gj^l, Gx) where Gj^l ∈ local_experience
7      experience_database.append(local_experience)
8      reference_dist.append( Σ_{j=1}^{j≤5} dist(Gj^l, Gx) ) where Gj^l ∈ local_experience
9  end
10 min_ind = argmin_ind (reference_dist[ind])
11 experience = experience_database[min_ind]
12 Adj* = WeightedTransformation(Gx, {Gj^l | Gj^l ∈ experience, 1 ≤ j ≤ 5})
13 Return Adj*

Algorithm 2: Compute local tactile experience
Input: A grasp G
Output: A list of perturbed grasps and their corresponding tactile feedback, local_experience
1  local_experience = [ ]
2  Generate a perturbation list p_list
3  Generate s_list to sample changes to selected DOFs
4  foreach s ∈ s_list do
5      foreach <p, o> ∈ p_list do
6          Perturb the hand by <p, o, s> from G
7          Synthesize the local tactile experience, G_p^l, based on the grasp after perturbation; local_experience.append(G_p^l)
8      end
9  end
10 Return local_experience

Algorithm 3: Compute a weighted transformation
Input: A grasp Gx and local tactile experience L = {G1^l, ..., GN^l} ranked by their distance to Gx
Output: A hand adjustment Adj = <p, o, s>
1  M = 0
2  for i = 1 to N do
3      // StableGrasp(Gi^l) is the stable grasp that Gi^l is perturbed from
4      if dist(Gi^l, Gx) ≤ dist(StableGrasp(Gi^l), Gx) then
5          M = i
6      end
7  end
8  if M = 0 then
9      Return <p = [0, 0, 0], o = [1, 0, 0, 0], s = 0>
10 end
11 dist_sum = Σ_{k=1}^{k≤M} 1 / dist(Gk^l, Gx)
12 <p, o> = <[0, 0, 0], [0, 0, 0, 0]>; s = 0
13 foreach Gi^l ∈ L = {G1^l, ..., GM^l} do
14     weight = 1 / dist(Gi^l, Gx); <p, o> += (weight / dist_sum) · Adj(Gi^l)
15     s += (weight / dist_sum) · s(Gi^l)
16 end
17 Normalize o
18 Return Adj = <p, o, s>

Algorithm 1 outlines the overall procedure to search for a hand adjustment command in Fig. 2 using tactile experience. The idea in this algorithm is to use the tactile experience to locate the actual grasp around each of the k nearest neighbors (stable grasps) and synthesize a hand adjustment based on the offset transformations from them. The first step of this algorithm is to look into the tactile experience database and locate the top k stable grasps that share similar tactile feedback (Line 1). Since the actual grasp shares similar tactile feedback as these k stable grasps, the actual grasp is probable to be within a small neighborhood of some of these stable grasps. From Line 4 to 9, we look into the neighborhood of each of the k stable grasps and try to evaluate how well the actual grasp can be located within the neighborhood of each stable grasp using the distance function as in Eq. 6. The refined search within the neighborhood of each stable grasp provides detailed relative information of the actual grasp with respect to the stable grasp. In Line 10, we decide the stable grasp within whose neighborhood we can best locate the actual grasp. Then the weighted mean of the offset transformations of the perturbed grasps within this neighborhood is calculated in Line 12 and is returned as the hand adjustment.

Tactile experience used in Algorithm 1 can be precomputed using a predefined list of perturbations. This process is described in Algorithm 2. The idea here is to sample different perturbed grasps around the stable one and record their tactile feedback. The predefined list of perturbations considers the potential pose error between the hand and the object. As a practical implementation, we first define an uncertainty space and then uniformly sample this space to generate a list of perturbations. In our work, we sample wrist orientation, wrist position and selected DOFs to generate these perturbed grasping poses. This sampling process provides the required data in Line 2 and 3. From Line 4 to 9, we perturb the hand according to each of the perturbations (Line 6) and record the tactile feedback and other related information (Line 7). After we have recorded the information of all the perturbed grasps, the local tactile experience is generated and returned.

Algorithm 3 describes how we compute a weighted transformation based on a list of perturbed grasps. The reason that we consider a weighted transformation is that every perturbed grasp carries useful information to synthesize a reasonable offset transformation and we want to take them all into account. In this algorithm, Line 2–10 first decides whether there exists a stable grasp within whose neighborhood we can better locate the actual grasp.
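The core of Algorithm 3 is the weighted average of Eq. (7), discussed next. The sketch below is one way to realize it, assuming each local-experience entry is a dict with hypothetical fields 'adj_p', 'adj_o', 'adj_s' for the stored adjustment Adj(G_i^l); the normalized weighted quaternion sum (with a hemisphere check) is an approximation that is reasonable only for the small offsets involved here, not a general rotation average.

    import numpy as np

    def weighted_adjustment(neighbors, dists):
        """Weighted mean of offset transformations (Algorithm 3, Lines 11-18).

        neighbors : local-experience entries, already ranked by distance
        dists     : dist(G_i^l, G_x) for each entry, same order
        """
        weights = 1.0 / np.asarray(dists, dtype=float)
        weights /= weights.sum()                     # weight_i / sum_k weight_k, Eq. (7)
        p, o, s = np.zeros(3), np.zeros(4), 0.0
        for w, n in zip(weights, neighbors):
            p += w * np.asarray(n["adj_p"])
            q = np.asarray(n["adj_o"], dtype=float)
            if o @ q < 0:                            # keep quaternions in one hemisphere
                q = -q                               # before summing them
            o += w * q
            s += w * np.asarray(n["adj_s"])
        o /= np.linalg.norm(o)                       # "Normalize o" (Line 17)
        return p, o, s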

If the actual grasp cannot be better located within the neighborhood of any of the k nearest neighbors, no hand adjustment can be synthesized at this step and an identity transformation will be returned (Line 8–10). Otherwise, the final weighted transformation is calculated in Line 11–17 as the weighted mean of the offset transformations from perturbed grasps as

Σ_i (weight_i / Σ_k weight_k) · Adj(G_i^l)    (7)

where the weight, weight_i = 1/dist(G_i^l, G_x), is the inverse of the distance between a local tactile experience entry G_i^l and the actual grasp G_x.

4.4 Explore local geometry

When the actual grasp is far away from any stable grasps in the tactile experience database, there will be no similar tactile experience found in the database. In this situation, a local geometry exploration will take place to reconstruct the local geometry around each of the contacts between the hand and the object. Sample points on the surface of the object are extracted from activated tactile sensors while the hand is moving within the neighborhood of the initial grasping pose. We will describe the exploratory path we used in Sect. 5.2.4. During the exploration, tactile contacts from tactile sensors on different links are treated as in different groups. As an example, for a Barrett hand, which has four links with sensors, there are up to four local geometries to reconstruct from collected point clouds. The coordinate system for local geometry reconstruction is established at the center of each local point cloud. The z-dimension for each local point cloud is aligned with the estimated normal of the surface and the other two axes are aligned with the other two principal directions of the point cloud. It is assumed that a local geometry is smooth and can be represented using a quadratic function as follows

z = α_20 x² + α_11 xy + α_02 y² + α_10 x + α_01 y + α_00    (8)

Fitting the point cloud to the quadratic function above is an optimization process. We use levmar, an open source implementation of Levenberg-Marquardt nonlinear least squares algorithms in C/C++, to find the optimal parameters of the function (Lourakis 2004). With a set of optimal parameters, we can approximate the local geometry and synthesize a mesh for each contact.

Fig. 5  Two Eigengrasps used to specify the hand posture of a Barrett hand, e1 and e2. e1 controls the spread angle between two fingers. e2 controls the finger flexion of all the three fingers

4.5 Planning stable grasps on local geometry

With the local geometry being built as a mesh model, the Eigengrasp planner (Ciocarlie and Allen 2009) can be used to search around the current hand pose and plan stable grasps on this local geometry. The Eigengrasp planner is a stochastic grasp planner that searches for stable grasps in a low-dimensional hand posture space spanned by eigenvectors called Eigengrasps. As an example, for a Barrett hand, which was used in our experiments, it has seven joints and four DOF. Two Eigengrasps E = <e1, e2> were used to describe the hand posture. One controls the spread angle and the other controls the finger flexion as illustrated in Fig. 5. The wrist pose is sampled locally around the initial grasping pose using a complete set of six parameters: P = <roll, pitch, yaw, x, y, z>. These six parameters generate a hand offset transformation from the current hand pose. Thus, the search space for the Eigengrasp planner in our case is an eight-dimensional space S = {E, P}. The stability of a grasp is evaluated using epsilon quality, ε, which measures the minimum magnitude of external disturbance to break the grasp (Ferrari and Canny 1992).

After the planning process is complete, a stable grasping pose is generated. Since the planning is done in the current hand coordinate system, a hand adjustment command in Fig. 2 can then be synthesized.

4.6 Apply hand adjustment

Once a hand adjustment command Adj* = <p, o, s> is found, we need to apply this adjustment to the hand: change the hand pose and reshape the joints. We decompose this process into three steps.

First, the hand opens its fingers so that it lets the object go and backs up to have some safe margin between the palm and the object before the following movement.

Second, the selected DOFs change to the values specified by s. The hand moves to a location 5 cm (subject to change for different hands) away from the goal position with the goal orientation o.

Third, the hand moves in guarded mode towards the goal position. The hand will either reach the goal position or stop if it contacts anything before it reaches the goal.
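The three steps can be sketched as follows. Here hand and arm stand for hypothetical driver objects with the listed methods, and the 5 cm stand-off and the approach direction along the hand z-axis are assumptions taken from the text rather than fixed parts of the method.

    import numpy as np

    def apply_adjustment(hand, arm, p, o, s, standoff=0.05,
                         approach=np.array([0.0, 0.0, 1.0])):
        """Apply Adj* = <p, o, s> with the three-step procedure of Sect. 4.6."""
        # Step 1: release the object and back up for a safety margin.
        hand.open_fingers()
        arm.retreat(standoff)
        # Step 2: reshape the selected DOFs (e.g. the Barrett spread angle) and
        # move to a pose short of the goal, already at the goal orientation o.
        hand.set_dofs(s)
        arm.move_to(p - standoff * approach, o)
        # Step 3: guarded motion towards the goal position; the arm stops early
        # if it contacts anything, then the fingers close to re-grasp.
        arm.guarded_move_to(p, o)
        hand.close_fingers()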

The reason we decompose the movement into these three parts is that the adjustment Adj may end up with potential collision. So we want to first go to a safe place that is away from the goal location with the goal orientation and then approach the goal position using guarded motions.

5 Experimental results

We now describe the experiments of our work. We performed two sets of experiments: one is for the grasp stability estimation procedure of Fig. 2 and the other is for the entire pipeline of Fig. 2. In each set of experiments, we did experiments in both simulation and physical settings. The robotic hand we used was a Barrett hand, which is a four-DOF hand with seven joints. Each finger has one DOF but two coupled joints. This robotic hand is equipped with four tactile pads, one for the distal link of each finger and one for the palm, resulting in a 96-sensor system. In simulation experiments, we used a simulated version of this hand inside the GraspIt! simulator (Miller and Allen 2004).

5.1 Experimental results: grasp stability estimation

In this section, we show the experiments we did to test the grasp stability estimation procedure of Fig. 2. We first describe how we generated a set of training and test grasps and trained an SVM classifier. We then show experiments on estimating grasp stability of simulated grasps. In addition, we show experimental results on using the stability classifier in physical grasping scenarios.

5.1.1 Grasp dataset

Our grasp data is from the Columbia Grasp Database (CGDB) (Goldfeder et al. 2009a). This database contains hundreds of thousands of simulated grasps constructed from several robotic hands and thousands of object models. Since we used the Barrett hand in our experiments, we only chose grasps with this hand from the database. Object models used in the CGDB are from the Princeton Shape Benchmark (PSB) (Shilane et al. 2004). The PSB provides a repository of 3D models which span many objects that we encounter everyday. One fact about the PSB model set is that the models were not originally selected with an eye towards robotic grasping, and so some of the models are not obvious choices for grasping experiments. For example, the model set contains insects, which are often outside our everyday grasping range. So, instead of using the full set of grasps with the Barrett hand, we chose to select grasps with the Barrett hand computed on a smaller set of objects that are more frequently grasped and manipulated by us in our everyday life. In total, we collected about 36,960 robotic grasps from 704 objects across 19 different classes. Table 1 shows the object classes and the number of objects within each of these classes.

Table 1  Object classes and number of object models in each class
Skateboard 20      Book 16        Bottle 48        Butcher knife 16
Wine glass 36      Vase 88        Hammer 16        Handgun 80
Ice cream cone 48  Knife 28       Lamp 56          Microscope 20
Phone handle 16    Rifle 76       Screw driver 20  Wrench 16
Gear 36            Helmet 40      Mug 28           –

For each grasp, the tactile feedback was simulated in the GraspIt! simulator. The output of the tactile sensors around each contact is characterized by the forces applied at each sensor cell. So, a contact model that approximates the contact region and the pressure distribution is necessary for simulating a tactile feedback. Pezzementi et al. (2010) used a point spread function model to simulate the response of a tactile sensor system. In our work, we built our tactile simulation system based on a soft finger contact model by Ciocarlie et al. (2007b). This model is briefly summarized in the Appendix. Interested readers can refer to the original papers for more details (Ciocarlie et al. 2007b; Dang et al. 2011).

Since each sensor pad on a Barrett hand is a flat plane, a stable grasp must have at least two sensor pads in contact with the object being grasped, resulting in at least two sensor pads with non-zero responses. So, we rejected grasps which have less than two sensor pads with non-zero responses. We then split the grasp dataset into two subsets, 2/3 of the grasps for training and 1/3 of the grasps for testing. These two subsets are disjoint and grasps in them are uniformly distributed across all the objects.

5.1.2 Building a contact dictionary

Grasps in the training dataset span a wide range of grasping poses and object shapes. It is reasonable to believe that the contacts from these grasps represent the space of potential contacts of a grasp statistically well. So, we used these grasps to build our contact dictionary.
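A minimal sketch of the dictionary construction described in the remainder of this subsection, using scikit-learn's KMeans as a stand-in for any k-means implementation and k = 64 as chosen below:

    import numpy as np
    from sklearn.cluster import KMeans

    def build_contact_dictionary(contact_locations, k=64, seed=0):
        """Cluster training-grasp contact locations into a k-entry dictionary.

        contact_locations : (n, 3) array of contact positions in the palm frame
                            (orientations are ignored, as discussed below).
        Returns the (k, 3) cluster centers used as the contact dictionary.
        """
        km = KMeans(n_clusters=k, n_init=10, random_state=seed)
        km.fit(np.asarray(contact_locations))
        return km.cluster_centers_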

We first applied a K-means clustering algorithm (MacQueen 1967) to the contacts from this set of grasps to obtain a list of clusters. We then used the cluster centers as a set of representative contacts to form our contact dictionary. Considering the fact that the reaching space for each finger of a Barrett hand intersects with each other very rarely, it is safe to think each contact location is associated with a contact orientation and vice versa. Based on this consideration, we only used the location part of a contact in the clustering process. Thus, the space we were clustering was a regular 3-dimensional Cartesian space. The distance function we chose in the K-means clustering algorithm measures the Euclidean distance between two contact locations. In the K-means clustering algorithm, k directly controls the dimension of a feature vector. A large k will result in a dense sampling of the contact space while a relatively small k will yield a sparse sampling of the space. Experimentally, we chose k = 64 to create 64 different clusters. So, a feature vector of a grasp is 64-dimensional. Figure 3 shows these 64 cluster centers within a Barrett hand's coordinate system.

5.1.3 Grasp quality measurements

Different measurements could be used to evaluate the quality of a grasp, such as stability, feasibility, and dexterity (Suárez et al. 2006). In our work, the grasp quality measurements generate the ground truth for the labels of training grasp samples. We chose two quality measurements related to the stability of a grasp: the epsilon quality and the volume quality (Miller and Allen 1999), which are based on the grasp wrench space (GWS) generated by the grasp. These two grasp quality measurements provide analytical numbers to distinguish stable grasps from unstable ones. In our experiments, we modeled the material of the surface of the hand as rubber and the material of the object as wood and set the friction coefficient between the finger and the object as 1.0.

A GWS is a 6-dimensional space which contains a set of possible resultant wrenches produced by the fingers on the object. A wrench is a 6-dimensional vector [f_{1×3}, τ_{1×3}] that describes the combination of the possible force and torque. In our work, a GWS is generated by assuming the sum of the normal forces applied at each contact is 1. This assumption approximates a limited power source for the hand (Suárez et al. 2006). In geometry, the volume quality measures the volume of the potential wrench space and the epsilon quality measures the radius of the largest ball centered at the origin of a GWS and fully contained in the GWS.

The epsilon quality, ε, refers to the minimum relative magnitude of the outside disturbances that could destroy the grasp. So, when we take into account the limit of the maximum forces a robotic hand can apply, a grasp would be less stable if it has a smaller epsilon quality. This is because the smaller epsilon quality indicates that a relatively smaller outside disturbance can break this grasp even when the robotic hand has already applied the maximum forces it supports. Another consideration is from the perspective of the environment uncertainty. Due to the uncertainty of the environment, objects may move away from their original position during a grasp execution. A fragile grasp may fail to fully grasp the object in this situation while a stable one may display its robustness and still succeed in grasping the object in the perturbation. We have experimentally found a certain correlation between this robustness and the epsilon quality: grasps with epsilon quality ε > 0.07 tend to be more robust in uncertain object perturbations (Weisz and Allen 2012).

The volume quality, v, measures the volume of the potential wrench space generated by the grasp given unit contact normal forces. A grasp with a larger potential wrench space would require less forces at each contact than grasps with smaller potential wrench spaces. This indicates that the larger the volume quality is, the stronger the grasp could be.

5.1.4 Labeling grasps in the dataset

Given all the grasps in the grasp dataset, we plot their epsilon qualities and volume qualities in Fig. 6. We can observe that their epsilon qualities and volume qualities are not well correlated.

Fig. 6  Epsilon and volume qualities of grasps from a subset of objects in the CGDB. This figure shows that the volume quality and the epsilon quality of a grasp do not correlate with each other very well. This indicates that by combining these two grasp metrics, we can get a more comprehensive evaluation criterion

– Grasps with high epsilon quality do not necessarily have good volume qualities.
– Grasps with good volume qualities can still have low epsilon qualities.

In addition, Li and Sastry (1988) pointed out that the epsilon quality measure is not invariant to the choice of torque origin, so we used the volume quality as an invariant average case quality measure for the grasp.

Either of these measures has its own benefits. It makes sense that we combine them together and form a new evaluation criterion. Based on our experimental results, we used thresholds t_ε = 0.07 and t_v = 0.1 as the boundaries for epsilon and volume qualities to label a grasp grasp_i as a stable (1) or an unstable (−1) grasp as follows,

label(grasp_i) = −1 if ε_i ≤ t_ε or v_i ≤ t_v
label(grasp_i) = 1 if ε_i > t_ε and v_i > t_v    (9)

where ε_i denotes the epsilon quality of grasp grasp_i and v_i is the volume quality of grasp grasp_i.

5.1.5 Grasp stability estimation on simulated grasps

To compute grasp features, we wanted to use a reasonable σ in Eq. 3 which would maximize the range of h_i values in [0, 1] and maximally distinguish different contacts. In our experiments, we analyzed the range of the contacts from all the grasps of a Barrett hand and experimentally set σ as 36.45. We used libsvm (Chang and Lin 2001) to train an SVM based on the training data which contains about 24,640 grasps and tested the SVM on the remaining 12,320 grasps. Table 2 summarizes the number of stable grasps and unstable grasps in both training and test datasets. Figure 7 shows the classification result in more detail. The overall accuracy across all the classes of objects is 81.0 %. Considering the context of a physical grasping process, the percentage of the false positive predictions is a crucial evaluation criterion. This is because a false positive prediction will guide the robot to apply an unstable grasp and use it as if it is a stable one. In most working conditions, this action is very risky and even unacceptable. In Figure 7, we show the percentages of error and false positive predictions for each object class. The percentages of false positive predictions illustrate the probabilities of the situation when an unstable grasp is incorrectly classified as a stable one during a grasping task. For most of the object classes, the probabilities of false positive predictions are relatively low. The overall false positive prediction is 8.6 %.

Table 2  Learning performance in simulation
Dataset    Grasps    Stable    Unstable    Accuracy
Training   24,640    11,849    12,791      –
Test       12,320    5,914     6,406       81.0 %

Fig. 7  Accuracy analysis of estimating grasp stability on simulated grasps. Horizontal axis is the group names for each object class. Vertical axis is the percentage (%) of the overall false predictions (dark green bars), the false positive predictions (light green bars), and the false negative predictions (yellow bars) per object class. As shown in the graph, the percentages for false positive predictions per object class (light green bars) are relatively low, which is necessary for blind grasping (Color figure online)

5.1.6 Using the grasp stability estimator in physical grasping

To evaluate the performance of the classifier trained with simulated data in physical grasping scenarios, we did some experiments with a Barrett hand on six everyday objects: a pencil cup, a mug, a candle box, a paper wipe box, a canteen, and a decorative rock, as shown in Fig. 8a. Only the mug belongs to an object class that is included in the simulated training data. The pencil cup, the candle box, and the paper wipe box are objects to some extent similar to the bottle class in the training set. The canteen and the decorative rock are two objects that are very different from other objects in the training set.

In an experiment, we placed an object at a predefined location on a table that was in front of the robot. The robot approached the predefined location with different spread angles from a direction chosen out of a list of predefined directions. When the robot contacted the object, it stopped approaching and closed the fingers. Tactile data and joint angles were then collected for grasp stability estimation using the classifier trained in Sect. 5.1.5. The arm lifted up the object once a stable grasp was perceived. A trial was considered to be a failure when the robot was not able to grasp the object stably, i.e., an object fell out of the hand when the robot tried to lift the object up.

110 trials were performed on six different objects, including a canteen of different weight and surface material, and a mug filled with different weights. Figure 8b–d show several snapshots of successful lift-ups. In Table 3, we summarize the experiment results. Overall, the success rate is 84.6 % across all the objects in our experiments. The canteen without its fabric cover and the decorative rock are two objects that are very slippery and difficult to grasp. Compared to other objects, there are far fewer stable grasps on the canteen without its fabric cover and on the decorative rock. The decorative rock is convex and a very large proportion of its surface faces upwards to some extent, so frictional force becomes the main source of force to overcome gravity during grasping, making the decorative rock an even more difficult target. In addition, another source of difficulty comes from the geometrical complexity of the rock: the surface of the rock is rough but the tactile sensors have limited sensing resolution, so it is difficult to distinguish grasps at nearby locations that have different surface normals at the contacts but generate the same tactile feedback.

Fig. 8 Physical experiments on using a grasp stability estimator for grasping. a The six objects in the experiments: a pencil cup, a mug, a candle box, a paper wipe box, a canteen, and a decorative rock. b–d Three stable grasps on three different objects in the experiment

Table 3 Experiment results on six objects

Object       Mass (kg)   # of exp.   Success   Rate (%)
Mug          0.43–0.93   30          28        93
Paper box    0.17        10           9        90
Pencil cup   0.09        10           9        90
Candle box   0.11        10           9        90
Rock         0.28        10           6        60
Canteen      0.5–0.75    40          32        80
Total        0.09–0.93   110         93        84.6

5.2 Experimental results: stable grasping with post-execution grasp adjustment

This section describes the experiments we did for the entire pipeline illustrated in Fig. 2. We explain how we built a tactile experience database and how the pipeline improves grasping performance under assumed pose uncertainty.

5.2.1 Experimental setup

In our experiment, the selected DOFs s in a hand adjustment Adj = <p, o, s> controlled the spread angle of the Barrett hand. We chose five commonly seen objects as our test objects, shown in Figs. 9 and 10: a Snapple bottle, a box, a detergent bottle, a cup, and a decorative rock. We assumed a table-top grasping scenario where the objects rest on a flat surface. In this situation, the pose error can be parameterized by <x, y, θ> as illustrated in Fig. 11. In our experiments, we intentionally generated a list of pose errors with an approximately uniform distribution over x ∈ [−30, 30] in millimeters, y ∈ [−30, 30] in millimeters, and θ ∈ [−20, 20] in degrees. By injecting different pose errors into a stable grasping pose, we could perturb the stable grasp from its ideal grasping pose and generate grasping scenarios with different pose uncertainty.

Fig. 9 Object models used in the simulation experiments: a Snapple bottle, a box, a detergent bottle, a cup, and a decorative rock

Fig. 10 Objects used in the physical experiments: a Snapple bottle, a box, a detergent bottle, a cup, and a decorative rock. The transparent part of the Snapple bottle was painted blue to facilitate the object recognition process using a vision system
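The pose-error injection described in Sect. 5.2.1 can be summarized by the following minimal sketch. The ranges are taken from the text above; the function names and the use of Python's random module are our own illustrative choices, not part of the original implementation.

import random

# Pose-error ranges from Sect. 5.2.1: x and y in millimeters, theta in degrees.
X_RANGE = (-30.0, 30.0)
Y_RANGE = (-30.0, 30.0)
THETA_RANGE = (-20.0, 20.0)

def sample_pose_error(rng=random):
    """Draw one table-top pose error <x, y, theta>."""
    return (rng.uniform(*X_RANGE),
            rng.uniform(*Y_RANGE),
            rng.uniform(*THETA_RANGE))

# 110 pose errors, approximately uniform, as used for the perturbed test grasps.
pose_errors = [sample_pose_error() for _ in range(110)]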


Fig. 11 Pose error model for table-top grasping, parameterized by <x, y, θ>. Assuming a table-top grasping scenario, this model considers translational error (x, y) within the x–y supporting plane and rotational error (θ) around the normal direction of the supporting plane

5.2.2 Building a tactile experience database

The tactile experience database stores stable grasps as tactile experience. To build our tactile experience database, we defined a stable grasp for each object model in Fig. 9 using the GraspIt! simulator. For each of the stable grasps stored in the tactile experience database, we also precomputed the tactile feedback at grasping poses perturbed from each of the stable grasps due to pose error. To do this, we first put the hand at the ideal grasping pose. Then, we uniformly sampled the space of pose uncertainty S = {<x, y, θ> | x ∈ [−30, 30], y ∈ [−30, 30], θ ∈ [−20, 20]} and used each of the sampled pose errors <x, y, θ> to perturb the object and generate the tactile feedback at each of the perturbed grasping poses. In our work, the sampling is 5 mm in dimension x and dimension y and 5 degrees in dimension θ. For the spread angle, we sampled 5 degrees above and below the ideal spread angle for the grasp. Thus, this precomputation generated 4572 sampled perturbed grasping poses for each stable grasp. This precomputation took place off-line and the database was stored for later use. Figure 12 gives two examples of stable grasps on two different objects and four exemplar local tactile experience records generated from the corresponding pose errors.

Fig. 12 Examples of stable grasps and precomputed local tactile experience. Each stable grasp in the tactile experience database is stored with a complete set of parameters that can be used to reconstruct the grasp, including the joint values and hand pose with respect to the object. The local tactile experience for nearby perturbed grasps is precomputed based on a list of precomputed pose errors for the object, as described in Sect. 5.2.2
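The precomputation step of Sect. 5.2.2 can be summarized by the sketch below. The grid spacing and ranges come from the text; simulate_tactile_feedback stands in for the GraspIt!-based simulation and is a hypothetical placeholder, as is the chosen interpretation of the spread-angle sampling.

# Enumerate the perturbed grasping poses used to build the tactile experience
# database (Sect. 5.2.2): 5 mm steps in x and y, 5 degree steps in theta, and
# the ideal spread angle plus/minus 5 degrees (one plausible reading).

def frange(start, stop, step):
    values, v = [], start
    while v <= stop + 1e-9:
        values.append(round(v, 6))
        v += step
    return values

def build_tactile_experience(stable_grasp, simulate_tactile_feedback):
    """Return a list of (pose_error, spread_offset, tactile, joints) records."""
    records = []
    for x in frange(-30.0, 30.0, 5.0):              # mm
        for y in frange(-30.0, 30.0, 5.0):          # mm
            for theta in frange(-20.0, 20.0, 5.0):  # degrees
                for spread in (-5.0, 0.0, 5.0):     # degrees around the ideal spread
                    tactile, joints = simulate_tactile_feedback(
                        stable_grasp, (x, y, theta), spread)
                    records.append(((x, y, theta), spread, tactile, joints))
    return records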


Fig. 13 Distance to nearest stable grasps before each hand adjustment is applied, for (a) snapple, (b) box, (c) detergent, (d) cup, and (e) rock. The horizontal axis is the number of hand adjustments. The vertical axis is the distance to the nearest stable grasp. As hand adjustment is applied, the distance of the actual grasp to the nearest stable grasp in the tactile experience database decreases

Fig. 14 Percentage of stable grasps before each hand adjustment is applied, averaged over 110 grasping trials, for (a) snapple, (b) box, (c) detergent, (d) cup, and (e) rock. The horizontal axis is the number of hand adjustments. The vertical axis is the percentage (from 0 to 1) of stable grasps whose epsilon quality ε > 0.1. The first bar in each graph corresponds to the percentage of stable grasps after the initial grasp is executed. These graphs show that as hand adjustment is applied, the main trend for the percentage of stable grasps is increasing, which indicates that applying hand adjustment improves grasping performance
These stable grasps, along with the tactile feedback from the perturbed grasping poses, were stored to form our tactile experience database.

In terms of the parameter in the distance function, Eq. 6, we empirically chose the value α = 100 so that a 0.01 radian difference in joint angles is equivalent to 1 mm in Euclidean distance. We also experimentally chose two thresholds for the decision diamonds in the hand adjustment procedure of Fig. 2. If the distance metric of an actual grasp to one of the nearest neighbors in the database is less than t1 = 10.0, we decide the grasp is close enough to experience. If the distance metric of an actual grasp to any one of the nearest neighbors is greater than t2 = 30.0, we decide the actual grasp is too far from experience and no similar experience is found.

5.2.3 Grasping with hand adjustment in simulation

In the simulation test, our goal is to see how our hand adjustment procedure could help improve the grasping performance starting at a grasp that is perturbed from the ideal grasping pose due to pose uncertainty. Our experiments were conducted on the object models shown in Fig. 9. To simulate pose error, we randomly sampled the space of pose uncertainty S = {<x, y, θ> | x ∈ [−30, 30], y ∈ [−30, 30], θ ∈ [−20, 20]} and generated 110 pose errors. We injected each pose error into each stable grasp in the tactile experience database and created 110 perturbed grasps on each object as the initial grasping poses for the test. The evaluation procedure then started by closing the fingers at an initial grasping pose, followed by five consecutive hand adjustments to validate how hand adjustments could influence the grasp stability. We also recorded and analyzed the closest distance to a stable grasp in the database as an indicator to show how hand adjustments could help reduce the tactile contact difference to stable grasps in the tactile experience database.

Figure 13 shows the mean distance to the closest stable grasp in the tactile experience database. As hand adjustments were applied, the grasps were optimized in terms of their distances to the stable grasps in the tactile experience database. Figure 14 shows the percentage of stable grasps as five consecutive hand adjustments were made. The main trend is that as the hand adjustments are applied, the percentage of stable grasps increases, which indicates that hand adjustment based on tactile sensing data does improve the robustness of the grasping procedure under pose uncertainty.

5.2.4 Grasping with hand adjustment on a real robot

In the physical experiments, the Barrett hand was attached to a six-DOF Staubli robotic arm. Objects used in our test are shown in Fig. 10. The tactile experience database we used in the physical experiments was the same as in the simulation experiments.
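Before turning to the physical trials, it may help to see the tactile-experience lookup of Sects. 5.2.2–5.2.3 in one place. The following is a minimal sketch; α, t1, and t2 are the values quoted above, while the feature layout, the simplified distance, and the branch labels are our own illustrative assumptions rather than the paper's exact Eq. 6 and pipeline.

import math

ALPHA = 100.0    # weights joint-angle differences: 0.01 rad ~ 1 mm (Sect. 5.2.2)
T_CLOSE = 10.0   # t1: the executed grasp is close enough to stored experience
T_FAR = 30.0     # t2: too far from any stored experience

def grasp_distance(query, record):
    """Simplified stand-in for the paper's distance function (Eq. 6).

    Both arguments are dicts with 'contacts' (a flat list of contact
    coordinates in mm) and 'joints' (a list of joint angles in radians)."""
    d_contacts = math.sqrt(sum((a - b) ** 2 for a, b in
                               zip(query['contacts'], record['contacts'])))
    d_joints = math.sqrt(sum((a - b) ** 2 for a, b in
                             zip(query['joints'], record['joints'])))
    return d_contacts + ALPHA * d_joints

def nearest_experience(query, database):
    """Return (distance, record) of the closest stored stable-grasp experience."""
    record = min(database, key=lambda r: grasp_distance(query, r))
    return grasp_distance(query, record), record

def interpret_distance(d):
    # Only the two published thresholds are reproduced here; the full branching
    # (when to adjust, re-grasp, or fall back to geometry reconstruction)
    # follows the decision diamonds of Fig. 2.
    if d < T_CLOSE:
        return 'close enough to experience'
    if d > T_FAR:
        return 'no similar experience found'
    return 'similar experience found'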


Fig. 15 An example of hand adjustment with tactile experience. a The initial unstable grasp where the palm did not touch the detergent bottle. b–e The grasp status after each hand adjustment was applied. f The robot hand lifts the object up from the table

A Kinect sensor first acquired a 3D point cloud of the scene. The recognition method proposed by Papazov and Burschka (2010) was used in our perception system, which uses partial geometry of an object to recover its full 6D pose. Once the pose of an object was estimated, a predefined stable grasp was retrieved and was perturbed by a pose error that was generated in the same way as in the simulation test. Finally, the OpenRave planner (Diankov and Kuffner 2008) generated a collision-free trajectory to the grasping pose, and the hand moved to the target pose and executed the grasp. After this initial grasp was established, the hand adjustment procedure proceeded to improve this grasp. Strictly speaking, the perception system also introduced error into the system, but since the Kinect was relatively close to the object, the pose error from the perception system was not as significant compared to the injected pose error. In this sense, we considered the injected pose error to be the major error in the system. For each of the objects, we ran 10 grasping trials, each with a different pose error.

When the robot exited the pipeline of Fig. 2 via the stable grasp achieved state, it would lift up the object. After the lift-up action, a "shake test" took place by rotating the last joint of the robotic arm within a range of ±60 degrees. The scoring criteria for a grasping test were as follows:

– If the object falls on the table after lift up or the shake test, score 0;
– If the object moves in hand during the motion of finger close, lift up, or the shake test but stays in hand in the end, score 0.5;
– If the object stays stable in hand throughout the entire grasping process, score 1.

The intuition behind this set of criteria was that the object should remain stable during finger close, lift up, and the shake test to maximally preserve the static status of the object.

When similar tactile experience is found in the tactile experience database, hand adjustments are computed based on tactile experience. Figure 15 gives an example of hand adjustment on a detergent bottle with similar tactile experience. Initially, the palm did not touch the detergent bottle and only three fingertips touched the surface of the bottle with small contact areas, making the grasp fragile. After four consecutive hand adjustments, the grasp was optimized so that the palm touched the detergent bottle and formed a strong power grasp.

When no similar tactile experience is found in the tactile experience database, the surface at each contact is reconstructed for grasp planning. In our work, the robotic hand moved along the x and z directions of its palm as illustrated in Fig. 16 and collected contact points when the fingers were closed at each waypoint. The range of the motion is 50 mm along these two directions with an interval of 5 mm. This process is relatively time-consuming compared to calculating a hand adjustment directly from tactile experience. In our experiments, only 4 out of the 50 trials ended up with local geometry reconstruction. Figure 17 shows an example of a Barrett hand executing a grasp after it reconstructed the local geometry of a Snapple bottle and planned two stable grasps using the reconstructed local geometry.
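As a concrete illustration of the exploratory probing used for local geometry reconstruction (see also Fig. 16), the sketch below spaces waypoints 5 mm apart over a 50 mm range along the palm's x and z directions and closes the fingers at each waypoint to collect contact points. The robot interface calls and the choice of sweep origin are hypothetical assumptions, not the authors' code.

# Exploration pattern from Sect. 5.2.4 / Fig. 16: 50 mm range, 5 mm interval,
# along the x and z directions of the palm frame. Whether the sweep starts at
# the initial palm pose or is centered on it is an assumption made here.

def exploration_offsets(extent_mm=50.0, step_mm=5.0):
    """Yield (dx, dz) palm-frame offsets for the exploratory grasps."""
    steps = int(extent_mm / step_mm) + 1
    for axis in ('x', 'z'):
        for i in range(steps):
            offset = i * step_mm
            yield (offset, 0.0) if axis == 'x' else (0.0, offset)

def reconstruct_local_geometry(robot):
    """Collect contact points by closing the fingers at each exploration waypoint."""
    contact_points = []
    for dx, dz in exploration_offsets():
        robot.move_palm_to(dx, dz)                         # hypothetical interface
        contact_points.extend(robot.close_fingers_and_get_contacts())
        robot.open_fingers()
    return contact_points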

Table 4 Details of grasping with a tactile experience database

Object      # of grasps   Avg. # adj.   Lift-up^a   Score
Snapple     10            5.5           10          1.0
Box         10            3.3           10          0.95
Detergent   10            2.1            8          0.75
Cup         10            2.4           10          0.95
Rock        10            3             10          0.95

^a The robot hand successfully lifts up the object. The object could have moved during the grasping and lift-up procedure

Fig. 16 Exploratory directions of a Barrett hand for local geometry reconstruction. In the exploration process, the robotic hand moves along the x and z directions and collects contact points on the surface of the object at each waypoint by closing its fingers. The range of the motion is 50 mm along the x and z directions with an interval of 5 mm

Fig. 18 Grasping scores with and without the post-execution grasp adjustment procedure. Bars in blue show the scores of grasping using our method and bars in red show the scores of grasping without the grasp adjustment procedure. The scoring criteria are discussed in Sect. 5.2.4 (Color figure online)

Table 4 shows the details of the tests of grasping using our grasping pipeline. As a comparison, we also ran 10 grasping trials for each object starting at the same initial grasping pose using a conventional grasping pipeline which does not have the post-execution grasp stability estimation and hand adjustment procedure. Figure 18 shows experimental results on the grasping performance using a grasping pipeline with and without our post-execution grasp stability estimation and hand adjustment procedure. Grasping scores give a more detailed analysis of the grasping performance: they capture object stability information throughout each dynamic grasping process, whereas a grasp test was considered a success as long as the object was in hand after the lift-up, even if the object moved in hand during finger closing, lift up, or the shake test. For each object, the overall grasping performance is improved using our method.

6 Discussion

In our work, we assumed a table-top grasping scenario where an object is placed on a supporting plane. For this grasping scenario, we used a pose error model as illustrated in Fig. 11 to parameterize the uncertainty space due to the pose error, which is a 3-dimensional space. We created our tactile experience database according to this model as well. However, in a more general robot system, grasping situations can be different from what we have assumed and the uncertainty space can be as complicated as 6-dimensional: three dimensions for translational uncertainty and three dimensions for rotational uncertainty. In this case, our method could still apply, but the creation of the tactile experience database would be based on sampling the 6-dimensional space and the search for hand adjustments would have to be done within this 6-dimensional space.
Fig. 17 Grasping using grasps planned on reconstructed local geometry. a A snapshot of the executed grasp planned using the GraspIt! simulator on the local geometry shown in (b). In b, the reconstructed local geometry is shown in gray with original data points extracted from tactile sensing data shown in black. c, d Two example grasps planned based on the local geometry. The grasp in (c) was executed since it had a larger epsilon quality. The transparent bottle model shown in (c) and (d) is used here solely for visualization


However, this space will be exponentially larger than the current 3-dimensional space. More advanced search algorithms should be utilized to improve the efficiency of the algorithm.

We injected artificial errors into the execution of initial grasping poses. The reason is two-fold: (1) the kinematics of our robot arm is precise and consistent, and (2) the pose error from the vision system is relatively small. In this situation, we would like to add errors that are significant enough to make a difference. In the future, we will perform experiments using other, less accurate robot arms and see how our grasping pipeline performs.

In the experiments of Sect. 5.2, we showed the process to build our tactile experience database using commonly grasped objects. In order to scale up our system, more objects and grasps should be added using the same method as described in Sect. 5.2.2.

One practical challenge for our current hand adjustment procedure is that, with the current hardware, a disturbance can be introduced while the hand releases and re-grasps the object, so the object may be moved during finger closing or releasing. However, we believe this problem can be alleviated through two different methods. One is integrating more sensitive sensors to detect gentle touch before the object is moved, e.g., the strain gauges in the Barrett hand. Another method is to utilize proximity sensors which can predict contact locations before the object is touched. We will look into these possibilities in our future work.

Another challenge is to deal with objects which are small relative to an anthropomorphic hand, e.g., wrenches and pens. These objects are usually difficult to grasp from their natural poses on a support surface, e.g., a tabletop. We think more advanced control methods should be integrated to establish the initial grasp. One potential approach is to utilize the work done by Kazemi et al. (2012).

In addition, we will add more grasp quality measurements into our work and compare their influence on the performance of our pipeline. Currently we have used the epsilon quality and the volume quality of a grasp as they are widely used in the robotics community. There are other grasp quality measurements available and some of them may benefit our grasping pipeline in general. So, by adding more quality measurements into our pipeline, we may be able to have a more comprehensive quality evaluation criterion.

At the core of our stability estimation and hand adjustment algorithms, the local geometry at contact is the focus and no assumption is made about the global shape of an object. Thus, our methods are global-shape-independent. This makes it possible to extend our methods to grasping novel objects. One approach is to utilize the idea of "grasping by parts", where we first extract grasping parts from the objects in the tactile experience database using methods such as proposed by Detry et al. (2012) and look for similar grasping parts on novel objects. With similar grasping parts found on the novel objects, we can align them and synthesize initial grasp hypotheses. Once the initial hypothetical grasp is executed, our post-execution grasp adjustment can take place to optimize the grasp. We will address this problem in our future work.

7 Conclusion

In this paper, we presented a grasping pipeline which is more robust under pose uncertainty compared to a conventional planning-based grasping pipeline. We developed a closed-loop post-execution grasp adjustment procedure to estimate the stability of the executed grasp and make necessary hand adjustments accordingly. To estimate grasp stability, we used a bag-of-words model to extract grasp features from the tactile sensing data and trained an SVM classifier to distinguish stable and unstable grasps. To synthesize a hand adjustment, we built a tactile experience database which consists of a set of stable grasps and their corresponding tactile sensor feedback. This database provides us with stable grasps with which we can locate the executed grasp relative to a stable grasp and synthesize a necessary hand adjustment. Experiments were conducted in both simulation and physical settings. The experimental results indicate that our grasping pipeline with a post-execution grasp adjustment procedure increases the grasping performance under pose uncertainty compared to a conventional grasping pipeline.

Acknowledgments This work is funded by NSF Grant IIS-0904514.

Appendix: simulating tactile sensors

Figure 19 shows an example of tactile sensor simulation on a Barrett hand. To simulate tactile sensors, the first step is to find a contact model to approximate the force distribution near each contact point. In the GraspIt! simulator, a robotic hand and a graspable object are both treated as rigid bodies. Thus, each contact detected by a collision detection system is initially modeled as a point contact. In the real world, however, the hand and the object in contact are actually deformable to some extent, resulting in an area in contact rather than a point. A point contact assumption then no longer holds reasonably. To simulate the contact region between the two bodies touching each other, we use a soft finger contact model as developed in Ciocarlie et al. (2007b). This model takes into account the local geometry and structure of the objects in contact and captures frictional effects such as coupling between tangential force and frictional torque. It locally approximates the surfaces of the two touching bodies as

z_i = A_i x² + B_i y² + C_i x y,   i ∈ {1, 2}    (10)

Fig. 19 Tactile sensor simulation. a A robotic grasp of a Barrett hand on a mug. b The simulated sensor responses on the hand. c–e The tactile responses on fingers F1, F2, F3, respectively. f The tactile readings on the palm. The tactile readings go from black (no response) to pink (saturation) (Color figure online)

where the local contact coordinate system has its origin at the center of the contact and the z axis aligned with the normal of the contact. The subscript i distinguishes the contacting bodies from each other.

Based on this approximation, we can deduce the separation h between the two surfaces in the form of

h = x²/(2R′) + y²/(2R″)    (11)

where R′ and R″ are the relative radii of curvature of the objects in contact, depending only on their local geometry.

After a contact region is determined, we consider how the forces are formed within the contact region so that the response of the corresponding tactile sensor cells can be analyzed and evaluated. To express the pressure distribution inside a contact region using non-planar models that take into account the local geometry of the objects involved, we choose a Hertzian model as used in Ciocarlie et al. (2007b).

In this model, the ratio of frictional torque to contact load, which is used to compute the eccentricity parameter of the friction ellipsoid, can be obtained from

max(τ_n)/P = (3π/16) μ √(ab)    (12)

where μ is the frictional coefficient, τ_n is a frictional moment about the contact normal, P is the contact load, and a and b are the lengths of the semi-axes.

Based on the soft finger contact model, we compute the contact region for a hand-object contact as well as the pressure distribution within the contact region. Since a tactile sensor cell performs as an atomic sensing unit, we discretize the soft finger contact region so that we can accumulate the total forces within each discrete part and use this to compute the forces sensed on each corresponding tactile sensor cell. We summarize the procedure to generate the tactile feedback of a robotic grasp in Algorithm 4.

Algorithm 4: Computing tactile feedback
Input: A robotic grasp with a list of point contacts between the hand and the object
Output: 2D arrays that carry the simulated tactile sensor values of the corresponding sensor cells
1  Initialize the output tactile sensor cell arrays to zeros
2  foreach point contact do
3    Calculate the relative radii at the contact
4    Calculate the contact region
5    Discretize the contact region into 10 × 10 sub-regions
6    Calculate the forces within each discretized sub-region according to the pressure distribution
7    foreach discretized contact sub-region do
8      foreach tactile sensor cell do
9        if the discretized contact sub-region overlays on the tactile sensor cell
10       then
11         Accumulate the force of the discretized contact sub-region onto the overlaying tactile sensor cell
12       end
13     end
14   end
15 end
16 Return the sensor cell arrays

References

Bekiroglu, Y., Laaksonen, J., Jorgensen, J. A., Kyrki, V., & Kragic, D. (2011). Assessing grasp stability based on learning and haptic data. IEEE Transactions on Robotics, 27(3), 616–629. doi:10.1109/TRO.2011.2132870.
Berenson, D., & Srinivasa, S. (2008). Grasp synthesis in cluttered environments for dexterous hands. In IEEE-RAS International Conference on Humanoid Robots (Humanoids08).
Berenson, D., Srinivasa, S., & Kuffner, J. (2009). Addressing pose uncertainty in manipulation planning using task space regions. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009).
Bierbaum, A., & Rambow, M. (2009). Grasp affordances from multi-fingered tactile exploration using dynamic potential fields. In Humanoid Robots 2009, Humanoids 2009 (pp. 168–174). doi:10.1109/ICHR.2009.5379581.

Bohg, J., & Kragic, D. (2010). Learning grasping points with shape Goldfeder, C., Ciocarlie, M., Dang, H.,& Allen, P. (2009a). The
context. Robotics and Autonomous Systems, 58, 362–377. doi:10. columbia grasp database. In IEEE International Conference on
1016/j.robot.2009.10.003. Robotics and Automation 2009, ICRA ’09 (pp. 1710–1716).
Boularias, A., Kroemer, O., & Peters, J. (2011). Learning robot grasping Goldfeder, C., Ciocarlie, M., Peretzman, J., Dang, H., & Allen, P.
from 3-d images with markov random fields. In IEEE/RSJ Interna- (2009b). Data-driven grasping with partial sensor data. In IEEE/RSJ
tional Conference on Intelligent Robots and Systems (IROS) 2011 International Conference on Intelligent Robots and Systems 2009,
(pp 1548–1553). IROS 2009 (pp. 1278–1283). doi:10.1109/IROS.2009.5354078.
Brook, P., Ciocarlie, M., & Hsiao, K. (2011). Collaborative grasp plan- Harris, Z. (1970). Distributional structure. In Papers in Structural and
ning with multiple object representations. In IEEE International Transformational Linguistics. Dordrecht: D. Reidel Publishing Com-
Conference on Robotics and Automation (ICRA) 2011. (pp 2851– pany (pp. 775–794).
2858). doi:10.1109/ICRA.2011.5980490. Hebert, P., Hudson, N., Ma, J., Howard, T., Fuchs, T., Bajracharya, M.,
Chang, C. C., & Lin, C. J. (2001). LIBSVM: A library for support vec- & Burdick, J. (2012). Combined shape, appearance and silhouette for
tor machines. http://www.csie.ntu.edu.tw/~cjlin/libsvm. Accessed simultaneous manipulator and object tracking. In IEEE International
20 July 2013. Conference on Robotics and Automation (ICRA) 2012 (pp 2405–
Ciocarlie, M., Goldfeder, C.,& Allen, P. (2007a). Dimensionality reduc- 2412). doi:10.1109/ICRA.2012.6225084.
tion for hand-independent dexterous robotic grasping. In IEEE/RSJ Horowitz, M. B., & Burdick, J. W. (2012). Combined grasp and manip-
International Conference on Intelligent Robots and Systems (2007), ulation planning as a trajectory optimization problem. In IEEE
IROS 2007 (pp. 3270–3275). doi:10.1109/IROS.2007.4399227. International Conference on Robotics and Automation (ICRA) 2012
Ciocarlie, M., Lackner, C., & Allen, P. (2007b). Soft finger model (pp. 584–591). doi:10.1109/ICRA.2012.6225104.
with adaptive contact geometry for grasping and manipulation tasks. Hsiao, K., Chitta, S., Ciocarlie, M., & Jones, E. (2010). Contact-reactive
World Haptics Conference (pp. 219–224). grasping of objects with partial shape information. In IEEE/RSJ
Ciocarlie, M. T., & Allen, P. K. (2009). Hand posture subspaces for International Conference on Intelligent Robots and Systems (IROS)
dexterous robotic grasping. The International Journal of Robotics (pp. 1228–1235).
Research, 28(7), 851–867. Hsiao, K., Kaelbling, L., & Lozano-PÃI’rez, T. (2011). Robust grasping
Coelho, J. A., & Grupen, R. A. (1997). A control basis for learning under object pose uncertainty. Autonomous Robots, 31, 253–268.
multifingered grasps. Journal of Robotic Systems, 14(7), 545–557. doi:10.1007/s10514-011-9243-2.
Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Huebner, K., & Kragic, D. (2008). Selection of robot pre-grasps
Learning, 20, 273–297. doi:10.1007/BF00994018. using box-based shape approximation. In IEEE/RSJ International
Dang, H., & Allen, P. (2012). Learning grasp stability. In IEEE Inter- Conference on Intelligent Robots and Systems 2008, IROS 2008
national Conference on Robotics and Automation (ICRA) 2012 (pp. 1765–1770). doi:10.1109/IROS.2008.4650722.
(pp. 2392–2397). doi:10.1109/ICRA.2012.6224754. Jia, Y.B. (2000). Grasping curved objects through rolling. In Proceed-
Dang, H., Weisz, J., & Allen, P.K. (2011). Blind grasping: Stable ings of IEEE International Conference on Robotics and Automation
robotic grasping using tactile feedback and hand kinematics. In IEEE 2000, ICRA ’00 (Vol. 1, pp. 377–382). doi:10.1109/ROBOT.2000.
International Conference on Robotics and Automation (ICRA), 2011 844085.
(pp. 5917–5922) doi:10.1109/ICRA.2011.5979679. Jiang, L. T., & Smith, J. R. (2012). Seashell effect pretouch sensing
Detry, R., Ek, C., Madry, M., Piater, J., & Kragic, D. (2012). Gener- for robotic grasping. In IEEE International Conference on Robotics
alizing grasps across partly similar objects. In IEEE International and Automation (ICRA), 2012 (pp. 2851–2858). doi:10.1109/ICRA.
Conference on Robotics and Automation (ICRA), 2012 (pp. 3791– 2012.6224985.
3797). doi:10.1109/ICRA.2012.6224992. Jiang, Y., Moseson, S., & Saxena, A. (2011). Efficient grasping from
Diankov, R., & Kuffner, J. (2008). Openrave: A planning architecture for rgbd images: Learning using a new rectangle representation. In IEEE
autonomous robotics. Robotics Institute, Pittsburgh, PA, Technical, International Conference on Robotics and Automation (ICRA), 2011
Report CMU-RI-TR-08-34. (pp. 3304–3311). doi:0.1109/ICRA.2011.5980145.
Dogar, M., & Srinivasa, S. (2011). A framework for push-grasping in Kazemi, M., Valois, J. S., Bagnell, J. A., & Pollard, N. (2012). Robust
clutter. In N. R. Hugh Durrant-Whyte & P. Abbeel (Eds.), Robotics: object grasping using force compliant motion primitives. In Proceed-
Science and Systems VII. Cambridge, MA: MIT Press. ings of Robotics: Science and Systems, Sydney, Australia.
El-Khoury, S., & Sahbani, A. (2010). A new strategy combining empir- Kim, J., Iwamoto, K., Kuffner, J. J., Ota, Y., & Pollard, N. S. (2012).
ical and analytical approaches for grasping unknown 3d objects. Physically-based grasp quality evaluation under uncertainty. In IEEE
Robotics and Autonomous Systems, 58, 497–507. International Conference on Robotics and Automation (ICRA), 2012
Felip, J., Laaksonen, J., Morales, A., & Kyrki, V. (2013). Manipulation (pp. 3258–3263). doi:10.1109/ICRA.2012.6225342.
primitives: A paradigm for abstraction and execution of grasping Klingbeil, E., Rao, D., Carpenter, B., Ganapathi, V., Ng, A.Y., &
and manipulation tasks. Robotics and Autonomous Systems, 61(3), Khatib, O. (2011). Grasping with application to an autonomous
283–296. doi:10.1016/j.robot.2012.11.010. checkout robot. In IEEE International Conference on Robotics
Ferrari, C., & Canny, J. (1992). Planning optimal grasps. In IEEE Inter- and Automation (ICRA), 2011 (pp. 2837–2844). doi:10.1109/ICRA.
national Conference on Robotics and Automation (pp. 2290–2295). 2011.5980287.
doi:10.1109/ROBOT.1992.219918. Kootstra G, PopoviÄG M, JÃÿrgensen JA, Kuklinski K, Miatliuk K,
Geidenstam, S., Huebner, K., Banksell, D., & Kragic, D. (2009). Learn- Kragic D, & KrÃger N. (2012). Enabling grasping of unknown
ing of 2D grasping strategies from box-based 3D object approxi- objects through a synergistic use of edge and surface information.
mations. In Proceedings of Robotics: Science and Systems, Seattle, The International Journal of Robotics Research, 31(10), 1190–1213.
USA. doi:10.1177/0278364912452621.
Goldfeder, C., & Allen, P. (2011). Data-driven grasping. Autonomous Laaksonen, J., Nikandrova, E., & Kyrki, V. (2012). Probabilistic sensor-
Robots, 31, 1–20. doi:10.1007/s10514-011-9228-1. based grasping. In IEEE/RSJ International Conference on Intelligent
Goldfeder, C., Allen, P., Lackner, C., & Pelossof, R. (2007). Grasp Robots and Systems (IROS), 2012 (pp. 2019–2026). doi:10.1109/
planning via decomposition trees. In IEEE International Conference IROS.2012.6385621.
on Robotics and Automation 2007 (pp. 4679–4684). doi:10.1109/ Le, Q., Kamm, D., Kara, A., & Ng, A. (2010). Learning to grasp objects
ROBOT.2007.364200. with multiple contact points. In IEEE International Conference on


Robotics and Automation (ICRA), 2010 (pp. 5062–5069). doi:10. objects based on co-planarity and colour information. Robotics and
1109/ROBOT.2010.5509508. Autonomous Systems, 58(5), 551–565.
Li, Z., & Sastry, S. (1988). Task-oriented optimal grasping by multifin- Przybylski, M., Asfour, T., & Dillmann, R. (2011). Planning grasps for
gered robot hands. IEEE Journal of Robotics and Automation, 4(1), robotic hands using a novel object representation based on the medial
32–44. doi:10.1109/56.769. axis transform. In IEEE/RSJ International Conference on Intelligent
López-Coronado, J., & Pedreño Molina, J. (2002). A neural model Robots and Systems (IROS) (pp. 1781–1788).
for visual-tactile-motor integration in robotic reaching and grasp- Rao, D., Le, Q., Phoka, T., Quigley, M., Sudsang, A., & Ng, A. (2010).
ing tasks. Robotica, 20, 23–31. Grasping novel objects with depth segmentation. In IEEE/RSJ Inter-
Lourakis M (Jul. 2004). levmar: Levenberg-marquardt nonlinear least national Conference on Intelligent Robots and Systems (IROS) (pp.
squares algorithms in C/C++. Retrieved Jan 31, 2005 from http:// 2578–2585).
www.ics.forth.gr/lourakis/levmar/. Roa, M. A., Argus, M. J., Leidner, D., Borst, C., & Hirzinger, G. (2012).
MacQueen, J. B. (1967). Some methods for classification and analysis Power grasp planning for anthropomorphic robot hands. In IEEE
of multivariate observations. In L.M.L Cam, J. Neyman (Eds.). Pro- International Conference on Robotics and Automation (ICRA), 2012
ceedings of the fifth Berkeley Symposium on Mathematical Statistics (pp. 563–569). doi:10.1109/ICRA.2012.6225068.
and Probability Berkeley: University of California Press (Vol. 1, pp. Saxena, A., Driemeyer, J., Kearns, J., & Ng, A. Y. (2007). Robotic
281–297). grasping of novel objects. In B. Schölkopf, J. Platt, & T. Hoffman
Miller, A., & Allen, P. (1999). Examples of 3d grasp quality computa- (Eds.), Advances in Neural Information Processing Systems 19 (pp.
tions. In Proceeding of IEEE International Conference on Robotics 1209–1216). Cambridge, MA: MIT Press.
and Automation 1999 (Vol. 2, pp. 1240–1246). Saxena, A., Driemeyer, J., & Ng, A. Y. (2008). Robotic grasping of
Miller, A., Knoop, S., Christensen, H., & Allen, P. (2003). Automatic novel objects using vision. The International Journal of Robotics
grasp planning using shape primitives. In Proceedings of IEEE Inter- Research, 27(2), 157–173.
national Conference on Robotics and Automation 2003, ICRA ’03 Shilane, P., Min, P., Kazhdan, M., & Funkhouser, T. (2004). The prince-
(Vol. 2, pp. 1824–1829). doi:10.1109/ROBOT.2003.1241860. ton shape benchmark. In Shape Modeling International (pp. 167–
Miller, A. T., & Allen, P. K. (2004). Graspit! a versatile simula- 178).
tor for robotic grasping. IEEE Robotics & Automation Magazine, Stulp, F., Theodorou, E., Buchli, J., & Schaal, S. (2011). Learning to
11(4), 110–122. grasp under uncertainty. In IEEE International Conference on Robot-
Mishra, T., & Mishra, B. (1994). Reactive algorithms for 2 and 3 finger ics and Automation (ICRA) (pp. 5703–5708).
grasping. In IEEE/RSJ International Workshop on Intelligent Robots Suárez, R., Roa, M., & Cornella, J. (2006). Grasp quality measures.
and Systems. Technical Report, Technical University of Catalonia. http://hdl.
Morales, A., Prats, M., Sanz, P., & Pobil, A.P. (2007). An experi- handle.net/2117/316. Accessed 20 July 2013.
ment in the use of manipulation primitives and tactile perception Suykens, J. A. K., Alzate, C., & Pelckmans, K. (2010). Primal and dual
for reactive grasping. In Workshop on Robot Manipulation: Sens- model representations in kernel-based learning. Statistics Surveys,
ing and Adapting to the Real World, Robotics: Science and Systems 4, 148–183. doi:10.1214/09-ss052.
(RSS 2007). Wang, D., Watson, B.T., & Fagg, A. (2007). A switching control
Nikandrova E, Laaksonen J, & Kyrki V (2012). Explorative sensor- approach to haptic exploration for quality grasps. In Proceedings
based grasp planning. In G. Herrmann, M. Studley, M. Pearson, A. of the Robotics: Science & Systems 2007 Workshop on Sensing and
Conn, C. Melhuish, M. Witkowski, J.H. Kim, P. Vadakkepat (Eds.), Adapting to the Real World.
Lecture Notes in Computer Science, Advances in Autonomous Robot- Weisz, J., & Allen, P.K. (2012). Pose error robust grasping from contact
ics (Vol. 7429, pp. 197–208), Berlin: Springer. wrench space metrics. In IEEE International Conference on Robotics
Papazov, C., & Burschka, D. (2010). An efficient ransac for 3d object and Automation (ICRA), 2012. (pp. 557–562). doi:10.1109/ICRA.
recognition in noisy and occluded scenes. In Asian Conference on 2012.6224697.
Computer Vision (pp. 135–148). Zhang, L., & Trinkle, J. C. (2012). The application of particle filtering
Petrovskaya, A., Khatib, O., Thrun, S., & Ng, A. (2006). Bayesian esti- to grasping acquisition with visual occlusion and tactile sensing. In
mation for autonomous object manipulation based on tactile sensors. IEEE International Conference on Robotics and Automation (ICRA)
In Proceedings of IEEE International Conference on Robotics and 2012 (pp. 3805–3812). doi:10.1109/ICRA.2012.6225125.
Automation 2006, ICRA (2006) (pp. 707–714). doi:10.1109/ROBOT.
2006.1641793.
Pezzementi, Z., Jantho, E., Estrade, L., & Hager, G. (2010). Characterization and simulation of tactile sensors. In IEEE International Conference on Haptics Symposium 2010 (pp. 199–205). doi:10.1109/HAPTIC.2010.5444654.
Pezzementi, Z., Reyda, C., & Hager, G. (2011). Object mapping, recognition, and localization from tactile geometry. In IEEE International Conference on Robotics and Automation (ICRA), 2011 (pp. 5942–5948). doi:10.1109/ICRA.2011.5980363.
Platt, R. (2007). Learning grasp strategies composed of contact relative motions. In 7th IEEE-RAS International Conference on Humanoid Robots 2007 (pp. 49–56). doi:10.1109/ICHR.2007.4813848.
Platt, R., Fagg, A., & Grupen, R. (2010). Null-space grasp control:
Theory and experiments. IEEE Transactions on Robotics, 26(2),
282–295. doi:10.1109/TRO.2010.2042754.
Platt, R., Permenter, F., & Pfeiffer, J. (2011). Using bayesian filtering to
localize flexible materials during manipulation. IEEE Transactions
on Robotics, 27(3), 586–598. doi:10.1109/TRO.2011.2139150.
Popovic, M., Kraft, D., Bodenhagen, L., Baseski, E., Pugeault,
N., Kragic, D., et al. (2010). A strategy for grasping unknown


Hao Dang is a Ph.D. student at Columbia University. He received his B.E. degree in Computer Science from Beijing University of Aeronautics and Astronautics and the M.S. and M.Phil. in Computer Science from Columbia University. His current research interests include robotic grasping and dexterous manipulation.

Peter K. Allen is Professor of Computer Science at Columbia University. He received the A.B. degree from Brown University in Mathematics-Economics, the M.S. in Computer Science from the University of Oregon and the Ph.D. in Computer Science from the University of Pennsylvania, where he was the recipient of the CBS Foundation Fellowship, Army Research Office fellowship and the Rubinoff Award for innovative uses of computers. His current research interests include real-time computer vision, dextrous robotic hands, 3-D modeling and sensor planning. In recognition of his work, Professor Allen has been named a Presidential Young Investigator by the National Science Foundation.
