Path Planning and Tracking Methods For Quadruped Robots
Student Name: Mr. Vedang Tiwari
SURGE No.: 2130308
Guide Name: Dr. Mangal Kothari
Date: 01-08-2021
Abstract: Mobile terrestrial robots are traditionally designed to move on flat terrain, and the
mapping, planning, and control algorithms are typically developed for a two-dimensional abstraction
of the environment. When it comes to navigation on rough terrain (e.g., with tracked vehicles or
legged robots), these algorithms must be extended to consider all three dimensions of the
surroundings. The most common approach is to build an elevation map of the environment, where
each coordinate on the horizontal plane is associated with an elevation or height value. We are using
the Universal Grid Map Library as a generic mapping framework with the Robot Operating System
(ROS) for the quadruped robot. The library supports multiple data layers, such as elevation,
variance, color, surface normals, and occupancy, which together yield a probabilistic
representation of the environment. The mapping procedure is formulated from a robot-centric perspective to
explicitly account for pose drift as the robot moves.
Keywords: elevation mapping, Grid Map Library, Robot Operating System (ROS), proprioceptive
sensors, exteroceptive sensors, state estimation, Extended Kalman Filter (EKF), kinematics, and
dynamics libraries.
1. Introduction
A few decades ago, robots were something straight out of science fiction. In these stories we saw
them sometimes as humanity's greatest enemy, but most of the time as our best companions, and
they were often pictured as humanoid or quadruped robots (in general, legged robots). Legged robots
are not only closer to natural physiology, but they are also superior to wheeled and tracked systems
on rough and uncertain terrain, and they can even traverse discontinuous paths.
Unlike aerial robots, they remain in contact with the ground and can therefore carry much heavier
loads. They have great potential and could be used as domestic pets or for defense and rescue purposes.
Legged robots can perform highly dynamic maneuvers and therefore require sophisticated control
strategies to maintain balance and to generate appropriate foot and body trajectories accurately. A
quadruped is mechanically simpler than a hexapod (inspired by insects) because fewer legs are deployed,
and it is more stable than a biped (humanoid robot) because of its larger support area, making it best
suited for most jobs. Still, a highly developed relationship between perception and locomotion is
necessary for autonomous navigation without falling, slipping, or getting stuck.
To use these robots in natural surroundings outside the laboratory, they must have reactive
capabilities, which implies handling unforeseen conditions such as dust, sunlight, slipping, or external
impulses. This demands continuously adapting the generated motion plans with incoming data
from sensors such as IMUs, depth sensors, contact or torque sensors, stereo cameras, and LiDAR,
in order to generate a probabilistic map of the environment that is continuously updated with the
minimum possible latency.
2. Literature Review
Little of the research in this field is specific to quadruped robots; only a few researchers, such as
Marco Camurri, Simona Nobili, and Peter Fankhauser, have established work here. This section
reviews two state-of-the-art methods that endow a legged robot with motion estimation and
mapping capabilities. Legged machines can benefit from multi-sensor fusion techniques, primarily
when employed in complex operations in GPS-denied environments. The trend towards methods
that use a combination of proprioceptive and exteroceptive state estimation is highlighted in this
table.
a) Elevation Mapping
The problem here is how to acquire and merge geometric information about the environment
into a compact, consistent, and useful representation. We are using the Universal Grid Map
Library created by Peter Fankhauser and Marco Hutter. The method incorporates the drift and
uncertainties of the state estimation and a noise model of the distance sensor, and it yields a
probabilistic terrain estimate as a grid-based elevation map, including upper and lower
confidence bounds. Grid maps can be thought of as a 2.5-dimensional representation, where
each cell in the grid holds a height value. The authors implemented the elevation mapping
software on the quadrupedal robot StarlETH.
b) State Estimation
The problem here is how to compute, efficiently, robustly, and at high frequency, an estimate of
the position, orientation, and linear and angular velocity of the robot's base. When executed
with only proprioceptive inputs, the pose drift is unbounded, and additional exteroceptive
sources are required to compensate for it. The pose drift is limited using exteroceptive sensors
such as cameras, Light Detection and Ranging (LiDAR), or depth sensors. We are using a
multi-sensor state estimator called Pronto by Marco Camurri and Simona Nobili. It is among
the most recent algorithms based on the Extended Kalman Filter (EKF): it fuses IMU and leg
odometry sensing for pose and velocity estimation and can integrate pose corrections from
visual and LiDAR odometry to correct the pose drift. It has been tested on the IIT HyQ and the
ANYbotics ANYmal.
3. System Overview
A standard quadruped model consists of 3 actuated degrees of freedom (DoF) in each leg and six
unactuated DoF for the floating torso, making a total of 18 DoF. Below is an excerpt from the xacro
file of our quadruped robot.
<?xml version="1.0"?>
<material name="white">
  <color rgba="1 1 1 1"/>
</material>
4. Approach
After analyzing different onboard sensor suites, we chose to include contact or torque sensors,
joint encoders, IMUs, a depth sensor (RGB-D sensor), a stereo camera, and LiDAR on our robot, with
optional intermittent GPS position updates.
[Figure: Block diagram of the system. These modules output filter measurements as ROS messages.]
For elevation mapping, the range measurement data is obtained through the Universal Grid Map
Library from Kinect-type sensors, laser range sensors, and stereo cameras. The node subscribes to
depth measurements of type PointCloud2 and to a robot pose estimate of type PoseWithCovarianceStamped.
For pose estimation, we are using the Pronto state estimator. To integrate your own visual and
LiDAR odometry packages, make them produce the same message type that fovis (the FOVIS
visual odometry algorithm) is expected to produce.
Finally, once the elevation map is generated, we can plan the path with an RRT-based path planner.
// + Link shank_link_RH
shank_link_RH_u = tau(KNEE_JOINT_RH) - shank_link_RH_p(AZ);
shank_link_RH_U = shank_link_RH_AI.col(AZ);
shank_link_RH_D = shank_link_RH_U(AZ);

com_base = Vector3(0.0, 0.0, 0.0);
tensor_base.fill(
    m_base,
    com_base,
    Utils::buildInertiaTensor<Scalar>(ix_base, iy_base, iz_base, 0.0, 0.0, 0.0));

// First pass, link 'hip_link_LF'
hip_link_LF_v = ((xm->fr_hip_link_LF_X_fr_base) * base_v);
hip_link_LF_v(iit::rbd::AZ) += qd(HIP_JOINT_LF);
motionCrossProductMx<Scalar>(hip_link_LF_v, vcross);
hip_link_LF_a = (xm->fr_hip_link_LF_X_fr_base) * base_a + vcross.col(iit::rbd::AZ) * qd(HIP_JOINT_LF);
hip_link_LF_a(iit::rbd::AZ) += qdd(HIP_JOINT_LF);

rcg::shvan1::Jacobians::Type_fr_base_J_LF_FOOT::Type_fr_base_J_LF_FOOT()
{
    (*this)(0,0) = 1.0;
    (*this)(0,1) = 0.0;
    (*this)(0,2) = 0.0;
    (*this)(1,0) = 0.0;
    (*this)(2,0) = 0.0;
    (*this)(3,0) = 0.0;
}

// Link shank_link_RH:
iit::rbd::transformInertia<Scalar>(shank_link_RH_Ic, frcTransf->fr_thigh_link_RH_X_fr_shank_link_RH, Ic_spare);
thigh_link_RH_Ic += Ic_spare;
Fcol(KNEE_JOINT_RH) = shank_link_RH_Ic.col(AZ);
DATA(KNEE_JOINT_RH+6, KNEE_JOINT_RH+6) = Fcol(KNEE_JOINT_RH)(AZ);
Fcol(KNEE_JOINT_RH) = frcTransf->fr_thigh_link_RH_X_fr_shank_link_RH * Fcol(KNEE_JOINT_RH);
DATA(KNEE_JOINT_RH+6, ELBOW_JOINT_RH+6) = F(AZ, KNEE_JOINT_RH);
DATA(ELBOW_JOINT_RH+6, KNEE_JOINT_RH+6) = DATA(KNEE_JOINT_RH+6, ELBOW_JOINT_RH+6);
Fcol(KNEE_JOINT_RH) = frcTransf->fr_hip_link_RH_X_fr_thigh_link_RH * Fcol(KNEE_JOINT_RH);
DATA(KNEE_JOINT_RH+6, HIP_JOINT_RH+6) = F(AZ, KNEE_JOINT_RH);
DATA(HIP_JOINT_RH+6, KNEE_JOINT_RH+6) = DATA(KNEE_JOINT_RH+6, HIP_JOINT_RH+6);
Fcol(KNEE_JOINT_RH) = frcTransf->fr_base_X_fr_hip_link_RH * Fcol(KNEE_JOINT_RH);

base_X_hip_joint_LF_chain = tmpX * ht.fr_base_X_fr_hip_link_LF;
tmpSum += inertiaProps.getMass_hip_link_LF() *
    (iit::rbd::Utils::transform(base_X_hip_joint_LF_chain, inertiaProps.getCOM_hip_link_LF()));

MotionTransforms::Type_fr_base_X_LF_FOOT::Type_fr_base_X_LF_FOOT()
{
    (*this)(0,2) = 0.0;
    (*this)(0,3) = 0.0;
    (*this)(0,4) = 0.0;
    (*this)(0,5) = 0.0;
    (*this)(1,3) = 0.0;
    (*this)(1,4) = 0.0;
    (*this)(1,5) = 0.0;
    (*this)(2,3) = 0.0;
    (*this)(2,4) = 0.0;
    (*this)(2,5) = 0.0;
    (*this)(3,5) = 0.0;
}
These are sections of the generated code for only one link, giving us the Jacobians, inertia
properties, forward and inverse dynamics, and transformation matrices.
References: