
ROBOT MECHANISMS

A robot's mechanism is the physical hardware that makes up the robot. This includes the robot's
body, wheels, arms, motors, and other physical components. The robot's mechanism needs to be
carefully designed to carry out the robot's intended tasks. For example, a robot designed to clean
floors might have wheels that are optimized for maneuverability and traction.
A robot's mechanism can also be seen as a machine that takes inputs (like data from sensors) and
produces outputs (like movement of the wheels or arms). The code (or software) tells the robot
how to turn the inputs into the desired outputs.

KINEMATIC FORMS

In this lesson, we study two basic robot forms that accomplish two very different kinds of jobs.
Mobile robots typically have wheels, which they use to locomote on the ground. Manipulator arms
are usually bolted to a fixed surface and have a serial chain linkage which can change shape, as
well as a hand or end-effector for manipulating objects. Note that a manipulator arm can also be
mounted on a mobile robot, giving rise to the problem of mobile manipulation.

There are many different kinematic forms, but the most common ones are serial mechanisms and
parallel mechanisms.

Serial Mechanisms: are linear, meaning that the components are arranged in a chain, one after another. Good examples of serial mechanisms are manipulator robots, which have arms and hands that can interact with the world, such as a robot that assembles parts on an assembly line.

Parallel Mechanisms: have components that are arranged in a more complex pattern. Good examples of parallel mechanisms are mobile robots that move around a space, like a self-driving car, a Roomba vacuum, or a spider-like robot with multiple legs.

Serial mechanisms are simpler to design and build, but they are often less robust and less precise
than parallel mechanisms. Parallel mechanisms are more complex to design and build, but they are
more robust and precise. Serial mechanisms typically require simpler code, since the movements
of the robot are more linear. Parallel mechanisms often require more complex code, since the
movements of the robot are more complex.

Let us further explain how the code and the mechanism are linked to the robot's sensors. The sensors provide data to the code, which in turn tells the mechanism what to do. For example, if a distance sensor detects that the robot is getting close to an object, the code can tell the mechanism to slow down.

Imagine that you are driving a car, and the car's sensors detect that the road is slippery. The car's
control system (the software) receives that information from the sensors, and it tells the car's
mechanism (the engine and wheels) to slow down. In the same way, a robot's sensors, code, and
mechanism work together to achieve the robot's goals.

Additionally, let's think of the code as a recipe. The sensors are like the ingredients in the recipe,
and the mechanism is like the kitchen tools (like the stove or the oven). The recipe tells the cook
what to do with the ingredients using the kitchen tools. So, the code tells the mechanism what to
do with the information it gets from the sensors.

Another important concept is the degrees of freedom (DOF) of a robot. The DOF is the number of independent ways a robot can move. A robot arm with six joints has six degrees of freedom. A mobile robot driving on a flat floor has three degrees of freedom: its position along two axes and its heading.

Figure 1.1: Mobile robot (red, with two wheels) and manipulator arm (orange, fixed to the table).

Mobile Robots

Mobile robots come with a variety of drive mechanisms. Wheels may be drive wheels or free
spinning. They may have a fixed axle, a steered axle, or a free axle. A free-spinning, free-axle
wheel is a caster. Since some drive mechanisms need only two wheels, a caster is added to
maintain stability on a flat surface.

The major drive mechanisms are differential drive, bicycle steering, and Ackerman steering.

Differential drive: is the simplest, and it’s what’s in a Roomba vacuum cleaner. Differential drive robots use two drive wheels that share a fixed axis; each wheel is powered and controlled independently, and driving them at different rates makes the robot turn. This type of mechanism allows the robot to rotate in place and, by combining turning with driving, to reach any position and heading on the ground.
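
As a rough sketch of why driving the wheels at different rates produces turning, the following Python snippet computes the robot's forward speed and turning rate from the two wheel speeds for an idealized differential drive; the wheel radius and wheel separation are illustrative values, not taken from any specific robot.

# Idealized differential-drive kinematics (illustrative values).
WHEEL_RADIUS = 0.05   # metres
WHEEL_BASE   = 0.30   # distance between the two drive wheels, metres

def body_velocity(omega_left: float, omega_right: float):
    """Convert wheel angular speeds (rad/s) into body motion."""
    v_left  = WHEEL_RADIUS * omega_left
    v_right = WHEEL_RADIUS * omega_right
    v     = (v_right + v_left) / 2.0          # forward speed, m/s
    omega = (v_right - v_left) / WHEEL_BASE   # turning rate, rad/s
    return v, omega

print(body_velocity(10.0, 10.0))    # equal speeds    -> straight line
print(body_velocity(-10.0, 10.0))   # opposite speeds -> rotate in place
print(body_velocity(5.0, 10.0))     # unequal speeds  -> curved path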

Bicycle steering: involves one drive wheel with a fixed axle and one free-spinning wheel with a
steered axle.

Ackerman steering: is what most cars use. It is similar to a pair of bicycles – two parallel fixed
wheels in back and two steered wheels in the front. These drive mechanisms will be revisited in
the Kinematics unit.
Figure 1.2: Differential drive, bicycle steering, and Ackerman steering. The tracks produced by the
wheels are shown.

Differential drive is the most common type of mechanism, since it is simpler and more cost-effective. However, a holonomic (omnidirectional) drive offers some advantages, such as finer maneuvering and the ability to move sideways.

Manipulator Arms

One important aspect of manipulator robot mechanisms is the kinematic chain. The kinematic
chain is the sequence of joints and links that make up the manipulator robot. For example, a simple
robot arm might have a kinematic chain that looks like this: base joint, shoulder joint, elbow joint,
and wrist joint. Each of these joints has a specific range of motion that determines the overall
motion of the robot arm.

To use a more concrete example, imagine a simple two-joint robot arm with a base joint and an
elbow joint. The base joint allows the arm to move up and down, while the elbow joint allows the
arm to bend. If the elbow joint is fixed in place, the arm can only move up and down. However, if
the elbow joint is free to move, the arm can reach a much wider range of positions.
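
To put numbers on the two-joint example, here is a small forward-kinematics sketch for a planar arm: given the base and elbow angles, it computes where the tip of the arm ends up. The link lengths are arbitrary values chosen for illustration.

import math

# Illustrative link lengths for a planar two-joint arm (metres).
L_UPPER = 0.4   # base (shoulder) to elbow
L_LOWER = 0.3   # elbow to the end of the arm

def arm_endpoint(base_angle: float, elbow_angle: float):
    """Position of the arm tip in the vertical plane, angles in radians."""
    x = L_UPPER * math.cos(base_angle) + L_LOWER * math.cos(base_angle + elbow_angle)
    y = L_UPPER * math.sin(base_angle) + L_LOWER * math.sin(base_angle + elbow_angle)
    return x, y

# With the elbow held straight (0 rad), only the base angle changes where the tip points.
print(arm_endpoint(math.radians(30), 0.0))
# Letting the elbow bend reaches positions the straight arm cannot.
print(arm_endpoint(math.radians(30), math.radians(90)))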

Taking a different approach, imagine that the robot arm is like a human arm. The base joint is like
your shoulder, and the elbow joint is like your elbow. Your shoulder allows you to move your arm
up and down, while your elbow allows you to bend your arm. If you fix your elbow in place, you
can only move your arm up and down. However, if you allow your elbow to move, you can reach
a much wider range of positions.

Having established the connection between human and robot arms, we can talk about a kinematic
tree.

A Kinematic Tree is a graphical representation of a robot's kinematic chain. The joints of the
robot are represented as nodes, and the links between the joints are represented as lines. This allows
us to visualize the robot's range of motion and plan its movements.

Furthermore, imagine a simple robot arm with a base joint, an elbow joint, and a wrist joint. In the tree, the base joint sits at the top as the root, and the wrist joint sits at the bottom as a leaf. Between the base joint
and the wrist joint, there are two links that represent the arm itself. The base joint has one degree
of freedom (up and down), and the elbow joint has one degree of freedom (bend and straighten).

Using a different approach, imagine that the kinematic tree is like a family tree. Just as your family tree shows your relationships to your parents, grandparents, siblings, and cousins, the kinematic tree shows the relationships between the joints and links of the robot arm. The base joint is the "parent" of the elbow joint, and the elbow joint is the "parent" of the wrist joint.

Joints and Links

A joint is a mechanical connection that allows two or more links to move relative to each other.
The links are the rigid bodies that make up the robot arm. Each link is connected to the next link
by a joint. The links can move relative to each other because of the joints.

Using a concrete example, let's imagine a simple robot arm with two links and one joint. The first
link is the "upper arm" and the second link is the "lower arm". The joint is at the "shoulder" of the
robot arm. The links can move up and down because of the joint.

Joints impose constraints on the motion of neighboring links. Although joints can be complex, all
possible joints can be constructed as composites of two basic forms. One common type of joint is
a rotational joint, also called a revolute joint. A rotational joint allows one link to rotate around a
fixed axis with respect to another link. Another common type of joint is a prismatic joint, which
allows one link to move along a linear axis with respect to another link.
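
The difference between the two basic joint types can be shown in a few lines of code: a revolute joint rotates a point on the attached link about a fixed axis, while a prismatic joint slides it along a fixed axis. The 2D setting and the numbers are purely illustrative.

import math

def revolute(point, angle):
    """Rotate a 2D point about the joint axis (the origin) by `angle` radians."""
    x, y = point
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle))

def prismatic(point, displacement, axis=(1.0, 0.0)):
    """Slide a 2D point along a fixed unit axis by `displacement`."""
    x, y = point
    ax, ay = axis
    return (x + displacement * ax, y + displacement * ay)

tip = (1.0, 0.0)
print(revolute(tip, math.radians(90)))   # rotates the link: one angular degree of freedom
print(prismatic(tip, 0.5))               # slides the link: one linear degree of freedom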

Types of Links

A common type of link is a rigid body, which means that the link can't change its shape. A rigid
body can be any shape, like a square, a circle, or a triangle. Another common type of link is a
flexible body, which can change its shape. Flexible bodies are often used in robots that need to
wrap around or squeeze through tight spaces.

Joints both permit and constrain motion: each of the basic joint models above allows motion in exactly one direction (whether angular or linear) and blocks it in all others. This direction of allowed motion is often referred to as a degree of freedom. Thus, when someone talks about the number of degrees of freedom that a mechanism has, they are, loosely speaking, referring to the number of joints in it.

A manipulator arm typically comprises an alternating sequence of rigid links and actuated, sensed
joints.

ACTUATORS AND SENSORS

Figure 1.3: Revolute and prismatic joints. Both possess one degree of freedom.

One definition of a robot is a machine that can sense, think, and act. The thinking is handled by computation, but the other two capabilities require hardware.

An actuator is a device that converts energy (typically electrical energy) into motion (mechanical energy). In other words, an actuator turns the control signals from the computer into physical movement. One common type of actuator is a linear actuator, which creates a linear, or straight-line, motion. Linear actuators are used in a wide range of applications, from car power windows to robotic arms.

Actuators cause the robot to move deliberately in the world in a predictable way, and they are most often electric motors. A motor exerts a torque (a rotational force) roughly in proportion to the current, and hence the voltage, applied to it. Many other types of actuators exist, including hydraulic, pneumatic, and chemical actuation.

In a robot, sensors are devices that measure things in the environment. For example, in a self-
driving car, the sensors might include cameras, radar, and lidar. Cameras take pictures of the
surroundings, radar detects the distance to nearby objects, and lidar uses laser light to create a 3D
map of the environment.

Let's take a look at a simpler example. Imagine a robot vacuum cleaner. The vacuum cleaner has
a bumper on the front that acts as a sensor. When the bumper bumps into an object, the robot
knows to stop moving forward. In this example, the bumper is the sensor, and it's telling the robot
how close it is to an object. Hence, sensors tell the robot about the environment.

Sensors detect targeted physical properties in the world, and can be configured to target the robot itself (proprioceptive sensors) or the outside world (exteroceptive sensors). Examples of proprioception often used in robots are:

1. encoders – measure absolute or relative position of a revolute joint by counting ticks.


2. torque sensor – detects the amount of force a motor is exerting on a joint.

These sensors are used to build a controller that drives a motor so that its joint reaches the value desired by the robot’s programming (a minimal sketch of such a controller follows the list below). Several examples of exteroception are:

1. imaging sensor – a camera or similar device to build a pixel grid depicting a scene in front
of the sensor. Imaging can be done in the visual spectrum, infrared, etc. RGB+D cameras
like the Kinect use structured light, projected in the infrared spectrum, to reconstruct a depth
component.
2. range finder – a depth sensor, often employing a sweeping beam to detect depth in a swept
volume. Examples include sonar and laser range-finders (also called LIDAR).
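
Returning to the proprioceptive case, here is a minimal sketch of the kind of controller mentioned above: a proportional controller that drives a joint toward a desired angle using encoder feedback. The gain, time step, and idealized motor response are assumptions made for this illustration, not a real motor interface.

# Minimal proportional (P) controller driving a revolute joint to a target angle.
KP = 2.0          # proportional gain (assumed value)
DT = 0.01         # control period, seconds (assumed value)

def control_step(measured_angle: float, desired_angle: float) -> float:
    """Compute a motor command from the encoder reading."""
    error = desired_angle - measured_angle
    return KP * error   # command proportional to the error

# Simulate a joint whose velocity simply follows the command (idealized motor model).
angle = 0.0
target = 1.0   # radians
for _ in range(500):
    command = control_step(angle, target)
    angle += command * DT   # idealized: the commanded velocity moves the joint

print(round(angle, 3))   # converges toward the 1.0 rad target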

Relationship between Sensors and Actuators

As stated earlier, the sensors tell the robot about the environment. The robot then uses this information to decide what the actuators should do. In the self-driving car example, the sensors tell the computer where the other cars are, and the computer tells the wheels to move in a certain way to avoid them.

For instance, imagine you're driving a car at night. Your eyes, helped by the headlights, act like sensors: they give you information about the environment. Your hands and feet are like actuators: they make the car move based on the information your eyes gather.

Taking it a step further, imagine that you're driving the car in the rain. You turn on your windshield wipers to remove the water from your windshield. The wipers support your sensing: by clearing the windshield, they let your eyes collect better information. Your brain then uses that information to decide how to drive.

In a robot, there are three steps between sensors and actuators. First, the sensors take
measurements. Second, the computer processes the measurements. Third, the computer sends
commands to the actuators. This is called the "sense-think-act" cycle.

In the "sense" step, the sensors detect information about the environment. In the "think" step, the
computer analyzes the information to figure out what to do. In the "act" step, the computer sends
commands to the actuators to make the robot do what it needs to do.

Each step in the "sense-think-act" cycle involves a feedback loop. For example, in the "sense" step,
the sensors detect information about the environment and send that information to the computer.
In the "think" step, the computer analyzes the information and makes decisions. In the "act" step,
the computer sends commands to the actuators, which changes the environment. The sensors then
detect these changes and the cycle continues.
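
The loop below is a bare-bones sketch of the sense-think-act cycle, reusing the distance-sensor idea from earlier in this lesson. The sensor and actuator functions are stand-ins (stubs) invented for the example rather than a real robot interface.

import random

# Stand-in sensor and actuator functions; a real robot would replace these
# with calls to its own hardware interface.
def read_distance_sensor() -> float:
    return random.uniform(0.1, 2.0)   # pretend distance to the nearest obstacle, metres

def send_wheel_command(speed: float) -> None:
    print(f"wheel speed command: {speed:.2f}")

def think(distance: float) -> float:
    # "Think": stop when an obstacle is near, otherwise drive faster when the way is clear.
    return 0.0 if distance < 0.3 else min(1.0, distance)

for _ in range(5):                     # five iterations of the cycle
    distance = read_distance_sensor()  # sense
    speed = think(distance)            # think
    send_wheel_command(speed)          # act (which in turn changes what is sensed next)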

Let’s assume a robot is to navigate a maze. First, the robot's sensors detect the walls of the maze.
The computer analyzes this information and figures out the best path through the maze. The
computer then sends commands to the actuators to move the robot forward. The sensors detect
when the robot reaches the end of the maze.

Using a more complex example, let's imagine a robot arm that's stacking blocks. First, the sensors
detect the location and orientation of the blocks. The computer analyzes this information and
figures out how to move the arm to pick up the blocks. The computer then sends commands to the
actuators to move the arm and grasp the blocks. Finally, the robot arm stacks the blocks.

ERROR

Both sensors and actuators are subject to error. It is therefore a fact of life that nothing a robot does
will ever be perfect or certain. This fact gives rise to two important problems in robotics.

Estimation: is the act of fusing together all the available evidence from the complete history of
sensed data and actions taken to determine a best guess about what is true in the world. Estimation
can be used to build maps, localize objects of interest, or localize the robot in the world.

Every robot has to estimate its position and orientation, even if it has very accurate sensors. The
robot uses its sensors and its knowledge of how it moves to make these estimates. There is always
a certain amount of error in these estimates. The robot can account for this error by making its
estimates with a certain level of confidence.
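
One very simple way to make an estimate "with a certain level of confidence" is to weight each source of evidence by how much we trust it. The sketch below fuses two noisy position estimates by inverse-variance weighting; the numbers are invented for illustration, and practical estimators (such as Kalman filters) build on the same idea.

# Fuse two noisy estimates of the same quantity, weighting each by its confidence
# (inverse variance). The example numbers are illustrative.
def fuse(estimate_a, var_a, estimate_b, var_b):
    """Inverse-variance weighted average of two estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)   # the fused estimate is more confident than either input
    return fused, fused_var

# Odometry says the robot is at x = 2.0 m (uncertain); a range sensor says x = 2.3 m (more certain).
print(fuse(2.0, 0.5, 2.3, 0.1))   # the result lies closer to the more confident estimate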

For instance, imagine a robot arm that's picking up a cup. First, the robot estimates the position
and orientation of the cup. Then, the robot estimates the position and orientation of the robot's
gripper. Finally, the robot calculates the motion that will bring the gripper to the cup. However,
the robot can't know the exact position and orientation of the cup or the gripper. Instead, it makes
its best estimate based on the available data.

Planning: is the act of reasoning about uncertain world state (an estimate) as well as uncertain
future actions to maximize the likelihood of successfully accomplishing a task. Planning can be
used to move objects around with respect to other objects (e.g. doing the dishes, handling nuclear
materials) or to move the robot itself with respect to the world (e.g. self-driving cars).

The robot uses its estimates to plan its actions. In our cup example, the robot plans how to move
the arm and gripper to pick up the cup. The robot also plans how to avoid obstacles and collisions.
The robot's planning process takes error into account. This helps the robot account for errors that
might happen during execution.

There are two key concepts in planning: forward planning and inverse planning. Forward planning
starts with the current state of the robot and plans a sequence of actions to reach a goal state.
Inverse planning starts with the goal state and plans the sequence of actions to reach it from the
current state. Both forward planning and inverse planning take into account the robot's capabilities
and the errors in its sensors and actuators.

Let's assume you are in a room with a cup on a table. Your goal is to pick up the cup. Forward
planning would start with your current state (sitting in a chair) and then plan the sequence of
actions to reach your goal (holding the cup). Inverse planning would start with your goal (holding
the cup) and then plan the sequence of actions to reach that goal from your current state.

In forward planning, you start with what you know and then plan what you want to do. In inverse
planning, you start with what you want to do and then plan how to get there based on what you
know.

Let’s assume you are trying to trek to the school’s main gate to take a vehicle to your hostel. Forward planning would start with your current location (the classroom) and then plan the sequence of actions to get to the gate (the goal). Inverse planning would start with the goal (the main gate) and then plan the sequence of actions to reach that goal from your current location.

We require a set of tools to reason about error and uncertainty, which will be the focus of the final
section of the course. A system produces readings that include both signal and noise (error). We
cannot know how much the signal and noise contribute to an individual reading. However, we
frequently employ assumptions that enable us to make useful statements about the noise
characteristics. Often, these methods work by collecting a large quantity of samples and computing
statistics on them.

Accuracy and Precision

Two important concepts used to measure error are:

Accuracy – how close a measurement is to the true value. In robotics, accuracy is a measure of how close a robot can come to its desired position, or how close a sensor reading is to the ground truth, the genuine value that it should ideally return. In short, accuracy is how close the robot's measurements are to the true values.
Repeatability or Precision – how repeatable a measurement is. In robotics, precision is a measure
of how close a robot can come to its previous attempts at the same motion or the same
measurement. Precision is how repeatable the robot's measurements are.

An accurate and precise measurement is close to the true value and highly repeatable. In our cup example, the robot might be highly accurate but not very precise. That means its grasps average out to the right place, but each individual attempt lands in a slightly different spot. Even if the robot is very accurate, it still needs to be precise in order to stack the cups properly. If the robot is precise but not accurate, it will consistently pick up the cup in the same way, but not in the right way.

Let’s shed more light on accuracy and precision by comparing them with hitting a target. Imagine you're throwing darts at a dartboard. Accuracy is how close your throws are, on average, to the bullseye. Precision is how close your darts are to each other. A precise set of throws is a tight group of darts close to one another, but not necessarily on the bullseye. An accurate set of throws averages out on the bullseye, even though the individual darts might be spread far apart.

The terms accurate and precise are often used qualitatively to measure motors, sensors, or any
hardware or software system that generates a real-valued result. However, if we are measuring a
sequence of output samples from a system, we can quantitatively estimate whether the system is
accurate by the following test:

    |(µ − ta) / ta| < ea

where µ is the mean of the samples, ta is the known target value, and ea is a threshold error, e.g. 5%.
In this case, we would say that a system is accurate if its average error from the target is less than
5%. If the system is perfectly accurate, then µ = ta, yielding an error of zero.

We can also estimate whether the system is precise by the following test:

    σ / µ < ep

where σ is the standard deviation of the samples, µ is the mean of the samples, and ep is a threshold
error, e.g. 5%. That is, a precise system has a standard deviation that is less than 5% of its mean.

A system can be both accurate and precise, accurate but not precise, precise but not accurate, or
neither accurate nor precise. For example, suppose that a mobile robot is using a range finder to
measure the distance to an obstacle known to be 10m away. Over several measurements, a range
finder that is both accurate and precise will return distances that are both close to the ground truth
on average (accurate) and clustered together (precise), such as the series of readings 9.91m,
10.07m, and 10.02m. A range finder that is accurate but not precise might return the series of
readings 9.49m, 10.51m, and 9.95m — the mean of these measurements is close to the ground
truth, but the variance is large. On the other hand, a range finder that is precise but not accurate
might return the readings 10.85m, 10.79m, and 10.98m — these measurements have little variance,
but their average is quite far from the ground truth. Finally, if the range finder returns
measurements that are all over the place and do not average out to be close to the ground truth,
such as 9.6m, 11.4m, and 13.01m, then we consider the range finder to be neither accurate nor
precise.
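
These four cases can be checked numerically. The snippet below applies the two tests defined earlier, with 5% thresholds, to the example range-finder readings; the sample standard deviation is used for σ.

import statistics

GROUND_TRUTH = 10.0   # known distance to the obstacle, metres
E_A = 0.05            # accuracy threshold (5%)
E_P = 0.05            # precision threshold (5%)

def assess(readings):
    mu = statistics.mean(readings)
    sigma = statistics.stdev(readings)   # sample standard deviation
    accurate = abs((mu - GROUND_TRUTH) / GROUND_TRUTH) < E_A
    precise = (sigma / mu) < E_P
    return accurate, precise

print(assess([9.91, 10.07, 10.02]))   # (True, True):   accurate and precise
print(assess([9.49, 10.51, 9.95]))    # (True, False):  accurate but not precise
print(assess([10.85, 10.79, 10.98]))  # (False, True):  precise but not accurate
print(assess([9.6, 11.4, 13.01]))     # (False, False): neither accurate nor precise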

Depending on the cause of an error, it might tend to decrease accuracy, precision, or both. When
we talk about a decrease in accuracy or precision, we mean that |(µ − ta)/ta| and σ/µ get larger.

Uncertainty

Another tool used to analyze error is uncertainty, which is the degree of doubt or unpredictability
in a measurement. It can also be defined as a measure or estimate of an unknown error. In our robot
example, uncertainty is how much variation there is in the robot's measurements. Even if the robot
is both accurate and precise, it still might have some uncertainty. For example, the robot might
have trouble picking up the cup on a slippery surface. There are three common assumptions used
to model potentially uncertain sensors or actuators:

1. Determinism – the error has an expected value of zero: the motors and sensors are perfectly accurate and repeatable. This assumption is also referred to as zero-mean noise, because the noise has a predictable average value of zero. It means that, on average, the measurements are close to the true value. In our dartboard example, this would be like the average location of the darts being close to the bullseye.

Let’s assume you have a thermometer that measures the temperature in your room. Over
time, the temperature will fluctuate. Zero-mean noise says that, on average, the temperature
should be the same as the true temperature. However, individual measurements might be
higher or lower than the true temperature. The average of all the measurements should still
be close to the true temperature.

2. Nondeterminism – the error follows a normal distribution, a bell-shaped curve that describes the probability of different outcomes. The motors and sensors are not repeatable: repeated attempts to read a sensor or actuate a motor will yield different results unpredictably. Nondeterminism is also referred to as Gaussian noise, because it is random and cannot be predicted exactly. Most measurements, such as camera measurements, have Gaussian noise.

Let’s assume a dartboard with a bullseye in the center. Most of the darts will hit close to
the bullseye, but some will be far away. The bullseye represents the expected value, and
the far-away darts represent uncertainty. Gaussian noise is like that - most measurements
are close to the expected value, but some are far away. This forms a normal distribution
curve, with the peak of the curve at the expected value.

3. Stochasticity – a common assumption in robotics, because it is a simple way to model the uncertainty in measurements. The noise is random, with no correlation between different measurements; each measurement is completely independent of the others. Stochasticity is also known as white noise, because it is random and unpredictable. The motors and sensors are not repeatable, but their readings or actions are drawn from a probability distribution that is related to the commanded action or the ground truth of the sensor target. Thanks to the law of large numbers, many stochastic sensor readings can be used to estimate the ground truth.

Take, for instance, driving on a bumpy road. The bumps on the road are like white noise: each bump is completely random and independent of the others. Each bump might affect your car in a different way, but there's no pattern to the bumps. In robotics, white noise is like those bumps: it affects each measurement independently.

Noise that is zero-mean, Gaussian, and white is collectively referred to as a Gaussian noise process. This is a mathematical model used to represent uncertainty in measurements, and it is useful because it allows us to analyze and predict the uncertainty in our measurements.
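
To see how these assumptions pay off, the sketch below simulates a sensor corrupted by zero-mean Gaussian noise and shows that averaging many readings approaches the ground truth, as the law of large numbers suggests. The noise level and sample counts are arbitrary choices for the illustration.

import random

random.seed(0)
GROUND_TRUTH = 10.0   # true distance, metres
NOISE_STD = 0.5       # standard deviation of the zero-mean Gaussian noise (arbitrary)

def noisy_reading() -> float:
    # Each reading = ground truth + independent zero-mean Gaussian noise.
    return GROUND_TRUTH + random.gauss(0.0, NOISE_STD)

for n in (1, 10, 1000):
    samples = [noisy_reading() for _ in range(n)]
    estimate = sum(samples) / n
    print(n, round(estimate, 3))   # the average drifts toward 10.0 as n grows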

Uncertainty in sensing and actuation can be thought of as nature’s actions in response to the robot’s
commands. Sometimes nature cooperates to a larger or smaller extent. In the worst case
(nondeterminism), no amount of sensor readings is sufficient to reconstruct the ground truth of the
target.
