Developing and Optimizing Strategies for Robotic Peg-In-Hole Insertion with a Force/Torque Sensor

Kamal Sharma

MASTER OF TECHNOLOGY
of
Homi Bhabha National Institute

August, 2011
Recommendations of the Thesis Examining Committee
As members of the thesis examining committee, we recommend that the dissertation prepared by
Kamal Sharma, entitled “Developing and Optimizing Strategies for Robotic Peg-In-Hole Insertion
with a Force/Torque Sensor”, be accepted as fulfilling the dissertation requirement for the
Degree of Master of Technology.
Co-guide -
Signature:
Date:
ACKNOWLEDGEMENTS
I express my sincere thanks and gratitude to Dr. Prabir K. Pal, who is a marvelous mentor and guide.
He provided a very caring and innovative environment for me. Best of all, he is never satisfied
until a task is made flawless, and that encouraged me to develop very robust and accurate
applications.
Mrs. Varsha T. Shirwalkar proved to be a great technology advisor to me. She taught me to work
through and face all sorts of challenges that came up during my project. In particular, she gave me
a broad knowledge of force control in a very intuitive and easy-to-understand manner.
I am thankful to Dr. Venkatesh and Dr. Dwarakanath, who gave me in-depth knowledge of the
working of the Force/Torque sensor.
I am thankful to Mr. A. P. Das, who helped me with mechanical engineering and control-related
issues.
I thank Mr. Abhishek Jaju (an expert on the KUKA robot), who gave me a thorough understanding
of robotics and especially of the working of the KUKA robot.
I also pay my gratitude to Mr. Shobhraj Singh Chauhan and Mrs. Namita Singh, who provided a
very spiritual, motivating and healthy environment that led to a fast and smooth project
completion.
Special thanks to Mr. Shishir Kr. Singh, who constantly critiqued my work and provided the
continual assessment that helped me make things perfect.
At last, I want to take this golden opportunity to express my gratitude to Mr. Manjit Singh,
Head, DRHR, for teaching me to apply research to application-oriented, real-world problems, and
also for providing all the facilities and necessary infrastructure that were a prerequisite for the
initiation and completion of the project.
Kamal Sharma
DEDICATED to
Maa, Papa, Mintu
&
Bajrang Bali Ji
Contents
List Of Figures
List Of Tables
Abbreviations
1. Introduction
1.1 Problem Statement and Objective
1.1.1 Problem Statement
1.1.2 Objective
2. Industrial Robot
2.1 Industrial robots
2.1.1 History
2.2 KUKA KR-6 Robot
2.2.1 Robot controller
2.2.2 3-COM Card
2.3 Robot Control
2.3.1 Conventional control
2.3.2 Programming of KUKA robots (KRL)
2.3.3 Real-time control (KUKA.Ethernet RSI XML)
3. The Force/Torque Sensor
3.1 A General 6-Axis Force/Torque Sensor
3.1.1 Stress and Strain
3.1.2 Working Of A Strain-Gauge
3.1.3 Principle of Strain Measurement
3.2 ATI 660-60 F/T Sensor
4. Hybrid Position/Force Control
4.1 The Theory
5. Preliminary Experiments using Force Sensing and Control
5.1 Recalibrating the Force/Torque Sensor
5.2 Kuka Surface Scanner (KSS)
5.3 Kuka Stiffness Finder (KSF)
5.4 Lead Through Programming (LTP) Through Force Control
5.5 A Continuous Tracing Algorithm (TUSS)
5.5.1 Introduction
5.5.2 Human Approach
5.5.3 TUSS
5.5.4 Analysing Current Algorithm under various cases
5.5.5 Improved TUSS to avoid the problem of Deadlock Cycles
5.5.6 Experimental Results
6. Peg In The Hole
6.1 Hole Search
6.1.1 Blind Search Strategies
6.1.1.1 Grid Search
6.1.1.2 Spiral Search
6.1.2 Intelligent Search Strategies
6.1.2.1 Neural Peg In The Hole
6.1.2.1.1 Neural Networks
6.1.2.1.2 Mathematical Model for Parallel Peg
6.1.2.1.3 Simulation Results
6.1.2.1.4 Tilted Peg Case
6.1.2.2 Precession Strategy
6.1.2.2.1 Need for Precession
6.1.2.2.2 The Precession Strategy
6.2 Peg Insertion
6.2.1 Gravity Compensation
6.2.2 Large Directional Error (LDE) Removal
6.2.2.1 LDE Removal Using Information From Hole Search
6.2.2.2 LDE Removal Using Moment Information
6.2.2.3 Stopping Criterion For LDE Removal
6.2.3 Small Directional Error (SDE) Removal
7. Optimization
7.1 Optimizing The Search
7.2 Optimizing The LDE Removal
7.3 Optimizing The SDE Removal
8. Results & Conclusions
8.1 Results For Hole Search
8.2 Results For Insertion
List Of Figures
Figure 1.1: KUKA teach-pendant
Figure 2.1: KUKA KR-6 Industrial Robot
Figure 2.2: KUKA KR-6 Dimensions
Figure 2.3: KUKA KR-6 specifications
Figure 2.4: KUKA KR C2 controller
Figure 2.5: Internals of the KUKA KR C2 controller
Figure 2.6: 3-Com LAN card
Figure 2.7: Example of short robot program syntax
Figure 2.8: An example of KRL-code
Figure 2.9: Functional principle of data exchange
Figure 2.10: Data exchange sequences
Figure 2.11: The different coordinate systems
Figure 2.12: The structure of the configuration file
Figure 2.13: CONFIG block of the configuration file
Figure 2.14: SEND block of the configuration file
Figure 2.15: RECEIVE block of the configuration file
Figure 3.1: Effects of a tensile and compressive force
Figure 3.2: Mechanical Properties of some materials
Figure 3.3: Converting measured Strain into Voltage value
Figure 3.4: Converting measured Strain into Voltage value
Figure 3.5: ATI 660-60 Delta Sensor
Figure 4.1: Example of a Force-Controlled task: turning a screwdriver
Figure 4.2: Schematic Of A Hybrid Controller
Figure 5.1: GUI for KSS
Figure 5.2: Kuka Scanning a surface using KSS
Figure 5.3: Schematic for working of KSS
Figure 5.4: Profile generated by KSS for the surface shown in Figure 5.1
Figure 5.5: Defining Stiffness Of A Material
Figure 5.6: KSF working over a mouse pad
Figure 5.7: GUI for KSF
Figure 5.8: Payload Assist
Figure 5.9: A human finger tracing a surface from right to left while maintaining the contact
Figure 5.10: Black Box depicting the human approach
Figure 5.11: Robot with a probe, moving over a surface while maintaining the contact
Figure 5.12: Black-Box depicting the TUSS Algorithm
Figure 5.13: Probe moving on a fully horizontal flat surface
Figure 5.14: State Diagram for a fully horizontal flat surface
Figure 5.15: Probe moving on a fully vertical flat surface
Figure 5.16: State Diagram for a fully vertical flat surface
Figure 5.17: Probe moving on a slant surface of Type 1
Figure 5.18: State Diagram for a slant surface of Type 1
Figure 5.19: Probe moving on a slant surface of Type 2
Figure 5.20: State Diagram for a slant surface of Type 2
Figure 5.21: Binary equations for Improved TUSS
Figure 5.22: Block Diagram for the Improved TUSS Algorithm without Deadlock Cycles
Figure 5.23(a): Experimental Setup to test Improved PVK Algorithm
Figure 5.23(b): GUI To Collect Data From The Improved PVK Algorithm
Figure 5.24: Comparing the actual contour with the path followed by Improved PVK Algorithm
Figure 5.25: Comparing the actual contour with the path followed when the point of contact remains the same
Figure 5.26: Force Profile in X-Direction
Figure 5.27: Force Profile in Y-Direction
Figure 5.28: Slope Of Tangents (In Degrees) derived from the force values
Figure 5.29: Smoothened Slope Of Tangents (In Degrees) derived from the force values
Figure 5.30: Comparing Geometrically found Slope with that found from the Forces
Figure 5.31: Reconstructing the contour using force values (i.e. the slope obtained from force values)
Figure 6.1: Hole Search
Figure 6.2: Depiction of Large Directional Error
Figure 6.3: Depiction of Small Directional Error
Figure 6.4: Search Points for Grid Search
Figure 6.5: Grid Search with clearance = 0.5 mm
Figure 6.6: Spiral Search criterion for constant speed search
Figure 6.7: Spiral Search with clearance = 0.5 mm
Figure 6.8: Basic structure of neurocontroller
Figure 6.9: Structure of a neuron
Figure 6.10: 3-layer Backpropagation Neural Network
Figure 6.11: The Parallel Peg case
Figure 6.12: Case when the center of peg lies outside the chord of contact
Figure 6.13: Case when the center of peg lies inside the chord of contact
Figure 6.14: Computing the moments
Figure 6.15: Moments resulting from the Mathematical Model For The Parallel Peg Case
Figure 6.16: Neural Network Training
Figure 6.17: Simulation results for Parallel Peg Case. a) In 2-D. b) In 3-D
Figure 6.18: Tilted Peg Case
Figure 6.19: Contact States for Tilted Peg Case
Figure 6.20: The precession strategy: (a) The peg is initially tilted by Theta_tilt
Figure 6.21: Peg height w.r.t. distance from hole’s center
Figure 6.22: Need For Gravity Compensation
Figure 6.23: Large Directional Error Case
Figure 6.24: Large Directional Error Removal Using Information From Search
Figure 6.25: Large Directional Error Removal Using Moment Information
Figure 6.26: Direction Of Moment Sensed and The Direction in Which The Peg needs to be moved
Figure 6.27: Removing Small Directional Error
Figure 6.28: a) Rotate until first wall hits. b) Rotate until second wall hits. c) Insert at the mid-point
Figure 6.29: Calculating BackJump
Figure 6.30: Calculating The Upper Limit On BackJump
Figure 6.31: Insertion with Time using SDE Removal
Figure 6.32: Insertion and Upward Force Due to Jamming
Figure 7.1: a) The plot of Actual vs. Predicted Search Time. b) The Prediction Profiler
Figure 7.2: a) The plot of Actual vs. Predicted Search Time. b) The Prediction Profiler
List Of Tables
Table 3.1: Specifications For ATI 660-60 Delta Sensor (Last Row)
Table 5.1: Calibration Matrix For Working Sensor
Table 5.2: Calibration Matrix For Damaged Sensor
Table 5.3: TUSS Algorithm depicted in the Truth Table
Table 7.1: DOE for Precession Search
Table 7.2: DOE for LDE Removal
Table 7.3: Experimental Runs for SDE Removal
Abbreviations
1. Introduction
Robotic operations are usually of the pick-and-place type, often in fields that are remote or
hazardous for humans to be present in. For such tasks, robots can be programmed by hard-coding
the positions or points that the robot must reach to accomplish a particular task.
But there are many situations where the environment is not well structured [1], the interactions
are not predictable, or the tolerances are very small and the required accuracy is much higher.
A good example of such a task is mechanical assembly, where tolerances are very small. Suppose
two parts need to be mated and the clearance between them is less than a millimetre. The exact
location of the female part can be specified through positional measurement or through computer
vision. But if the accuracy of position measurement is not adequate for mating such low-clearance
parts, the robot may push the male part into the female part inaccurately, which may generate
large forces that can harm the parts or the robot itself. Sometimes camera vision is also not
possible because the view is obstructed [2]. We are then left with only an approximate idea of the
position of one part, and the assembly has to be done with that much knowledge.
This is where force control comes into play. The parts have to be mated using feedback from the
forces and torques generated by the interaction of the mating parts. There are two methods of
employing force information (force here means both forces and torques) for an assembly task.
One is to use passive sensing. There are passive devices such as the Remote Centre Compliance
(RCC) device for mechanical assembly, but their limitation is that they are customized for a
specific assembly and are not general-purpose devices. They are also not flexible enough to be
adapted to any environment [3].
The other method uses active sensing: an intelligent controller actively takes force feedback and
decides its next command to the robot for the assembly to take place. Hybrid Position/Force
Control theory [4] provides the method for building such a controller.
1.1 Problem Statement and Objective
1.1.1 Problem Statement
Peg-in-hole insertion is the essential first step in the validation of algorithms for robotic
assembly. In our work, we have to assemble a cylindrical peg into a cylindrical hole with a
clearance of 0.5 mm, using active sensing of force signals. For that purpose we have developed
algorithms for accurately positioning the peg inside the hole as well as properly orienting it
for insertion without any jamming.
After the successful testing of our algorithms, we have optimized them for a particular
assembly operation.
1.1.2 Objective
The objective of the project is to develop a robust strategy for robotic Peg-In-Hole insertion
using a 6-axis Force/Torque sensor that provides force feedback along the X, Y and Z directions
as well as torque feedback about the same axes. The clearance (i.e. the difference between the
peg and the hole diameters) is kept at 0.5 mm.
This assembly has to be done with a 6-DOF KUKA robot. For this purpose, a real-time control
program needs to be written that can communicate with the robot and command it to move in
Cartesian space so as to accomplish the assembly.
Real-time manual control of industrial robots is today usually done with a so-called teach
pendant (see Figure 1.1), allowing single motions along individual directions or axes, or
multiple-direction movements.
2. Industrial Robot
This chapter describes industrial robots, first in general and then in more detail for the
specific robot used in this project.
2.1 Industrial robots
2.1.1 History
The first real robot patents were applied for by George Devol in 1954 [9]. Two years later, the
first robot was produced by a company called Unimation, founded by George Devol and Joseph F.
Engelberger. In these first machines the angles of each joint were stored during a teaching
phase and could later be replayed. The accuracy was very high: 1/10,000 of an inch. Worth
noticing is that the most important measure in robotics is not accuracy, but repeatability. This
is because robots are normally taught to do one task over and over again. The first 6-axis
robot, named "the Stanford arm", was invented in 1969 by Victor Scheinman at Stanford University
and could reproduce the motions of a human arm [9]. The Stanford arm made it possible to follow
arbitrary paths in space and permitted the robot to perform more complicated tasks, such as
welding and assembly. In Europe, industrial robots took off around 1973 when both KUKA and ABB
entered the market and launched their first models. Both came with six electro-mechanically
driven axes. In the late 1970s the consumer market started showing interest and several American
companies entered the market. The real robotics boom came in 1984 when Unimation was sold to
Westinghouse Electric Corporation. Few non-Japanese companies managed to survive, and today the
major companies are Adept Technology, Staubli-Unimation, the Swedish-Swiss company ABB (Asea
Brown Boveri) and the German company KUKA Robotics.
The programming of an industrial robot can be divided into two categories: online and offline
programming. Offline programming means that the robot code is created offline with no
connection to the physical robot. The generated program is later transferred to the robot and
executed in the real environment. Normally a verification of the code is made in some sort of
simulation software before transferring the program. Online programming includes
programming when the software is directly connected to the physical robot. This can be done
with a teach pendant used for moving the robot through certain positions which are stored,
and then a trajectory is created between the stored points. This process of adjusting a position
in space is commonly referred to as "jogging" or "inching" the robot.
Most industrial robots have a similar programming structure, telling them how to act. One
defines points in space and then specifies how the robot is supposed to reach these points; the
points are normally called P1, P2, P3, etc. An example of program syntax is shown below in
Figure 2.7.
Many industrial robot manufacturers offer a simulation software package for their robots
which eases the programming and at the same time makes it possible to perform offline
programming.
The benefits of offline programming are many: it prevents costly damage that could occur in the
real world; it can save money, since production need not be interrupted while programming; and
it speeds up, for example, the ramp-up time when switching production to a new model.
The declaration and initialization parts can also be located in the .DAT file, but are shown
here in the same program to explain the different steps more clearly. The PTP HOME in the main
section makes the robot perform a point-to-point motion to the defined home position, which is
defined in the initialization part. The first instruction to the robot is normally a BCO run
(Block Coincidence), which makes the robot go to a predefined point in space.
This is done in order to set the correspondence between the current real robot position and
the programmed position. The next line makes the robot go to the specified point, which in this
case is defined directly in the same row as a point in the base coordinate system specified by
six values. The six values are X, Y, Z, which define the point in 3D space, and A (rotation
around the Z-axis), B (rotation around the Y-axis) and C (rotation around the X-axis), which
define the tool orientation.
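As an illustration, a minimal KRL program following the structure described above might look
like the sketch below. The program name and the point values are purely illustrative, not taken
from the actual project code:

```
DEF peg_demo( )
   ; Declaration (could also be located in the .DAT file)
   DECL POS target

   ; Initialization
   target = {X 350, Y 0, Z 450, A 0, B 90, C 0}

   ; BCO run: point-to-point motion to the predefined HOME position
   PTP HOME

   ; Move to the point given by six values in the base coordinate system:
   ; X, Y, Z define the position; A, B, C define the tool orientation
   PTP target

   PTP HOME
END
```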
The KUKA.Ethernet RSI XML software package is required to control the robot from an external
computer in real time. The term external system is often mentioned and refers to the
computer(s) that are connected to the robot controller (in this case the computer that holds
the server that the robot’s controller is communicating with). The KUKA.Ethernet RSI XML is an
add‐on technology package with the following functions [12]:
- Cyclical data transmission from the robot controller to an external system in the interpolation cycle of 12 ms (e.g. position data, axis angles, operating mode, etc.)
- Cyclical data transmission from an external system to the robot controller in the interpolation cycle of 12 ms (e.g. sensor data)
- Influencing the robot in the interpolation cycle of 12 ms
- Direct intervention in the path planning of the robot
The KUKA.Ethernet RSI XML enables the robot controller to communicate with the external
system via a real‐time‐capable point‐to‐point network link. The exchanged data is transmitted
via the Ethernet TCP/IP or UDP/IP protocol as XML strings. For this thesis, TCP/IP was used since it provides more reliable communication, with acknowledgements sent from the destination back to the host, which UDP/IP lacks.
Programming of the KUKA Ethernet RSI XML package is based on creating and linking RSI
objects. RSI objects are small pieces of pre-programmed code that can be executed and that provide functionality beyond normal KRL code. To be able to communicate externally through Ethernet, a specific standard object (ST_ETHERNET) needs to be created.
The code line for creating, for example, the ST_ETHERNET object is typically:
err = ST_ETHERNET(A, B, config_file.xml)
where
err = a variable of the RSI XML type RSIERR, containing the error code produced when creating the object (normally #RSIOK when it works),
A = an integer that receives the ID of the created RSI object, so that it is possible to locate and refer to it later,
B = an integer value for the container to which you want the RSI object to belong.
ST_ETHERNET is the object that can be influenced by external signals and that sends data back
to the external system in the form of XML strings containing different tags with data. The data
can be, for example, the robot's actual axis positions, its actual Cartesian position, etc.
This data shall be sent to the server and back within each interpolation cycle of 12 ms. In this
project, the communication object ST_ETHERNET was used. When one of these objects is
created and linked correctly, the robot controller connects to the external system as a client.
There are many different types of RSI‐objects and depending on what you want to do, you have
to create and link the correct objects to each other. Besides the standard Ethernet card, an
additional card (3-COM) was needed, to be able to handle the speed of the transferred data
[10].
The robot controller initiates the cyclical data exchange with a KRC data packet and transfers
further KRC data packets to the external system in the interpolation cycle of 12 ms. This
communication cycle is called an IPO cycle (Input Process Output) and can be seen in Figure 2.9
above. The external system must respond to the KRC data packets with a data packet of its
own.
To be able to influence the robot, one needs to initiate an RSI‐object for the movements. There
are mainly two objects used for this. First an object called ST_AXISCORR(A,B) for specific
movements in axis A1 to A6, where A is the specific ID of the created object, and B is the
container that the object belongs to.
The second object is called ST_PATHCORR(A,B) for movements in Cartesian coordinates, where
A, B are the same as for the ST_AXISCORR object. A coordinate system is also needed (normally
BASE, TCP, WORLD) as a reference for the movements. This is done by creating an RSI object
called ST_ON1(A,B), where parameter A is a string containing the coordinate system to be
used (expressed as #BASE, #TCP or #WORLD) and B is an integer value: 0 if the correction
values sent to the robot are absolute, or 1 if they are relative. A schematic
picture of the data exchange sequence with the different RSI‐objects is shown in Figure 2.10
above.
When the robot is delivered from the factory, the BASE coordinate system is the same as the
WORLD and both are located in the base of the robot by default. BASE is normally moved to the
base of the work piece on which the robot is working. The differences between the different
coordinate systems can be seen in Figure 2.11 below.
NOTE: For the robot used in this thesis, all three systems; WORLD CS, ROBOT CS and BASE CS
have the same origin located on the base of the robot.
When the robot controller communicates with the external system it interchanges XML Strings.
The content in the XML strings for the demo program provided by KUKA, is decided and defined
in a file called ERX_config.xml (the configuration file), which is located in the robot controller,
inside the INIT folder. A typical structure of this file is shown in Figure 2.12 below.
As can be seen in Figure 2.13 above, the IP address and port to which the robot controller will
connect, when establishing the connection, are set in this file.
The tag <RECEIVE> describes what the robot controller expects to receive from the external
system (see Figure 2.15). In this case, corrections in six values (X, Y, Z, A, B and C) are
included, tagged as RKorr.
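As a sketch of how an external system might answer such packets, the snippet below parses the IPOC timestamp out of an incoming XML string and builds a reply carrying RKorr corrections. The tag names (<Rob>, <IPOC>, <Sen>) follow KUKA's demo configuration as described here; a real ERX_config.xml may define different tags, so treat them as assumptions.

```python
import xml.etree.ElementTree as ET

def build_reply(krc_packet, corrections=None):
    """Answer one KRC data packet: echo the IPOC timestamp and attach
    RKorr correction values (X, Y, Z, A, B, C) as attributes."""
    ipoc = ET.fromstring(krc_packet).findtext("IPOC")
    sen = ET.Element("Sen", Type="ImFree")
    rkorr = ET.SubElement(sen, "RKorr")
    for axis, value in (corrections or {}).items():
        rkorr.set(axis, "%.4f" % value)
    ET.SubElement(sen, "IPOC").text = ipoc  # the reply must echo the timestamp
    return ET.tostring(sen, encoding="unicode")
```

In a real setup this function would sit inside a TCP server loop that must produce its reply within each 12 ms IPO cycle.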
3. The Force/Torque Sensor
This section describes the working of a general 6-axis Force/Torque Sensor and then provides
details of the sensor used for our work.
ε = ΔL/L
ε: Strain
L: Original length
ΔL: Elongation
For a compressive force, the strain is negative: ε = –ΔL/L
For example, if a tensile force makes a 100 mm long material elongate by 0.01 mm, the strain
initiated in the material is as follows:
ε = ΔL/L = 0.01/100 = 0.0001 = 100 × 10⁻⁶
Thus, strain is a dimensionless quantity and is expressed as a numeric value ×10⁻⁶ strain, με
or μm/m. The relation between stress and the strain initiated in a material by an applied force
is expressed as follows based on Hooke's law:
σ = Eε
σ: Stress
E: Elastic modulus
ε: Strain
Stress is thus obtained by multiplying strain by the elastic modulus. When a material receives a
tensile force, it elongates in the axial direction while contracting in the transverse
direction. Elongation in the axial direction is called longitudinal strain and contraction in the
transverse direction, transverse strain. The absolute value of the ratio between the longitudinal
strain and transverse strain is called Poisson's ratio, which is expressed as follows:
ν = |ε₂/ε₁|
ν: Poisson's ratio
ε₁: Longitudinal strain
ε₂: Transverse strain
Poisson's ratio differs depending on the material. For reference, major industrial materials have
the following mechanical properties including Poisson's ratio (see Figure 3.2).
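The worked example above can be checked numerically. The material constants below (E = 206 GPa, ν = 0.3, typical textbook values for steel) are illustrative assumptions, not values taken from this thesis.

```python
E = 206e9      # elastic modulus of steel in Pa (assumed typical value)
nu = 0.3       # Poisson's ratio of steel (assumed typical value)

L = 100e-3     # original length: 100 mm
dL = 0.01e-3   # elongation: 0.01 mm

strain = dL / L                   # epsilon = dL/L = 100e-6
stress = E * strain               # Hooke's law: sigma = E * epsilon
transverse_strain = -nu * strain  # contraction in the transverse direction
```

The computed stress, 20.6 MPa, follows directly from multiplying the 100 με strain by the elastic modulus.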
A strain gage converts strain into a change of its electrical resistance:
ΔR/R = Ks · ε
where Ks is the gage factor, the coefficient expressing strain gage sensitivity. General-purpose
strain gages use copper-nickel or nickel-chrome alloy for the resistive element, and the
gage factor provided by these alloys is approximately 2.
For accurate measurement, the strain gage and adhesive should match the measuring material
and operating conditions including temperature.
If R1 = R2 = R3 = R4 = R and one gage changes by ΔR, the bridge output is:
eo = (R(R + ΔR) – R²)/((2R + ΔR) · 2R) · E = ΔR/(2(2R + ΔR)) · E ≈ (1/4) · (ΔR/R) · E
Thus obtained is an output voltage that is proportional to a change in resistance, i.e. a change in
strain. This microscopic output voltage is amplified for analog recording or digital
indication of the strain.
So, we can see that the output voltages of the sensor are proportional to the forces and
torques being applied to the sensor’s gages.
A Force/Torque sensor, in principle, consists of multiple such gages so as to provide the values
of forces and torques about multiple axes.
Figure 3.4 shows a dismantled Force/Torque Sensor.
Figure 3.4 : Converting measured Strain into Voltage value
Finally, the three components of force and the three components of torque can be calculated
as follows:
(Fx, Fy, Fz, Tx, Ty, Tz)ᵀ = A · (V1, V2, V3, V4, V5, V6)ᵀ
where V1..V6 are the six output voltages and A is a 6×6 matrix, also known as the calibration matrix.
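The matrix relation can be sketched in a few lines of code. The calibration matrix below is a placeholder (the identity), standing in for the manufacturer-supplied 6×6 matrix.

```python
def forces_from_voltages(A, V):
    """Return [Fx, Fy, Fz, Tx, Ty, Tz] = A @ V for a 6-axis F/T sensor,
    where A is the 6x6 calibration matrix and V the six gage voltages."""
    return [sum(A[i][j] * V[j] for j in range(6)) for i in range(6)]

# Placeholder calibration matrix: identity (a real sensor ships its own).
A = [[1.0 if i == j else 0.0 for j in range(6)] for i in range(6)]
V = [0.1, -0.2, 0.3, 0.0, 0.05, -0.05]
wrench = forces_from_voltages(A, V)
```

With the identity placeholder, the computed wrench simply equals the voltage vector; a real calibration matrix mixes all six channels into each force and torque component.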
Table 3.1: Specifications For ATI 660-60 Delta Sensor (Last Row)
4. Hybrid Position/Force Control
This section describes the theory of Hybrid Position/Force Control that is widely used in Force-
Controlled applications with active sensing.
Once the natural constraints are used to partition the degrees of freedom into a position-
controlled subset and a force-controlled subset, and desired position and force control
trajectories are specified through artificial constraints, it remains to control the manipulator.
The controller is shown in Figure 4.2 below.
We assumed that Fz would remain constant in our experiments, which eliminates one variable
(Fz) and the dependency on V1 from the system of equations given above. We thus arrive at a
new calibration matrix, shown in Table 5.2.
Table 5.2: Calibration Matrix For Damaged Sensor
The KSS has to be supplied with two Cartesian points (Start Point and End Point) (see Figure 5.1)
that are the two extremes of a volume enclosing the surface that has to be scanned.
The robot carries a probe and starts from the Start Point, moving down until it experiences a
threshold force value in the upward direction due to contact of the probe with the surface
(see Figure 5.3). The robot then scans the surface discretely in a grid-like fashion up to the
End Point, with the step size (or quantum) being the value specified by the user (see Figure
5.1).
Figure 5.3: Schematic for working of KSS
These discrete points at which the probe stops are stored and plotted so as to generate the
profile of the surface scanned. The profile generated for the surface shown in Figure 5.1 is
shown in Figure 5.4.
Figure 5.4: Profile generated by KSS for the surface shown in Figure 5.1
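The scanning loop described above can be sketched as follows; `touch_z` is a hypothetical stand-in for the robot primitive "move down at (x, y) until the upward force crosses the threshold, then report the contact height".

```python
def scan_surface(touch_z, start, end, step):
    """Discrete grid scan between two corner points: probe every (x, y)
    node and record the contact height, producing the surface profile."""
    (x0, y0), (x1, y1) = start, end
    profile = []
    y = y0
    while y <= y1 + 1e-9:
        x = x0
        while x <= x1 + 1e-9:
            profile.append((x, y, touch_z(x, y)))
            x += step
        y += step
    return profile
```

Plotting the recorded (x, y, z) triples yields a profile like the one in Figure 5.4.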
The KUKA Stiffness Finder first identifies contact with the material surface and then presses
the material with some predefined force. In the process, it records the penetration caused and
calculates the stiffness value as defined in the equation above.
KSF also maintains a log of stiffness values for all the materials that have been tested with it.
It then uses this log to compare a new material placed under it with the materials already in
the log and tries to identify the new material.
We tested KSF on a mouse pad (see Figure 5.6 and Figure 5.7) twice; the second time, it
correctly identified the material as the mouse pad.
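A minimal sketch of the KSF bookkeeping, assuming stiffness is logged as force per unit penetration and that identification simply picks the nearest logged value within a tolerance (the actual matching rule used by KSF is not detailed here).

```python
def stiffness(force_N, penetration_mm):
    """k = F / x: the value KSF computes and logs for each material."""
    return force_N / penetration_mm

def predict_material(log, k, rel_tol=0.2):
    """Return the logged material whose stiffness is closest to k, or
    None if nothing lies within the relative tolerance (assumed rule)."""
    name, k_ref = min(log.items(), key=lambda item: abs(item[1] - k))
    return name if abs(k_ref - k) <= rel_tol * k_ref else None
```

A material whose measured stiffness falls near a logged value is reported as that material; anything far from every log entry is treated as unknown.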
5.5.1 Introduction
Most of the work done in robot force control assumes that a model of the environment is
known a priori (Demey, Bruyninckx and De Schutter [13]; Masoud and Masoud [14]). In general,
however, it is difficult to obtain a correct model of the environment, and sometimes force
control is required on an unknown surface. TUSS does not assume any model of the
environment. We have implemented hybrid Force/Position control as defined by M.H. Raibert
and J.J. Craig [4], keeping the orientation of the robotic tool constant. The contour of the
surface is traced by applying a constant force in the downward x direction while the robot
moves in the y direction. The tracing tool makes a point contact with the surface. Hence, out
of the six Cartesian degrees of freedom, x is force controlled whereas y, z, a, b and c are
position controlled. Many applications require the orientation of the tool to be normal to the
surface of the work piece; this has been addressed by several authors using Velocity/Force
control (Kazerooni and Her [15]; Goddard, Zheng and Hemami [16]). Our work is based on a
position-controlled industrial robot, and orientation control in the normal direction is not
considered here.
Figure 5.9: A human finger tracing a surface from right to left while maintaining the contact
Move down until touch is sensed or an obstacle is encountered in the downward direction. This
indicates that the surface to be traced has been reached.
Move left while feeling the touch in the downward direction, as long as there is no obstacle in
the direction of motion. Move slightly up if there is an obstacle on the surface. The three steps
stated above are executed in parallel.
So, to implement this approach, we have to create a black-box as shown in Figure 5.10:
5.5.3 TUSS
We have to implement the black box stated in Figure 5.10 on the robot side.
For our algorithm, we have used the hybrid control approach in which the force control is done
in x axis and the position control is done in y axis. Suppose, we have to move over a surface as
shown in Figure 5.11:
Figure 5.11: Robot with a probe, moving over a surface while maintaining the contact
We have fixed a 2-D coordinate system to our aluminum probe, i.e. the tool (Figure 5.11).
Some basic elements of the TUSS algorithm are:
1. Inputs:
FX:
To sense the feeling of touch, we monitor the force in the upward direction. FX is a binary
variable that, when set, represents the feeling of touch, i.e. the force in the positive X direction.
So, if the force in the +X direction crosses a threshold value, FX = 1, else FX = 0.
FY:
To identify or sense an obstacle while in motion, we detect the force in the positive Y direction
(recall that we are moving in the negative Y direction). FY is a binary variable that, when set,
represents the feeling of an obstacle, i.e. the force in the positive Y direction. So, if the force in
the +Y direction crosses a threshold value, FY = 1, else FY = 0.
2. Outputs:
UP:
UP is a binary variable that, when set, commands the robot to move in the upward direction i.e.
the positive X direction. When UP is reset, there is no motion of the probe in the positive X
direction.
DOWN:
DOWN is a binary variable that, when set, commands the robot to move in the downward
direction i.e. the negative X direction. When DOWN is reset, there is no motion of the probe in
the negative X direction.
LEFT:
LEFT is a binary variable that, when set, commands the robot to move in the left direction i.e.
the negative Y direction. When LEFT is reset, there is no motion of the probe in the negative Y
direction.
Move down until touch is sensed and there is no obstacle, i.e. DOWN=1 when FX=0 and FY=0,
else DOWN=0.
Move left if touch is felt and there is no obstacle, i.e. LEFT=1 when FX=1 and FY=0, else LEFT=0.
Move up if there is any obstacle, i.e. UP=1 when FY=1, else UP=0. Our algorithm can be
presented using a truth table as shown in Table 5.3:
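The three rules above can be written directly as code; this is a sketch of the truth table (Table 5.3) rather than the actual robot-side implementation.

```python
def tuss_step(fx, fy):
    """One decision of the basic TUSS controller.
    fx = 1 means touch is felt (+X force over threshold),
    fy = 1 means an obstacle is felt (+Y force over threshold).
    Returns the binary commands (up, down, left)."""
    up = 1 if fy == 1 else 0
    down = 1 if (fx == 0 and fy == 0) else 0
    left = 1 if (fx == 1 and fy == 0) else 0
    return up, down, left
```

Note that UP depends only on FY, so an obstacle overrides everything else, exactly as in the rules above.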
Therefore, the downward motion is controlled by monitoring the force in the +X direction, and
the left and upward motions are controlled by monitoring the force in the +Y direction. One
more parameter is the distance moved in the up, down and left directions. The RSI monitors
the forces in each IPOC of 12 ms and sets or resets the values of the UP, DOWN and LEFT binary
variables. How far the probe moves in these directions in each IPOC of 12 ms determines the
speed of motion.
Suppose we give constant values for the distance moved in each direction. This leads to motion
in contact at constant speed; the speed in the three directions may differ. In our experiment,
we have not kept the motions in each IPOC constant. Instead, we have made the distance
moved in the up and down directions proportional to the corresponding forces, with
proportionality constants KUP and KDOWN.
(Note: The motion in the -Y direction is kept a positive constant, KLEFT, so as to produce a net
motion from right to left while the forces in X and Y are maintained by moving up and down.)
This leads to constant speed motion until touch or an obstacle is sensed, after which the
motion is like that of a spring.
Case 1: Flat Surface Fully Horizontal. This case is shown in Figure 5.13:
In this case, the probe moves on the surface smoothly while maintaining a continuous contact.
Case 2: Flat Surface Fully Vertical. This case is shown in Figure 5.15:
In this case, the probe moves on the surface smoothly while maintaining a continuous contact.
From the directions of the arrows in Figure 5.18, we can see that the probe is not in continuous
contact with the surface; it tries to maintain contact in steps (after each IPOC). Even a human
finger loses contact in such a case and, as soon as it detects this (after the brain's reaction
time), tries to make contact again. Here, the IPOC time of the RSI plays the role of the brain's
reaction time (more precisely, the fingertip reaction time [17]).
Since the IPOC time is very short (12 ms), this approximation of continuous contact motion is
good enough for many applications.
This case is the most interesting one. Although the probe will maintain continuous contact,
there is a problem of deadlock cycles. Here, states A and B can lead to a deadlock cycle
ABABAB..., causing repeated up and down motions at the same point and hence no motion
from right to left. So, reaching either of states A and B may lead to a deadlock that lasts
indefinitely.
The state-space solution we devised is a recurrent Markovian system (or Markov chain) [18]
[19] due to the following characteristics:
1. Since the model of the environment is unknown a priori, the occurrence of the next state is
stochastic and depends only on the current state, not on the history of earlier states.
2. Since the robot has a repeatability of 0.1 mm, even the cyclic motion UP-DOWN-UP-DOWN
may or may not lead to graph cycles [20][21].
Due to the unknown surface environment and the repeatability error, the next state is
independent of the past states, which satisfies the memoryless property.
Explanation of Deadlock Cycle
In a slant as shown in Figure 5.19, the probe will experience forces in both the +X as well as +Y
directions.
Now, the probe in state A will go up and reach state B. From B, it will move down and reach
state A. There is a probabilistic chance that state A transits to state D, or state B to state C
(depending upon the threshold forces and the constants KDOWN, KUP and KLEFT), which can lead to
motion; otherwise, the probe gets stuck in the deadlock cycle ABABAB..., performing up and
down motions at the same point.
To break such deadlock cycles, we introduce two flags:
LastWasUp and
LastWasDown.
LastWasUp=1 denotes that the last motion command was an Up motion command. LastWasDown=1
denotes that the last motion command was a Down motion command. So, whenever we get an Up
motion command, we should check the flag LastWasDown. If it is reset, then we can perform the Up
motion (also setting the LastWasUpFlag and resetting the LastWasDown flag), otherwise we will perform
a Left motion and reset both the flags.
Similarly, when we receive a Down motion command, we should check the flag LastWasUp. If it is reset,
then we can perform the Down motion (also setting the LastWasDown flag and resetting the LastWasUp
flag), otherwise we will perform a Left motion and reset both the flags.
This can be stated through the binary equations shown in Figure 5.21:
Figure 5.21: Binary equations for Improved TUSS
Therefore, finally our system becomes a Mealy Machine [22] whose output depends upon the
current state as well as the inputs (LastWasUp, LastWasDown) and is shown in Figure 5.22:
Figure 5.22: Block Diagram for the Improved TUSS Algorithm without Deadlock Cycles
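A sketch of the improved algorithm as a small state machine. The flag handling follows the description above; the actual robot-side code differs, so treat this as an illustration.

```python
class ImprovedTuss:
    """Mealy-machine TUSS: an Up issued right after a Down (or a Down
    right after an Up) is replaced by a Left step, breaking ABAB...
    deadlock cycles while still producing net right-to-left motion."""

    def __init__(self):
        self.last_was_up = False
        self.last_was_down = False

    def step(self, fx, fy):
        if fy == 1:                           # obstacle -> Up requested
            if self.last_was_down:            # would start a deadlock cycle
                self.last_was_up = self.last_was_down = False
                return "LEFT"
            self.last_was_up, self.last_was_down = True, False
            return "UP"
        if fx == 0:                           # no touch -> Down requested
            if self.last_was_up:
                self.last_was_up = self.last_was_down = False
                return "LEFT"
            self.last_was_up, self.last_was_down = False, True
            return "DOWN"
        self.last_was_up = self.last_was_down = False
        return "LEFT"                         # touching, no obstacle
```

The output depends on both the inputs and the stored flags, which is exactly what makes the system a Mealy machine.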
Figure 5.23(b): GUI To Collect Data From The Improved PVK Algorithm
The probe is a chamfered one with radius = 5.00 mm; the direction of motion is from right to
left; FXThreshold = 4.0 N, FYThreshold = 4.0 N, KDOWN = 0.01 mm/N, KUP = 0.005 mm/N and KLEFT =
0.1 mm. Figure 5.24 compares the actual contour with the path followed by the Improved TUSS
algorithm:
Figure 5.24: Comparing the actual contour with the path followed by Improved PVK Algorithm
Since we have taken a chamfered probe, the point of contact changes while tracing, so there is
an initial mismatch of 5.00 mm (the radius of the probe) between the actual contour and the
path traced.
Figure 5.25 shows the path traced when the point of contact almost remains the same:
Figure 5.25 – Comparing the actual contour with the path followed when the point of contact remains
same
Part 1 of Figure 5.25 can be compared with Figure 5.17 and Part 2 of Figure 5.25 can be
compared with Figure 5.19.
Figure 5.26 & Figure 5.27 show the force profiles:
The average value of FY = 5.9 N ( for positive values of FY where Force Control was done ) and
FYThreshold = 4.0 N.
Figure 5.28: Slope Of Tangents (In Degrees) derived from the force values
It can easily be seen that the slope varies continuously from −90° to +90°, as expected (from
left to right). The slope was smoothed using locally weighted scatter plot smoothing (LOWESS),
then plotted and compared with the slope found geometrically in Figure 5.29 and Figure 5.30
respectively:
Figure 5.29: Smoothed Slope Of Tangents (In Degrees) derived from the force values
Figure 5.30: Comparing Geometrically found Slope with that found from the Forces
Using the slope found from the force values, we tried to reconstruct the contour as shown in
Figure 5.31:
Figure 5.31: Reconstructing the contour using force values ( i.e. the slope obtained from force values)
So, we see from Figure 5.31 that a good inference of a contour can be made from the force
values obtained while tracing the contour using The Improved TUSS Algorithm.
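The reconstruction step can be sketched as a simple integration of the slope values: starting from a known point, each sample advances a fixed horizontal step and rises by the tangent of the local slope. The step size and units here are illustrative.

```python
import math

def reconstruct_contour(slopes_deg, dx=0.1, x0=0.0, z0=0.0):
    """Rebuild a contour from tangent slopes (in degrees): each step
    moves dx horizontally and dx * tan(slope) vertically."""
    xs, zs = [x0], [z0]
    for s in slopes_deg:
        xs.append(xs[-1] + dx)
        zs.append(zs[-1] + dx * math.tan(math.radians(s)))
    return xs, zs
```

Feeding in the smoothed slopes of Figure 5.29 produces a curve of the kind shown in Figure 5.31.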
6. Peg In The Hole
This section describes the development of algorithms to achieve a Peg In The Hole assembly.
The Peg-In-Hole problem is the benchmark problem for robotic assembly: given the nominal
position and orientation of the hole, we have to use the signals from the F/T sensor to position
and align the peg for insertion into the hole.
In general, there is a three step solution to the problem:
1. Initially, the peg can be guided through either position control or vision control to
approximately reach and hit the hole. Then the search for the hole's center begins, so as
to bring the center of the peg within the clearance area around the centre of the hole
[2]. This removes the positional error between the peg and the hole (see Figure 6.1).
2. Now, since the peg is sufficiently close to the centre of the hole, the directional or
orientational error between the peg and the hole has to be removed so that the peg
inserts easily into the hole. The first case involves the removal of a Large Directional Error,
where the peg may still be outside the clearance region of the hole [5] (see Figure 6.2).
Figure 6.2: Depiction of Large Directional Error
3. Then comes the removal of a Small Directional Error, where the peg is accurately within
the clearance range of the hole and only a directional correction of the peg's orientation
is needed for a smooth insertion without jamming [5] (see Figure 6.3).
Figure 6.3: Depiction of Small Directional Error
For the search to be exhaustive, and to ensure that the peg does not miss the hole, the spacing between
the search points should not be greater than √2·c [2], where c is the clearance between the peg and
the hole, defined as:
c = (D − d)/2, where
D = Hole Diameter
d = Peg Diameter
This kind of search can be done using 1-Dimensional Force Feedback (assuming that the hole
surface is plane). The Peg needs to be moved down until it touches the hole surface and then
either a discrete or a continuous trace path can be followed until the peg descends into the
hole.
This search can be used for both the cases:
a) When the peg is parallel to the hole surface: The peg will descend into the hole when it
comes within the clearance range of the hole.
b) When the peg is tilted: The peg may descend into the hole even before it has reached
the center of the hole; in such a case, goal achievement is identified where the peg
descends the most. Note that a tilted peg may hit the hole walls, so a two-
dimensional force-feedback continuous tracing algorithm like TUSS is required for this
search in the tilted-peg case.
We implemented a Grid search with a clearance of 0.5 mm. The path traced is shown in Figure
6.5.
Figure 6.5: Grid Search with clearance=0.5 mm.
The dip at the center denotes that the peg has reached the center of the hole.
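A sketch of how the grid of search points can be generated; the spacing is set to √2·c, the exhaustiveness bound from [2] (in practice, like the 0.5 mm grid above, a smaller spacing gives margin).

```python
import math

def grid_points(center, half_width, clearance):
    """Candidate probe positions on a square grid around the nominal
    hole position, spaced sqrt(2) * clearance apart."""
    spacing = math.sqrt(2.0) * clearance
    cx, cy = center
    n = int(half_width // spacing)
    return [(cx + i * spacing, cy + j * spacing)
            for j in range(-n, n + 1) for i in range(-n, n + 1)]
```

At each returned point the peg is lowered and the descent is monitored, exactly as in the experiment above.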
For our work, we chose the Archimedean spiral. In polar coordinates (r, θ) it can be described
by the equation
r = a + b·θ
with real numbers a and b. Changing the parameter a turns the spiral, while b controls the
distance between successive turnings.
The pitch of such a spiral is defined as P = 2π·b and refers to the spacing between the turns of
the spiral. For the spiral search to be exhaustive, the criterion is that the pitch should be less
than or equal to the assembly clearance c [2].
Figure 6.6 shows the rate at which the spiral needs to be progressed so as to move at a
constant path speed. Also pitch P is equal to the clearance so as to make the search exhaustive.
Figure 6.6: Spiral Search criterion for constant speed search
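The spiral path generation can be sketched as below: with a = 0 and pitch 2πb set equal to the clearance, the angle is advanced so that successive waypoints are roughly a constant path length apart (the sampling interval mirrors the 12 ms IPO cycle; all numerical values are illustrative).

```python
import math

def spiral_points(clearance, r_max, speed, dt=0.012):
    """Waypoints along the Archimedean spiral r = b * theta with
    pitch 2*pi*b = clearance, sampled at a constant path speed."""
    b = clearance / (2.0 * math.pi)
    theta, pts = 0.0, [(0.0, 0.0)]
    while b * theta < r_max:
        # path length grows as ds/dtheta = sqrt(b^2 + r^2), so divide
        # the per-cycle increment speed*dt by that to get dtheta
        r = b * theta
        theta += speed * dt / math.sqrt(b * b + r * r)
        r = b * theta
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

Because the pitch equals the clearance, no position within the searched disc is farther than one clearance from the path, which is what makes the search exhaustive.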
Again, this search can be used for both the cases described above for the Grid search: the
parallel peg, which descends into the hole when it comes within the clearance range, and the
tilted peg, for which goal achievement is identified where the peg descends the most.
Figure 6.7 shows the path traced in Spiral search with clearance=0.5 mm.
Again, the dip at the end denotes that the peg has reached the centre of the hole.
y = f(x)
Here y ∈ Rᵐ is an m-dimensional vector that denotes the state of the system and x ∈ Rⁿ is an
n-dimensional vector of measured physical quantities. The mapping from x to y can be
represented by a function f. If x is measurable and y is observable, then we can estimate the
state y from a model f* of the function f. If a goal state is given, we can attempt to control the
system to the desired state by some control strategy using the identified mapping f*. But in
practice, the system may be too complex for an analytic approach to succeed. The mapping
may be highly nonlinear and difficult to model mathematically. Due to the powerful nonlinear
computational ability of neural networks, we choose to use a neural-net to construct an
approximate mapping for function f instead of attempting an analytic derivation. We expect a
neural net will generate a mapping from measured features to the system state like
y' = g(x)
(Note: Until now we were able to use the damaged F/T sensor under the assumption that one force component is kept constant.
Further experiments are free from this assumption; therefore, we used the new F/T sensor for the subsequent work.)
Figure 6.8: Basic structure of neurocontroller
By computing the difference between the current state and the goal state, we can derive a
control action from an action generator. In response to the appropriate control actions, the
system state will evolve to converge on the goal state. The combination of the neural-net
mapping and the action generator is called a neurocontroller in this thesis. The structure of the
neurocontroller is illustrated in Figure 6.8. The mapping in which we are interested is that from
the moments or torques to the position of the peg with respect to the hole.
The basic processing unit of a neural network is called a neuron or node. A neural network is
formed through weighted connections among the neurons. A neuron consists of multiple
inputs, an activation function and an output, as shown in Figure 6.9.
The neuron’s inputs are from external inputs or outputs of other neurons. The weighted sum of
these inputs drives the neuron’s activation function. An output is produced by the activation
function, which will have different forms for different kinds of neural networks.
The weights shown in Figure 6.9 are the storage elements of the neural network. Before the
neural net is trained, they are assigned random values. Training the neural net consists of
adjusting these weights according to some example data from the system. The example data is
called a pattern or training set for the neural network. Each pattern is a pair of input and output
vectors. The input is a vector of measurable features, and the output, in our case, is a vector
that describes the location of the hole. After learning, the weights store the information of the
system resulting in an approximate mapping from the input space domain to the output space.
It is thus useful to rewrite our mapping in a different form, recognizing a vector of weights as
another input
y’= g(x,w)
where w is the vector of weights. There exists a variety of methods for seeking an optimal set of
weights to best approximate the desired mapping. In all cases, though, the goal is to adjust the
weights w to model the system as precisely as possible. Here, we introduce some neural-net
methods used in this thesis. The most commonly used neural-net method is the traditional
multi-layer feedforward neural net with a backpropagation learning algorithm [23]. The
architecture of this kind of neural net is shown in Figure 6.10.
It consists of an input layer, one or more hidden layers, and an output layer. The neurons
between any two adjacent layers are fully interconnected in the feedforward direction. The
weight of each connection is adjusted during training. The activation function can be Gaussian,
logistic or a sigmoid function for the hidden layers. We choose a linear function for the output
layer nodes. To simulate the functional mapping precisely through a BP neural network, we
must select the proper number of hidden nodes, the parameters for the activation function and
the connection weights.
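A minimal sketch of such a network in plain NumPy: one tanh hidden layer, a linear output layer, and backpropagation with gradient descent. The toy mapping (two "moments" to two "offsets") and all sizes are made up for illustration; the thesis network is trained on real sensor data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2 inputs -> 2 outputs (made-up relation, illustration only)
X = rng.uniform(-1.0, 1.0, (200, 2))
Y = np.stack([X[:, 0] - X[:, 1], X[:, 0] + X[:, 1]], axis=1)

W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)  # input -> hidden
W2 = rng.normal(0.0, 0.5, (8, 2)); b2 = np.zeros(2)  # hidden -> output

def forward(X):
    H = np.tanh(X @ W1 + b1)       # tanh activation in the hidden layer
    return H, H @ W2 + b2          # linear activation in the output layer

mse_before = float(((forward(X)[1] - Y) ** 2).mean())

lr = 0.05
for _ in range(2000):
    H, P = forward(X)
    err = P - Y                          # gradient of 0.5 * MSE w.r.t. P
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse_after = float(((forward(X)[1] - Y) ** 2).mean())
```

After training, the mean squared error drops well below its initial value, showing the weights have absorbed an approximate mapping, as described above.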
6.1.2.1.2 Mathematical Model for Parallel Peg
The basic peg-in-hole problem is shown in Figure 6.11. In this model, we assume that both the
surface of the subassembly and the peg bottom surface are parallel to each other. So when the
peg moves in contact with the subassembly, there is surface-to-surface contact (except for some
conditions we will discuss later). With the peg moving towards the hole, the contact state will
change. This change will be reflected through the reaction forces and moments.
As shown in Figure 6.12, when the center of the peg is “outside” the line between points A and
B, the reaction moments and forces provide no information about the peg’s location relative to
the hole. Here, “outside” the line means the distance from the hole center to the peg center is
greater than the distance from the hole center to the chord AB. As the peg moves inside this
line, the reaction force due to contact must be off-center with respect to the peg center,
leading to a measurable reaction moment. (The peg will tilt slightly relative to the subassembly
surface under this condition as shown in Figure 6.13). As the peg position changes, the direction
and value of the moment will be different. The neural-net controller can use this moment
information as clues to indicate the peg position and then guide the peg to move towards the
desired destination.
Figure 6.12: Case when the center of peg lies outside the chord of contact
Figure 6.13: Case when the center of peg lies inside the chord of contact
Given an arbitrary position of the peg, we want to know how large the moments are in this
position. If our torque sensor is located at point rsensor, and if the contact force vector fcontact acts
through point rcontact, then the resulting moment at the sensor, msensor, will be:
msensor = (rcontact − rsensor) × fcontact
If we ignore the friction forces fx and fy and let the downward force fz exerted by the robot be
a constant, then the moment is related to the distance between point P (peg’s center) and E
(the middle point of the two contact points) as shown in Figure 6.14. Here we will deduce this
relationship.
In Figure 6.14, point P, (xp, yp), denotes the peg’s center and point H, (xh, yh), denotes the hole
center. A, (xa, ya), and B, (xb, yb), are the two points at the intersections between the circular
boundaries of the peg and the hole.
Figure 6.14: Computing the moments
To compute a reaction moment, we need to know the location of the resultant contact force.
The distribution of the contact pressure over the region of overlap between the peg and
subassembly is unknown. However, if the peg tips even infinitesimally into the hole, then the
contact forces must be concentrated at points A and B. In this case, the resultant contact force
must act through a point lying on the line A-B. If the concentrated reaction forces at A and B are
balanced, then the resultant force will be at point E, mid-way between A and B. Under these
assumptions, we can compute the relationship between measured moments and relative
location of the center.
To obtain the forward mapping, we assume knowledge of the coordinates of P and H then
derive the moment based on computed coordinates of point E. First, we compute the area of
triangle APH. With lPH denoting the distance from point P to point H (and lAP, lAH defined
likewise), the value s is defined as half the perimeter of triangle APH:

s = (lAP + lAH + lPH) / 2

We can then compute the area of triangle APH, A∆APH, by Heron's formula [24] as follows:

A∆APH = sqrt( s (s − lAP) (s − lAH) (s − lPH) )

Because the area of ∆APH is also equal to half of lAE times lPH (the chord AB is perpendicular
to the line PH at E), we have lAE = 2·A∆APH / lPH, and from the right triangle AEH we derive
lHE as follows:

lHE = sqrt( lAH² − lAE² )

Then, we can get the coordinates of point E.
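The computation of E can be sketched as follows, assuming the peg and hole circles actually intersect. The sketch uses the equivalent closed form lHE = (lPH² + r_hole² − r_peg²) / (2·lPH) rather than going through Heron's formula; the function name and interface are illustrative:

```python
import math

def point_E(P, H, r_peg, r_hole):
    """Midpoint E of the chord AB joining the intersections of the peg
    circle (centre P, radius r_peg) and the hole circle (centre H,
    radius r_hole). Assumes the circles intersect."""
    d = math.dist(P, H)  # l_PH
    # Distance from the hole centre H to the chord AB, measured along HP.
    l_HE = (d * d + r_hole * r_hole - r_peg * r_peg) / (2.0 * d)
    ux, uy = (P[0] - H[0]) / d, (P[1] - H[1]) / d  # unit vector H -> P
    return (H[0] + l_HE * ux, H[1] + l_HE * uy)
```

For example, with H = (0, 0), P = (4, 0), r_hole = 5 and r_peg = 3, the circles intersect at (4, ±3), so E = (4, 0).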
Moments in the x and y directions are non-zero only within a limited region. If the peg moves
out of this range, the moments are both zero or at least provide no information regarding the
hole location. One boundary of this region corresponds to the peg falling into the hole, which
occurs when:

lPH ≤ rhole − rpeg

A second boundary corresponds to the center of the peg moving outside the line A-B (i.e.,
points H and P lie on opposite sides of line AB), which occurs when:

lPH > lHE
Under the second condition, the moments are both zero. The plots in Figure 6.15 show the
computed moments as a function of peg coordinates relative to the hole center. The
computations are based on parameters rhole=50mm, rpeg=47mm, and fz = 1N. We can see that
moments in the x and y directions have similar maps except for a 90-degree rotation about the
z axis.
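The parallel-peg model above can be sketched in code using the parameters from the text. The sensor is assumed to sit at the peg centre and the resultant reaction fz to act at point E; the sign conventions are assumptions of this sketch, so it reproduces the shape of the maps in Figure 6.15 (including the 90-degree relation between Mx and My) rather than exact plotted values:

```python
import math

# Parameters from the text: hole radius 50 mm, peg radius 47 mm, fz = 1 N.
R_HOLE, R_PEG, FZ = 50.0, 47.0, 1.0

def moments(xp, yp):
    """Mx, My for a peg centred at (xp, yp) relative to the hole centre (0, 0).

    Returns (0, 0) outside the informative region: either the peg has
    fallen into the hole, or the peg centre lies outside chord AB.
    """
    d = math.hypot(xp, yp)                        # l_PH
    if d <= R_HOLE - R_PEG:                       # first boundary: peg falls in
        return 0.0, 0.0
    l_HE = (d * d + R_HOLE**2 - R_PEG**2) / (2.0 * d)
    if d >= l_HE:                                 # second boundary: outside AB
        return 0.0, 0.0
    # Resultant reaction (0, 0, fz) acts at E; sensor assumed at peg centre P.
    k = l_HE / d - 1.0                            # E - P = k * (xp, yp, 0)
    return k * yp * FZ, -k * xp * FZ              # m = (E - P) x (0, 0, fz)
```

With these radii the informative region is the annulus 3 mm < lPH < sqrt(50² − 47²) ≈ 17.06 mm.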
Figure 6.15: Moments resulting from the mathematical model for the parallel peg case.
a) Moment in X. b) Moment in Y.
6.1.2.1.3 Simulation Results
The neural network was trained using the mathematical model, as shown in Figure 6.16.
Figure 6.17: Simulation results for the parallel peg case. a) In 2-D. b) In 3-D.
We can see from Figure 6.17 that as soon as the moments are sensed, the peg directly jumps to
the center of the hole.
6.1.2.1.4 Tilted Peg Case
The previous model assumed an ideal condition. In practice, the surface of the subassembly
cannot be perfectly parallel to the bottom surface of the peg; there is always some tilt between
the peg and the subassembly surface. Thus, in most positions there is only one contact point
between the peg and the subassembly (as shown in Figure 6.18).
Since this solution is not appropriate for the tilted case (which is the case most often present
in practice), we dropped the idea of implementing it and moved to another approach that is
robust and suited to the tilted case.
Figure 6.20: The precession strategy: (a) The peg is initially tilted by θtilt.
(b) The peg touches the hole surface, and peg height h1 is recorded. (c) As
the tilt axis is rotated, the peg precesses. (d) The peg dips into the hole,
height h2 < h1.
Consider the initial condition shown in Figure 6.20(b). The point of contact between the peg
and the hole is on the hole surface. As the peg precesses, the contact point moves along the
perimeter of the peg, and on a corresponding circular path on the hole surface until it reaches
the hole edge. During this interval, the nominal height of the peg is constant. As the peg dips
into the hole, the peg height decreases until a critical point where the peg is in contact with the
hole edge in two places. With further precession, the peg rises out of the hole and the peg
height increases.
This change in peg height reveals not only the direction of the hole center relative to the peg
position, but also the distance. The direction of the hole center is given by the vector
perpendicular to the tilt axis at the moment of minimum peg height. The distance of the hole
center from the peg center can either be calculated analytically from the decrease in peg
height, or looked up from a table of sampled peg-height values corresponding to peg-hole
distance. A visualization of such a table is shown in Figure 6.21: the minimum peg-height
values recorded during precession for different relative peg-hole positions are plotted. In the
experiment, the measured minimum peg height is matched against this table to obtain the
possible peg positions relative to the hole.
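A minimal sketch of this table lookup follows. The calibration table below is an illustrative placeholder (not measured thesis data); the real table would come from the sampling behind Figure 6.21. Since the height-versus-distance curve is not monotone, one measured value typically matches two candidate distances, as described in the text:

```python
# Hypothetical calibration table: (peg-hole distance in mm, minimum peg
# height in mm recorded during precession). Placeholder values only.
TABLE = [(0.0, 43.0), (2.0, 41.0), (4.0, 40.0), (6.0, 41.5), (8.0, 44.0)]

def candidate_distances(h_min, table=TABLE, tol=0.3):
    """All table distances whose recorded minimum height matches the
    measured value within tol; typically two candidates."""
    return [d for d, h in table if abs(h - h_min) <= tol]
```

For a measured minimum height of 41.2 mm, this table yields the two candidate distances 2 mm and 6 mm, which would then be disambiguated as described below.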
With the relative peg-hole position localized to two possible values, we can proceed in a variety
of different ways. One option is to select one of the two values and utilize it to compute the
hole-configuration w.r.t. the peg, and attempt assembly. If assembly fails, then we know for
sure that the other value is the actual relative peg-hole position. Another option is to move the
peg to a different position and repeat the precession strategy. The results from the two
experiments analyzed together will be sufficient to localize the relative peg-hole position.
For the precession strategy to be successful, the precessing peg has to pass over the hole. For
this to happen, we can initially use any of the blind search strategies described in Section 6.1.1.
As soon as a specified value of dip occurs, the hole is sensed and the precession starts.
Since the load vector always remains the same w.r.t. the Base Coordinate System, to find the
load in new TCS, we need to find out the new vector components of the load as viewed from
the new TCS.
The load can be considered as a point in the coordinate space with (x,y,z) representing the
three components of net force or the net torque. Now, if we wish to find the new coordinates
of the same point in new TCS, it is done as follows:
T2xyz = Rz(a) Ry(b) Rx(c) T1xyz Rx(c) Ry(b) Rz(a)

where Rx(c), Ry(b) and Rz(a) are the standard rotation matrices about the X, Y and Z axes,
through angles c, b and a respectively.
If we subtract these new components from the sensor readings, what we get are the
Gravity Compensated Force and Torque readings purely due to the Contact Forces and
Torques.
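The compensation step can be sketched as follows. This sketch treats the load as a plain vector and applies the inverse of a Z-Y-X rotation to re-express it in the current TCS; the angle convention and function interface are assumptions, not the thesis implementation:

```python
import numpy as np

def rot_x(c):
    return np.array([[1, 0, 0],
                     [0, np.cos(c), -np.sin(c)],
                     [0, np.sin(c),  np.cos(c)]])

def rot_y(b):
    return np.array([[ np.cos(b), 0, np.sin(b)],
                     [0, 1, 0],
                     [-np.sin(b), 0, np.cos(b)]])

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

def gravity_compensate(sensor_reading, load_base, a, b, c):
    """Subtract the constant base-frame load, re-expressed in the
    current TCS, from a force (or torque) reading. a, b, c are the
    Z-Y-X rotation angles of the TCS w.r.t. the base frame (radians)."""
    R = rot_z(a) @ rot_y(b) @ rot_x(c)   # base orientation of the TCS
    load_in_tcs = R.T @ load_base        # inverse rotation: base -> TCS
    return sensor_reading - load_in_tcs
```

With zero rotation angles the load is simply subtracted component-wise; with the TCS pitched by 90 degrees the base-frame Z load appears along the TCS X axis, as expected.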
In this case, the peg is outside the hole and makes a three-point contact with it. Not only
orientational but also positional correction is required to put the peg inside the hole.
From the direction of the hole center, we can get the direction (perpendicular to the direction
of the hole center) in which the peg needs to be rotated to align it with the hole. Using the
Hybrid Control Scheme (discussed in Section 4), Position Control is done about the rotational
axis defined by the Direction of Rotation in Figure 6.24, and Force Control is done in X and Y
of the TCP (or the peg; see Figure 6.23) to maintain the three-point contact. Gradually, the
three-point contact converts into a two-point contact, as shown in Figure 6.3, which calls for
the next step, i.e., Small Directional Error (SDE) Removal.
Since the direction of the moment does not specify the direction in which the peg needs to be
moved, we rotate the peg in both directions about the line of the net moment and record the
moments obtained in both rotations. We then compare the two: the direction in which the
moments decrease is the direction in which the peg needs to be rotated to align with the hole.
Again using the Hybrid Control Scheme (discussed in Section 4), Position Control is done about
the rotational axis defined by the Direction of the Net Moment, and Force Control is done in
the downward direction (see Figure 6.27).
Figure 6.28: a) Rotate until first wall hits. b) Rotate until second wall hits. c) Insert at the mid-point.
The BackJump is taken to avoid jamming during rotation, and the wall hits are sensed through
the force values.
The value of BackJump can be calculated as shown in Figure 6.29.
Figure 6.29: Calculating BackJump
To avoid the peg’s corner hitting the wall, the backjump can be taken as the arc length l.
So, BackJump = l = θ·r, where r is the radius of the peg and θ is the tilt angle (in radians).
The BackJump should not be too large so as to avoid the peg getting out of the hole. The
maximum limit of the BackJump can be calculated as (see Figure 6.30):
Therefore,

θ · r · sin θ ≤ 2 (R − r · cos θ)

where R is the radius of the hole.
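The BackJump computation and its upper-limit check can be sketched directly (function names are illustrative; θ is in radians):

```python
import math

def back_jump(theta, r):
    """BackJump arc length l = theta * r for peg radius r (theta in radians)."""
    return theta * r

def back_jump_within_limit(theta, r, R):
    """Check the upper bound theta*r*sin(theta) <= 2*(R - r*cos(theta)),
    which keeps the peg from coming out of the hole (R = hole radius)."""
    return back_jump(theta, r) * math.sin(theta) <= 2.0 * (R - r * math.cos(theta))
```

For instance, with the experimental dimensions below (peg radius 28.25 mm, hole radius 28.5 mm) and a tilt of 0.05 rad, the BackJump is about 1.41 mm and well within the limit.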
Figures 6.31 and 6.32 show the experimental results obtained using SDE Removal on a hole of
diameter 57 mm and a peg of diameter 56.5 mm.
We can see from Figure 6.31 that there is more than one back-jump and insertion step in SDE
Removal. This is due to inaccuracy in the direction of the sensed moments. As soon as we get
the correct direction of the moments, the insertion completes in a single step. As we see from
Figure 6.32, insertion is allowed until the upward force Fx crosses a threshold value (see the
red-marked points in Figure 6.32). As soon as the upward force exceeds the threshold, which
indicates jamming, a back jump is taken and the proper orientation for smooth insertion is
searched for. SDE Removal stops when the peg reaches a pre-specified depth inside the hole.
7. Optimization
We have used Design of Experiments (DOE) to optimize our assembly task. Given the statistical nature of
the assembly task and DOE's increasing popularity in manufacturing quality control, DOE has previously
been applied to robot assembly parameter optimization [26].
DOE involves designing our experiments in such a manner that we can analyze the direct effects as well as
the interaction effects of factors or parameters on the optimization goal.
DOE offers various kinds of designs, such as Full Factorial designs, Custom Designs, etc.
A full factorial design includes all possible combinations of the factors' values. Thus, if there are n
factors, each with two levels (High and Low), the design contains 2^n trials in total.
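Enumerating a two-level full factorial design is straightforward. In this sketch the ContactForce levels are the ones given later in the text, while the Dip and AngularSpeed levels are illustrative assumptions:

```python
from itertools import product

# Two-level factors: (low, high). ContactForce levels are from the text;
# the Dip and AngularSpeed levels are illustrative placeholders.
factors = {
    "Dip": (1.0, 3.0),            # mm (levels assumed)
    "ContactForce": (2.0, 5.0),   # N (levels from the text)
    "AngularSpeed": (0.2, 0.5),   # degrees per command (levels assumed)
}

names = list(factors)
# Every combination of low/high values: 2**n runs for n factors.
runs = [dict(zip(names, combo)) for combo in product(*factors.values())]
```

With n = 3 factors this produces 2^3 = 8 runs, one per combination of levels.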
We have used DOE to optimize the time of search as well as of insertion. We used a statistical analysis tool
that creates the design as well as analyzes the data to obtain optimal values for the parameters affecting
the time of search and insertion. Since we implemented the search using the Precession Strategy, and the
insertion using LDE Removal and SDE Removal, these three algorithms are different and have different
factors affecting their time of completion.
Thus, we optimize these three tasks separately.
a) Dip: the amount by which the peg descends into the hole to sense it, after which the
precession starts.
b) Contact Force for Search: the force maintained by the peg to keep contact with the hole.
c) Angular Speed for Precession: the speed with which the peg precesses.
Now we treat each of the above parameters as a two-level parameter. For that, we define lower and
upper limits for each (lower limits are denoted by subscript L and upper limits by subscript H, L and H
standing for Low and High respectively).
b) ContactForceL = 2 N, ContactForceH = 5 N
Now we design our experiments to analyze the direct effects as well as the interaction effects of the
factors affecting the Time Of Search.
TimeOfSearch = a1·Dip + a2·ContactForce + a3·AngularSpeed + a4·Dip·ContactForce +
a5·ContactForce·AngularSpeed + a6·Dip·AngularSpeed + a7·Dip·ContactForce·AngularSpeed +
a8·Dip² + a9·ContactForce² + a10·AngularSpeed²

Here, a1–a10 are constants; a8, a9 and a10 represent the quadratic effects.
To analyze such interactions, we made a design for our experiment consisting of 16 runs with
three replicates each. The three replicates were taken to record the time data consistently; we
finally take the average of the times recorded over the three replicates.
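The model fit itself can be sketched with ordinary least squares. The factor settings and averaged times below are illustrative placeholders, not the thesis data, and the statistical analysis tool used in the thesis is replaced here by numpy:

```python
import numpy as np

def design_matrix(X):
    """Columns: intercept, main effects, two-factor interactions, the
    three-factor interaction, and quadratic terms for the factors
    (Dip, ContactForce, AngularSpeed) -- the model form in the text."""
    d, f, w = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([
        np.ones(len(X)), d, f, w,
        d * f, f * w, d * w, d * f * w,
        d * d, f * f, w * w,
    ])

# Placeholder runs: rows are (Dip, ContactForce, AngularSpeed); y is the
# replicate-averaged search time in seconds (illustrative numbers).
X = np.array([[2.0, 2.0, 0.2], [2.0, 5.0, 0.2], [2.0, 2.0, 0.5],
              [2.0, 5.0, 0.5], [4.0, 2.0, 0.2], [4.0, 5.0, 0.2],
              [4.0, 2.0, 0.5], [4.0, 5.0, 0.5], [3.0, 3.5, 0.35],
              [2.5, 3.0, 0.3], [3.5, 4.0, 0.4]])
y = np.array([80.0, 62.0, 71.0, 58.0, 85.0, 66.0, 74.0, 60.0,
              55.0, 57.0, 56.0])

# Least-squares estimate of the model constants a0..a10.
coeffs, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
predict = lambda x: design_matrix(np.atleast_2d(x)) @ coeffs
```

A prediction profiler then amounts to evaluating `predict` over a grid of factor settings and locating the minimum.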
Figure 7.1: a) The plot of actual vs. predicted search time. b) The prediction profiler.
The prediction profiler (see Figure 7.1(b)) predicted that the search time would be minimum at
Dip = 2 mm, ContactForce (Fx) = 3.9 N and AngularSpeed = 0.2° per command, with a predicted
minimum search time of 50,210 milliseconds. When we ran the assembly with these values, the
search time came out to be 80,000 milliseconds.
a) Contact Force: the force maintained by the peg against the hole.
b) Angular Speed: the angle (in degrees) by which the peg rotates per command.
ContactForceL=2.0 N, ContactForceH=5.0 N
Again we want to analyze the individual effects as well as the interaction effects of these
parameters. For that we designed our experiments as shown in Table 7.2.
Figure 7.2: a) The plot of actual vs. predicted LDE Removal time. b) The prediction profiler.
The prediction profiler (see Figure 7.2(b)) predicted that the LDE Removal time would be
minimum at ContactForce = 3.75 N and AngularSpeed = 0.2° per command, with a predicted
minimum LDE Removal time of 50,660 milliseconds. When we ran the assembly with these
values, the LDE Removal time came out to be 58,969 milliseconds.
7.3 Optimizing The SDE Removal
When we analyze SDE Removal, we see that as soon as the peg obtains the correct direction of
the moment, it searches for the two wall hits and the insertion completes in a single step
(although there may be more than one insertion step, as noted in Section 6.2.3, due to
inaccuracies in the sensed moments). We therefore could not find meaningful parameters
affecting the time of SDE Removal.
Nevertheless, we took two parameters, ContactForce and BackJump, and examined their
effects on SDE Removal. We obtained the results shown in Table 7.3.
ContactForce (N)   BackJump (mm)   SDE Removal Time (ms)
8                  6               39427.00
8                  5               39531.33
8                  7               32385.33
10                 6               39302.33
10                 6               38442.67
10                 5               32588.33
We can see from Table 7.3 that these two parameters had almost no effect on the time for SDE
Removal, so for final assembly we took one of the combinations randomly from Table 7.3.
8. Results & Conclusions
3. Sir's Russian Book on Force Control along with the page number.
6. Shashank Shekhar and Oussama Khatib, "Force strategies in Real Time Fine Motion
Assemblies", ASME Winter Annual Meeting, 1987.
7. Wyatt S. Newman, Yonghong Zhao and Yoh-Han Pao, "Interpretation of Force and
Moment Signals for Compliant Peg-in-Hole Assembly", Proceedings of the 2001 IEEE
International Conference on Robotics and Automation.
10. http://www.kuka-robotics.com/usa/en/products/industrial_robots/low/kr6_2/
11. http://en.wikipedia.org/wiki/Stiffness
12. http://www.britannica.com/EBchecked/topic/333644/lead-through-programming
13. Sabine Demey, Herman Bruyninckx, and Joris De Schutter, "Model-Based Planar Contour
Following in the Presence of Pose and Model Errors", International Journal of Robotics
Research, 1997, pp. 840–858.
14. A. Masoud and S. Masoud, “Evolutionary action maps for navigating a robot in an
unknown, multidimensional, stationary, environment, part II: Implementation results”,
in IEEE Int. Conf. Robotics and Automation, NM, Apr. 21–27, 1997, pp. 2090–2096.
15. H. Kazerooni and M.G. Her, "Robotic Deburring of Two-Dimensional Parts with Unknown
Geometry", IEEE International Symposium on Intelligent Control, August 1998.
16. Ralph E. Goddard, Yuan F. Zheng, and Hooshang Hemami, “ Dynamic Hybrid
Velocity/Force Control of Robot Compliant Motion over Globally Unknown Objects”,
IEEE Transactions on Robotics and Automation, VOL. 8, NO. 1, February 1992
17. http://hypertextbook.com/facts/2006/reactiontime.shtml
18. Statistics, Probability and Random Processes by Jain and Rawat, CBC Publications,
Jaipur, India.
19. Statistics and Probability Theory by Dr. Y.N. Gaur and Nupur Srivastava, ISBN 978-81-
88870-28-8, Genius Publications, Jaipur, India.
20. Discrete Mathematical Structures by Jain and Rawat, CBC Publications, Jaipur, India.
21. Discrete Mathematical Structures by Dr. V.B.L. Chaurasia and Dr. Amber Srivastava,
ISBN 81-88870-12-9, Genius Publications, Jaipur, India.
22. Theory of Computer Science by K.L.P. Misra and N. Chandrasekaran, ISBN 81-203-1271-
6, Prentice Hall of India, New Delhi, India.
23. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J., "Learning Representations by
Back-Propagating Errors", Nature (London), vol. 323, pp. 533–536, 1986.
24. Zwillinger, D. CRC Standard Mathematical Tables & Formulae 30th edition, pp.462,
1996.
25. http://en.wikipedia.org/wiki/Precession
26. Dave Gravel, George Zhang, Arnold Bell, and Biao Zhang, "Objective Metric Study for
DOE-Based Parameter Optimization in Robotic Torque Converter Assembly", Advanced
Manufacturing Technology Development, Ford Motor Company, Livonia, MI.