
Developing and Optimizing Strategies for

Robotic Peg-In-Hole Insertion with a

Force/Torque Sensor
By

Kamal Sharma

Enrolment No. – *********

A dissertation submitted to the Board of Studies in Engineering Sciences

in partial fulfillment of the requirements

for the Degree of

MASTER OF TECHNOLOGY

of

HOMI BHABHA NATIONAL INSTITUTE

August, 2011
Homi Bhabha National Institute
Recommendations of the Thesis Examining Committee

As members of the thesis examining committee, we recommend that the dissertation prepared by
Kamal Sharma entitled “Developing and Optimizing Strategies for Robotic Peg-In-Hole Insertion with
a Force/Torque Sensor”, may be accepted as fulfilling the dissertation requirement for the Degree
of Master of Technology.

Name Signature and Date

Member-1 Dr. (Smt.) Gopika Vinod

Member-2 Mr. Amitava Roy

Member-3 Dr. A.K. Bhattacharjee

Technical advisor Mrs. Varsha T. Shirwalkar

Co-guide -

Guide/Convenor Dr. Prabir Kumar Pal

Chairman Dr. Prabir Kumar Pal


DECLARATION
I hereby declare that the investigation presented in this thesis has been carried out by me. The
work is original and, to the best of my knowledge and belief, it contains no material previously
published or written by another person, nor any material which has, to a substantial extent, been
accepted for the award of any other degree or diploma of a university or other institute of higher
learning, except where due acknowledgement has been made in the text.

Signature:

Candidate Name: Kamal Sharma

Date:
ACKNOWLEDGEMENTS
I express my sincere thanks and gratitude to Dr. Prabir K. Pal, who has been a marvelous mentor and
guide. He provided a very caring and innovative environment, and he is never satisfied until a task
is made flawless. That encouraged me to develop very robust and accurate applications.

Mrs. Varsha T. Shirwalkar proved to be a great technical advisor to me. She taught me to work
through and face all sorts of challenges that came up during my project. In particular, she gave me
a broad knowledge of force control in a very intuitive and easy-to-understand manner.

I am thankful to Dr. Venkatesh and Dr. Dwarakanath, who gave me in-depth knowledge of the working
of the Force/Torque sensor.

I am thankful to Mr. A. P. Das, who helped me with the mechanical engineering and control-related
issues.

I thank Mr. Abhishek Jaju, an expert on the KUKA robot, who gave me a thorough understanding of
robotics and, especially, of the working of the KUKA robot.

I also pay my gratitude to Mr. Shobhraj Singh Chauhan and Mrs. Namita Singh, who provided a very
spiritual, motivating and healthy environment that led to a fast and smooth completion of the
project.

Special thanks to Mr. Shishir Kr. Singh, who constantly critiqued my work and provided the
continuous assessment that helped me make things perfect.

Finally, I take this golden opportunity to express my gratitude to Mr. Manjit Singh, Head, DRHR,
for teaching me to direct research towards application-oriented, real-world problems, and for
providing all the facilities and necessary infrastructure that were a prerequisite for the
initiation and completion of the project.

Kamal Sharma
DEDICATED to
Maa, Papa, Mintu
&
Bajrang Bali Ji
Contents
List Of Figures..............................................................................................................................................9
List Of Tables.............................................................................................................................................12
Abbreviations............................................................................................................................................13
Introduction...............................................................................................................................................13
1.1 Problem Statement and Objective...................................................................................................14
1.1.1 Problem Statement...............................................................................................................15
1.1.2 Objective...................................................................................................................................15
2. Industrial Robot.....................................................................................................................................16
2.1 Industrial robots..............................................................................................................................16
2.1.1 History......................................................................................................................................16
2.2 KUKA KR-6 Robot.............................................................................................................................17
2.2.1 Robot controller........................................................................................................................19
2.2.2 3-COM Card..............................................................................................................................21
2.3 Robot Control..............................................................................................................................21
2.3.1 Conventional control................................................................................................................21
2.3.2 Programming of KUKA robots (KRL)..........................................................................................22
2.3.3 Real-time control (KUKA.Ethernet RSI XML).............................................................................24
3. The Force/Torque Sensor......................................................................................................................30
3.1 A General 6-Axis Force/Torque Sensor............................................................................................30
3.1.1 Stress and Strain.......................................................................................................................30
3.1.2 Working Of A Strain-Gauge.......................................................................................................32
3.1.3 Principle of Strain Measurement..........................................................................................32
3.2 ATI 660-60 F/T Sensor......................................................................................................................35
4. Hybrid Position/Force Control...............................................................................................................37
4.1 The Theory.......................................................................................................................................37
5. Preliminary Experiments using Force Sensing and Control....................................................................39
5.1 Recalibrating the Force/Torque Sensor...........................................................................................39
5.2 Kuka Surface Scanner (KSS).............................................................................................................40
5.3 Kuka Stiffness Finder (KSF)...............................................................................................................44
5.4 Lead Through Programming (LTP) Through Force Control...............................................................46
5.5 A Continuous Tracing Algorithm (TUSS)...........................................................................................47
5.5.1 Introduction..............................................................................................................................47
5.5.2 Human Approach......................................................................................................................48
5.5.3 TUSS..........................................................................................................................................49
5.5.4 Analysing Current Algorithm under various cases....................................................................53
5.5.5 Improved TUSS to avoid the problem of Deadlock Cycles........................................................58
5.5.6 Experimental Results................................................................................................................59
6. Peg In The Hole......................................................................................................................................66
6.1 Hole Search......................................................................................................................................68
6.1.1 Blind Search Strategies.............................................................................................................68
6.1.1.1 Grid Search........................................................................................................................68
6.1.1.2 Spiral Search......................................................................................................................70
6.1.2 Intelligent Search Strategies.....................................................................................................72
6.1.2.1 Neural Peg In The Hole......................................................................................................73
6.1.2.1.1 Neural Networks.........................................................................................................73
6.1.2.1.2 Mathematical Model for Parallel Peg..........................................................................76
6.1.2.1.3 Simulation Results.......................................................................................................81
6.1.2.1.4 Tilted Peg Case............................................................................................................83
6.1.2.2 Precession Strategy............................................................................................................84
6.1.2.2.1 Need for Precession....................................................................................................84
6.1.2.2.2 The Precession Strategy..............................................................................................84
6.2 Peg Insertion....................................................................................................................................87
6.2.1 Gravity Compensation..............................................................................................................87
6.2.2 Large Directional Error (LDE) Removal......................................................................................89
6.2.2.1 LDE Removal Using Information From Hole Search...........................................................89
6.2.2.2 LDE Removal Using Moment Information..........................................................................90
6.2.2.3 Stopping Criterion For LDE Removal..................................................................................91
6.2.3 Small Directional Error (SDE) Removal......................................................................................91
7. Optimization.........................................................................................................................97
7.1 Optimizing The Search.....................................................................................................................97
7.2 Optimizing The LDE Removal.........................................................................................................100
7.3 Optimizing The SDE Removal.........................................................................................................102
8. Results & Conclusions..........................................................................................................................103
8.1 Results For Hole Search.................................................................................................................103
8.2 Results For Insertion......................................................................................................................103
List Of Figures
Figure 1.1: KUKA teach-pendant...............................................................................................13
Figure 2.1: KUKA KR-6 Industrial Robot.....................................................................................................15
Figure 2.2: KUKA KR-6 Dimensions............................................................................................................16
Figure 2.3: KUKA KR-6 specifications.........................................................................................................17
Figure 2.4: KUKA KR C2 controller.............................................................................................................18
Figure 2.5: Internals of the KUKA KR C2 controller.....................................................................18
Figure 2.6: 3-Com LAN card.......................................................................................................................19
Figure 2.7: Example of short robot program syntax..................................................................................20
Figure 2.8: An example of KRL‐code..........................................................................................................21
Figure 2.9: Functional principle of data exchange.....................................................................................23
Figure 2.10: Data exchange sequences......................................................................................................24
Figure 2.11: The different coordinate systems..........................................................................................25
Figure 2.12: The structure of configuration file.........................................................................................25
Figure 2.13 CONFIG block of configuration file..........................................................................................26
Figure 2.14 SEND block of configuration file..............................................................................................26
Figure 2.15 RECEIVE block of configuration file.........................................................................27
Figure 3.1 : Effects of a tensile and compressive force..............................................................................28
Figure 3.2 : Mechanical Properties of some materials...............................................................................30
Figure 3.3 : Converting measured Strain into Voltage value......................................................................31
Figure 3.4 : Converting measured Strain into Voltage value......................................................................33
Figure 3.5 ATI 660-60 Delta Sensor............................................................................................................34
Figure 4.1 : Example of a Force-Controlled task-turning a screwdriver.....................................................36
Figure 4.2 : Schematic Of A Hybrid Controller..........................................................................................36
Figure 5.1: GUI for KSS...............................................................................................................................38
Figure 5.2: Kuka Scanning a surface using KSS...........................................................................................39
Figure 5.3: Schematic for working of KSS..................................................................................................40
Figure 5.4: Profile generated by KSS for the surface shown in Figure 5.1.................................................41
Figure 5.5: Defining Stiffness Of A Material...............................................................................................41
Figure 5.6: KSF working over a mouse pad................................................................................................42
Figure 5.7: GUI for KSF...............................................................................................................................43
Figure 5.8: Payload Assist..........................................................................................................................44
Figure 5.9: A human finger tracing a surface from right to left while maintaining the contact................45
Figure 5.10: Black Box depicting the human approach..............................................................................46
Figure 5.11: Robot with a probe, moving over a surface while maintaining the contact..........................47
Figure 5.12: Black-Box depicting the TUSS Algorithm...............................................................................48
Figure 5.13: Probe moving on a fully horizontal flat surface.....................................................................50
Figure 5.14: State Diagram for a fully horizontal flat surface....................................................................51
Figure 5.15: Probe moving on a fully vertical flat surface..........................................................................51
Figure 5.16: State Diagram for a fully vertical flat surface.........................................................................52
Figure 5.17: Probe moving on a slant surface of Type 1............................................................................52
Figure 5.18: State Diagram for a slant surface of Type 1...........................................................................53
Figure 5.19: Probe moving on a slant surface of Type 2............................................................................53
Figure 5.20: State Diagram for a slant surface of Type 2...........................................................................54
Figure 5.21: Binary equations for Improved TUSS.....................................................................................56
Figure 5.22: Block Diagram for the Improved TUSS Algorithm without Deadlock Cycles..........................56
Figure 5.23(a): Experimental Setup to test Improved PVK Algorithm........................................................57
Figure 5.23(b): GUI To Collect Data From The Improved PVK Algorithm..................................................57
Figure 5.24: Comparing the actual contour with the path followed by Improved PVK Algorithm.............58
Figure 5.25: Comparing the actual contour with the path followed when the point of contact remains the same..........................................................................................58
Figure 5.26: Force Profile in X – Direction.................................................................................................59
Figure 5.27: Force Profile in Y – Direction..................................................................................................59
Figure 5.28: Slope Of Tangents (In Degrees) derived from the force values..............................................60
Figure 5.29: Smoothened Slope Of Tangents (In Degrees) derived from the force values.........................61
Figure 5.30: Comparing Geometrically found Slope with that found from the Forces..............................61
Figure 5.31: Reconstructing the contour using force values ( i.e. the slope obtained from force values). 62
Figure 6.1: Hole Search..............................................................................................................................63
Figure 6.2: Depiction of Large Directional Error........................................................................................64
Figure 6.3: Depiction of Small Directional Error........................................................................................65
Figure 6.4: Search Points for Grid Search..................................................................................................66
Figure 6.5: Grid Search with clearance=0.5 mm........................................................................................67
Figure 6.6: Spiral Search criterion for constant speed search....................................................................68
Figure 6.7: Spiral Search with clearance=0.5 mm......................................................................................69
Figure 6.8: Basic structure of neurocontroller...........................................................................................70
Figure 6.9: Structure of a neuron..............................................................................................................71
Figure 6.10: 3-layer Backpropagation Neural Network.............................................................................72
Figure 6.11: The Parallel Peg case.............................................................................................................73
Figure 6.12: Case when the center of peg lies outside the chord of contact.............................................73
Figure 6.13: Case when the center of peg lies inside the chord of contact...............................................74
Figure 6.14: Computing the moments.......................................................................................................74
Figure 6.15: Moments resulting from the Mathematical Model For The Parallel Peg Case.......................77
Figure 6.16: Neural Network Training.......................................................................................................78
Figure 6.17: Simulation results for Parallel Peg Case. a) In 2-D. b) in 3-D..................................................79
Figure 6.18: Tilted Peg Case.......................................................................................................................79
Figure 6.19: Contact States for Tilted Peg Case.........................................................................................80
Figure 6.20: The precession strategy: (a) The peg is initially tilted by Theta tilt...........................................82
Figure 6.21: Peg height w.r.t. distance from hole’s center........................................................................83
Figure 6.22: Need For Gravity Compensation............................................................................................84
Figure 6.23: Large Directional Error Case..................................................................................................86
Figure 6.24: Large Directional Error Removal Using Information From Search.........................................87
Figure 6.25: Large Directional Error Removal Using Moment Information................................................88
Figure 6.26: Direction Of Moment Sensed and The Direction in Which The Peg needs to be moved.......89
Figure 6.27: Removing Small Directional Error..........................................................................................89
Figure 6.28: a) Rotate until first wall hits. b) Rotate until second wall hits. c) Insert at the mid-point......90
Figure 6.29: Calculating BackJump............................................................................................................91
Figure 6.30: Calculating The Upper Limit On BackJump............................................................................91
Figure 6.31: Insertion with Time using SDE Removal.................................................................................92
Figure 6.32: Insertion and Upward Force Due to Jamming........................................................................93
Figure 7.1: a) The plot of Actual vs. Predicted Search Time. b) The Prediction Profiler............................96
Figure 7.2: a) The plot of Actual vs. Predicted Search Time. b) The Prediction Profiler............................98
List Of Tables
Table 3.1: Specifications For ATI 660-60 Delta Sensor (Last Row)............................................................34
Table 5.1: Calibration Matrix For Working Sensor.....................................................................................37
Table 5.2: Calibration Matrix For Damaged Sensor...................................................................................38
Table 5.3: TUSS Algorithm depicted in the Truth Table............................................................................49
Table 7.1: DOE for Precession Search........................................................................................................95
Table 7.2: DOE for LDE Removal................................................................................................................97
Table 7.3: Experimental Runs for SDE Removal.........................................................................................99
Abbreviations

1. RCC – Remote Center Compliance
2. DOF – Degrees Of Freedom
3. KRL – KUKA Robot Language
4. RSI – Robot Sensor Interface
5. XML – Extensible Markup Language
6. KSS – Kuka Surface Scanner
7. KSF – Kuka Stiffness Finder
8. LTP – Lead Through Programming
9. TUSS – Trace Using State Space
10. TCP – Tool Center Point
11. F/T Sensor – Force/Torque Sensor
12. BCS – Base Coordinate System
13. TCS – Tool Coordinate System
14. LDE – Large Directional Error
15. SDE – Small Directional Error
16. DOE – Design Of Experiments

Introduction
Robotic operations are usually of the pick-and-place type, often in fields that are remote or
hazardous for humans. For such tasks, robots can be programmed by hard-coding the positions or
points the robot has to reach to accomplish a particular task.
But there are many situations where the environment is not well structured [1], the interactions
are not predictable, or the tolerances are very small and the required accuracy is much higher.
A good example of such a task is mechanical assembly, where tolerances are very tight. Suppose
two parts need to be mated and the clearance between them is less than a millimetre. The exact
location of the female part can be specified through positional measurement or through computer
vision. But if the accuracy of position measurement is not adequate for mating such low-clearance
parts, the robot may push the male part into the female one inaccurately, which can generate
large forces that harm the parts or the robot itself. Sometimes camera vision is also not
possible because the view is obstructed [2]. So we are left with only an approximate idea of the
position of one part, and the assembly has to be done with that much knowledge.
This is where force control comes into play. The parts have to be mated using feedback from the
forces and torques generated by the interaction of the mating parts. There are two ways to employ
force information (force here means both forces and torques) for an assembly task. One is passive
sensing: there are passive devices, such as the Remote Center Compliance (RCC) device, for
mechanical assembly, but their limitation is that they are customized for a specific assembly and
are not general-purpose devices. They are also not flexible enough to be adapted to an arbitrary
environment [3].
The other method is active sensing: developing an intelligent controller that actively takes
force feedback and decides its next command to the robot for the assembly to take place. Hybrid
Position/Force Control theory [4] provides the method for building such a controller.
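The idea behind the hybrid scheme can be sketched in a few lines: a diagonal selection matrix S picks, per Cartesian axis, whether the position error or the force error drives the next correction. The sketch below is a minimal, admittance-style illustration of one control cycle, not the controller developed later in this thesis; the gains kp and kf, the axis partition, and the function name hybrid_step are assumptions made purely for illustration.

```python
import numpy as np

def hybrid_step(x_ref, x_meas, f_ref, f_meas, S, kp=0.8, kf=0.002):
    """One cycle of a simplified hybrid position/force controller.

    S lists 0/1 per Cartesian axis: 1 = force-controlled,
    0 = position-controlled. Returns a Cartesian correction (metres).
    """
    S = np.diag(S)                    # selection matrix
    I = np.eye(3)
    dx_pos = kp * (x_ref - x_meas)    # position-error term
    dx_frc = kf * (f_ref - f_meas)    # force-error term (admittance-style)
    return (I - S) @ dx_pos + S @ dx_frc

# Force-control Z (build up a 10 N contact force), position-control X and Y:
corr = hybrid_step(x_ref=np.array([0.30, 0.10, 0.0]),
                   x_meas=np.array([0.29, 0.10, 0.0]),
                   f_ref=np.array([0.0, 0.0, -10.0]),
                   f_meas=np.array([0.0, 0.0, -6.0]),
                   S=[0, 0, 1])
# corr -> [0.008, 0.0, -0.008]: close the X position error, press down in Z
```

The essential point is that the selection matrix makes the two feedback loops orthogonal: an axis in contact with the environment is never position-servoed against that contact.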
Peg-in-hole insertion is the essential first step in validating algorithms for robotic assembly.
In our work, we have to assemble a cylindrical peg into a cylindrical hole with a clearance of
0.5 mm using active sensing of force signals. For that purpose, we have developed algorithms for
accurately positioning the peg inside the hole as well as properly orienting it for insertion
without any jamming.
After successfully testing our algorithms, we have optimized them for a particular assembly
operation.

1.1 Problem Statement and Objective


This section discusses the Problem Statement and the Objective of this thesis.
1.1.1 Problem Statement
Our research deals with the development and optimization of strategies for robotic peg-in-hole
insertion with a Force/Torque sensor.
Various works on peg-in-hole insertion have been reported [1, 2, 5-7].
The approach of Siddharth R. Chhatpar [2] uses force sensing (without torque sensing) and
position sensing for the assembly.
The approach of Y. Zhao [7] uses torque sensing and neural networks for the assembly operation,
but it is not suited to arbitrary peg orientations.

1.1.2 Objective
The objective of the project is to develop a robust strategy for robotic peg-in-hole insertion
using a 6-axis Force/Torque sensor that provides force feedback along the X, Y and Z directions
as well as torque feedback about the same axes. The clearance (i.e. the difference between the
peg and hole diameters) is kept at 0.5 mm.
This assembly has to be done with a 6-DOF KUKA robot. For this purpose, a real-time control
program needs to be written that can communicate with the robot and command it to move in
Cartesian space so as to accomplish the assembly.
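The flavour of such a real-time control program can be sketched as follows. With KUKA.Ethernet RSI XML (Section 2.3.3), the controller sends an XML state packet every interpolation cycle, and the external program must reply with a Cartesian correction that echoes the IPOC timestamp. This is only a hedged sketch: the tag names (Sen, RKorr, IPOC) follow a typical RSI configuration and must match the SEND/RECEIVE blocks of the actual configuration file.

```python
import xml.etree.ElementTree as ET

def build_correction(state_xml, dx=0.0, dy=0.0, dz=0.0):
    """Build the reply packet for one RSI cycle.

    Parses the <IPOC> timestamp out of the state packet sent by the
    controller and echoes it back together with a small Cartesian
    correction (RKorr, values in millimetres). Tag names are assumed
    to follow a typical RSI XML configuration.
    """
    ipoc = ET.fromstring(state_xml).findtext("IPOC")
    reply = ET.Element("Sen", {"Type": "ImFree"})
    ET.SubElement(reply, "RKorr",
                  {"X": f"{dx:.4f}", "Y": f"{dy:.4f}", "Z": f"{dz:.4f}",
                   "A": "0.0000", "B": "0.0000", "C": "0.0000"})
    ET.SubElement(reply, "IPOC").text = ipoc
    return ET.tostring(reply)

# One simulated cycle: the controller's state packet carries IPOC=4208.
state = b"<Rob Type='KUKA'><RIst X='300.0' Y='0.0' Z='400.0'/><IPOC>4208</IPOC></Rob>"
packet = build_correction(state, dz=-0.05)   # press 0.05 mm further down
```

In the real setup the reply is sent back over UDP within the same interpolation cycle; if the deadline is missed repeatedly, the controller aborts the sensor-guided motion.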

Manual control of industrial robots is today usually done using a so-called teach-pendant (see
Figure 1.1), with single motions along individual directions or axes, or multiple-direction
movements.

Figure 1.1: KUKA teach-pendant


After the development and testing of the algorithms, they have to be optimized for a particular
assembly.

2. Industrial Robot
This section describes industrial robots, first in general and then in more detail for this
specific project.

2.1 Industrial robots


There is a difference between robots in general and industrial robots. Robots in general include
all types of robots, which can have special purposes: humanoids (robots constructed to act like a
human), cleaning robots, etc. This report, however, focuses on robots used within industry.

By the ISO definition, an industrial robot is an automatically controlled, reprogrammable,
multipurpose manipulator programmable in three or more axes [8]. The expression "field of
robotics" is often used, and it includes everything related to robots, normally in industrial
applications, typically for production. Robots can be adapted and used for many different tasks.
Typical applications for industrial robots are welding, assembly, painting, inspection, etc.,
performed at high speed and with high precision.
Normally the robot is connected through a controller to a laptop or desktop computer on which the
programming takes place. Almost all industrial robots come with a so-called teach-pendant, which
can be used to move and control the robot. In other words, the robot can be taught to go to
specific points in space and then perform the trajectory it has been taught.

2.1.1 History
The first real robot patents were applied for by George Devol in 1954 [9]. Two years later, the
first robot was produced by a company called Unimation, founded by George Devol and Joseph F.
Engelberger. In these first machines, the angles of each joint were stored during a teaching
phase and could later be replayed. The accuracy was very high: 1/10,000 of an inch. It is worth
noting that the most important measure in robotics is not accuracy but repeatability, since
robots are normally taught to do one task over and over again. The first 6-axis robot, named
"the Stanford arm", was invented in 1969 by Victor Scheinman at Stanford University and could
mimic the motions of a human arm [9]. The Stanford arm made it possible to follow arbitrary
paths in space and also permitted the robot to perform more complicated tasks, such as welding
and assembly. In Europe, industrial robots took off around 1973, when both KUKA and ABB entered
the market and launched their first models. Both came with six electro-mechanically driven axes.
In the late 1970s the consumer market started showing interest, and several America-based
companies entered the market. The real robotics boom came in 1984, when Unimation was sold to
Westinghouse Electric Corporation. Only a few non-Japanese companies managed to survive, and
today the major companies are Adept Technology, Stäubli-Unimation, the Swedish-Swiss company
ABB (Asea Brown Boveri) and the German company KUKA Robotics.

2.2 KUKA KR-6 Robot


The robot used in this project is a KUKA KR 6 (see Figure 2.1). Its versatility and flexibility
make the KUKA KR 6-2 one of the most popular robots. It has six axes and a payload of 6 kg. See
Figure 2.2 for the dimensions of the robot and Figure 2.3 for its specifications.

Figure 2.1: KUKA KR-6 Industrial Robot


Figure 2.2: KUKA KR-6 Dimensions
Figure 2.3: KUKA KR-6 specifications

2.2.1 Robot controller


The robot controller used in this project is a KR C2 edition 2005 (see Figure 2.4). The KR C2 uses
an embedded version of Windows XP as its operating system together with the VxWorks real-time
operating system (see Figure 2.5). KRL (KUKA Robot Language) and RSI (RobotSensorInterface) are
installed on the PC of this controller. It has single-axis servo-amplifiers, a PC-based
controller and an electronic safety system.
Figure 2.4: KUKA KR C2 controller

Figure 2.5: Internals of the KUKA KR C2 controller


2.2.2 3-COM Card
A 3-Com LAN card (see Figure 2.6) is required to establish communication between the KUKA KR C2
controller and an external system. This configuration has an advantage over other communication
methods such as OPC, because the card is directly connected to the VxWorks RTOS. Thus execution
latency is avoided and the robot can be controlled in real time from an external computer.

Figure 2.6: 3-Com LAN card

2.3 Robot Control


In this section we explain common practice in industrial robot control. The first sub-section
explains how to make the robot perform a task by writing robot code that is read by the robot
controller computer. Section 2.3.2 (Programming of KUKA robots) explains more specifically how
KUKA robots are controlled and how the robot language code works. Section 2.3.3 (Real-time
control) then describes the control method and the tools used in this project.

2.3.1 Conventional control


The common practice in industrial robot control is either to move the robot manually with the
robot's teach pendant, or to run a program written in the robot's language (in this case KUKA
Robot Language, KRL) on the robot controller. The program is either written directly in the
specific robot language or generated by the programmer with some kind of post-processor in
combination with CAD/CAM software.

The programming of an industrial robot can be divided into two categories: online and offline
programming. Offline programming means that the robot code is created offline, with no
connection to the physical robot. The generated program is later transferred to the robot and
executed in the real environment. Normally the code is verified in some sort of simulation
software before the program is transferred. In online programming, the programming software is
directly connected to the physical robot. This can be done with a teach pendant used for moving
the robot through certain positions which are stored, after which a trajectory is created
between the stored points. This process of adjusting a position in space is commonly referred to
as "jogging" or "inching" the robot.

Most industrial robots have a similar programming structure that tells them how to act: one
defines points in space, normally called P1, P2, P3, etc., and then specifies how the robot is
supposed to reach these points. An example of program syntax is shown below in Figure 2.7.

Many industrial robot manufacturers offer a simulation software package for their robots
which eases the programming and at the same time makes it possible to perform offline
programming.

The benefits of offline programming are many: it prevents costly damage that could otherwise
occur in the real world, it saves money since production need not be interrupted while
programming, and it shortens, for example, the ramp-up time when switching production to a new
model.

Figure 2.7: Example of short robot program syntax

2.3.2 Programming of KUKA robots (KRL)


For programming a KUKA robot, a specific language called KRL (KUKA Robot Language) is used. The
structure of the code is similar to most other robot languages: points are specified in space
and commands express how the robot is supposed to reach these points. When programming in KRL, a
.SRC file is created containing the actual program code that will be read by the robot
controller, and a .DAT file containing the declaration of variables and other prerequisites that
are read and set automatically before executing the .SRC file. The two files must always have
the same name to be identified as the same program by the robot controller. A small example of
the contents of a .SRC file is shown in Figure 2.8.

The declaration and initialization part can also be located in the .DAT file, but is shown here
in the same program to explain the different steps more clearly. The PTP HOME in the main
section makes the robot perform a point-to-point motion to the defined home position, which is
set in the initialization part. The first instruction to the robot is normally a so-called BCO
run (Block Coincidence), which makes the robot go to a predefined point in space.

This is done in order to set the correspondence between the current real robot position and the
programmed position. The next line makes the robot go to the specified point, which in this case
is defined directly in the same row as a point in the base coordinate system specified by six
values. The six values are: X, Y, Z, which define the point in 3D space, and A (rotation about
the Z axis), B (rotation about the Y axis) and C (rotation about the X axis), which define the
tool orientation.

Figure 2.8: An example of KRL‐code
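As an illustration of how the A, B, C angles define the tool orientation, the following sketch
converts them into a rotation matrix. It assumes the usual KUKA convention of applying A about
Z, then B about Y, then C about X; the function name is ours, not part of KRL:

```python
import math

def kuka_abc_to_matrix(a_deg, b_deg, c_deg):
    """Rotation matrix for KUKA-style angles: A about Z, B about Y, C about X,
    composed as Rz(A) * Ry(B) * Rx(C)."""
    a, b, c = (math.radians(v) for v in (a_deg, b_deg, c_deg))
    ca, sa = math.cos(a), math.sin(a)
    cb, sb = math.cos(b), math.sin(b)
    cc, sc = math.cos(c), math.sin(c)
    # Rows of the product Rz(A) * Ry(B) * Rx(C)
    return [
        [ca * cb, ca * sb * sc - sa * cc, ca * sb * cc + sa * sc],
        [sa * cb, sa * sb * sc + ca * cc, sa * sb * cc - ca * sc],
        [-sb,     cb * sc,                cb * cc],
    ]
```

With A = B = C = 0 the matrix is the identity, i.e. the tool frame is aligned with the base
frame.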


2.3.3 Real-time control (KUKA.Ethernet RSI XML)
Normally industrial robots are programmed to perform a single task, for example welding the
same part over and over throughout the day; hence they are not controlled in real time. However,
real-time control can be used for tasks that are non-repetitive and require the user to alter
the robot's movement from time to time. Examples are inspection with a camera, or using a
gripper as the end-effector to pick up and move different objects.

The KUKA.Ethernet RSI XML software package is required to control the robot from an external
computer in real time. The term external system is often mentioned and refers to the
computer(s) that are connected to the robot controller (in this case the computer that holds
the server that the robot’s controller is communicating with). The KUKA.Ethernet RSI XML is an
add‐on technology package with the following functions [12]:

• Cyclical data transmission from the robot controller to an external system in the
interpolation cycle of 12 ms (e.g. position data, axis angles, operating mode, etc.)
• Cyclical data transmission from an external system to the robot controller in the
interpolation cycle of 12 ms (e.g. sensor data)
• Influencing the robot in the interpolation cycle of 12 ms
• Direct intervention in the path planning of the robot

The characteristics of the package are the following:

• Reloadable RSI object for communication with an external system, in conformity with
KUKA.RobotSensorInterface (RSI)
• Communications module with access to standard Ethernet
• Freely definable inputs and outputs of the communication object
• Data exchange timeout monitoring
• Expandable data frame that is sent to the external system; the data frame consists of a
fixed section that is always sent and a freely definable section

The KUKA.Ethernet RSI XML enables the robot controller to communicate with the external
system via a real-time-capable point-to-point network link. The exchanged data is transmitted
via the Ethernet TCP/IP or UDP/IP protocol as XML strings. For this thesis, TCP/IP was used
because it provides acknowledged, ordered delivery with retransmission of lost packets, and thus
a more reliable connection between host and destination, whereas UDP/IP gives no such delivery
guarantee.

Programming with the KUKA.Ethernet RSI XML package is based on creating and linking RSI
objects. RSI objects are small pieces of pre-programmed code that can be executed and that
provide functionality beyond normal KRL code. To be able to communicate externally through
Ethernet, a specific standard object (ST_ETHERNET) needs to be created.

The code line for creating the ST_ETHERNET object is typically:

err = ST_ETHERNET(A, B, config_file.xml)
Where,

err = a return value of type RSIERR containing the error code produced when creating the object
(normally #RSIOK on success)

A = an integer that receives the ID of the created RSI object, so that the object can be located
and referred to later

B = an integer giving the container to which the RSI object should belong

config_file.xml = a configuration file located in the INIT folder (path C:/KRC/ROBOTER/INIT) on
the robot controller that specifies what should be sent and received by the robot controller.
The content of this file is explained further below.

ST_ETHERNET is the object that can be influenced by external signals; it also sends data back
to the external system in the form of XML strings containing different tags with data. The data
can be, for example, information about the robot's actual axis positions, actual Cartesian
position, etc. This data must be sent to the server and back within each interpolation cycle of
12 ms. In this project, the communication object ST_ETHERNET was used. When one of these objects
is created and linked correctly, the robot controller connects to the external system as a
client. There are many different types of RSI objects, and depending on what you want to do, you
have to create and link the correct objects to each other. Besides the standard Ethernet card,
an additional card (3-COM) was needed to be able to handle the speed of the transferred data
[10].

Figure 2.9: Functional principle of data exchange

The robot controller initiates the cyclical data exchange with a KRC data packet and transfers
further KRC data packets to the external system in the interpolation cycle of 12 ms. This
communication cycle is called an IPO cycle (Input Process Output) and can be seen in Figure 2.9
above. The external system must respond to each KRC data packet with a data packet of its
own.

Figure 2.10: Data exchange sequences

To be able to influence the robot, one needs to initiate an RSI object for the movements. There
are mainly two objects used for this. The first is an object called ST_AXISCORR(A,B), for
specific movements in axes A1 to A6, where A is the specific ID of the created object and B is
the container that the object belongs to.

The second object is called ST_PATHCORR(A,B), for movements in Cartesian coordinates, where A
and B have the same meaning as for the ST_AXISCORR object. A coordinate system (normally BASE,
TCP or WORLD) is also needed as a reference for the movements. This is done by creating an RSI
object called ST_ON1(A,B), where parameter A is a string containing the coordinate system to be
used (expressed as #BASE, #TCP or #WORLD) and B is an integer value: 0 if the correction values
sent to the robot are absolute, or 1 if they are relative. A schematic picture of the data
exchange sequence with the different RSI objects is shown in Figure 2.10 above.

When the robot is delivered from the factory, the BASE coordinate system is the same as the
WORLD and both are located in the base of the robot by default. BASE is normally moved to the
base of the work piece on which the robot is working. The differences between the different
coordinate systems can be seen in Figure 2.11 below.
NOTE: For the robot used in this thesis, all three systems; WORLD CS, ROBOT CS and BASE CS
have the same origin located on the base of the robot.

Figure 2.11: The different coordinate systems

When the robot controller communicates with the external system, it exchanges XML strings. The
content of the XML strings for the demo program provided by KUKA is defined in a file called
ERX_config.xml (the configuration file), which is located in the INIT folder on the robot
controller. A typical structure of this file is shown in Figure 2.12 below.

Figure 2.12: The structure of configuration file


Figure 2.13 CONFIG block of configuration file

As can be seen in Figure 2.13 above, the IP address and port to which the robot controller will
connect, when establishing the connection, are set in this file.

Figure 2.14 SEND block of configuration file


The sub-tags under <SEND> define what the robot controller sends to the external system (see
Figure 2.14). The most important tag sent to the external system in this project is DEF_RIst,
which holds the actual position of the robot's end-effector.

Figure 2.15 RECEIVE block of configuration file

The tag <RECEIVE> describes what the robot controller expects to receive from the external
system (see Figure 2.15). In this case corrections in six values (X, Y, Z, A, B and C) are
included, tagged as RKorr.
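To make the exchange concrete, the following sketch shows how an external system might parse a
packet from the controller and build the reply. The tag names (Rob, RIst, Sen, RKorr, IPOC)
follow KUKA's demo configuration, but the exact layout depends on the configuration file, so
treat them as illustrative:

```python
import xml.etree.ElementTree as ET

def parse_robot_packet(xml_text):
    """Extract the actual TCP position (RIst) and the IPOC timestamp
    from a packet sent by the robot controller."""
    root = ET.fromstring(xml_text)
    rist = root.find("RIst").attrib
    pose = {k: float(rist[k]) for k in ("X", "Y", "Z", "A", "B", "C")}
    ipoc = root.find("IPOC").text
    return pose, ipoc

def build_correction_packet(ipoc, dx=0.0, dy=0.0, dz=0.0, da=0.0, db=0.0, dc=0.0):
    """Build the reply carrying Cartesian corrections (RKorr); the IPOC value
    is echoed back so the controller can match request and reply."""
    sen = ET.Element("Sen", Type="ImFree")
    ET.SubElement(sen, "RKorr", X=str(dx), Y=str(dy), Z=str(dz),
                  A=str(da), B=str(db), C=str(dc))
    ET.SubElement(sen, "IPOC").text = ipoc
    return ET.tostring(sen, encoding="unicode")

# One simulated IPO cycle: read the robot's pose, answer with a small correction.
packet = ('<Rob Type="KUKA">'
          '<RIst X="445.2" Y="0.0" Z="760.5" A="180.0" B="0.0" C="180.0"/>'
          '<IPOC>123456</IPOC></Rob>')
pose, ipoc = parse_robot_packet(packet)
reply = build_correction_packet(ipoc, dz=-0.05)  # press down 0.05 mm this cycle
```

In the real system this parse/build pair would sit inside the TCP server loop and has to
complete within the 12 ms IPO cycle.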
3. The Force/Torque Sensor
This section describes the working of a general 6-axis Force/Torque Sensor and then provides
details of the sensor used for our work.

3.1 A General 6-Axis Force/Torque Sensor


A 6-axis Force/Torque sensor is an electromechanical device that can sense the three components
of force (Fx, Fy, Fz) as well as the three components of torque or moment (Tx, Ty, Tz). Such
sensors are strain-gauge based, and their working is explained below.

3.1.1 Stress and Strain


When a material receives a tensile force P, it develops a stress σ that corresponds to the
applied force. In proportion to the stress, the cross-section contracts and the length elongates
by ΔL from the original length L the material had before receiving the tensile force (see the
upper illustration in Figure 3.1).

Figure 3.1 : Effects of a tensile and compressive force


The ratio of the elongation to the original length is called a tensile strain and is expressed as
follows:

ε = ΔL/L
ε: Strain
L: Original length
ΔL: Elongation

See the lower illustration in Figure 3.1.


If the material receives a compressive force, it bears a compressive strain expressed as
follows:

ε = –ΔL/L

For example, if a tensile force makes a 100 mm long material elongate by 0.01 mm, the strain
initiated in the material is as follows:

ε = ΔL/L = 0.01/100 = 0.0001 = 100 × 10^-6

Thus, strain is a dimensionless number and is expressed as a numeric value times 10^-6 strain,
με or μm/m. The relation between the stress and the strain initiated in a material by an applied
force is expressed as follows, based on Hooke's law:

σ = Eε
σ: Stress
E: Elastic modulus
ε: Strain

Stress is thus obtained by multiplying strain by the elastic modulus. When a material receives a
tensile force, it elongates in the axial direction while contracting in the transverse
direction. Elongation in the axial direction is called longitudinal strain and contraction in the
transverse direction, transverse strain. The absolute value of the ratio between the longitudinal
strain and transverse strain is called Poisson's ratio, which is expressed as follows:

ν = |ε2/ε1|

ν: Poisson's ratio
ε1: Longitudinal strain
ε2: Transverse strain
Poisson's ratio differs depending on the material. For reference, major industrial materials have
the following mechanical properties including Poisson's ratio (see Figure 3.2).

Figure 3.2 : Mechanical Properties of some materials
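The definitions above can be tied together in a short numerical sketch; the elastic modulus and
transverse strain used here are merely typical illustrative values for steel:

```python
# Worked example of the quantities above: a 100 mm rod elongating by 0.01 mm.
E_STEEL = 206e9          # elastic modulus in Pa (typical handbook value, assumed)

L = 100.0                # original length, mm
dL = 0.01                # elongation, mm

strain = dL / L                      # dimensionless: 100e-6
stress = E_STEEL * strain            # Hooke's law: sigma = E * epsilon, in Pa

# Poisson's ratio from longitudinal and transverse strain
eps_long = 100e-6                    # elongation along the axis
eps_trans = -30e-6                   # contraction across the axis (illustrative)
poisson = abs(eps_trans / eps_long)  # about 0.3, typical for steel
```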

3.1.2 Working Of A Strain-Gauge


Each metal has its specific resistance. An external tensile force (compressive force) increases
(decreases) the resistance by elongating (contracting) it. Suppose the original resistance is R
and a strain-initiated change in resistance is ΔR. Then, the following relation is concluded:

ΔR/R = Ks · ΔL/L = Ks · ε

where Ks is the gage factor, the coefficient expressing strain gage sensitivity. General-purpose
strain gages use copper-nickel or nickel-chrome alloy for the resistive element, and the gage
factor provided by these alloys is approximately 2.

3.1.3 Principle of Strain Measurement


Strain-initiated resistance change is extremely small. Thus, for strain measurement, a
Wheatstone bridge is formed to convert the resistance change to a voltage change. Suppose in
Figure 3.3 resistances (Ω) are R1, R2, R3 and R4 and the bridge voltage (V) is E. Then, the output
voltage eo (V) is obtained with the following equation:

eo = ((R1R3 − R2R4) / ((R1 + R2)(R3 + R4))) · E


The strain gage is bonded to the measuring object with a dedicated adhesive. Strain occurring
on the measuring site is transferred to the strain sensing element via the gage base.

Figure 3.3 : Converting measured Strain into Voltage value

For accurate measurement, the strain gage and adhesive should match the measuring material
and operating conditions including temperature.

eo = (((R1 + ΔR)R3 − R2R4) / ((R1 + ΔR + R2)(R3 + R4))) · E

If R1 = R2 = R3 = R4 = R,

eo = ((R² + R·ΔR − R²) / ((2R + ΔR) · 2R)) · E

Since ΔR is extremely small compared with R,

eo ≈ (1/4) · (ΔR/R) · E = (1/4) · Ks · ε · E

Thus obtained is an output voltage that is proportional to a change in resistance, i.e. a change in
strain. This microscopic output voltage is amplified for analog recording or digital
indication of the strain.
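The bridge equations above can be checked numerically. The following sketch compares the
quarter-bridge approximation with the exact single-active-gage expression; the gage resistance,
gage factor and excitation voltage are illustrative values:

```python
def quarter_bridge_output(strain, gage_factor=2.0, excitation=5.0):
    """Approximate bridge output: eo = (1/4) * Ks * eps * E."""
    return 0.25 * gage_factor * strain * excitation

def exact_bridge_output(R, dR, E):
    """Exact single-active-gage output: eo = R*dR / ((2R + dR) * 2R) * E."""
    return (R * dR) / ((2 * R + dR) * 2 * R) * E

# 1000 microstrain on a 350-ohm gage with 5 V excitation (illustrative values)
eps = 1000e-6
R = 350.0
dR = 2.0 * R * eps          # dR/R = Ks * eps with Ks = 2
approx = quarter_bridge_output(eps)       # 2.5 mV
exact = exact_bridge_output(R, dR, 5.0)   # very close to the approximation
```

The two results differ by only a few microvolts, which is why the linearized quarter-bridge
formula is used in practice.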
So we can see that the output voltages of the sensor are proportional to the forces and torques
applied to the sensor's gages. A Force/Torque sensor, in principle, consists of multiple such
gages so as to provide the values of forces and torques about multiple axes.
Figure 3.4 shows a dismantled Force/Torque Sensor.
Figure 3.4 : A dismantled Force/Torque Sensor
Finally, the three components of force and the three components of torque can be calculated as:

[Fx Fy Fz Tx Ty Tz]^T = [A](6×6) × [V1 V2 V3 V4 V5 V6]^T

where V1..V6 are the six output voltages and A is a 6×6 matrix known as the calibration matrix.
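The conversion from output voltages to a force/torque vector is a plain matrix-vector product,
as the following sketch shows (the identity calibration matrix used here is purely
illustrative):

```python
def voltages_to_wrench(A, V):
    """[Fx Fy Fz Tx Ty Tz]^T = A (6x6) * [V1..V6]^T as a mat-vec product."""
    return [sum(a_ij * v_j for a_ij, v_j in zip(row, V)) for row in A]

# With an identity calibration matrix the wrench simply equals the voltages.
identity = [[1.0 if i == j else 0.0 for j in range(6)] for i in range(6)]
wrench = voltages_to_wrench(identity, [0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
```

A real sensor's A matrix is dense (every voltage contributes to every force and torque
component), as Table 5.1 below illustrates.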

3.2 ATI 660-60 F/T Sensor


The sensor used for our work is the ATI 660-60 Delta Force/Torque Sensor (see Figure 3.5).
The specifications for our sensor are given in Table 3.1.

Figure 3.5 ATI 660-60 Delta Sensor

Table 3.1: Specifications For ATI 660-60 Delta Sensor (Last Row)
4. Hybrid Position/Force Control
This section describes the theory of Hybrid Position/Force Control that is widely used in Force-
Controlled applications with active sensing.

4.1 The Theory


The theory was given by M. H. Raibert and J. J. Craig in 1981 [4]. The approach taken by them is
based on a theory of compliant force and position manipulator control.
Every manipulation task can be broken down into elemental components that are defined by a
particular set of contacting surfaces. With each elemental component is associated a set of
constraints, called the natural constraints, that result from the particular mechanical and
geometric characteristics of the task configuration. For instance, a hand in contact with a
stationary rigid surface is not free to move through that surface (position constraint), and, if the
surface is frictionless, it is not free to apply arbitrary forces tangent to the surface (force
constraint). Figure 4.1 describes a task configuration for which compliant control is useful along
with the associated natural constraints.
In general, for each task configuration a generalized surface can be defined in a constraint
space having N degrees of freedom, with position constraints along the normals to this surface
and force constraints along the tangents. These two types of constraint, force and position,
partition the degrees of freedom of possible hand motions into two orthogonal sets that must
be controlled according to different criteria.
Additional constraints, called artificial constraints, are introduced in accordance with these
criteria to specify desired motions or force patterns in the task configuration. That is, each
time the user specifies a desired trajectory in either position or force, an artificial
constraint is defined. These constraints also occur along the tangents and normals to the
generalized surface but, unlike natural constraints, artificial force constraints are specified
along surface normals and artificial position constraints along tangents, so that consistency
with the natural constraints is preserved.
Figure 4.1 : Example of a force-controlled task: turning a screwdriver

Once the natural constraints are used to partition the degrees of freedom into a position-
controlled subset and a force-controlled subset, and desired position and force control
trajectories are specified through artificial constraints, it remains to control the manipulator.
The controller is shown in Figure 4.2 below.

Figure 4.2 : Schematic Of A Hybrid Controller
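The essence of the hybrid controller in Figure 4.2 is a selection mechanism that routes each
degree of freedom either to the position loop or to the force loop. The following sketch shows
only this combination step, with illustrative per-axis commands; it is not the full control law
of [4]:

```python
def hybrid_command(S, u_position, u_force):
    """Combine per-axis commands: axis i obeys the position controller when
    S[i] == 1 and the force controller when S[i] == 0 (Raibert-Craig style)."""
    return [s * up + (1 - s) * uf for s, up, uf in zip(S, u_position, u_force)]

# Example: x force-controlled, the remaining five DOF position-controlled,
# matching the partition used for surface tracing later in this report.
S = [0, 1, 1, 1, 1, 1]                   # selection vector over (x, y, z, a, b, c)
u_pos = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0]   # position-loop outputs (illustrative)
u_frc = [-0.2, 0.0, 0.0, 0.0, 0.0, 0.0]  # force-loop outputs (illustrative)
u = hybrid_command(S, u_pos, u_frc)      # x from the force loop, y from position
```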


5. Preliminary Experiments using Force
Sensing and Control
This section describes the initial experiments carried out to gain familiarity with the force
control scheme.

5.1 Recalibrating the Force/Torque Sensor


Just before this project started, our F/T sensor was partly damaged in a collision, and one of
its output voltages (V1) showed signal saturation. We decided to recalibrate it and, until the
new sensor arrived, use it for simpler applications that require force and torque values in
fewer than six dimensions.

We recalibrated the sensor by keeping the force in one direction constant.


We assumed that for our applications, Fz would remain constant. The calibration matrix of the
working sensor is shown in Table 5.1.

Table 5.1: Calibration Matrix For Working Sensor

 -0.03943    0.046096    6.642287   -82.7549    -3.25622    86.55779
 -7.92787   89.45006     4.510241   -46.9585     2.268988  -50.2394
145.2416    -5.80604   144.9416      -6.48073  146.0442     -8.00946
  0.052026  -0.1381     -5.00965      0.30425    4.96702    -0.20092
  5.76917   -0.25054    -2.82034      0.011043  -2.88219     0.295451
  0.160803  -2.62329     0.21388     -2.78102    0.205997   -2.92291

So, [Fx Fy Fz Tx Ty Tz]^T = [A](6×6) × [V1 V2 V3 V4 V5 V6]^T

Since Fz was assumed constant for our experiments, we eliminated the variable Fz, and with it
the dependency on V1, from the system of equations given above. Finally, we arrived at a new
calibration matrix, shown in Table 5.2.
Table 5.2: Calibration Matrix For Damaged Sensor

  0.044519   6.68164   -82.7566    -3.21656   86.55561
 89.13315   12.42173   -47.3123    10.24067  -50.6765
 -0.13602   -5.06157     0.306574   4.914707  -0.19803
 -0.01988   -8.57759     0.268465  -8.68325    0.613597
 -2.61686    0.053409   -2.77385    0.044305  -2.91404

And now, [Fx Fy Tx Ty Tz]^T = [A'](5×5) × [V2 V3 V4 V5 V6]^T
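The reduction from A to A' can be reproduced as follows: solving row 3 (the Fz row) for V1 and
substituting into the remaining rows gives A'[i][j] = A[i][j] − A[i][0]·A[2][j]/A[2][0], with
the constant Fz-dependent offset absorbed in the sensor's zero bias. The sketch below, our
reconstruction of the procedure, applies this to the matrix of Table 5.1 and reproduces the
entries of Table 5.2 to within rounding:

```python
# Calibration matrix of the working sensor (Table 5.1), rows Fx, Fy, Fz, Tx, Ty, Tz.
A = [
    [-0.03943, 0.046096, 6.642287, -82.7549, -3.25622, 86.55779],
    [-7.92787, 89.45006, 4.510241, -46.9585, 2.268988, -50.2394],
    [145.2416, -5.80604, 144.9416, -6.48073, 146.0442, -8.00946],
    [0.052026, -0.1381, -5.00965, 0.30425, 4.96702, -0.20092],
    [5.76917, -0.25054, -2.82034, 0.011043, -2.88219, 0.295451],
    [0.160803, -2.62329, 0.21388, -2.78102, 0.205997, -2.92291],
]

def reduce_calibration(A):
    """Drop Fz (row 3) and the V1 dependency (column 1): with Fz constant,
    row 3 gives V1 in terms of V2..V6, which is substituted into the rest."""
    rows = [0, 1, 3, 4, 5]          # keep Fx, Fy, Tx, Ty, Tz
    cols = [1, 2, 3, 4, 5]          # keep V2..V6
    return [[A[i][j] - A[i][0] * A[2][j] / A[2][0] for j in cols] for i in rows]

A_red = reduce_calibration(A)       # matches Table 5.2 to within rounding
```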

5.2 Kuka Surface Scanner (KSS)


We developed a discrete-step surface scanner that uses one-dimensional force feedback and
generates the profile of any arbitrary surface over which it is used. Figure 5.1 shows the GUI for
KSS and Figure 5.2 shows the robot scanning a surface.
Figure 5.1: GUI for KSS
Figure 5.2: Kuka Scanning a surface using KSS

The Kuka Surface Scanner works as follows.

KSS has to be supplied with two Cartesian points, the Start Point and the End Point (see Figure
5.1), which are the two extremes of a volume enclosing the surface to be scanned. The robot
carries a probe; starting from the Start Point, it moves down until it experiences a threshold
force in the upward direction due to contact of the probe with the surface (see Figure 5.3). The
robot then scans the surface in a grid-like fashion, in discrete steps, up to the End Point,
with the step size (quantum) specified by the user (see Figure 5.1).
Figure 5.3: Schematic for working of KSS

The discrete points at which the probe stops are stored and plotted to generate the profile of
the scanned surface. The profile generated for the surface shown in Figure 5.1 is shown in
Figure 5.4.
Figure 5.4: Profile generated by KSS for the surface shown in Figure 5.1
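The scanning loop can be sketched as follows, with a mock force function standing in for the
real sensor; the grid here is one-dimensional and the threshold and step size are illustrative:

```python
# Minimal sketch of the KSS loop: at each grid point the probe steps down until
# the (simulated) contact force crosses the threshold, and the stopping height
# is recorded as one point of the surface profile.
def mock_contact_force(x, surface_height):
    """Pretend force sensor: a contact force appears once the probe is at or
    below the surface."""
    return 10.0 if x <= surface_height else 0.0

def scan_line(start_x, heights, threshold=5.0, step=0.1):
    profile = []
    for h in heights:               # one mock surface height per grid point
        x = start_x
        while mock_contact_force(x, h) < threshold:
            x -= step               # move the probe down by one quantum
        profile.append(round(x, 6))
    return profile

profile = scan_line(start_x=5.0, heights=[1.0, 2.0, 1.5])
```

Each recorded height is within one step of the true surface, which is exactly the resolution
limit of a discrete-step scanner and the motivation for the continuous tracer (TUSS) below.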

5.3 Kuka Stiffness Finder (KSF)


After KSS, we developed another application that also needs one-dimensional force feedback.
The Kuka Stiffness Finder finds the stiffness of an unknown material placed under the probe.
The stiffness of a material is an extensive mechanical property and is defined as follows (see
Figure 5.5):

Figure 5.5: Defining Stiffness Of A Material

The stiffness, k, of a body is a measure of the resistance offered by an elastic body to
deformation. For an elastic body with a single degree of freedom (for example, stretching or
compression of a rod), the stiffness is defined as

k = F/δ

where F is the force applied on the body and δ is the displacement produced by the force along
the same degree of freedom (for instance, the change in length of a stretched spring) [11].

The Kuka Stiffness Finder first identifies contact with the material surface and then presses
the material with a predefined force. In the process, it records the penetration caused and
calculates the stiffness value as defined in the equation above.

KSF also maintains a log of stiffness values for all the materials that have been tested with
it. It then uses this log to compare a new material placed under it with the materials already
in the log, and tries to identify the new material.
We tested KSF on a mouse pad twice (see Figure 5.6 and Figure 5.7); the second time, it
correctly identified the material as the mouse pad.
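The stiffness computation and the log-based prediction can be sketched as follows; the material
names, stiffness values and matching tolerance are illustrative, not measured data:

```python
# Sketch of the KSF idea: stiffness k = F / delta, plus a nearest-match lookup
# in a log of previously tested materials.
def stiffness(force_n, penetration_mm):
    return force_n / penetration_mm

# Illustrative log: material name -> stiffness in N/mm
log = {"mouse pad": 2.5, "rubber sheet": 8.0, "aluminium block": 900.0}

def predict_material(k, log, tolerance=0.5):
    """Return the logged material whose stiffness is closest to k,
    or None if nothing is within the tolerance (i.e. a new material)."""
    name, k_logged = min(log.items(), key=lambda item: abs(item[1] - k))
    return name if abs(k_logged - k) <= tolerance else None

k = stiffness(force_n=5.0, penetration_mm=2.0)   # 2.5 N/mm
guess = predict_material(k, log)                 # matches the mouse pad entry
```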

Figure 5.6: KSF working over a mouse pad


Figure 5.7: GUI for KSF

5.4 Lead Through Programming (LTP) Through Force Control


Lead Through Programming (LTP) is an application in which the manipulator is driven through the
various motions needed to perform a given task, while recording the motions into the robot's
computer memory [12]. LTP through force control is the easiest form of LTP, because the operator
can lead the robot, which is otherwise rigid, along any arbitrary path simply by pushing or
pulling it. This allows direct teaching of any path the operator has in mind.

This is also useful when a heavy load has to be transferred from one place to another (payload
assist). The robot need not be taught the destination positions again and again; instead the
operator can lift the load held in the robot's gripper and take it anywhere with a slight push
or pull.
In our application, we have made the distance traversed by the robot proportional to the force
applied by the operator, so that the harder the push or pull, the larger the displacement.
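The force-proportional motion rule can be sketched as follows; the gain is illustrative, and
the small deadband is our addition to keep sensor noise from moving the robot:

```python
def ltp_step(force, gain=0.05, deadband=2.0):
    """Commanded displacement (mm per IPO cycle) for one axis, proportional
    to the operator's applied force; forces inside the deadband are ignored."""
    if abs(force) <= deadband:
        return 0.0
    return gain * force

step_push = ltp_step(20.0)    # harder push, larger displacement
step_idle = ltp_step(0.5)     # below the deadband, no motion
```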
Figure 5.8 shows a heavy mass being transferred by an operator very easily using this
application.
Figure 5.8: Payload Assist

5.5 A Continuous Tracing Algorithm (TUSS)


TUSS, or Trace Using State Space, is an improvement over KSS. Since KSS was a discrete-step
scanner, we went on to develop an algorithm that can guide the robot to continuously trace an
arbitrary surface while maintaining contact with that surface using force control.

5.5.1 Introduction
Most of the work done in robot force control assumes that a model of the environment is known a
priori (Demey, Bruyninckx and De Schutter [13]; Masoud and Masoud [14]). In general, however, it
is difficult to obtain a correct model of the environment, and sometimes force control must be
implemented on an unknown surface. TUSS does not assume any model of the environment. We have
implemented hybrid force/position control as defined by M. H. Raibert and J. J. Craig [4]. Here
we have kept the orientation of the robotic tool constant. The contour of the surface is traced
by applying a constant force in the downward x direction while the robot moves in the y
direction. The tracing tool makes a point contact with the surface. Hence, out of the six
Cartesian degrees of freedom, x is force controlled whereas y, z, a, b and c are position
controlled. Many applications require the orientation of the tool to be normal to the surface of
the work piece; this has been addressed by many authors using velocity/force control (Kazerooni
and Her [15]; Goddard, Zheng and Hemami [16]). Our work is based on a position-controlled
industrial robot, and orientation control in the normal direction is not considered here.

5.5.2 Human Approach


If a human being has to move a finger over a surface while maintaining contact, he or she does
it by sensing the forces and torques in various directions. But if the person has to move the
finger over the surface keeping the orientation of the finger fixed, two-dimensional force
feedback suffices, as shown in Figure 5.9:

Figure 5.9: A human finger tracing a surface from right to left while maintaining the contact

This approach can be stated in steps as follows:

1. Move down until touch is sensed or there is an obstacle in the downward direction. This
indicates that the surface to be traced has been reached.
2. Move left while feeling the touch in the downward direction, as long as there is no obstacle
in the direction of motion.
3. Move slightly up if there is an obstacle on the surface.

The three steps stated above are executed in parallel.

So, to implement this approach, we have to create a black-box as shown in Figure 5.10:

Figure 5.10: Black Box depicting the human approach

5.5.3 TUSS
We have to implement the black box of Figure 5.10 on the robot side. For our algorithm, we have
used the hybrid control approach in which force control is done in the x axis and position
control in the y axis. Suppose we have to move over a surface as shown in Figure 5.11:
Figure 5.11: Robot with a probe, moving over a surface while maintaining the contact

We have fixed a 2-D coordinate system to our aluminum probe, i.e. the tool (Figure 5.11).
Some basic elements of the TUSS algorithm are:

1. Inputs:

FX:

To sense the feeling of touch, we monitor the force in the upward direction. FX is a binary
variable that, when set, represents the feeling of touch, i.e. the force in the positive X
direction. So if the force in the positive X direction crosses a threshold value, FX = 1; else
FX = 0. This can be given as:

FX = 1, when Force in +X > FXThreshold, else FX = 0.

FY:

To identify an obstacle while in motion, we detect the force in the positive Y direction
(recall that the probe moves in the negative Y direction). FY is a binary variable that, when
set, represents the feeling of an obstacle, i.e. the force in the positive Y direction. So if
the force in the positive Y direction crosses a threshold value, FY = 1; else FY = 0. This can
be given as:

FY = 1, when Force in +Y > FYThreshold, else FY = 0.
2. Outputs:

UP:

UP is a binary variable that, when set, commands the robot to move in the upward direction i.e.
the positive X direction. When UP is reset, there is no motion of the probe in the positive X
direction.

DOWN:

DOWN is a binary variable that, when set, commands the robot to move in the downward
direction i.e. the negative X direction. When DOWN is reset, there is no motion of the probe in
the negative X direction.

LEFT:

LEFT is a binary variable that, when set, commands the robot to move in the left direction i.e.
the negative Y direction. When LEFT is reset, there is no motion of the probe in the negative Y
direction.

So, our black-box now looks like as shown in Figure 5.12:

Figure 5.12: Black-Box depicting the TUSS Algorithm

Now, we can simply state our algorithm as:

Move down until touch is sensed and there is no obstacle, i.e. DOWN=1 when FX=0 and FY=0,
else DOWN=0.

Move left if touch is felt and there is no obstacle, i.e. LEFT=1 when FX=1 and FY=0, else LEFT=0.
Move up if there is any obstacle, i.e. UP=1 when FY=1, else UP=0. Our algorithm can be
presented using a truth table as shown in Table 5.3:

Table 5.3: TUSS Algorithm depicted in the Truth Table

Therefore, the downward motion is controlled by monitoring the force in the +X direction, and the left and upward motions are controlled by monitoring the force in the +Y direction. One more parameter is the distance moved in the up, down and left directions. The RSI monitors the forces in each IPOC of 12 ms and sets or resets the UP, DOWN and LEFT binary variables. How much the probe moves in these directions in each IPOC of 12 ms decides the speed of motion.

Suppose we give constant values for the distance to be moved in each direction. This will lead to motion in contact at a constant speed. The speeds in the three directions may be different. In our experiment, we have not kept the motions in each IPOC as constant values.

But, we have made the distance to be moved dependent on the corresponding forces, i.e.:

When DOWN=1, move in the –X direction by:

KDOWN × ( FXThreshold − Current Force in +X direction )

When UP=1, move in the +X direction by:

KUP × ( Current Force in +Y direction − FYThreshold )

When LEFT=1, move in the −Y direction by:

KLEFT

( Note: The motion in the −Y direction is kept a positive constant so as to have a net motion from Right-To-Left, while the forces in X and Y are maintained by moving up and down. )

where KDOWN, KUP and KLEFT are respective positive constants.

This leads to a constant speed motion until some touch or obstacle is sensed and subsequent
motion is like that of a spring.
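The three rules above, with the force-proportional step sizes, can be sketched as a single IPOC update. This is a minimal sketch; the function and constant names are illustrative, not from the actual robot-side code, and the threshold and gain values are the ones quoted later in the experiments.

```python
# Sketch of one TUSS control cycle (one 12 ms IPOC): binarize the forces
# against the thresholds, then emit a spring-like correction. Names and
# values are illustrative, not from the thesis implementation.
FX_THRESHOLD = 4.0   # N, touch threshold in +X
FY_THRESHOLD = 4.0   # N, obstacle threshold in +Y
K_DOWN, K_UP, K_LEFT = 0.01, 0.005, 0.1  # mm/N, mm/N, mm

def tuss_step(fx, fy):
    """Return the (dx, dy) correction in mm for one IPOC,
    given the measured forces fx (+X) and fy (+Y) in newtons."""
    FX = 1 if fx > FX_THRESHOLD else 0   # touch sensed
    FY = 1 if fy > FY_THRESHOLD else 0   # obstacle sensed
    dx = dy = 0.0
    if FY == 1:                          # UP: back off from the obstacle
        dx = K_UP * (fy - FY_THRESHOLD)
    elif FX == 0:                        # DOWN: no touch yet, approach surface
        dx = -K_DOWN * (FX_THRESHOLD - fx)
    else:                                # LEFT: in contact, traverse surface
        dy = -K_LEFT
    return dx, dy
```

The branch order encodes the truth table of Table 5.3: an obstacle (FY=1) always takes priority, otherwise the probe descends until touch, otherwise it traverses left.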

5.5.4 Analysing Current Algorithm under various cases


Any arbitrary surface can be decomposed into four basic kinds of segments. So, we will discuss the working of the TUSS algorithm in these four cases using a State Space approach.

Case 1: Flat Surface Fully Horizontal. This case is shown in Figure 5.13:

Figure 5.13: Probe moving on a fully horizontal flat surface

(Note: The arrows depict the motion of the probe.)

The state diagram for this case is shown in Figure 5.14:


Figure 5.14: State Diagram for a fully horizontal flat surface

In this case, the probe moves on the surface smoothly while maintaining a continuous contact.

Case 2: Flat Surface Fully Vertical. This case is shown in Figure 5.15:

Figure 5.15: Probe moving on a fully vertical flat surface


The state diagram for this case is shown in Figure 5.16:

Figure 5.16: State Diagram for a fully vertical flat surface

In this case, the probe moves on the surface smoothly while maintaining a continuous contact.

Case 3: Slant Surface Type 1. This case is shown in Figure 5.17:

Figure 5.17: Probe moving on a slant surface of Type 1

The state diagram for this case is shown in Figure 5.18:


Figure 5.18: State Diagram for a slant surface of Type 1

From the directions of the arrows in Figure 5.18, we can see that the probe is not in continuous contact with the surface. It tries to maintain the contact in steps (after each IPOC). Even the human finger loses contact in such a case and, as soon as it detects this (within the reaction time of the brain), it tries to make contact again. Here, the IPOC time of the RSI relates to the brain's reaction time (to be more precise, the fingertip reaction time [17]).

Since the IPOC time is very short (12 ms), this approximation of continuous-contact motion is good enough for various applications.

Case 4: Slant Surface Type 2. This case is shown in Figure 5.19:

Figure 5.19: Probe moving on a slant surface of Type 2


The state diagram for this case is shown in Figure 5.20:

Figure 5.20: State Diagram for a slant surface of Type 2

This case is the most interesting one. Although the probe will maintain continuous contact, there is a problem of Deadlock Cycles. Here, states A and B can lead to a deadlock cycle ABABAB......... causing a cycle of up and down motions at the same point, thereby leading to no motion from right to left. So, reaching either of the states A and B may lead to a deadlock that may last indefinitely.

The State-Space solution we devised is a recurrent Markovian system (or Markov Chain) [18] [19] due to the following characteristics:

1. Since the model of the environment is unknown a priori, the occurrence of the next state is stochastic and does not depend on the past states.
2. Since the robot has a repeatability of 0.1 mm, even the cyclic motion UP-DOWN-UP-DOWN may or may not lead to graph-cycles [20][21].

Due to the unknown surface environment and the repeatability error, the next state is independent of the past states, which satisfies the memoryless property.
Explanation of Deadlock Cycle

In a slant as shown in Figure 5.19, the probe will experience forces in both the +X as well as +Y
directions.

Now, the probe being in state A will go up and reach state B. From B, it will move down and reach state A. There is a probabilistic chance that state A transits to state D, or state B to state C (depending upon the threshold forces and the constants KDOWN, KUP and KLEFT), which can lead to motion; otherwise, if the probe gets stuck in the Deadlock Cycle ABABABAB......, it will perform up and down motions at the same point.

Similarly, BDBDBDBD........ constitutes another Deadlock Cycle. To avoid this problem, we suggest, in the next section, an improvement over this algorithm that considers the previous history.

5.5.5 Improved TUSS to avoid the problem of Deadlock Cycles


There are basically two Deadlock Cycles identified: one is ABABABAB............. and the other is BCBCBC......... So, our aim is to break these two deadlock cycles, i.e. there should be no consecutive Up and Down motions. To accomplish this, we incorporate two binary flags in our algorithm.

These are:

LastWasUp and

LastWasDown.

LastWasUp=1 denotes that the last motion command was an Up motion command. LastWasDown=1 denotes that the last motion command was a Down motion command. So, whenever we get an Up motion command, we should check the flag LastWasDown. If it is reset, then we can perform the Up motion (also setting the LastWasUp flag and resetting the LastWasDown flag); otherwise we will perform a Left motion and reset both the flags.

Similarly, when we receive a Down motion command, we should check the flag LastWasUp. If it is reset,
then we can perform the Down motion (also setting the LastWasDown flag and resetting the LastWasUp
flag), otherwise we will perform a Left motion and reset both the flags.

This can be stated through the binary equations shown in Figure 5.21:
Figure 5.21: Binary equations for Improved TUSS

Therefore, finally our system becomes a Mealy Machine [22] whose output depends upon the
current state as well as the inputs (LastWasUp, LastWasDown) and is shown in Figure 5.22:

Figure 5.22: Block Diagram for the Improved TUSS Algorithm without Deadlock Cycles
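The flag-based arbitration described above can be sketched as follows. This is a minimal sketch of the arbitration step only, not the full controller; the function and variable names are illustrative.

```python
# Sketch of the Improved TUSS arbitration with the LastWasUp / LastWasDown
# flags: consecutive Up and Down commands are broken by substituting a Left
# motion, so the deadlock cycles cannot occur. Names are illustrative.
last_was_up = False
last_was_down = False

def arbitrate(command):
    """Map a raw TUSS command ('UP', 'DOWN' or 'LEFT') to the motion
    actually executed, updating the history flags."""
    global last_was_up, last_was_down
    if command == 'UP':
        if last_was_down:                  # would start an UP-DOWN cycle
            last_was_up = last_was_down = False
            return 'LEFT'
        last_was_up, last_was_down = True, False
        return 'UP'
    if command == 'DOWN':
        if last_was_up:                    # would start a DOWN-UP cycle
            last_was_up = last_was_down = False
            return 'LEFT'
        last_was_up, last_was_down = False, True
        return 'DOWN'
    last_was_up = last_was_down = False    # LEFT resets both flags
    return 'LEFT'
```

For example, the raw deadlock sequence DOWN, UP, DOWN, UP is executed as DOWN, LEFT, DOWN, LEFT, giving a net leftward motion.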

5.5.6 Experimental Results


We chose a semi-circular contour (since a circle has a continuously varying slope) to test the Improved TUSS algorithm. The experimental setup is shown in Figure 5.23:
Figure 5.23(a): Experimental Setup to test the Improved TUSS Algorithm

Figure 5.23(b): GUI to collect data from the Improved TUSS Algorithm
The probe is a chamfered one with radius = 5.00 mm. The direction of motion is from Right-To-Left; FXThreshold = 4.0 N, FYThreshold = 4.0 N, KDOWN = 0.01 mm/N, KUP = 0.005 mm/N and KLEFT = 0.1 mm. Figure 5.24 compares the actual contour with the path followed by the Improved TUSS algorithm:

Figure 5.24: Comparing the actual contour with the path followed by the Improved TUSS Algorithm

Since we have taken a chamfered probe, the point of contact changes while tracing, so there is an initial mismatch of 5.00 mm, i.e. the radius of the probe, between the actual contour and the path traced.

Figure 5.25 shows the path traced when the point of contact almost remains the same:

Figure 5.25 – Comparing the actual contour with the path followed when the point of contact remains
same
Part 1 of Figure 5.25 can be compared with Figure 5.17, and Part 2 of Figure 5.25 can be compared with Figure 5.19.
Figure 5.26 & Figure 5.27 show the force profiles:

Figure 5.26: Force Profile in X – Direction


The average value of FX = 4.83 N, against FXThreshold = 4.0 N.

Figure 5.27: Force Profile in Y – Direction

The average value of FY = 5.9 N (for the positive values of FY, where force control was done), against FYThreshold = 4.0 N.

Validation of Force Data :


We found the slope of the tangents at each point of the contour using the formula:

tan (θ) = FY/FX,

and plotted θ as shown in Figure 5.28:

Figure 5.28: Slope Of Tangents (In Degrees) derived from the force values

It can be easily seen that the slope varies continuously from −90° to +90°, as expected (from Left-To-Right). The slope was smoothed using locally weighted scatter plot smoothing (LOWESS), then plotted and compared with the slope found geometrically, in Figure 5.29 and Figure 5.30 respectively:
Figure 5.29: Smoothed Slope Of Tangents (In Degrees) derived from the force values

Figure 5.30: Comparing Geometrically found Slope with that found from the Forces
Using the slope found from the force values, we tried to reconstruct the contour as shown in
Figure 5.31:

Figure 5.31: Reconstructing the contour using force values ( i.e. the slope obtained from force values)

So, we see from Figure 5.31 that a good inference of a contour can be made from the force
values obtained while tracing the contour using The Improved TUSS Algorithm.
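Such a reconstruction amounts to integrating the tangent angles along the path. The following is a minimal sketch of this, assuming equally spaced samples along the contour and an illustrative step length; the function name and sample values are not thesis data.

```python
# Sketch of reconstructing a contour from the tangent angles obtained from
# the force ratio tan(theta) = FY/FX. Equal path-length steps are assumed.
import math

def reconstruct(thetas_deg, ds=0.1):
    """Integrate tangent angles theta (degrees) into (y, x) contour points,
    stepping ds mm along the contour per sample; y is the horizontal
    (traverse) axis and x the vertical axis, as in Figure 5.11."""
    y, x = 0.0, 0.0
    points = [(y, x)]
    for t in thetas_deg:
        th = math.radians(t)
        y -= ds * math.cos(th)   # horizontal progress (right-to-left, -Y)
        x += ds * math.sin(th)   # vertical change given by the local slope
        points.append((y, x))
    return points
```

On a flat stretch (all angles zero) the reconstructed points move purely leftward with no height change, as expected.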
6. Peg In The Hole
This section describes the development of algorithms to achieve a Peg-In-Hole assembly. The Peg-In-Hole problem is the benchmark problem for robotic assembly: given the nominal position and orientation of the hole, we have to use the signals from the F/T sensor to position and align the peg for insertion into the hole.
In general, there is a three step solution to the problem:

1. Initially, the peg can be guided either through position control or vision control to approximately reach and hit the hole. Then the search for the hole's center begins, so as to bring the center of the peg within the clearance area around the center of the hole [2]. This removes the positional error between the peg and the hole (see Figure 6.1).

Figure 6.1: Hole Search

2. Now, since the peg is sufficiently close to the center of the hole, the directional or orientational error between the peg and the hole has to be removed so that the peg easily inserts into the hole. The first case involves the removal of a Large Directional Error, where the peg may still be outside the clearance region of the hole [5] (see Figure 6.2).
Figure 6.2: Depiction of Large Directional Error

3. Then comes the removal of a Small Directional Error, where the peg is accurately within the clearance range of the hole and only a directional manipulation of the peg's orientation needs to be done so as to have a smooth peg insertion without any jamming [5] (see Figure 6.3).
Figure 6.3: Depiction of Small Directional Error

6.1 Hole Search


The aim of hole search is to place the peg's center within the clearance region of the hole's center, which is a circular area whose diameter equals the clearance between the peg and the hole (see Figure 6.1).

There are basically two types of Hole search strategies:


a) Blind Search: an exhaustive search within the search space until the goal is reached.
b) Intelligent Search: an intelligent, decision-based search that reaches the goal without exhaustive search.

6.1.1 Blind Search Strategies


We have implemented two blind search strategies, viz. Grid Search and Spiral Search. These
strategies use the Hybrid Control Scheme (discussed in Section 4), where the Force Control is
done in the direction of the hole and the position control is done in the plane of the surface
containing the hole.

6.1.1.1 Grid Search


In this type of search, a continuous tracing of the surface where the hole is assumed to be present is done in a grid-like fashion (see Figure 6.4).
Figure 6.4: Search Points for Grid Search

For the search to be exhaustive, and to ensure that the peg does not miss the hole, the spacing between the search points should not be greater than √2·c [2], where c is the clearance between the peg and the hole and is defined as:

c = (D − d)/2, where

D = Hole Diameter
d = Peg Diameter
This kind of search can be done using 1-dimensional force feedback (assuming that the hole surface is planar). The peg needs to be moved down until it touches the hole surface, and then either a discrete or a continuous trace path can be followed until the peg descends into the hole.
This search can be used for both the cases:

a) When the peg is parallel to the hole surface: The peg will descend into the hole when it
comes within the clearance range of the hole.

b) When the peg is tilted: The peg may descend into the hole even when it has not reached the center of the hole; in such a case, goal achievement is identified when the peg descends the most. Note that the tilted peg may hit the hole walls, so a two-dimensional force-feedback continuous tracing algorithm like TUSS is required for this search in the tilted-peg case.

We implemented a Grid search with a clearance of 0.5 mm. The path traced is shown in Figure
6.5.
Figure 6.5: Grid Search with clearance=0.5 mm.

The dip at the center denotes that the peg has reached the center of the hole.
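One possible way to generate the grid search points, assuming a rectangular search region and a serpentine (boustrophedon) visiting order (both assumptions of this sketch, not specified in the text), is:

```python
# Sketch of generating grid-search waypoints with the exhaustiveness
# criterion from the text: spacing no greater than sqrt(2)*c, where
# c = (D - d)/2. Region dimensions and names are illustrative.
import math

def grid_points(width, height, D, d):
    """Serpentine grid of (x, y) waypoints covering a width x height region (mm)."""
    c = (D - d) / 2.0                 # peg-hole clearance
    step = math.sqrt(2.0) * c         # maximum spacing for an exhaustive search
    nx = int(width // step) + 1
    ny = int(height // step) + 1
    pts = []
    for j in range(ny):
        row = [(i * step, j * step) for i in range(nx)]
        if j % 2 == 1:
            row.reverse()             # serpentine path: no long retraces
        pts.extend(row)
    return pts
```

With D = 51 mm and d = 50 mm (c = 0.5 mm, as in the experiment), the point spacing comes out to about 0.71 mm.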

6.1.1.2 Spiral Search


Spiral search is another kind of blind search, involving tracing of the surface in a spiral fashion. The Spiral Search is better than the Grid Search because it involves a much shorter search path and has no sharp changes in the direction of search, both factors leading to a shorter search time.

For our work, we chose the Archimedean Spiral. In polar coordinates (r, θ) it can be described by the equation

r = a + b·θ

with real numbers a and b. Changing the parameter a will turn the spiral, while b controls the distance between successive turnings.

The pitch of such a spiral is defined as P = 2π·b. The pitch refers to the space between the turns of the spiral. For the Spiral search to be exhaustive, the criterion is that the pitch should be less than or equal to the assembly clearance c [2].

Figure 6.6 shows the rate at which the spiral needs to be progressed so as to move at a
constant path speed. Also pitch P is equal to the clearance so as to make the search exhaustive.
Figure 6.6: Spiral Search criterion for constant speed search
Again, this search can be used for both the cases:

a) When the peg is parallel to the hole surface: The peg will descend into the hole when it comes within the clearance range of the hole.

b) When the peg is tilted: The peg may descend into the hole even when it has not reached the center of the hole; in such a case, goal achievement is identified when the peg descends the most. Note that the tilted peg may hit the hole walls, so a two-dimensional force-feedback continuous tracing algorithm like TUSS is required for this search in the tilted-peg case.

Figure 6.7 shows the path traced in Spiral search with clearance=0.5 mm.

Figure 6.7: Spiral Search with clearance=0.5 mm.

Again, the dip at the end denotes that the peg has reached the centre of the hole.
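A sketch of generating such spiral waypoints, assuming a = 0 and approximating the constant-path-speed criterion by dθ ≈ v·dt/r (all numeric values illustrative, not from the experiment):

```python
# Sketch of Archimedean-spiral search waypoints at roughly constant path
# speed: r = b*theta, with pitch 2*pi*b set equal to the clearance c and the
# angular step chosen per cycle so that r*dtheta stays near v*dt.
import math

def spiral_points(c=0.5, v=1.0, dt=0.012, r_max=5.0):
    """(x, y) waypoints of an exhaustive spiral with pitch c (mm),
    path speed ~v mm/s, sampled every dt seconds out to radius r_max."""
    b = c / (2.0 * math.pi)           # pitch P = 2*pi*b = c (exhaustive)
    theta, pts = 0.0, []
    while True:
        r = b * theta                 # a = 0: spiral starts at the center
        pts.append((r * math.cos(theta), r * math.sin(theta)))
        if r >= r_max:
            return pts
        theta += v * dt / max(r, b)   # dtheta ~ arc step / current radius
```

The `max(r, b)` guard only avoids a division by zero at the center; once r grows, the arc length covered per cycle is approximately v·dt.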

6.1.2 Intelligent Search Strategies


The intelligent search strategies provide an estimate of the hole's center as soon as the hole is sensed. We have implemented two intelligent search strategies, viz. Search using Torque Information and Neural Networks, and Search using the Precession Strategy.
The first is implemented in simulation, while the second is implemented on the actual robot.
6.1.2.1 Neural Peg In The Hole

6.1.2.1.1 Neural Networks


For a general system, there are some internal relations among its different states and
measurable features. These relations can be written in the mathematical form of

y = f (x)

y ∈ Rm is an m-dimensional vector that denotes the state of the system. x ∈ Rn is an n-dimensional vector of measured physical quantities. The mapping from x to y can be represented by a function f. If x is measurable and y is observable, then we can estimate the state y from a model f* of the function f. If a goal state is given, we can attempt to control the system to the desired state by some control strategy using the identified mapping f*. But in practice, the system may be too complex for an analytic approach to succeed. The mapping may be highly nonlinear and difficult to model mathematically. Due to the powerful nonlinear computational ability of neural networks, we choose to use a neural net to construct an approximate mapping for function f instead of attempting an analytic derivation. We expect a neural net will generate a mapping from measured features to the system state like

y’= g(x)

y’ is an estimate of the state from the neural-net mapping g, which is an approximation of


function f. When this neural-net mapping is used in control, we get the system measurements x
from sensors and then estimate the current states of the system by the neural-net mapping.

1
(Note: Until now, we were able to use the damaged F/T sensor under the assumption that one force component is kept constant; further experiments are free from this assumption, therefore we used the new F/T sensor for the subsequent work.)
Figure 6.8: Basic structure of neurocontroller

By computing the difference between the current state and the goal state, we can derive a
control action from an action generator. In response to the appropriate control actions, the
system state will evolve to converge on the goal state. The combination of the neural-net
mapping and the action generator is called a neurocontroller in this thesis. The structure of the
neurocontroller is illustrated in Figure 6.8. The mapping in which we are interested is that from
the moments or torques to the position of the peg with respect to the hole.

The basic processing unit of a neural network is called a neuron or node. A neural network is
formed through weighted connections among the neurons. A neuron consists of multiple
inputs, an activation function and an output, as shown in Figure 6.9.

Figure 6.9: Structure of a neuron

The neuron’s inputs are from external inputs or outputs of other neurons. The weighted sum of
these inputs drives the neuron’s activation function. An output is produced by the activation
function, which will have different forms for different kinds of neural networks.
The weights shown in Figure 6.9 are the storage elements of the neural network. Before the
neural net is trained, they are assigned random values. Training the neural net consists of
adjusting these weights according to some example data from the system. The example data is
called a pattern or training set for the neural network. Each pattern is a pair of input and output
vectors. The input is a vector of measurable features, and the output, in our case, is a vector
that describes the location of the hole. After learning, the weights store the information of the
system resulting in an approximate mapping from the input space domain to the output space.
It is thus useful to rewrite our mapping in a different form, recognizing a vector of weights as
another input

y’= g(x,w)

where w is the vector of weights. There exists a variety of methods for seeking an optimal set of
weights to best approximate the desired mapping. In all cases, though, the goal is to adjust the
weights w to model the system as precisely as possible. Here, we introduce some neural-net
methods used in this thesis. The most commonly used neural-net method is the traditional
multi-layer feedforward neural net with a backpropagation learning algorithm [23]. The
architecture of this kind of neural net is shown in Figure 6.10.

Figure 6.10: 3-layer Backpropagation Neural Network

It consists of an input layer, one or more hidden layers, and an output layer. The neurons
between any two adjacent layers are fully interconnected in the feedforward direction. The
weight of each connection is adjusted during training. The activation function can be Gaussian,
logistic or a sigmoid function for the hidden layers. We choose a linear function for the output
layer nodes. To simulate the functional mapping precisely through a BP neural network, we
must select the proper number of hidden nodes, the parameters for the activation function and
the connection weights.
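As an illustration of the backpropagation training described above, the following is a toy-sized sketch of a one-hidden-layer network with tanh hidden units and a linear output, fitted to an arbitrary smooth curve. In the actual work the network maps moments to peg position; all names, sizes and values here are illustrative, and a real implementation would use a neural-network library.

```python
# Minimal one-hidden-layer feedforward net with backpropagation:
# tanh hidden units, linear output, stochastic gradient descent.
import math, random

random.seed(0)
H = 8                                            # hidden nodes
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    return h, b2 + sum(w2[i] * h[i] for i in range(H))

def train_step(x, y, lr=0.05):
    """One backprop update on the pattern (x, y); returns the squared error."""
    global b2
    h, y_hat = forward(x)
    err = y_hat - y                              # dE/dy_hat for E = err^2 / 2
    for i in range(H):
        grad_h = err * w2[i] * (1.0 - h[i] ** 2) # gradient through tanh
        w2[i] -= lr * err * h[i]
        w1[i] -= lr * grad_h * x
        b1[i] -= lr * grad_h
    b2 -= lr * err
    return err * err

# Fit a toy curve y = sin(x) on [-1, 1] as the training set
data = [(x / 10.0, math.sin(x / 10.0)) for x in range(-10, 11)]
for _ in range(500):
    for x, y in data:
        train_step(x, y)
```

After training, the weights encode the approximate mapping g(x, w) from the input to the output, exactly in the sense of the equations above.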
6.1.2.1.2 Mathematical Model for Parallel Peg
The basic peg-in-hole problem is shown in Figure 6.11. In this model, we assume that both the
surface of the subassembly and the peg bottom surface are parallel to each other. So when the
peg moves in contact with the subassembly, there is surface-to-surface contact (except for some
conditions we will discuss later). With the peg moving towards the hole, the contact state will
change. This change will be reflected through the reaction forces and moments.

Figure 6.11: The Parallel Peg case

As shown in Figure 6.12, when the center of the peg is “outside” the line between points A and
B, the reaction moments and forces provide no information about the peg’s location relative to
the hole. Here, “outside” the line means the distance from the hole center to the peg center is
greater than the distance from the hole center to the chord AB. As the peg moves inside this
line, the reaction force due to contact must be off-center with respect to the peg center,
leading to a measurable reaction moment. (The peg will tilt slightly relative to the subassembly
surface under this condition as shown in Figure 6.13). As the peg position changes, the direction
and value of the moment will be different. The neural-net controller can use this moment
information as clues to indicate the peg position and then guide the peg to move towards the
desired destination.
Figure 6.12: Case when the center of peg lies outside the chord of contact

Figure 6.13: Case when the center of peg lies inside the chord of contact

Given an arbitrary position of the peg, we want to know how large the moments are in this
position. If our torque sensor is located at point rsensor, and if contact force vector fcontact acts
through point rcontact, then the resulting moment at the sensor, msensor, will be:

msensor = ( rcontact − rsensor ) × fcontact
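The moment formula above is a plain vector cross product. A minimal sketch with illustrative coordinates (mm) and forces (N):

```python
# Sketch of the sensor moment m = (r_contact - r_sensor) x f_contact,
# using plain 3-tuples for vectors. All values are illustrative.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sensor_moment(r_contact, r_sensor, f_contact):
    lever = tuple(c - s for c, s in zip(r_contact, r_sensor))
    return cross(lever, f_contact)

# A contact 10 mm off-center in +x with a pure -z force of 1 N
# produces a moment about the +y axis (in N.mm here):
m = sensor_moment((10.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, -1.0))
```

This is why an off-center contact force becomes measurable as a reaction moment at the sensor.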

If we ignore the friction forces fx and fy and let the downward force fz exerted by the robot be
a constant, then the moment is related to the distance between point P (peg’s center) and E
(the middle point of the two contact points) as shown in Figure 6.14. Here we will deduce this
relationship.
In Figure 6.14, point P, (xp, yp), denotes the peg’s center and point H, (xh, yh), denotes the hole
center. A, (xa, ya), and B, (xb, yb), are the two points at the intersections between the circular
boundaries of the peg and the hole.
Figure 6.14: Computing the moments
To compute a reaction moment, we need to know the location of the resultant contact force.
The distribution of the contact pressure over the region of overlap between the peg and
subassembly is unknown. However, if the peg tips even infinitesimally into the hole, then the
contact forces must be concentrated at points A and B. In this case, the resultant contact force
must act through a point lying on the line A-B. If the concentrated reaction forces at A and B are
balanced, then the resultant force will be at point E, mid-way between A and B. Under these
assumptions, we can compute the relationship between measured moments and relative
location of the center.
To obtain the forward mapping, we assume knowledge of the coordinates of P and H, then derive the moment based on the computed coordinates of point E. First, we compute the half perimeter of triangle APH:

s = ( lAP + lAH + lPH ) / 2

where lPH is the distance from point P to point H and the value s is defined as half the perimeter of triangle APH. We can compute the area of triangle APH, ADAPH, by Heron's formula [24] as follows:

ADAPH = √( s ( s − lAP )( s − lAH )( s − lPH ) )

Because the area of ∆APH is also equal to half of lAE times lPH, we derive lAE as follows:

lAE = 2 · ADAPH / lPH

and, since lAH equals the hole radius and AE is perpendicular to PH at E, lHE = √( lAH² − lAE² ). Then, we can get the coordinates of point E.

Finally, we can get the moments in the x and y directions [7].

Moments in the x and y directions are non-zero only within a limited region. If the peg moves out of this range, the moments are both zero or at least provide no information regarding the hole location. One boundary of this region corresponds to the peg falling into the hole, which occurs when:

lPH ≤ rhole − rpeg

A second boundary corresponds to the center of the peg moving outside the line A-B (i.e. points H and P lie on opposite sides of line AB), which occurs when:

lPH ≥ √( rhole² − rpeg² )

Under the second condition, the moments are both zero.
computed moments as a function of peg coordinates relative to the hole center. The
computations are based on parameters rhole=50mm, rpeg=47mm, and fz = 1N. We can see that
moments in the x and y directions have similar maps except for a 90-degree rotation about the
z axis.

Figure 6.15: Moments resulting from the Mathematical Model For The Parallel Peg Case.
a) Moment In X. b). Moment In Y.
6.1.2.1.3 Simulation Results
The neural network was trained using mathematical model as shown in Figure 6.16:

Figure 6.16: Neural Network Training

The result of using the neural network is shown in Figure 6.17:


Figure 6.17: Simulation results for Parallel Peg Case. a) In 2-D. b) in 3-D.

We can see from Figure 6.17 that as soon as the moments are sensed, the peg directly jumps to
the center of the hole.
6.1.2.1.4 Tilted Peg Case
The previous model assumed an ideal condition. In fact, the surface of the subassembly cannot be perfectly parallel to the bottom surface of the peg; there is always a tilt between the peg and the subassembly surface. Thus, in most positions there is only one contact point between the peg and the subassembly (as shown in Figure 6.18).

Figure 6.18: Tilted Peg Case


In this case, when the peg moves around the surface, the moments are not zero even if the peg
does not overlap the hole. But this moment information cannot be used to guide the assembly,
because it does not change unless the contact point moves relative to the peg. This can only
occur when the peg overlaps the hole, as illustrated in Figure 6.19. For positions 1 and 3 of
Figure 6.19, the moments are identical, although in position 1, the peg is above the hole while
in position 3 it is not. In position 2, however, the contact point is at a different location relative
to the peg, which results in a different reaction moment.

Figure 6.19: Contact States for Tilted Peg Case


From Figure 6.19, we also see that there is a lowest point in the Z direction on the peg’s bottom
surface. This lowest point will contact the subassembly surface unless it is within the region of
the hole. In the latter case, there are two possible contact points between the peg and the hole.
The one that is lower in the Z direction will be the actual contact point. Note that we made an
assumption here. The peg’s tilt angle should not be too large. The tilted peg’s projection on the
assembly plane is actually an ellipse, which we approximate as a circle for small tilt angles.
We could not find the precise inverse model for the tilted-peg case, because in this model there
are different peg positions corresponding to the same contact point. For example, consider a
position of the peg for which the contact point on the peg coincides with a point on the rim of
the hole. Call this point “E” on the peg. If we subsequently move the peg in a circular arc such
that point E traces the rim of the hole, then over at least part of this arc point E will remain the
contact point. Thus we see that over a range of positions we would detect identical moments,
and therefore the moment function is non-invertible. As a result, the training error is relatively
large and the control result is not as good as the parallel model [7].

Since this solution is not appropriate for the tilted case (and the tilted case is most often present), we dropped the idea of implementing it, and moved to another approach that is robust and suited to the tilted case.

6.1.2.2 Precession Strategy

6.1.2.2.1 Need for Precession


Precession is a change in the orientation of the rotation axis of a rotating body [25]. The main emphasis is on the state in which the peg makes two-point contact with the hole. In this state, the peg is oriented in the direction of the center, and if the peg is moved while maintaining two-point contact, the center of the hole will be reached. Thus, we use precession to attain a two-point contact.

6.1.2.2.2 The Precession Strategy


The precession strategy is an intelligent localization strategy based on measurements of the peg position as it precesses while maintaining contact with the hole [2]. To execute a precise precession trajectory, the robot needs to be under stiff position control in the (x, y, θx, θy) dimensions, i.e., positions along and rotations about the x- and y-axes. On the other hand, to maintain soft contact between the peg and hole, the robot needs to be under compliant control along the vertical (z) axis. This combination of position and compliant control on selective axes is achieved using the hybrid control scheme described in Section 4. The precession strategy is described below in the context of a circular peg-in-hole assembly with hole position uncertainty in (x, y).
The first step is to tilt the peg about a tilt axis, by the tilt-angle θtilt. As shown in Figure 6.20(a), the tilt-axis, initially aligned with the negative x-axis, passes through the bottom center of the peg. Next, the peg is lowered into contact with the hole surface (Figure 6.20(b)). Using the hybrid compliant controller described in Section 4, a steady downward force is applied by the robot through the peg, while the tilt axis is rotated about the vertical axis, so the tilted peg precesses as shown in Figure 6.20(c).

Figure 6.20: The precession strategy: (a) The peg is initially tilted by θtilt.
(b) The peg touches the hole surface, and peg height h1 is recorded. (c) As
the tilt axis is rotated, the peg precesses. (d) The peg dips into the hole,
height h2 < h1.

Consider the initial condition shown in Figure 6.20(b). The point of contact between the peg
and the hole is on the hole surface. As the peg precesses, the contact point moves along the
perimeter of the peg, and on a corresponding circular path on the hole surface until it reaches
the hole edge. During this interval, the nominal height of the peg is constant. As the peg dips
into the hole, the peg height decreases until a critical point where the peg is in contact with the
hole edge in two places. With further precession, the peg rises out of the hole and the peg
height increases.
This change in peg height reveals not only the direction of the hole center relative to the peg-
position, but also the distance. The direction of the hole center is given by the vector
perpendicular to the tilt-axis at the moment of minimum peg height. The distance of the hole
center from the peg-center can either be calculated analytically from the decrease in peg
height, or looked up from a table of sampled peg height values corresponding to peg-hole
distance. A visualization of such a table is shown in Figure 6.21, where the minimum peg
height recorded during precession is plotted for different relative peg-hole positions. In an
experiment, the measured minimum peg height is matched against this table to obtain the
possible peg positions relative to the hole.
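The table lookup in the last step can be sketched as follows. This is a minimal illustration, not the thesis code: the height profile below is invented for demonstration, and in practice the distances and minimum heights would come from the sampled table of Figure 6.21. Because the height curve dips to a minimum and rises again, a given measured height generally matches two candidate distances.

```python
import numpy as np

def candidate_distances(distances, min_heights, measured_height):
    """Return the peg-hole distances whose tabulated minimum peg height
    matches the measured one; the non-monotonic curve yields up to two."""
    diff = np.asarray(min_heights, dtype=float) - measured_height
    candidates = []
    for i in range(len(diff) - 1):
        if diff[i] == 0.0:
            candidates.append(float(distances[i]))
        elif diff[i] * diff[i + 1] < 0:  # sign change: a crossing lies between samples
            t = diff[i] / (diff[i] - diff[i + 1])  # linear interpolation
            candidates.append(float(distances[i] + t * (distances[i + 1] - distances[i])))
    return candidates

# illustrative table only: height dips near d = 4 mm and rises again
d = np.linspace(0.0, 10.0, 101)                    # distance from hole center (mm)
h = 20.0 - 3.0 * np.exp(-((d - 4.0) / 2.0) ** 2)   # invented height profile
print(candidate_distances(d, h, 18.0))             # two crossings around the dip
```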
With the relative peg-hole position localized to two possible values, we can proceed in several
ways. One option is to select one of the two values, use it to compute the
hole-configuration w.r.t. the peg, and attempt assembly. If assembly fails, then we know for
sure that the other value is the actual relative peg-hole position. Another option is to move the
peg to a different position and repeat the precession strategy. The results from the two
experiments analyzed together will be sufficient to localize the relative peg-hole position.
For the precession strategy to be successful, the precessing peg has to pass over the hole. For
this to happen, we can initially use any of the blind search strategies described in Section 6.1.1.
As soon as a specified value of dip occurs, the hole is sensed and the precession starts.

Figure 6.21: Peg height w.r.t. distance from hole’s center


6.2 Peg Insertion
After the search is complete and we are confident that the peg has reached the center of the
hole, the next step is insertion. The peg needs to be inserted smoothly into the hole without
jamming, which requires correcting the orientation of the peg w.r.t. the hole. How this is
achieved is explained in the following subsections.

6.2.1 Gravity Compensation


In all the experiments performed so far, the orientation of the robot's Tool Center Point (TCP)
was kept constant. For peg insertion, however, we need to vary the orientation of the TCP.
Since the F/T sensor is mounted with the robot tool (the peg, in our case), the sensor reorients
along with the peg. This changes the readings from the sensor, because the forces and torques
arising from the load on the sensor, i.e. due to the robot gripper and the peg, are redistributed
in the new orientation. The load due to mass m (see Figure 6.22) remains the same w.r.t. the
Base Coordinate System (BCS) but changes w.r.t. the Tool Coordinate System (TCS) of the
sensor. A load that was purely along Y' of the Tool Coordinate System becomes, after rotation
of the sensor, a load purely along X'. Since the F/T sensor is mounted along with the tool, it
gives its readings in the Tool Coordinate System, so the readings change with the sensor or tool
orientation. This calls for a correction that makes the sensor readings independent of the
orientation of the sensor. This correction process is known as Gravity Compensation.

Figure 6.22: Need For Gravity Compensation


KUKA uses the ZYX Euler angle convention to represent its TCP orientation: if the orientation
of the TCP is given by the triplet {a, b, c}, the tool has rotated first about the Z-axis by a
degrees, then about the Y-axis by b degrees, and finally about the X-axis by c degrees, all
rotations performed w.r.t. the Base Coordinate System. This is equivalent to rotating about the
X-axis by c degrees, then about the Y-axis by b degrees, and finally about the Z-axis by a
degrees w.r.t. the Tool Coordinate System.

Since the load vector always remains the same w.r.t. the Base Coordinate System, to find the
load in the new TCS we need to find the components of the load vector as viewed from the
new TCS. The load can be considered as a point in coordinate space, with (x, y, z) representing
the three components of the net force or the net torque. The new coordinates of this point in
the new TCS are found as follows:

T2xyz = [Rz(a)Ry(b)Rx(c)]⁻¹ T1xyz [Rz(a)Ry(b)Rx(c)]

where Rx, Ry and Rz are the rotation matrices about X, Y and Z respectively:

Rx(c) = [[1, 0, 0], [0, cos c, −sin c], [0, sin c, cos c]]
Ry(b) = [[cos b, 0, sin b], [0, 1, 0], [−sin b, 0, cos b]]
Rz(a) = [[cos a, −sin a, 0], [sin a, cos a, 0], [0, 0, 1]]

and T1xyz and T2xyz are homogeneous translation matrices whose translation column holds
the three components (x, y, z).

In T1xyz we place the three components of the net force or torque measured when the TCS
was orientationally aligned with the BCS; T2xyz then provides the components of the net force
or torque in the new TCS.

If we subtract these new components from the sensor readings, what we get are the
gravity-compensated force and torque readings, due purely to the contact forces and
torques.
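The compensation can be sketched in vector form as below. This is a minimal sketch, equivalent to the matrix formulation above under the assumption that only the load components matter; the function names are ours, not KUKA's, and in practice the load vector would be calibrated with the TCS aligned to the BCS.

```python
import numpy as np

def rot_x(t):
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def rot_y(t):
    return np.array([[ np.cos(t), 0, np.sin(t)],
                     [0, 1, 0],
                     [-np.sin(t), 0, np.cos(t)]])

def rot_z(t):
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0, 0, 1]])

def gravity_compensate(sensor_reading, load_base, a, b, c):
    """Remove the tool-load contribution from a force (or torque) reading.
    (a, b, c) are the KUKA ZYX Euler angles in degrees; load_base holds the
    load components measured with the TCS aligned to the BCS."""
    a, b, c = np.radians([a, b, c])
    R = rot_z(a) @ rot_y(b) @ rot_x(c)      # tool orientation w.r.t. base
    load_tool = R.T @ load_base             # same load, seen from the tool frame
    return sensor_reading - load_tool       # contact contribution only
```

For example, a load along −Z of the base appears along +X of the tool after a 90° rotation about Y, and subtracting it leaves only the contact forces.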

6.2.2 Large Directional Error (LDE) Removal


Even after the hole search is complete, we cannot be sure that the peg center lies within the
clearance range of the hole center. Such a case is shown in Figure 6.23, along with the forces
experienced by the peg. It usually occurs when the peg has a large tilt (Large Directional Error)
w.r.t. the hole, or when the clearance is very small.

Figure 6.23: Large Directional Error Case

In this case the peg lies outside the hole and makes three-point contact with it; not only
orientational but also positional correction is required to bring the peg inside the hole.

This can be done in two ways:

6.2.2.1 LDE Removal Using Information From Hole Search


If we use the information from the Hole Search, we know the direction in which the center of the hole
lies. This is shown in Figure 6.24.
Figure 6.24: Large Directional Error Removal Using Information From Search

From the direction of the hole center, we can derive the direction (perpendicular to the
direction of the hole center) in which the peg needs to be rotated to align it with the hole.
Using the Hybrid Control Scheme (discussed in Section 4), position control is applied about the
rotational axis defined by the Direction Of Rotation in Figure 6.24, while force control is applied
in X and Y of the TCP (or the peg; see Figure 6.23) to maintain the three-point contact.
Gradually, the three-point contact turns into a two-point contact as shown in Figure 6.3, which
calls for the next step, Small Directional Error Removal.

6.2.2.2 LDE Removal Using Moment Information


Since the peg makes three-point contact with the hole, the direction of the moment gives us
the direction of the hole center: the hole center lies on the line perpendicular to the direction
of the net moment (see Figure 6.25).
Figure 6.25: Large Directional Error Removal Using Moment Information

Then we can use the same approach as in Section 6.2.2.1.

6.2.2.3 Stopping Criterion For LDE Removal


To test whether the peg is completely inside the hole and making two-point contact with it, we
perform a back-hit test: the peg is driven in its −X direction (see Figure 6.23) to check for a wall
behind it. If a wall is present (i.e. the peg is completely inside the hole), a force is felt in the +X
direction of the peg, which marks the end of LDE Removal and calls for Small Directional Error
Removal.

6.2.3 Small Directional Error (SDE) Removal


When the peg is completely inside the hole and makes two-point contact with it, it needs to be
oriented according to the hole for smooth insertion without jamming.
The directional error can be corrected using the direction of the moment sensed by the sensor.
However, the direction of the moment changes as the tilt angle of the peg varies. When the tilt
angle is small, the direction in which the peg must be moved is the same as the direction of the
sensed moment; for a large tilt angle, the peg must be moved in the opposite direction of the
sensed moment [5]. Figure 6.26 shows the relations between the direction of the moment
sensed and the direction to be moved for the six possible cases.
Figure 6.26: Direction Of Moment Sensed and The Direction in which The Peg needs to be moved

Since the direction of the moment alone does not specify the direction in which the peg needs
to be moved, we rotate the peg in both directions about the line of the net moment and record
the moments obtained in each rotation. Comparing the two, the direction in which the
moments decrease is the direction in which the peg needs to be rotated to align with the hole.
Again using the Hybrid Control Scheme (discussed in Section 4), position control is applied
about the rotational axis defined by the direction of the net moment, while force control is
applied in the downward direction (see Figure 6.27).
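The probing step described above can be sketched as follows. Here rotate_about and read_moment are hypothetical placeholders for the robot-motion and F/T-sensor interfaces; the real system would issue the rotations through the hybrid controller.

```python
def choose_rotation_direction(rotate_about, read_moment, axis, step):
    """Probe both rotation senses about the net-moment axis and keep the
    one in which the sensed moment magnitude decreases.
    rotate_about(axis, angle) and read_moment() are placeholder interfaces."""
    m0 = read_moment()
    rotate_about(axis, +step)
    m_pos = read_moment()                  # moment after a positive probe
    rotate_about(axis, -2 * step)          # swing over to the opposite sense
    m_neg = read_moment()                  # moment after a negative probe
    rotate_about(axis, +step)              # return to the starting pose
    return +1 if (m_pos - m0) < (m_neg - m0) else -1
```

A simple simulated check: if the moment magnitude is |angle − 5°| and the peg starts at 0°, the positive sense reduces the moment, so the function returns +1 and leaves the pose unchanged.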

Figure 6.27: Removing Small Directional Error


But this approach requires a lot of active sensing and is time-consuming. Therefore, we follow
a different approach (see Figure 6.28).

Figure 6.28: a) Rotate until first wall hits. b) Rotate until second wall hits. c) Insert at the mid-point.

The BackJump is taken to avoid jamming during rotation, and the wall hits are sensed from the
force values.
The value of the BackJump can be calculated as shown in Figure 6.29.
Figure 6.29: Calculating BackJump

To avoid the peg’s corner hitting the wall, the BackJump can be taken as the arc length l.
So, BackJump = l = θ·r, where r is the radius of the peg and θ is the tilt angle.
The BackJump should not be too large so as to avoid the peg getting out of the hole. The
maximum limit of the BackJump can be calculated as (see Figure 6.30):

Figure 6.30: Calculating The Upper Limit On BackJump


According to Figure 6.30, the BackJump should be less than l.

Since l = 2(R − r·cos θ)/sin θ and BackJump = θ·r, we require

θ·r ≤ 2(R − r·cos θ)/sin θ

Therefore,

θ·r·sin θ ≤ 2(R − r·cos θ)

⇒ θ·sin θ + 2·cos θ ≤ 2R/r,

where R and r are the hole and peg radii respectively.
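The two formulas can be checked numerically. In the sketch below, the hole and peg diameters are the 57 mm and 56.5 mm used in the experiments, but the 3° tilt angle is an assumed example value, not one reported in the text.

```python
import math

def backjump(theta_deg, r):
    """BackJump as the arc length l = θ·r, with θ converted to radians."""
    return math.radians(theta_deg) * r

def within_upper_limit(theta_deg, R, r):
    """The derived bound θ·sinθ + 2·cosθ <= 2R/r (θ in radians)."""
    t = math.radians(theta_deg)
    return t * math.sin(t) + 2 * math.cos(t) <= 2 * R / r

# hole Ø 57 mm and peg Ø 56.5 mm as in the experiments; 3° tilt is illustrative
R, r = 57.0 / 2, 56.5 / 2
print(backjump(3.0, r))              # arc-length back-jump in mm
print(within_upper_limit(3.0, R, r))
```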

Figures 6.31 and 6.32 show the experimental results obtained using SDE Removal on a hole of
diameter 57 mm and a peg of diameter 56.5 mm.

Figure 6.31: Insertion with Time using SDE Removal


Figure 6.32: Insertion and Upward Force Due to Jamming

We can see from Figure 6.31 that there is more than one back-jump and insertion step in SDE
Removal. This is due to inaccuracy in the direction of the moments sensed; as soon as the
correct direction of the moments is obtained, the insertion completes in a single step. As
Figure 6.32 shows, insertion is allowed until the upward force Fx crosses a threshold value (see
the red-marked points in Figure 6.32). As soon as the upward force exceeds the threshold,
jamming is assumed, a back jump is taken, and the proper orientation for smooth insertion is
searched for.

6.2.3.1 Stopping Criterion For SDE Removal

SDE Removal stops when a pre-specified amount of depth is reached by the Peg inside the Hole.
7. Optimization
We have used Design Of Experiments (DOE) for the optimization of our assembly task. Given the statistical
nature of the assembly task and DOE’s increasing popularity in manufacturing quality control, DOE has been
used for robot assembly parameter optimization [26].
DOE involves designing our experiments in such a manner that we can analyze the direct effects as well as the
interaction effects of a set of factors or parameters on the optimization goal.
DOE has various kinds of designs like Full Factorial designs, Custom Designs, etc.
Full factorial designs include all possible combinations of the factors’ values. Thus, if there are n factors, each
with two levels (High and Low), the design contains a total of 2^n trials.
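Generating such a design is straightforward. The sketch below uses the three search factors and levels of Section 7.1; note that the 16-run design used later also includes center-point levels, which this bare 2^3 factorial omits.

```python
from itertools import product

# two-level full factorial: every Low/High combination of the factors
# (levels taken from Section 7.1)
factors = {
    "Dip": (2.0, 2.5),              # mm
    "ContactForce": (2.0, 5.0),     # N
    "AngularSpeed": (0.05, 0.2),    # degrees per command
}
runs = list(product(*factors.values()))
print(len(runs))                    # 2^3 = 8 combinations
for dip, force, speed in runs:
    print(dip, force, speed)
```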
We have used DOE to optimize the time of search as well as insertion. A statistical analysis tool was used both
to create the designs and to analyze the data to obtain optimal values of the parameters affecting the time of
search and insertion. Since search is implemented using the Precession Strategy and insertion using LDE
Removal and SDE Removal, these three algorithms are distinct and have different factors affecting their time of
completion.
Thus, we optimize these three tasks separately.

7.1 Optimizing The Search


The factors affecting the search are:

a) Dip: the amount by which the peg descends into the hole for the hole to be sensed, after
which the precession starts.

b) Contact Force For Search: the downward force maintained by the peg to keep contact with
the hole surface.

c) Angular Speed For Precession: the speed with which the peg precesses.

Now we need to consider each of the parameters above as a two-level parameter. For that, we define the lower
and upper limits for each of the above parameters (Lower Limits are given by subscript L and Upper Limits by
subscript H, L and H standing for Low and High respectively).

a) DipL = 2.0 mm., DipH = 2.5 mm.

b) ContactForceL = 2 N, ContactForceH = 5 N

c) AngularSpeedL = 0.05° per command, AngularSpeedH = 0.2° per command.

Now we design our experiments to analyze the direct effects as well as the interaction effects of the
factors affecting the Time Of Search.

So, we model the Time Of Search as:

TimeOfSearch = a1·Dip + a2·ContactForce + a3·AngularSpeed + a4·Dip·ContactForce +
a5·ContactForce·AngularSpeed + a6·Dip·AngularSpeed + a7·Dip·ContactForce·AngularSpeed +
a8·Dip² + a9·ContactForce² + a10·AngularSpeed²

Here, a1-a10 are constants and a8, a9, a10 represent quadratic effects.

To analyze these interactions, we created a design for our experiment consisting of 16 runs with three
replicates each; the replicates were averaged to obtain consistent time data.
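The response model above can be fitted by ordinary least squares. The sketch below is our own illustration (the thesis used a commercial statistics package); we add an intercept term, which the equation above omits, so that the fit is well-posed.

```python
import numpy as np

def design_row(dip, force, speed):
    """Terms of the search-time model, plus an intercept (our addition)."""
    return [1.0, dip, force, speed,
            dip * force, force * speed, dip * speed,
            dip * force * speed,
            dip ** 2, force ** 2, speed ** 2]

def fit_model(settings, times):
    """Least-squares estimate of the model coefficients."""
    X = np.array([design_row(*s) for s in settings])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(times, dtype=float), rcond=None)
    return coeffs

def predict(coeffs, dip, force, speed):
    """Predicted time at a given factor setting."""
    return float(np.array(design_row(dip, force, speed)) @ coeffs)
```

In practice the 16 averaged run times of Table 7.1 would be passed as `times`, and the fitted model scanned over the factor ranges to locate the predicted minimum, which is what the Prediction Profiler does.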

Table 7.1 shows the DOE for search:

Table 7.1: DOE for Precession Search

DipToBeSensed (mm)   ContactForceForHoleSearch (N)   AngularSpeedForPrecession (°/command)   Mean Search Time (ms)

2.5     2      0.2      79411
2       2      0.2      84063
2.25    2      0.125    126125
2       2      0.05     133260
2.5     2      0.05     223963
2.25    3.5    0.2      71536
2       3.5    0.125    80932
2.25    3.5    0.125    85526
2.25    3.5    0.05     143328
2.5     3.5    0.125    92734
2.25    3.5    0.125    88458
2.5     5      0.05     172797
2.5     5      0.2      84995
2       5      0.2      55870
2.25    5      0.125    110724
2       5      0.05     112729


We then analyze the results using the statistical tool’s Prediction Profiler, which helps visualize
the effects of the various parameters on the optimization goal. The results are shown in
Figure 7.1.

a)

b)

Figure 7.1: a) The plot of Actual vs. Predicted Search Time. b) The Prediction Profiler.
The Prediction Profiler (see Figure 7.1(b)) predicted that the search time would be minimum at
Dip = 2 mm, ContactForce (Fx) = 3.9 N and AngularSpeed = 0.2° per command, with a predicted
minimum search time of 50,210 milliseconds. When we ran the assembly with these values,
the search time came out to be 80,000 milliseconds.

7.2 Optimizing The LDE Removal


The factors affecting the LDE Removal are:

a) Contact Force: The force maintained by the peg with the hole.

b) Angular Speed: The degrees by which the peg rotates per command.

The limiting values for these factors are:

ContactForceL = 2.0 N, ContactForceH = 5.0 N

AngularSpeedL = 0.05° per command, AngularSpeedH = 0.2° per command.

Again we want to analyze the individual effects as well as the interaction effects of these
parameters. For that we designed our experiments as shown in Table 7.2.

Table 7.2: DOE for LDE Removal


ContactForce (N)   AngularSpeed (°/command)   Mean LDE Removal Time (ms)

3.5     0.125    77646
2.51    0.2      58817.67
3.5     0.05     174750
5       0.05     204541.7
2       0.125    89052
5       0.2      61328
3.5     0.125    78609.33
2       0.05     158083
a)

b)

Figure 7.2: a) The plot of Actual vs. Predicted LDE Removal Time. b) The Prediction Profiler.

The Prediction Profiler (see Figure 7.2(b)) predicted that the LDE Removal time would be
minimum at ContactForce = 3.75 N and AngularSpeed = 0.2° per command, with a predicted
minimum LDE Removal time of 50,660 milliseconds. When we ran the assembly with these
values, the LDE Removal time came out to be 58,969 milliseconds.
7.3 Optimizing The SDE Removal
When we analyze SDE Removal, we see that as soon as the peg obtains the correct direction of
the moment, it searches for the two wall hits and the insertion completes in a single step
(though there may be more than one insertion step, as noted in Section 6.2.3, due to
inaccuracies in the moments sensed). We therefore could not find meaningful parameters
affecting SDE Removal.
We tried to take two parameters and examine the effects of these parameters on SDE Removal.
These were ContactForce and BackJump. We got the results as shown in Table 7.3.

Table 7.3: Experimental Runs for SDE Removal


ContactForce (N)   BackJump (mm)   Mean SDE Removal Time (ms)

8      6    39427
8      5    39531.33
8      7    32385.33
10     6    39302.33
10     6    38442.67
10     5    32588.33

Table 7.3 shows that these two parameters had almost no effect on the time for SDE Removal,
so for the final assembly we picked one of the combinations from Table 7.3 at random.
8. Results & Conclusions

8.1 Results For Hole Search


Before tackling the hole search problem directly, we first familiarized ourselves with the
working of the Force/Torque sensor through some simple applications using Hybrid Force-
Position Control. We also successfully recalibrated the damaged F/T sensor to work in
applications where one force component could be kept constant.
We then implemented a continuous tracing algorithm (TUSS) that formed an elementary part
of the hole search; the algorithm was successfully tested on some standard surfaces.
For the hole search itself, we tried blind strategies (Grid Search and Spiral Search) as well as
intelligent strategies (Neural Network based and Precession based). We found that the Neural
Network approach suits only the parallel-peg case, which is too idealized to occur in practice.
Therefore, to handle a real-world assembly task, we moved on to the Precession Strategy,
which proved to be very accurate for the hole search.
Finally, we identified the parameters affecting the precession-based hole search and optimized
our assembly task for minimal search time using the DOE technique.

8.2 Results For Insertion


The insertion task required changing the orientation of the peg. Initially, the insertion
algorithms did not give successful results; we then found that Gravity Compensation was
needed. After adding the Gravity Compensation module we developed, the insertion
algorithms were successfully tested.
Both the LDE Removal and SDE Removal algorithms were tested for various peg sizes, hole
sizes and clearances. We then identified parameters affecting the performance of the insertion
algorithms and optimized the insertion time, again using the DOE technique.
References
1. W. Haskiya, K. Maycock and J. Knight, "Robotic assembly: chamferless peg-hole
assembly", Robotica (1999) volume 17, pp. 621–634.

2. S.R. Chhatpar and M.S. Branicky, “Localization for robotic assemblies with position
uncertainties”, Proc. 2001 IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems.

3. Sir's Russian Book on Force Control along with the page number.

4. M.H. Raibert and J.J. Craig, "Hybrid Position/Force Control of Manipulators",


Transactions of the ASME, Vol. 102, June 1981.

5. In-Wook Kim and Dong-Jin Lim, “Active Peg-in-hole of Chamferless Parts using
Force/Moment Sensor”, Proceedings of the 1999 IEEE/RSJ International Conference on
Intelligent Robots and Systems.

6. Shashank Shekhar and Oussama Khatib, "Force strategies in Real Time Fine Motion
Assemblies", ASME Winter Annual Meeting, 1987.

7. Wyatt S. Newman, Yonghong Zhao and Yoh-Han Pao, “Interpretation of Force and
Moment Signals for Compliant Peg-in-Hole Assembly”, Proceedings of the 2001 IEEE
International Conference on Robotics & Automation.

8. ISO Standard 8373:1994, Manipulating Industrial Robots – Vocabulary

9. The Editors of Encyclopedia Britannica Online, 2008, Article: Robot (Technology),


Available at : http://www.eb.com

10. http://www.kuka-robotics.com/usa/en/products/industrial_robots/low/kr6_2/

11. http://en.wikipedia.org/wiki/Stiffness

12. http://www.britannica.com/EBchecked/topic/333644/lead-through-programming

13. Sabine Demey, Herman Bruyninckx, Joris De Schutter, "Model-Based Planar Contour
Following in the Presence of Pose and Model Errors", I. J. Robotic Res., 1997: 840~858.
14. A. Masoud and S. Masoud, “Evolutionary action maps for navigating a robot in an
unknown, multidimensional, stationary, environment, part II: Implementation results”,
in IEEE Int. Conf. Robotics and Automation, NM, Apr. 21–27, 1997, pp. 2090–2096.

15. H. Kazerooni and M.G. Her, “Robotic deburring of two dimensional parts with unknown
geometry”, IEEE International Symposium on Intelligent Control, August 1998.

16. Ralph E. Goddard, Yuan F. Zheng, and Hooshang Hemami, “ Dynamic Hybrid
Velocity/Force Control of Robot Compliant Motion over Globally Unknown Objects”,
IEEE Transactions on Robotics and Automation, VOL. 8, NO. 1, February 1992

17. http://hypertextbook.com/facts/2006/reactiontime.shtml

18. Statistics, Probability and Random Processes by Jain and Rawat, CBC Publications,
Jaipur, India.

19. Statistics and Probability Theory by Dr. Y.N. Gaur and Nupur Srivastava, ISBN 978-81-
88870-28-8, Genius Publications, Jaipur, India.

20. Discrete Mathematical Structures by Jain and Rawat, CBC Publications, Jaipur, India.

21. Discrete Mathematical Structures by Dr. V.B.L. Chaurasia and Dr. Amber Srivastava,
ISBN 81-88870-12-9, Genius Publications, Jaipur, India.

22. Theory of Computer Science by K.L.P. Misra and N. Chandrasekaran, ISBN 81-203-1271-
6, Prentice Hall of India, New Delhi, India.

23. Rumelhart, D.E., Hinton, G.E. and Williams, R.J., “Learning representations by
back-propagating errors”, Nature (London), vol. 323, pp. 533–536, 1986.

24. Zwillinger, D. CRC Standard Mathematical Tables & Formulae 30th edition, pp.462,
1996.

25. http://en.wikipedia.org/wiki/Precession

26. Dave Gravel, George Zhang, Arnold Bell, and Biao Zhang, "Objective Metric Study for
DOE-Based Parameter Optimization in Robotic Torque Converter Assembly", Advanced
Manufacturing Technology Development, Ford Motor Company, Livonia, MI.
