Ch3 Robot Vision, Programming, Applications


CHAPTER 3: ROBOT VISION, PROGRAMMING & APPLICATIONS

PART-I: ROBOT VISION:

The process of deriving, featuring and analyzing details of a three-dimensional
object from its two-dimensional picture is known as Robot Vision or Machine Vision. As the
application utilizes a computer for processing, it is also known as Computer Vision.

VIDICON CAMERA
Construction:
Lens - Focuses the image of the object onto the camera.
Face plate - Glass cover at the front end of the camera.
Transparent metal coating - Acts as the electrode that delivers the electrical video signal.
Photo-sensitive layer - A layer of resistive particles whose resistance is inversely proportional
to the light intensity.
Wire mesh - Decelerates the beam of electrons so that they reach the photo-sensitive layer with zero
velocity.
Electron gun - Generates the beam of electrons that scans the photo-sensitive layer.
Beam deflection coils - Deflect the electron beam vertically and horizontally for scanning.
Focusing coil - Focuses the electron beam.
Tube pins - Act as connectors to the electric supply source.
Glass envelope - Provides housing for the above elements.
Working Principle:

 A positive voltage is applied to the metal coating of the face plate.
 As the electron beam strikes it, the photo-sensitive layer acts like a capacitor with negative
charge on the inner surface and positive charge on the opposite side.
 Light striking the photo-sensitive layer reduces its resistance; current starts flowing and
neutralizes the positive charge.
 As the image is formed on the target layer, the concentration of electrons is high in dark
areas and low in lighted areas.
 The electrons so released flow through the metal layer and out through the tube pins.
 Variations in current during the electron beam's scanning motion produce a video signal
proportional to the intensity of the input image.
LIGHTING TECHNIQUE AND DEVICES

 The fundamental types of lighting devices used in robot vision are classified into the
following:
a) Diffuse surface devices: exemplified by fluorescent lamps and lighted
tables.
b) Condenser projectors: Transform a diverging light source into a
condensed, focused beam.
c) Flood or spot projectors: Used to illuminate object surface areas from all angles.
d) Collimator: A device that produces a parallel beam of light on the object whose
image is to be captured.
e) Imagers: e.g. slide projectors and optical enlargers, which produce a real
image at the object plane.
 There are basically two major types of illumination techniques:
o Front lighting: The light source is on the same side of the object as the camera.
o Rear / back lighting: The light source is on the opposite side of the object from the camera.

ANALOG TO DIGITAL CONVERSION


It has 3 phases.
1) Sampling 2) Quantization 3) Encoding

Sampling

Let the function f(x, y) denote the two-dimensional image pattern on the imaging device.
The geometric coordinates x and y of the image plane are digitized to obtain information by the
process known as 'image sampling'.

After sampling, the digitized function f(x, y) in the spatial coordinates is generated,
which can easily be stored in the computer memory.

Let, N= number of lines in the face plate of the camera.

S = Sampling capability of ADC in sec.

R = Scanning rate in sec for complete face plate.

Rd = Line change over delay for the electron beam.

Hence the scanning rate per line, RL = R/N + Rd

Number of pixels that can be processed per line, Pn = RL / S = (R + N·Rd) / (N·S)

Generally the scan line change over is given in the percentage of the scan rate for a line.
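The two timing relations above can be written as small helper functions; a minimal Python sketch (symbols follow the text, not part of the original notes):

```python
def scan_rate_per_line(R, N, Rd):
    """RL = R/N + Rd : time to scan one line plus the change-over delay (s)."""
    return R / N + Rd

def pixels_per_line(R, N, Rd, S):
    """Pn = (R + N*Rd) / (N*S) : samples the ADC can take in one line time."""
    return (R + N * Rd) / (N * S)

# Example: a 256-line frame scanned in 1/3 s with a 10% change-over delay
RL = scan_rate_per_line(1/3, 256, 0.1 * (1/3) / 256)   # about 1.43 ms per line
```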

Quantization

The digitization of the amplitude of the image function f(x , y) depending on the intensity
of the pixel is known as Quantization.

The number of quantization levels, Q = 2^n

Where n is the number of memory bits in ADC.


The quantization level spacing is given as L = Fr / Q

Where Fr = full scale range of the camera.

Then L = Fr / 2^n

The digital approximation of the analog signal gives the error in quantization as

Quantization error, eq = ± (1/2) L = ± (1/2) (Fr / 2^n)

Encoding

Depending on the image created on the face plate of the camera, the intensity of
different pixels will differ.

The representation of the digitized amplitude by a binary sequence of digits is
known as encoding.
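Quantization and encoding of a single analog sample can be sketched together; an illustrative Python helper (the truncation-to-level rule is an assumption, not specified in the notes):

```python
def quantize(v, Fr, n):
    """Return (level, code): quantization level index of voltage v and its binary code."""
    Q = 2 ** n                        # number of quantization levels
    L = Fr / Q                        # level spacing
    level = min(int(v / L), Q - 1)    # truncate to a level index, clamp at full scale
    code = format(level, f'0{n}b')    # encode as an n-bit binary string
    return level, code

# A 9 V sample on an 8-bit ADC with 18 V full scale:
print(quantize(9.0, 18.0, 8))   # → (128, '10000000')
```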

Image Storage

The storage of the digitized image in the computer memory is done by the frame buffer
which can be made a part of the computer or frame grabber.

The frame grabber accesses and acquires the image of a video camera in 1/30 second.

An average camera can produce disturbance-free data with a 6-bit buffer, whereas the
quantization of the frame has a specification of 8 bits. The lower-level bits are rejected in the
noise-cleaning operation.

As the resolution of the general human eye is about 2^6 (64) intensity levels, the image grabbed
by the video camera is sufficiently good. The rows and columns of the frame grabber are synchronized
with the electron beam of the camera. The signal sent by the computer to the position address (x, y) of the
face plate reads the information stored in the frame buffer, uniquely addressed by sampling and
quantization.

Problems

1. A raster scan vision system has a frame of 256 lines, with 1/3 sec as the scanning
rate. It may be assumed that the electron beam takes 10% of the scan time to move from
one line to the next. If there are 256 pixels per line, determine the sampling rate.
Solution:
Given data:
R = 1/3 sec, N = 256 lines, Pn = 256 pixels, Rd = 10% of R/N = 0.1R/N

Using the formula Pn = (R + N·Rd) / (N·S), we get S = (R + N·Rd) / (N·Pn) = 5.6 × 10^-6 s/pixel.

(Substitute the values and check the answer.)

2. The maximum voltage range for an 8-bit ADC is 18 V. Calculate the
quantization levels, quantization level spacing and quantization error.
[ Answer: Q = 256, L = 0.0703 V, eq = ±0.03515 V ]
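Both answers can be checked numerically; a short Python sketch (illustrative only, following the formulas in the text):

```python
# Problem 1: sampling rate of the raster scan system
R, N, Pn = 1/3, 256, 256
Rd = 0.1 * R / N                      # 10% of the per-line scan time
S = (R + N * Rd) / (N * Pn)           # sampling rate, s/pixel
print(round(S * 1e6, 2))              # → 5.59 (≈ 5.6 µs per pixel)

# Problem 2: 8-bit ADC with 18 V full-scale range
Fr, n = 18.0, 8
Q = 2 ** n                            # quantization levels
L = Fr / Q                            # level spacing, V
eq = L / 2                            # quantization error magnitude, V
print(Q, round(L, 4), round(eq, 5))   # → 256 0.0703 0.03516
```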

IMAGE PROCESSING AND ANALYSIS

It refers to all operations and manipulations that are carried out on the captured image
based on application specific needs.

The various Image process techniques are

1. Preprocessing or Data reduction


2. Segmentation
3. Edge Detection
4. Feature Extraction
5. Object Recognition
6. Interpretation
1. PREPROCESSING / DATA REDUCTION
o Used to reduce the volume of data to be processed, which helps the image occupy
less memory.
o Data Compression Algorithms are used to reduce overall size of the image for
storage purpose.
o The analog video signal is digitized with lower bit width in order to reduce the
size of the image.
o Data compression is of two types:
1. Lossless Compression 2. Lossy Compression
o In lossless compression, the image is compressed without losing any information
and when image is decompressed the image obtained will be IDENTICAL pixel
by pixel to the Original image.
o In lossy compression, the image is compressed with the objective of only
reducing the size, without regard to the loss in image quality.
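A minimal example of the lossless idea is run-length encoding of a row of pixel values (an illustrative scheme; the notes do not name a specific algorithm). Decompressing reproduces the row identically, pixel by pixel:

```python
def rle_encode(pixels):
    """Collapse runs of equal pixel values into [value, count] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def rle_decode(runs):
    """Expand [value, count] pairs back into the original pixel row."""
    return [p for p, count in runs for _ in range(count)]

row = [0, 0, 0, 255, 255, 0, 7]
assert rle_decode(rle_encode(row)) == row   # identical pixel by pixel
```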

2. SEGMENTATION
o It is the process of separating the regions that have distinctive features in the image,
based on intensity level, colours, edges or boundaries, with continuities among
different elements in the image.
o Segmentation is the first step in the machine vision for identifying an object from
its background and its surrounding objects.
o Segmentation is done by using techniques like THRESHOLDING, EDGE
DETECTION and REGION BASED TECHNIQUES like Region growing and
Region Splitting.

Thresholding: “Thresholding is the process of setting a minimum amplitude limit for the
intensity level to identify the object from its background; the minimum value set for
comparison is known as the threshold limit.”

o A threshold limit is selected by analyzing the intensity levels in the entire image
and plotting the histogram using the statistical distribution of the pixel values in
the image.
o When the intensity level exceeds the threshold limit , the point is identified as an
OBJECT and as the BACKGROUND if it is below the threshold level.
o Let Ii ( x, y) be the intensity of a point (x, y) in an image and ti be the selected
threshold limit for the same image then the threshold image is given as,
Ti (x, y) = 1, if Ii(x, y) > ti
= 0, if Ii (x, y) < ti
o In the threshold image Ti (x, y), pixels with values ‘1’ are identified as pixels of
‘OBJECT’ and pixels with values ‘0’ are identified as ‘BACKGROUND’.
o Global threshold : Common threshold limit for entire image.
o Local threshold: For each pixel in relation to its neighboring pixels.
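The global threshold rule Ti(x, y) defined above can be sketched in a few lines of Python (a toy 3×3 image is assumed for illustration):

```python
def threshold(image, ti):
    """Return the threshold image: 1 where intensity > ti (OBJECT), else 0 (BACKGROUND)."""
    return [[1 if px > ti else 0 for px in row] for row in image]

Ii = [[ 10,  40, 200],
      [220,  30, 210],
      [ 15, 250,  20]]
Ti = threshold(Ii, 128)   # pixels brighter than the limit are marked as OBJECT
```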

Edge Detection:

It is a technique used to detect the outer most edge of an object of interest in a given
frame.

o The edge of an object needs to be differentiated from the background in order to
extract some useful information for identifying the object.
o Some of the edge detection techniques are DERIVATIVE OPERATOR, SOBEL
OPERATOR and CONTRAST OPERATOR.
o All these techniques require a minimum of two pixels to determine whether the
pixel under test belongs to the OBJECT or the BACKGROUND.
o INTENSITY LEVEL is compared between two pixels for identifying the presence
of an object.
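The Sobel operator mentioned above compares intensities over a 3×3 neighbourhood; a minimal sketch for one interior pixel (the toy image is an assumption for illustration):

```python
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def sobel_at(img, r, c):
    """Gradient magnitude at interior pixel (r, c) from the two Sobel kernels."""
    gx = sum(GX[i][j] * img[r-1+i][c-1+j] for i in range(3) for j in range(3))
    gy = sum(GY[i][j] * img[r-1+i][c-1+j] for i in range(3) for j in range(3))
    return (gx * gx + gy * gy) ** 0.5

# A vertical dark/bright boundary produces a strong response;
# a uniform region produces none.
edge_img = [[0, 0, 255]] * 3
```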
 Region based Segmentation is done by dividing the image into smaller regions made up
of pixels with similar properties like intensity level and colour.
o This is done by methods like ‘Region Growing, Region Splitting and merging’
o Region Growing is a segmentation technique where a small initial area is grown
into larger area by aggregating or grouping the neighbouring pixels with similar
intensity and colour properties.
o This is done by selecting an initial seed pixel; neighbouring pixels with properties
similar to the seed pixel are grouped together to form a larger area. Grouping of pixels
is stopped when the properties of the pixels no longer match those of the seed pixel.
o Region Splitting and Merging is done by dividing the entire image into a set of
disjoint regions and merging the regions with similar properties to form a single
region. Like this different set of regions are created for segmenting the image.
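The region-growing procedure above can be sketched as a search from the seed pixel; a simplified Python illustration (4-connectivity and the intensity tolerance `tol` are assumptions, not specified in the notes):

```python
def region_grow(img, seed, tol):
    """Grow a region of 4-connected pixels whose intensity is within tol of the seed."""
    rows, cols = len(img), len(img[0])
    sr, sc = seed
    region, frontier = {seed}, [seed]
    while frontier:
        r, c = frontier.pop()
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region
                    and abs(img[nr][nc] - img[sr][sc]) <= tol):
                region.add((nr, nc))      # neighbour matches the seed: join the region
                frontier.append((nr, nc))
    return region
```

Growth stops automatically once no neighbouring pixel matches the seed's properties, mirroring the stopping rule in the text.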

3 . FEATURE EXTRACTION or OBJECT DESCRIPTION

 It means extracting features from an object for the purpose of recognition.
 Some features that can be used in robot vision are length, breadth, diameter, perimeter, area,
centre of gravity, angle of slant edges, etc., which are used for comparison and
IDENTIFICATION of the object.

4 . OBJECT RECOGNITION

* To identify the object by its real-world name. The process of identifying an object and
assigning an individual name to it is known as 'RECOGNITION'.

* For example, it is technically possible to detect the SPANNER and the GEAR in a given frame as
two different objects. But it is also required to identify which one is the gear and which one is
the spanner, and each object should be given a suitable name.

* This is achieved by the following techniques:

1) TEMPLATE MATCHING 2) STRUCTURAL MATCHING

* In 'Template Matching', the segmented window is compared digitally with a model image
stored in the memory. The model image, known as the 'TEMPLATE', is generated during a
training session conducted for the vision system. * A number of templates of an object are captured
at different angles in order to recognize the object from any direction. Therefore the memory
required for storage is VERY LARGE and processing time increases dramatically even for
recognizing a single object. The object should be simple, with a clear and distinctive shape, for
this technique to be employed.
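The digital comparison in template matching can be sketched as sliding the template over the image and scoring each window; a toy Python illustration using sum of squared differences (the scoring function is an assumption — practical systems often use normalized correlation):

```python
def best_match(img, tpl):
    """Slide tpl over img; return (score, row, col) of the best window (score 0 = exact)."""
    th, tw = len(tpl), len(tpl[0])
    best = None
    for r in range(len(img) - th + 1):
        for c in range(len(img[0]) - tw + 1):
            ssd = sum((img[r+i][c+j] - tpl[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best[0]:
                best = (ssd, r, c)
    return best
```

Because every stored template must be scanned over the image, memory and processing cost grow quickly, which is the drawback the text points out.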
'Structural Matching' is carried out by comparing physical relationships between two or more
entities of the same object's pattern.

o The segmented image is reduced to a skeletal structure from which the required
features are extracted for comparison.
o When the features extracted from the segmented image match the parameters
stored in the memory, the object is identified and assigned its name. For object
recognition, the vision system should have enough information for comparing the
image captured during real-time operations.

5. INTERPRETATION

* It endows a vision system with a higher level of understanding of its environment than that
offered by any of the concepts discussed before, and encompasses all these methods as integral
parts of understanding a visual scene.

____________________________________________________________________________

CHAPTER 3 (PART II) : ROBOT LANGUAGES AND PROGRAMMING

INTRODUCTION:

Robot languages have been developed for ease of control of the motions of robot arms
having different structures and geometrical capabilities. The following are the major contributors
to the development of robot languages:

 Stanford Artificial Intelligence Lab.
 Research Lab. of IBM Corporation, under US Air Force sponsorship.
 General Electric Company.

Bonner classified robot programming languages into 5 levels

1. Microprocessor / Microcomputer level: Uses ‘C’ or Keil software.


2. P-to-P (Point-to-Point) level:
o The robot joints are moved through a series of points in the work envelope by
either a TEACH PENDANT or by manual movement.
o The points are stored, and the stored program can be edited if required.
o External signals can also be interfaced.
o A simple program is as follows:
1. GO TO POINT C, STOP.
2. GO TO POINT D, STOP.
3. OPEN GRIPPER.
3. Motion level:
o Here p-to-p motions can be implemented.
o Branching, subroutines, sensing capabilities and representing a motion relative to
a frame are some of the features.
o RPL, SIGNAL, VAL are some of the languages included in this group.
4. Structural Programming Languages:
o AL, MCL, PAL, HELP & AML
5. Task oriented level:
o It uses high level commands such as
a) PLACE (Place and pick up objects)
b) AUTO PASS( Change the state)
c) OPERATE (Tool statements)
d) RIVET ( Fasteners statements)
e) VERIFY( Miscellaneous statements)

VARIOUS ROBOT LANGUAGES AND THEIR FEATURES

1. WAVE & AL:


o WAVE: Machine Vision Language developed by Stanford AI Lab.
o AL: Used to control robot arms.
o Commands used: SIGNAL, WAIT.
2. VAL:
o Popular textual robot language, developed by Unimation Inc. for the PUMA
(Programmable Universal Machine for Assembly) series of robots.
o User friendly.
o It provides arm movement in joint, world and tool coordinates, gripping and
speed control.
o Commands used: SIGNAL, WAIT.
3. AML:
o Manufacturing language –developed by IBM Corp.
o Very useful for Assembly operations
o Also used in automated manufacturing system.
4. MCL:
o Modification of popular APT(Automatic Programming Tool)
o Used in CNC machine tools
o Used to control machine tools in CAM applications.
o LINES, CIRCLES, PLANES, CYLINDERS and many other complex
geometrical features can be identified.
5. RAIL:
o Developed by Automatix for robot assembly, inspection, arc welding and
machine vision / robot vision / computer vision.
o Provides a variety of data types, as used in PASCAL.
o An interpreter is used to convert the language into machine-language commands.
o It uses a Motorola 68000 microcomputer system.
o It supports many commands and used to control machine vision system.
6. HELP:
o Developed by General Electric Company
o Similar to RAIL
o Capability to control TWO ARMS AT A TIME.
o Structure of language like PASCAL.
7. JARS:
o Developed by NASA’s JPL.
o Based on PASCAL; JARS can be interfaced with the PUMA 6000 robot for running
robot programs.

8. RPL:
o Developed by SRI International.
o A Compiler is used to convert program into Codes that can be interpreted by an
Interpreter.
o Unimation PUMA 500 can be controlled with the help of RPL.
o RPL uses syntax like FORTRAN.
o It is modular and flexible.
9. AUTO PASS:
o Proposed by IBM to let the user specify assembly tasks much as a human
assembler would perform them.
o It has a sophisticated world-modeling system, helpful for keeping track of objects.

However, in all robot languages, features like EDITOR, INTERPRETER,
COMPILER, DATA MANAGEMENT and DEBUGGING are common.

CLASSIFICATION:

1. First Generation Language


2. Second Generation Language
3. World modeling & Task oriented Object level Language.

First Generation Language:

o It provides offline programming in combination with programming through
robot-pendant teaching. Eg: VAL.
o The capability of a first-generation language to handle sensory data is limited
(except ON/OFF binary signals).
o However, branching, input/output interfacing and commands leading to a
sequence of movements of the arm and body, and opening and closing of grippers,
are possible.
o In case of faults / errors, a first-generation robot will stop functioning; it cannot
cope with the situation.

Second Generation Language:

o It includes AML, RAIL, MCL, VAL II, etc.
o They are structured programming languages for performing complex tasks.
o Apart from straight-line interpolation, complex motions can be generated.
o They can handle both analog and digital signals.
o Force, torque, slip and other sensors can be incorporated on the joints, wrist or
the gripper fingers, and the robot controller is capable of communicating with such
sensors, so that better motion control can be effected.
o A second-generation robot can recover from a malfunction, possibly by activating
some other programs.
o It has low-level intelligence because of enhanced sensory capabilities.
o It has the added advantage of better interfacing facilities with other computers.
o Data processing, file management and keeping records of all events in the work cell
can be done more efficiently.

World modeling & Task oriented object level Language:

o Here, the task is defined through a command, say 'TIGHTEN THE NUT'.
o The robot should be capable of performing step-by-step functions to accomplish
the objective of tightening the nut.
o This is possible only if the robot has 3D vision and the intelligence to make
decisions: the robot must find the NUT and SPANNER, pick them up,
place them in a sequential manner and finally tighten the nut.

Future-generation robot languages involve Artificial Intelligence technology and
hierarchical control systems.

COMPUTER CONTROL & ROBOT SOFTWARE

The program and control methods are actuated through software running on an operating
system in which manipulation of data takes place.

o Monitors are used to activate control functions.


o In a robot, there are 3 basic modes of operation
1. MONITOR MODE
2. EDIT MODE
3. RUN or EXECUTIVE MODE.

The above modes constitute the operating system.

1. MONITOR mode:
The programmer can define locations, load a particular piece of information into a
particular register, store information in the memory, save and transfer programs from
storage into the computer control memory, ENABLE or DISABLE signals, and move
back and forth into the edit and run modes.
2. EDIT mode:
o Here, the programmer can edit or change a set of instructions of existing program
or introduce a new set of information.
o Any error shown by the monitor can be corrected.
o To come out of Edit mode, command ‘E’ should be given.
3. RUN or EXECUTIVE mode:
o The programs to carry out a predefined task can be executed in the run mode. The
sequential steps as written by the programmer are followed during run mode.
o DRY RUN can be used to test the program by making the switch instruction
DISABLE.
o The signals are made non operational by the disable switch.
o After Dry run, the switch may be made operational by the instruction ENABLE.
o By DEBUGGING, the errors in the program can be rectified.
o The path or coordinate points of locations are to be redefined and corrected in
EDIT mode. Then after ending of EDIT mode, the RUN mode may be actuated.
The robot will run following the correct trajectory.
o The operating system for implementing robot language program uses either an
INTERPRETER or a COMPILER.
o An interpreter takes the source program one line at a time and generates
equivalent code that is understood by the controller.
o If there is any error in the source program, the user can correct it and the line is
reinterpreted. Every line is executed by the interpreter as and when it is written.
o VAL is the language of PUMA class of robots and source codes or instructions
are processed by an interpreter.
o Compiler is the software in the operating system that converts source code into
the object code / machine code after compilation of the whole program.
o The robot controller can then read and process machine codes.
o Execution time for compiled program is fast while editing of an interpreted
program is very fast.

VAL SYSTEM and LANGUAGE

1. INTRODUCTION
VAL is an example of robot programming language and an operating system
which regulates and controls a robotic system by following the user commands
and instructions. It is designed to be highly INTERACTIVE to minimize the
programming time.
o The VAL O.S. is stored in ROM; when the controller is powered ON, VAL is
immediately available.
o It is flexible and versatile, can perform most of the commands even while a user
program is being executed.
o Normally, the motions and actions of the robot are controlled by a program stored
in RAM, called USER Programs or VAL Programs.
o VAL also contains an editor to edit programs. In addition, the editor has a simple
mode of operation whereby a program step and location definition are
automatically generated each time the RECORD button on the manual control unit
(TEACH PENDANT) is pressed. Programs can also include SUBROUTINES.

2. LOCATIONS
o Robot location refers to the position and orientation of the end effector or tool.
o VAL has two ways of robot location
1. Precision point
2. Transformation
o Precision point is to express a location in terms of the positions of the individual
joints.
o Transformation is to express in terms of Cartesian co-ordinates( X,Y,Z) and
Orientation angles of the robot tool relative to a reference frame fixed in the robot
base.

3.TRAJECTORY CONTROL

VAL uses two different methods to control the path of a robot from one location to
another.

o The methods either (i) interpolate between the initial and final positions of each
joint, producing a complicated tool-tip curve in space, or (ii) move the robot tip
along a straight-line path.
o For the first case , called Joint interpolated motion, the total motion time is set to
that of the joint requiring the longest time to complete its motion. This provides
the fastest response of the robot.
o Straight line motion is produced by applying an interpolating function to the
world co ordinate location of the robot tool and rapidly transforming the
interpolated tool location to joint commands.
o The motion speed of tool-tip can be accurately controlled but slower than
corresponding joint interpolated motions.(disadvantage)

Example: VAL Program

Program: DEMO. RG

1. APPRO PART, 50
2. MOVES PART
3. CLOSEI
4. DEPARTS 150
5. APPROS BOX, 200
6. MOVE BOX
7. OPENI
8. DEPART 75
END

Explanation:

The name of the program is DEMO. RG

1. Move to a location 50 mm above the location PART (location to be defined).
2. Move along a straight line to PART.
3. Close the gripper jaws to grip the object immediately (I - immediately).
4. Withdraw 150 mm from PART along a straight-line path.
5. Approach along a straight line to a location 200 mm above the location BOX.
6. Move to BOX.
7. Open the hand and drop the object immediately.
8. Withdraw 75 mm from BOX.

In this example,

Steps 1, 6 & 8 are examples of joint-interpolated motions.

Steps 2, 4 & 5 are examples of straight-line motions.

Steps 3 & 7 are hand-control instructions.

MONITOR COMMANDS

To enter and execute a program, one has to give certain VAL Commands called as
‘Monitor commands’

VAL commands can be divided into the following categories:

 Defining and Determining Locations


 Editing Programs
 Listing Program and location data.
 Storing and Retrieving program and location data.
 Program control.

Defining and Determining Locations:

o Location variables can be set equal to the current location or previously defined
location by HERE and POINT command. The current location can be displayed
by using WHERE command.
o TEACH command records a series of location values under the control of
RECORD button on manual control unit.

Example: How to teach a position PART by HERE command, and the resulting positions
displayed on the screen.

HERE PART or HERE P1

X/JT 1 Y/JT 2 Z/JT 3 O/JT 4 A/JT 5 T/JT 6

248.63 592.97 148.53 141.141 30.822 1.225

To define position, a command like POINT PART = P 1 may be given where a location
variable PART is equal to the value P 1.

A command say TEACH P 1 is used to record a location variable P1 when the record button on
the Teach Pendant is pressed.

Successive locations can be assigned as P1,P2…..so on, by teaching new locations on the path
and pressing record button each time.

The motion path is taught by command TEACH.

Editing Programs:
EDIT permits the user to modify a program. Several sub-commands within EDIT allow a
program to be edited properly.

A typical command for editing is EDIT SRD.

The sub-command E means exit the editing mode and return to the monitor mode.

Listing Program and Location Data:

The command DIR displays the names of all user programs in the system memory.

The commands 'LISTL' and 'LISTP' display the values of location variables and the steps of
user programs respectively.

Storing and Retrieving Program and Location –data:

o The command that displays the file directory is ‘LISTF’


o The specified programs , location and both programs and locations can be stored
respectively in a program file and a location file by entering the commands

STORE P, STORE L, STORE

The commands that can be used for Loading the programs, Locations and both programs
and locations contained in a specified disk into the system memory are

LOAD P , LOAD L, and LOAD

o In VAL II, the additional command can be constructed as


FLIST- For listing the file names kept on a disk.
o Besides, VAL and VAL II can accept the commands
COPY – for copying programs.
RENAME – for renaming files.
DELETE – for deleting files.

Program Control:

o The command to specify the speed for all subsequent robot motions is SPEED 30
(30% of monitor speed).
o The commands that executes a specified user program for once, any number of
times or indefinitely are
EXECUTE
EXECUTE, 5 (Execute 5 times)
EXECUTE -1 ( Execute indefinitely)
o The command that terminates program execution after completion of the current
step is ‘ABORT’
o In VAL II, a single joint (JT2) may be changed by driving it, say through 60° at a
speed of 30% of the monitor speed:

DRIVE 2, 60, 30

o The 'DO' command allows the operator to execute a single program instruction.
o 'ALIGN' is used for motion control to align the end effector.

In order to grip some objects, the end effector must be aligned such that its Z axis is
parallel to the nearest axis of the world coordinate system. The command is

‘DO ALIGN’

Similarly other DO command may be ‘DO MOVE PART’ (Part is in variable location)

PROGRAM INSTRUCTIONS

Program instructions are divided into following categories:


(i) Robot configuration control
(ii) Motion control
(iii) Hand control
(iv) Location assignment and modification
(v) Program control, Interlock commands, I/O control
 Robot Configuration control

Any robot configuration change is accomplished during the execution of the next motion
instruction other than straight line motion.

RIGHTY or LEFTY – Change the robot configuration to resemble a right or left human arm
respectively.

ABOVE or BELOW commands make the elbow of the robot to point up or down
respectively.

 Motion Control

MOVE- moves the robot to specified location.

MOVES- moves the robot in a straight line path.

DRAW- moves the robot along the straight line through specified distances in X,Y & Z
directions.
APPRO- moves the robot to a location which is at an offset ( along tool Z axis ) from a
specified point.

DEPART moves the tool along the current tool Z axis .

APPROS, DEPARTS do the same as APPRO & DEPART instructions but along straight line
paths.

CIRCLE moves the robot through circular interpolation via three specified point locations.

Hand Control

OPEN and CLOSE-Indicate respectively the opening and closing of the gripper during the
next instruction

OPENI & CLOSEI- carry on the same functions, but immediately.

CLOSEI 75 – In VAL II, if a servo-controlled gripper is used, this command causes the
gripper to close immediately to 75 mm.

GRASP 20, 15 –Causes the gripper to close immediately and checks whether the opening is
less than the amount of 20 mm. If so, the program branches to the statement 15.

MOVES PART, 30 – With a servo-controlled end effector, causes a straight-line motion to
a point defined by PART, and the gripper opening is changed to 30 mm.

MOVET PART, 30 – causes the gripper to move to position, PART with an opening of 30
mm by joint interpolated motion.

Location Assignment and Modification

The instructions that do the same as the corresponding monitor commands are

SET and HERE.

Program control, Interlock commands and Input / Output control

SETI sets the value of an Integer variable to the result of an expression.

TYPEI displays the name and value of an Integer variable.

PROMPT In VALII, this command often helps the operator to respond by typing in the
value requested and pressing the return key.

For example, PROMPT "Enter the value:", Y1

displays the quoted text on the CRT, and the system waits for the operator to respond by
assigning a value to the variable Y1, after which the program execution continues.

GOTO 20 – performs an unconditional branch to the program step identified by a given
label, 20.

GOSUB and RETURN are necessary to transfer control to a subroutine and back to the
calling program respectively.

A VAL subroutine is a separate user program which is considered a subroutine when a
GOSUB instruction is executed.

IF…THEN transfers control to a program step depending on a relationship (condition)
being true or false. For example,

IF ROW LT 3 THEN

(A number of instruction steps)

ELSE

(A number of instruction steps)

END

If the logical expression ( say Row is less than 3 in a matrix) is TRUE, then the instruction
steps between THEN and ELSE are executed.

If the logical expression is FALSE then the instruction steps between ELSE and END are
executed.

The next program steps after END are continued.

PAUSE – suspends the execution of the user program.

PROCEED – a user program that is held back by the PAUSE command can be resumed
from that point by entering this command.

SIGNAL – turns the signals ON or OFF at the specified output channels. It is also
helpful for communicating with the peripheral equipment interfaced with robots in the work
cell.

IFSIG and WAIT -test the states of one or more external signals.

RESET turns OFF all external output signals.

The command say, SIGNAL 2, -3 indicates that output signal 2(positive) is to be turned ON
and output signal 3(negative) is to be turned OFF.

WAIT SIG (-1,2) will prevent the program execution until external input signal 1 is turned
OFF(Negative) and external input signal 2 is turned ON.(positive)

The additional command REACT –VAR 2, SUB TRAY indicates that reactions are
invoked on the external binary signal identified by the negative variable, VAR 2.

If the current state of the signal is OFF, the signal is monitored for a transition from OFF to ON
and then again to OFF.

When the specified signal transition is detected, program control is transferred
to the subroutine named TRAY.

REACTI interrupts robot motions immediately.

VAL II may communicate with either digital or analog signals through input / output
modules.

IOPUT and IOGET are the commands used to send output to or receive input from a
digital I/O module, respectively.

Analog signals can also be communicated through analog input/output modules by analog to
digital converter(ADC) or digital to analog converter (DAC).

For example, the commands

DAC 1 = SENSOR 1 (real variables)

DAC 1 = CONST(constant)

DAC 1 = 3 + (5*V) (Arithmetic expression)


indicate that the analog output voltage is proportional to the value on the right-hand side.
The DAC sets a voltage proportional to the binary value in a range of, say, -2048 to +2047,
depending on the analog output voltage setting.

Similarly, the command VAR 1 = ADC (1) returns the current input at analog channel
number 1 as an integer value in the range -2048 to +2047.
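Assuming a 12-bit signed converter with the -2048 to +2047 range quoted above, the voltage-to-count conversion can be sketched as follows. The 10 V full-scale value is an illustrative assumption; the actual range depends on the module's output voltage setting.

```python
FULL_SCALE_V = 10.0          # assumed full-scale analog voltage of the module
MIN_COUNT, MAX_COUNT = -2048, 2047   # 12-bit signed range from the text

def volts_to_count(v):
    """Convert a desired output voltage to the signed binary DAC value."""
    count = round(v / FULL_SCALE_V * MAX_COUNT)
    return max(MIN_COUNT, min(MAX_COUNT, count))   # clamp to the 12-bit range

def count_to_volts(count):
    """Convert an ADC reading back to the analog input voltage."""
    return count / MAX_COUNT * FULL_SCALE_V
```

A round trip such as `count_to_volts(volts_to_count(5.0))` recovers the original voltage to within one quantization step (about 5 mV at this scale).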

WELDING INSTRUCTIONS

WSET- Sets the speed, welding voltage and current as a welding condition identified by a
number (1-4).

For example, WSET 1 = 13, 54.3, 63 sets a welding speed of 13 mm/s, a welding voltage of
54.3% and a welding current of 63% as welding condition 1.

WVSET sets a weaving pattern, setting some or all of the following parameters: cycle
distance, amplitude, right end stop distance, right end stop time, center stop distance, left end
stop distance and left end stop time.

For example, the instruction

WVSET 1 = 10, 7, 2, 0, 1, 3, 0

sets these parameters, in the order listed above, as weaving condition 1.

WSTART- starts welding under the preset welding and weaving conditions.

WEND deactivates the welding start signal.

The CRATERFILL instruction is used when crater filling is required at the end of a weld.
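The WSET/WVSET condition numbers behave like entries in a small parameter table that later instructions (WSTART) refer to by index. A minimal sketch of this idea follows; the dictionary layout is illustrative, not the VAL II internal representation.

```python
# Welding conditions indexed 1-4, mirroring WSET 1 = 13, 54.3, 63
weld_conditions = {}

def wset(n, speed_mm_s, voltage_pct, current_pct):
    """Store speed, voltage and current as welding condition n (1-4)."""
    assert 1 <= n <= 4, "condition number must be 1-4"
    weld_conditions[n] = {
        "speed": speed_mm_s,      # welding speed in mm/s
        "voltage": voltage_pct,   # % of the power-source full scale
        "current": current_pct,   # % of the power-source full scale
    }

# The example from the text: WSET 1 = 13, 54.3, 63
wset(1, 13, 54.3, 63)
```

A WSTART-like operation would then simply look up `weld_conditions[n]` and apply the stored parameters.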

CHAPTER 3 ( PART-III): ROBOTICS APPLICATIONS

1. MATERIALS HANDLING
o Parts Transfer
o Parts Sorting
o Heat Treatment
o Palletizing
2. MACHINE LOADING & UNLOADING
o Die Casting
o Injection Moulding
o Forming, Stamping & Trimming Processes
o Metal cutting machine tools such as lathes, etc.
3. MACHINING
o Deburring
o Drilling
o Grinding
o Milling
o Threading, shaping etc.
4. MAINTENANCE
o Assembly : Mating parts or Parts inserting problems
o Inspection, Welding (Spot welding, Arc welding, Seam tracking)
o Spray Painting / Finishing

THE ART OF ASSEMBLY PROCESS

Assembly comprises four functions: Handling, Composing, Checking and Adjusting.
Handling, in turn, can be broken down as follows:

 Storage / Buffer
o Hopper
o Magazining
 Transporting
o Moving
o Feeding
o Escaping
 Positioning
o Orienting
o Aligning
o Inserting

DESIGN FOR ASSEMBLY

For any automated or flexible robotic assembly, certain basic rules and procedures should
be followed for ease of assembly:

1. The components for the assembled product should be selected for ease of assembly.
2. Parts should be designed for feeding and orienting in automated assembly, which may
call for product simplification and, if necessary, redesign of the product.
3. For robotic assembly, a suitable gripping device should be designed for ease of assembly.
 Problems and characteristics of Assembly:
o The basic problem considered in assembly is the 'peg in the hole' problem.
o During insertion of the peg into the hole, vision and tactile sensing work in a
coordinated and integrated manner.
o For a robot to assemble, such characteristics of vision and force sensors are very
useful.
 The following are the 3 main types of robots suitable for assembly operations:
o Cartesian robots
-They have 3 D.O.F. (PPP)
-Suitable for simple assembly operations
-They have high accuracy and repeatability.
o Revolute robots
-are also used for assembly tasks; they operate on the high-level languages VAL &
VAL II and have 6 D.O.F.
-They use World & Tool coordinate systems.
Eg: PUMA Robot (Programmable Universal Machine for Assembly)
o SCARA robots
-Selective Compliance Assembly Robot Arm (SCARA) robots are well suited to assembly.
-These robots are provided with direct-drive motors that allow high speeds and
accelerations with backlash-free, fast and accurate motions.
-The accuracy is around +/- 0.076 mm and the repeatability is +/- 0.025 mm.

WELDING
Robots find wide application in welding; the robots used for the purpose
of welding are tool-handling robots.
1. SPOT WELDING
 Spot welding is widely used in fastening sheet metals and in automobile body assembly,
frames, panels, fabrication of metal furniture , domestic appliances etc.
 Spot welding is done by fusing two metals at spots where heat is generated by
passing an electric current through the electrodes for a specific duration of time
while pressing the joining surfaces together with the electrodes.
 A spot welding robot has
o A robot manipulator with several degrees of freedom.
o A welding gun held on a robot wrist.
o Controller and power source
o Input / Output Interfaces.
 The operations involved in Spot welding are,
o Squeezing the two metal surfaces between the electrodes.
o Welding by passing current for specific duration of time depending on the type of
materials and its thickness.
o Releasing the grip.
 In order to perform spot welding,
o a robot should have sufficient payload capacity (50-100 kg), good repeatability, good
linear speed (30-90 m/min) and good angular speed (60°-180°/sec).
o The weld gun should be properly oriented and positioned on the product.
o The power rating varies from 30-150 kVA and the secondary voltage varies from 5 to
22 V.
o Robot welding produces weldment of better quality and provides greater safety
for workers.
o The welding parameters, like cycle time, voltage, current etc., can be changed and
different trajectories can be programmed.
2. ARC WELDING
Arc welding is a continuous welding process in which the work pieces are joined
along a continuous, airtight seam.

o An electric arc struck between welding electrode and the work piece produces
necessary heat that causes fusion of two metal surfaces.
o A high temperature of about 3000-3300 °C melts the metal.
o Arc welding is done with a direct current of 100 A to 300 A at 10-30 V.
o In order to prevent Oxidation, Inert gases are used and the electrodes are coated
with some flux.
 Basic Components of Robotic Arc welding system
1. Power Source
o For better monitoring of the welding parameters, the power source should incorporate
a voltmeter and an ammeter to check fluctuations in voltage and current.
2. Wire feed Unit
o It consists of a feed motor, a set of feed/drive rollers, a speed regulator and a wire-
guiding arrangement.
o The wire feed rate varies from 100 mm/min to 3000 mm/min.

3. The welding Gun


o It has a START-STOP switch and ensures uniform transport of the wire. A hollow
cable containing a wire spiral leads the wire electrode from the feed rollers to the
gun.
o In robotic welding, the weld gun is mounted on the robot end effector. The gun
may be gas cooled or water cooled.
4. Mains and Control unit
o The mains unit consists of protective transformer and rectifier to supply current to
the control unit.
o The control unit contains a weld ON/OFF switch, an inching speed switch, a CO2 gas
testing switch, a solenoid valve for releasing CO2 gas during the welding process,
a wire feed regulator, and timers for spot welding and automatic welding.
5. Gas supply Unit
o It consists of a CO2 gas cylinder with a suitable gas flow meter for regulated
supply.
o The flow rate of CO2 gas depends on the welding current, welding speed, electrode
diameter, joint geometry and local conditions.
o It generally varies from 15 to 22 lit/min.
6. Robot main body
o The robot has 6 D.O.F. and a jointed structure similar to the human arm.
o The drive system comprises a DC servo motor, an incremental encoder and a
potentiometer.
o A magnetic brake is attached to maintain the robot posture when power is turned off.
7. Controller
o It consists of a microcomputer for every joint and an overall supervisory computer
that coordinates all the arm motions and interfaces with the external world.
o It consists of various switches like ON/OFF switch, robot initializing switch, arm
power switch etc.
o It directs and controls the sequence of arm, wrist and end effector motions.
8. Teach box
o It is equipped with switches for operating the robot manually and indicators for
communicating messages of the system conditions from the controller.
9. Terminal
o This is a keyboard accompanied by either a video display unit (CRT) or a
printing device, and is used for communicating in a robot language such as VAL.
10. Floppy disc:
o This device stores robot work programs and positional information and transfers
these data to RAM of the controller.

11. Input-Output modules


o The system is equipped with signal lines for receiving signals that determine the
conditions of external equipment and for sending commands to external equipment.
o It is also capable of relay drive and relay output reception.
o The I/O module can be incorporated in the controller.

MATERIALS HANDLING

o Jointed-arm robots with 3-5 degrees of freedom can serve material
handling applications.
o Hydraulic or pneumatic drives with manual or powered lead-through teaching
provide the motion in present robot designs.
o Next-generation robots are expected to use servo motors with Programmable
Automation Controllers (PAC).

MACHINE LOADING & UNLOADING ROBOTS

o Polar (P2R / PRR) robots, Cylindrical (2PR / PPR) robots and Jointed-arm
(3R / RRR) robots with 4-5 degrees of freedom are used for such applications.
o Electronic and servo drives are the future trend, as compared to the
present electrical and hydraulic drives.
o PAC can replace powered lead-through programming.
o Micro-motion controllers with vision may form the control system of the next
generation, rather than the present point-to-point or limited-response systems.

SPRAY PAINTING

o Jointed-arm robots with 6 D.O.F. and hydraulic drives are operated under
continuous-path control with manual lead-through teaching, as seen in present robots.
o Adaptive arms and PLC controls operating in complicated and unsafe atmospheres
are the future trend.

INSPECTION

o Application of robots in inspection is a growing area. Robot technology is
expected to play a significant role in making 100% inspection possible.
o The inspection function is required at every stage of manufacturing, from raw
materials to finished products.
o Robots can be used to inspect the physical dimensions, surface finish and other
characteristics of raw materials, parts at intermediate stages, finished parts, sub-
assemblies or finished products.
o To perform inspection tasks, the robot requires various sensors or vision systems.

Sensor based Inspection:


o Specific physical dimensions can be determined by the robot with the help of
sensors placed on the gripper fingers of a material handling robot.
o In material handling, each part is grasped by the gripper and moved from one
place to another.
o The specific physical size information can be obtained by the sensors fitted on the
fingers of the gripper and the robot can determine whether the size is within the
tolerance limits or not.
o If the size is correct, the part is placed at the desired place; if the size is incorrect,
the part may be dropped into the waste bin.
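The accept/reject decision described above reduces to a tolerance comparison on the sensor reading. A minimal sketch follows; the function name, the nominal size and the tolerance values are illustrative assumptions.

```python
def inspect_part(measured_mm, nominal_mm, tol_mm):
    """Decide the destination for a gripped part based on the size
    reported by the sensors on the gripper fingers."""
    if abs(measured_mm - nominal_mm) <= tol_mm:
        return "place"       # within tolerance: put at the desired place
    return "waste_bin"       # out of tolerance: drop into the waste bin

# Hypothetical 25 mm part with a +/- 0.05 mm tolerance
ok_part = inspect_part(25.02, 25.0, 0.05)
bad_part = inspect_part(25.2, 25.0, 0.05)
```

In this way the handling robot performs 100% inspection as a side effect of every pick-and-place cycle.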

Vision based Inspection:

o A typical robotic vision system is capable of analyzing a two-dimensional scene.
o The robot manipulator can be used to present the part to a stationary vision
system (camera) in different orientations; the robot's task is thus a kind of material
handling task.
 In the design of a robotic vision Inspection system, several factors must be considered.
These factors are:
o Proper Lighting of the work space.
o Camera with required resolution and accuracy.
o The field of view should be large enough to accommodate the part.
o The robot should have sufficient degrees of freedom to manipulate the camera or
the part, as the case may be.
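The camera resolution and field-of-view factors above interact: the smallest feature the camera can resolve is roughly the field of view divided by the pixel count along that axis. A rough sizing check can be sketched as follows; all numeric values are illustrative assumptions.

```python
def min_resolvable_feature(fov_mm, pixels):
    """Approximate smallest resolvable feature: the field of view
    spread across the sensor's pixels along one axis."""
    return fov_mm / pixels

def fov_is_adequate(fov_mm, part_mm):
    """The field of view must be large enough to accommodate the part."""
    return fov_mm >= part_mm

# e.g. a 200 mm field of view imaged on a 640-pixel axis
feature = min_resolvable_feature(200, 640)   # ~0.31 mm per pixel
covers_part = fov_is_adequate(200, 150)      # 150 mm part fits in view
```

This is why the two requirements trade off: widening the field of view to fit a larger part coarsens the smallest defect the same camera can detect.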
