
University of Ghent

Faculty of Engineering Sciences

Active Security of an Industrial Robot based on techniques of
Artificial Vision and Artificial Intelligence

by

Brecht FEVERY

Promotors:

Prof. Dr. Ir. José Ramón LLATA GARCÍA
Prof. Dr. Ir. Luc BOULLART

Thesis presented to obtain the academic degree of
Master in Electromechanical Engineering Sciences

Academic year 2006-2007

Thank you

José Ramón Llata,
for giving me this opportunity,
for listening to me with patience,
for teaching me and motivating me.

Carlos Torre,
for the enormous help in the field of vision.

David Castrillón,
for the daily company in the laboratory.

my friends Marco and Sarah,
for the quality moments.
I will never forget you.

Thank you

professor Boullart,
for the faithful support and
for being a listening ear.

Thank you

mum and dad,
for the trust and
the support during all those years,
for helping me talk and
listening to me,
for giving me opportunities like this one.

brother Wouter,
for your humour and
for helping me relax.

Stephane,
for always being there on the road
we travelled together.

Peter,
for your fresh view on things and
your friendship in the past two years.

Active Security of an Industrial Robot based on techniques of
Artificial Vision and Artificial Intelligence
by
Brecht FEVERY
Brecht.Fevery@UGent.be

Thesis presented to obtain the academic degree of
Master in Electromechanical Engineering Sciences

Santander, June 2007

Promotor of receiving university: Prof. Dr. Ir. José Ramón LLATA GARCÍA,
Universidad de Cantabria, E.T.S. de Ingenieros Industriales y de Telecomunicación,
Grupo de Ingeniería de Control,
Departamento de Tecnología Electrónica e Ingeniería de Sistemas y Automática (TEISA).

Promotor of home university: Prof. Dr. Ir. Luc BOULLART,
University of Ghent, Faculty of Engineering Sciences,
Department of Electrical Energy, Systems and Automation (EESA).

Abstract

This Master's Thesis proposes an active security system for the industrial FANUC Robot Arc Mate 100iB. The designed security system consists of three fundamental components.
The first one is an artificial vision system that uses a set of network cameras to watch the robot's workspace and instantaneously signals the presence of foreign objects to the robot. Stereoscopic methods for pixel correspondences and 3D positional reconstructions are used to obtain the exact obstacle location.
The second subsystem is an artificial intelligence system that is used to plan an alternative robot trajectory in an on-line manner. Fuzzy Logic principles were successfully applied to design a Fuzzy Inference System that employs a rule base to simulate human reasoning in decision taking. The Fuzzy Inference System generates translational and rotational actions that have to be undertaken by the robot's End Effector to avoid collision with the obstacle.
The third part of the active security system is a multitask oriented robot application written in the KAREL programming language of FANUC Robotics, Inc. Upon detection of an obstacle, the motion of a regular robot task is halted and motion along an alternative path is initiated and finished when the original destination is reached.
The three subsystems of the active security system are connected by an overviewing real-time communication system. Socket connections between network devices are set up to exchange data packages over Ethernet with the Transmission Control Protocol.
All subsystems of the active security system were tested, first separately and then in an integrated way. With special attention for real-time performance, satisfying experimental results were obtained.

Keywords: Active Security, Artificial Intelligence, Artificial Vision, Fuzzy Logic, Obstacle Avoidance, Real-time, Robot Control.


Index

Chapter 1. Introduction
1. Problem description
2. Real-time considerations

Chapter 2. Artificial Vision
1. Introduction
2. Perspective projection methods
2.1 The pinhole camera model
2.2 Coordinate systems
2.3 Correspondence between coordinate systems
2.4 Perspective projection on the image plane
2.5 Intrinsic and extrinsic camera parameters
2.6 Modeling of projection errors due to lens distortion
3. Camera calibration method and calibration results
3.1 Calibration principles
3.2 Calibration results
4. Stereo vision methods to reconstruct three dimensional positions
4.1 Inversion of pixel distortion
4.2 Calculation of the profundity
5. Pixel correspondence and object identification methods
5.1 Epipolar lines
5.1.1 Pixel equation of epipolar lines
5.1.2 Trinocular stereo algorithm based on epipolar lines
5.2 Edge detection
5.2.1 Gradient operators
5.2.2 DroG operator
5.2.3 Canny operator
5.3 Object identification algorithm
6. A real-time camera implementation
6.1 Principles of camera image processing
6.2 Principles of the real-time detection system
6.3 Time evaluation of the vision system
7. References

Chapter 3. Artificial Intelligence
1. Introduction
2. Design of a Fuzzy Inference System
2.1 Introduction
2.2 Description of the avoidance problem
2.3 Basic philosophy of the avoidance strategy
2.4 Membership functions of the Fuzzy Inference System
2.4.1 Membership functions of entrance fuzzy sets
2.4.2 Membership functions of system outputs
2.5 Construction of the rule base
2.5.1 Rules related to translational action
2.5.1.1 Repelling rules
2.5.1.2 Attracting rules
2.5.2 Rules related to rotational action
2.5.3 Continuity of the rule base
2.5.4 Consistency of the rule base
2.6 The fuzzy inference mechanism
2.6.1 Resolution of rule premises
2.6.2 Implication of rule premises on rule consequents
2.6.3 Aggregation of rule consequents
2.7 Defuzzification process
2.8 Algorithm of the Fuzzy Inference System
2.9 Real-time performance of the Fuzzy Inference System
2.9.1 Time consumption of the Fuzzy Inference System
2.9.2 Size of the distance increments and decrements
2.9.3 Communication between the FIS and the robot controller
3. References

Chapter 4. Active Security
1. Introduction
2. FANUC Robot Arc Mate 100iB
2.1 Basic principles of motion manipulations
2.2 Defining a user and tool coordinate system
2.3 Memory space of the controller
2.3.1 Flash Programmable Read Only Memory (FROM)
2.3.2 Dynamic Random Access Memory (DRAM)
2.3.3 Battery-backed static/random access memory (SRAM)
2.3.4 Memory backups
3. KAREL programming
3.1 Programming principles
3.2 Program structure
3.2.1 Variable declarations
3.2.2 Routine declarations
3.2.3 Executable statements
3.3 Condition handlers
3.3.1 Basic principles
3.3.2 Condition handler conditions
3.3.3 Condition handler actions
3.4 Motion related programming features
3.4.1 Positional and motion data types
3.4.2 Position related operators
3.4.3 Coordinate frames
3.4.4 Motion instructions
3.4.5 Motion types
3.4.6 Termination types
3.4.7 Motion clauses
3.5 Read and write operations
3.5.1 File variables
3.5.2 File operations
3.5.3 Input/output buffer
3.5.4 Example: reading in an array of XYZWPR variables
3.6 Multitasking
3.6.1 Task scheduling
3.6.1.1 Priority scheduling
3.6.1.2 Time slicing
3.6.2 Parent and child tasks
3.6.3 Semaphores
3.6.3.1 Basic principles of semaphores
3.6.3.1.1 Wait operation
3.6.3.1.2 Signal operation
3.6.4 Motion control
4. Real-time communication
4.1 Full duplex Ethernet
4.2 Fast Ethernet switches
4.3 Socket messaging
4.3.1 KAREL socket messaging option
4.3.2 Socket messaging and MATLAB
4.3.3 Format of data packages
4.3.4 KAREL communication program comm.kl
4.4 Real-time performance of the Ethernet communication
5. The active security problem
5.1 A solution in KAREL
5.1.1 Secure.kl
5.1.2 Moves.kl
5.1.3 Comm.kl
5.1.4 Task priorities
5.1.5 Semaphore use and program execution dynamics
6. Experimental results
7. Acknowledgments
8. References

Chapter 5. Conclusion

Appendix A

Appendix B

Appendix C


Chapter 1

Introduction

1. Problem description

Robots have been successfully introduced in industry to automate production tasks of diverse nature. Industrial robots work with expensive tools, on valuable products and at high speeds. Often, a set of robots is working together on the same object to finish an assembly task in less time. Nowadays, production cycles are refreshed at high rates and new robot applications need to be programmed in a short space of time. The use of sensors increases and the time to test new applications decreases, as production processes cannot be halted for a long time. Collisions of cooperating robots occur, and when they do, they often cause damage to the robots and their tools. Production processes then need to be stopped, reviewed and reinitiated.
From this point of view, the design of appropriate control systems that prevent collisions between cooperating robots is of high economic importance. During the last years, the robot industry and research centers have been cooperating strongly to design collision avoidance software for use by industrial robots. Establishing collision-free robot interaction, without the need for rigid communication procedures between the involved robots, is identified as a problem of active security.
From a non-commercial point of view, robot active security can also be seen as guaranteeing maximum safety to the human operator who is working close to an industrial robot. Especially when working with heavy payloads and moving at high speeds, a robot arm hitting a human operator can cause severe and even mortal injuries. Robot motion is often guided or controlled by sensor information, but the trajectory along which the robot moves depends in the first place on what has been programmed by the operator. Therefore, when a robot is executing a programmed task, security precautions have to be taken at all times. Typically, robots operate in an industrial work cell that is equipped with a security fence and sensors to detect when a human operator is entering the workspace while the robot is executing its task. All active robot motion will immediately be halted when a foreign object, e.g. a human operator, is detected by the installed sensor system. In an active security system, upon detection of foreign objects, robot motion would not be halted. An intelligent operating system would be activated to assure that the robot can continue executing its task. Motion would no longer take place along the programmed trajectory, but along an alternative path that assures collision-free motion. For a simple manipulation task that consists of moving a part from where it has been detected to a

predetermined final destination, this active security problem can be stated as the search for an alternative path from an initial to the final destination, hereby moving around or above the signaled obstacle.
The goal of this Master's Thesis is to develop an active security system for the industrial FANUC Robot Arc Mate 100iB, presented in figure 1.1. We will focus on the field of active security where one robot has to continue collision-free motion when an obstacle is encountered in its workspace.

Figure 1.1: FANUC Robot Arc Mate 100iB

The foundations of the designed active security system are formed by artificial vision and artificial intelligence systems.
An artificial vision system is able to derive from two dimensional camera images useful three dimensional information about a camera-covered area. The vision system that will be presented in this thesis consists of a triplet of cameras that supplies information about the workspace's current setting. A thorough study of stereoscopic vision techniques was performed to design a vision system that is able to detect foreign objects in the robot's workspace. In chapter 2 we present the theory about perspective projection methods, camera calibration issues, three dimensional position reconstructions and object identification techniques.
Active security systems require intelligent decision taking. Considering the current position of the robot's Tool Center Point, an alternative path has to be generated by taking

logically derived actions. A system that simulates intelligent reasoning after it was trained by a human programmer is an artificial intelligence system. Out of the large set of artificial intelligence techniques, we chose fuzzy logic. In chapter 3, the basic principles of fuzzy logic will be elaborated. We will show how human reasoning can be simulated through the introduction of linguistic distance descriptions and a rule base that consists of if-then implications.
After the design and testing of the artificial vision and intelligence systems, a robot active security application was developed. Upon detection of an obstacle by the artificial vision system, a normal execution task is interrupted. Execution priority is then passed to a security task that makes the robot move along an alternative path calculated with the artificial intelligence system. A real-time solution to this problem was implemented in the KAREL programming language of Fanuc Robotics. In chapter 4 we will concentrate on the setup of the communication between the robot controller and a PC, and on the programmed robot application, for which multitasking and semaphore principles will be introduced and used.
As this project has been the first one on the FANUC Robot Arc Mate 100iB of the research group TEISA, a lot of primary research was performed to set up the robot control. A communication protocol using Ethernet sockets was configured and KAREL programs for motion manipulation were written, compiled and executed. To assure that this thesis can serve as a basis for future research work, some practical aspects of the Fanuc robot and KAREL programming issues will also be highlighted in chapter 4.
The goal of this thesis can be stated as primary research into methodologies of artificial vision, artificial intelligence and robot control. Since the principles of the methodologies and the thorough understanding of concepts are more important than industrial performance, no significant importance will be given to the motion speed of the robot arm. Nevertheless, the possibility to apply the design in real industrial applications will never be lost sight of.

2. Real-time considerations

Since the previously described problem requires the processing of camera images, trajectory planning and communication between a PC and the robot controller, one can intuitively understand that we are dealing with a real-time problem. Let us therefore focus on the meaning of the term real-time.
We can best situate the idea of real-time systems by giving a few examples. A real-time system needs to react to stimuli within a time that is acceptable for the environment. For example, a robot that is moving at high speed has to react quickly to external stop signals.

In real-time systems, a computer has to be able to receive and, if necessary, process data at the same rate at which the data are produced. For example, when a vision system is installed to detect obstacles in camera views, the operations needed to perform the detection have to be completed within the refresh period of the images. When using multiple camera views of a moving object, fast consecutive picture grabbing is necessary to assure the smallest possible time interval between the registrations of the different images.
During our investigation work, real-time performance has been one of our top priorities. A lot of effort was put into assuring high speed image transfer between the cameras and our MATLAB sessions, and into assuring fast communication between the robot controller and a PC. Time performance was also important in the design of our artificial intelligence system, since the time consumption for calculating positions along an alternative path is considerable. The environmental factor that determines the real-time performance of the artificial intelligence system and the installed communication system is the speed of robot motion; a new position needs to be calculated and transmitted to the robot controller before the previous position is reached.
At the closure of chapters 2 and 3, special attention will be given to the real-time aspect of the design. In chapter 4, which treats robot control issues, the real-time aspect is considered in the parts about communication and multitasking.



Chapter 2

Artificial Vision

1. Introduction

Determining accurately the three dimensional position of objects in the robot's workspace is an important first step in this project. In this chapter it will be described how three dimensional positions of objects can be obtained using information of three vision cameras that cover the entire workspace of the robot and hereby form an artificial vision system.
Stereo vision methods will be introduced to identify an object and to detect its position using three images of the robot's environment, taken at the same moment with the vision cameras.
The first step in the overall vision method is the identification of an object in a camera image. This can be done by focusing on specific characteristics of the object we want to detect. The detection of a desk is an example of an identification problem that is relatively easy to solve, for a desk is characterized by its rectangular surfaces, which allow edge and corner detection. Both of these techniques will be commented on in this chapter.
The second step in the vision method is searching the correspondence between specific image points in the three different images. In the example of the desk, solving the correspondence problem consists of determining the pixel coordinates of the corresponding desk corners in each one of the three images. In general, to solve the correspondence problem, geometric restrictions in image pairs or triplets are used. The construction of conjugated epipolar lines in image planes is a powerful restriction method that will be introduced in this chapter.
As we intuitively understand, a two dimensional camera image no longer contains the three dimensional information that fully describes an object in space, for the image has lost the profundity information. However, once the corresponding image points have been detected in a pair or triplet of images, the profundity information can be calculated. This third and final step in the vision method will allow us to fully reconstruct the exact three dimensional position of a detected object. A profundity calculation method based on the 2D information of a pair of images will be introduced in this chapter.
To develop stereo vision methods, a step we cannot go without is the calibration process of the vision cameras. This calibration procedure will provide us with the internal camera characteristics, such as the focal length, the position of the image's principal point and the distortion coefficients that are used to model distortion of image coordinates due to lens imperfections. These characteristics are called the intrinsic parameters of a camera. The calibration procedure also provides us with extrinsic parameters that consist of a rotation matrix and a translation vector that link the world reference coordinate system to the camera coordinate system.

2. Perspective projection methods

In the following paragraphs it will be described how an object point, represented in a chosen reference coordinate system, is projected into the image plane. A projection method according to the pinhole model will be introduced. Besides the world reference coordinate system, coordinate systems with respect to the camera and with respect to the image plane are introduced, and the transformations between these coordinate systems are described. Finally, introducing the camera extrinsic and intrinsic parameters, a perspective projection method is presented and a method to model projection errors due to lens distortion is proposed.

2.1 The pinhole camera model

According to the pinhole model, each point P in the object space is projected by a straight line through the optical center into the image plane. This projection model is represented in figure 2.1a.

Figure 2.1a: The pinhole projection model

The determining parameter in this pinhole model is the focal distance f, which is the perpendicular distance between the optical center and the image plane. (X, Y, Z) represent the three dimensional coordinates of P. The projection of P in the image plane is p, with coordinates (x, y).


In figure 2.1b, a two dimensional view of the pinhole projection model is displayed. The optical center Oc lies at a distance f from the image plane. The projection of Pc in the image plane is Uc.

Figure 2.1b: Coordinate correspondence in the pinhole model

Knowing the coordinates (xc, zc) of Pc in a camera coordinate frame (Xc, Yc, Zc) with origin in Oc, we can obtain the x coordinate of Uc in the image plane by using the relations in similar triangles. Using an analogous drawing of the (Yc, Zc) plane for the y coordinate of Uc, we can determine the relation between the (xc, yc, zc) coordinates of a point in space and the coordinates (Ucx, Ucy) of its projection in the image plane:

$$U_{cx} = f\,\frac{x_c}{z_c}, \qquad U_{cy} = f\,\frac{y_c}{z_c} \tag{2.1}$$

In this introduction of the pinhole camera model, we illustrated a transformation between two coordinate systems: a Euclidean camera coordinate system (Xc, Yc, Zc) and an image coordinate system in which the projected point Uc is presented. In perspective projection methods, a number of different coordinate frames are typically used. In the next paragraph, we will introduce the involved coordinate systems.
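To make the relation concrete, here is a minimal Python sketch of equation (2.1). It is an illustration added to this text, not part of the thesis software (which was written in MATLAB and KAREL):

```python
import numpy as np

def pinhole_project(P_c, f):
    """Project a point (x_c, y_c, z_c), expressed in camera coordinates,
    onto the image plane of a pinhole camera with focal distance f,
    following equation (2.1)."""
    x_c, y_c, z_c = P_c
    return np.array([f * x_c / z_c, f * y_c / z_c])

# Example: a point 2 m in front of the camera, f = 4 mm, all units in mm
print(pinhole_project(np.array([100.0, 50.0, 2000.0]), 4.0))  # [0.2 0.1]
```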


2.2 Coordinate systems

The coordinate systems involved in perspective projection methods are represented in figure 2.2.

Figure 2.2: Image projection related coordinate systems

The Euclidean world reference coordinate system is indicated with index w. The object point P can be represented in these coordinates as Pw.
The Euclidean camera coordinate system (index c) has its z axis Zc aligned with the optical axis, which is perpendicular to the image plane. The origin of the camera coordinate system is chosen coincident with the optical center Oc, which lies at a perpendicular distance f to the image plane. The transformation between the world coordinate system and the camera coordinate system is determined by a rotation matrix R and a translation vector T.
The Euclidean image coordinate system (index i) has its x and y axes Xi and Yi in the image plane. Axes Xi, Yi and Zi are parallel to the axes of the camera coordinate system.
The fourth coordinate system is the affine image coordinate system (index a), in which image points can be represented in Cartesian two dimensional coordinates. Using a scaling factor that gives the number of image pixels per unit of length (mm in artificial vision applications) we can express image points in pixel units. Axes va and wa are coincident with Yi and Zi respectively, while ua has a different orientation than Xi. The main reason for this is the fact that the ua and va pixel axes may have a different scaling factor to represent image points. This will be taken into account further on in the calibration parameter su (see paragraph 3). Furthermore, in general pixel axes don't have to be perpendicular, although nowadays most camera types provide images with rectangular pixels.
The world reference system Ow-XwYwZw will be selected arbitrarily in one certain calibration image, where the reference system is fixed to the calibration pattern that we use when executing the calibration procedure. This calibration procedure will be explained further on in paragraph 3.
The final coordinate system to introduce is the reference coordinate system of the robot, to which the robot controller refers all Tool Center Point positions and End Effector orientations. The 3-point teaching method supported by the operational system of the FANUC Robot Arc Mate 100iB allows the user to generate a so-called UFRAME or user coordinate frame. We taught a user coordinate frame that coincides with the reference frame attached by the calibration method to one of the images of the calibration pattern. This choice avoids introducing an extra transformation, because the vision reference coordinate system coincides with the taught robot reference frame.

2.3 Correspondence between coordinate systems

To express the object point P in the camera coordinate system, we have to rotate Pw using the matrix R and then translate it along the vector T, as described by equation (2.2):

$$P_c = R\,P_w + T, \quad \text{with } P_w = \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} \text{ and } P_c = \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} \tag{2.2}$$

Pc will now be projected into the image plane along the optical ray that crosses the optical center Oc. The result is Uc, expressed in Euclidean image coordinates. As we assumed projection according to the pinhole model, the coordinates of Uc are given by equation (2.3):

$$U_c = \begin{bmatrix} u_i \\ v_i \\ w_i \end{bmatrix} = \begin{bmatrix} f\,x_c / z_c \\ f\,y_c / z_c \\ f \end{bmatrix} \tag{2.3}$$

The x and y coordinates of Uc were already obtained in equation (2.1). The value of the z coordinate of Uc is identified by remarking that the optical axis is collinear with Zc and perpendicular to the image plane; the perpendicular distance is equal to the focal distance f.
We can now compose an affine transformation to express the projected point Uc in the two dimensional image plane, using homogeneous pixel coordinates $[\tilde u_i \;\; \tilde v_i \;\; \tilde w_i]^T$. The transformation is given by a 3x3 matrix, applied as in equation (2.4). The factors a, b and c can be interpreted as scaling factors along the axes of the Euclidean image coordinate system. The third column can be seen as a translation along a vector containing the negation of the coordinates of the image center point. In the image plane, the origin of the coordinate system is often represented in the upper left corner. Therefore, subtraction of the principal point, U0a in figure 2.2, is needed to obtain a correct representation of points in the image plane.

$$\begin{bmatrix} \tilde u_i \\ \tilde v_i \\ \tilde w_i \end{bmatrix} = \begin{bmatrix} a & b & u_0 \\ 0 & c & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f\,x_c / z_c \\ f\,y_c / z_c \\ 1 \end{bmatrix} \tag{2.4}$$

As we are working in homogeneous coordinates, we can manipulate equation (2.4), multiplying by zc and repositioning the focal length f:


$$z_c \begin{bmatrix} \tilde u_i \\ \tilde v_i \\ \tilde w_i \end{bmatrix} = \begin{bmatrix} f a & f b & u_0 \\ 0 & f c & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}, \qquad \begin{bmatrix} \tilde u_i \\ \tilde v_i \\ \tilde w_i \end{bmatrix} = K \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} \tag{2.5}$$

The vector $\tilde u$ represents the two dimensional homogeneous pixel coordinates of the image point. We define the matrix K as:

$$K = \begin{bmatrix} f a & f b & u_0 \\ 0 & f c & v_0 \\ 0 & 0 & 1 \end{bmatrix} \tag{2.6}$$

We call the matrix K the calibration matrix of the camera, for it contains the information we will later on obtain from the calibration process.
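A short Python sketch of equations (2.5) and (2.6); the numerical values are illustrative only, loosely based on the calibration results of paragraph 3.2:

```python
import numpy as np

def calibration_matrix(f, a, b, c, u0, v0):
    """Build the calibration matrix K of equation (2.6)."""
    return np.array([[f * a, f * b, u0],
                     [0.0,   f * c, v0],
                     [0.0,   0.0,   1.0]])

def project_camera_point(K, P_c):
    """Equation (2.5): map camera coordinates to pixel coordinates,
    normalizing the homogeneous result by its last component."""
    u_tilde = K @ P_c
    return u_tilde[:2] / u_tilde[2]

# Illustrative values: f = 5.59 mm, square pixels at 125.98 pixels/mm,
# principal point (354, 203), orthogonal pixel axes (b = 0)
K = calibration_matrix(5.59, 125.98, 0.0, 125.98, 354.0, 203.0)
print(project_camera_point(K, np.array([100.0, 50.0, 2000.0])))
```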

2.4 Perspective projection on the image plane

Based on the constructed coordinate transformations we can now compose a direct and open form transformation between the world reference coordinate system and the affine image coordinate system. Let $\tilde P_w$ be the homogeneous coordinates of the object point P in the reference coordinate system:

$$\tilde P_w = \begin{bmatrix} P_w \\ 1 \end{bmatrix} \tag{2.7}$$

Knowing that $\tilde P_w$ can be transformed to the camera coordinate system by applying a rotation and a translation according to equation (2.2), we can recombine (2.5) to obtain the following transformation:

$$\begin{bmatrix} \tilde u_i \\ \tilde v_i \\ \tilde w_i \end{bmatrix} = \begin{bmatrix} K R & K T \end{bmatrix} \begin{bmatrix} P_w \\ 1 \end{bmatrix} = M\,\tilde P_w \tag{2.8}$$

We define M as the projection matrix of the camera system. M is a 3x4 matrix consisting of two submatrices:
- the first 3x3 part, K·R, describing a rotation;
- the right 3x1 column vector, K·T, representing a translation.
The knowledge of the projection matrix M allows us to project any arbitrary object point in the reference system into the image plane, resulting in the two dimensional Cartesian coordinates of this image point. The numerical form of M will prove to be absolutely necessary when reconstructing three dimensional positions of objects out of the 2D information of two images taken at the same moment by two different cameras of our artificial vision system.
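The composition of M takes only a few lines; the following Python fragment is a sketch under the conventions of equation (2.8), with an arbitrary example pose (all values illustrative):

```python
import numpy as np

def projection_matrix(K, R, T):
    """Compose the 3x4 projection matrix M = [K R | K T] of equation (2.8)."""
    return np.hstack((K @ R, (K @ T).reshape(3, 1)))

def project_world_point(M, P_w):
    """Project a world point through M and dehomogenize to pixel coordinates."""
    u = M @ np.append(P_w, 1.0)
    return u[:2] / u[2]

# Example pose: reference frame rotated 90 degrees about Z, 1 m in front
K = np.array([[704.2, 0.0, 354.0], [0.0, 704.2, 203.0], [0.0, 0.0, 1.0]])
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
T = np.array([0.0, 0.0, 1000.0])
M = projection_matrix(K, R, T)
print(project_world_point(M, np.array([100.0, 50.0, 500.0])))
```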

2.5 Intrinsic and extrinsic camera parameters

Intrinsic camera parameters are the internal characteristics of a camera. Without taking into account projection errors due to lens distortion, they describe how object points expressed in the camera reference system are projected into the image plane, according to equation (2.5). We identify the intrinsic parameters as the focal length f and the principal point (u0, v0), which marks the center of the image plane. Both values can differ considerably from the theoretical values and therefore an accurate camera calibration is absolutely necessary. These parameters describe a specific camera and are independent of the camera's position and orientation in space.
The extrinsic parameters of a camera do depend on the camera's position and orientation in space, for they describe the relationship between the chosen world reference coordinate system and the camera reference system. These parameters are given by the rotation matrix R and translation vector T mentioned above. R contains the 3 elementary rotations that are to be executed in order to let the axes of the reference frame align with the axes of the camera coordinate system, while T represents the translation from the origin of the reference system to the camera system.
The calibration matrix K has the intrinsic camera parameters as its components. The projection matrix M consists of both intrinsic and extrinsic camera parameters.

2.6 Modeling of projection errors due to lens distortion

The presented pinhole projection model is only an approximation of a real camera model, because distortion of image coordinates due to imperfect lens manufacturing is not taken into account. When high accuracy is required, a more comprehensive camera model can be used that describes the systematic distortion of image coordinates. A first imperfection in lens manufacturing is the radial lens distortion that causes the actual image point to be displaced radially in the image plane [3]. In their paper on camera calibration, Heikkilä and Silvén propose an approximation of the radial distortion according to expression (2.9).

$$\begin{bmatrix} \delta u_i^{(r)} \\ \delta v_i^{(r)} \end{bmatrix} = \begin{bmatrix} u_i\,(k_1 r_i^2 + k_2 r_i^4 + \dots) \\ v_i\,(k_1 r_i^2 + k_2 r_i^4 + \dots) \end{bmatrix} \tag{2.9}$$

where the superscript (r) indicates radial distortion, k1, k2, ... are coefficients for radial distortion and $r_i = \sqrt{u_i^2 + v_i^2}$. The coordinates ui and vi are the first two coordinates of the projected object point in Euclidean image coordinates Uc, according to equation (2.3). Typically, one or two coefficients are enough to compensate for radial distortion [3], and that is the number of radial distortion coefficients we are going to apply in our camera model.
As a second imperfection, centers of curvature of lens surfaces are not always strictly collinear. This introduces another common distortion type, decentering distortion, which has both a radial and a tangential component [3]. The expression for the tangential distortion proposed by Heikkilä is written in the form of expression (2.10).

$$\begin{bmatrix} \delta u_i^{(t)} \\ \delta v_i^{(t)} \end{bmatrix} = \begin{bmatrix} 2 p_1 u_i v_i + p_2 (r_i^2 + 2 u_i^2) \\ p_1 (r_i^2 + 2 v_i^2) + 2 p_2 u_i v_i \end{bmatrix} \tag{2.10}$$
where the superscript (t) indicates tangential distortion and p1 and p2 are coefficients for tangential distortion.
Once the distortion terms are defined, a more accurate camera model can be introduced. Based on the general projection transformation (2.4), Heikkilä proposes a camera model given by expression (2.11).

$$\begin{bmatrix} \tilde u_i' \\ \tilde v_i' \end{bmatrix} = \begin{bmatrix} D_u\,s_u\,(u_i + \delta u_i^{(r)} + \delta u_i^{(t)}) \\ D_v\,(v_i + \delta v_i^{(r)} + \delta v_i^{(t)}) \end{bmatrix} + \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} \tag{2.11}$$

The distorted image point $[\tilde u_i' \;\; \tilde v_i']^T$ is expressed in homogeneous pixel coordinates. We recognize the radial and tangential distortion terms and the referencing of image points to the principal point (u0, v0). As for the set of original scaling factors a, b and c, we can see that b is taken equal to zero (implying orthogonal pixel axes) and a and c are represented by one scaling factor su along the u pixel axis. Du and Dv indicate the number of pixels per unit of length in u and v pixel direction respectively.
This camera model will be used in the calibration procedure that will be commented on in the next paragraph. The set of introduced distortion coefficients will have to be estimated during the calibration process, together with the intrinsic parameters: focal length f, principal point (u0, v0) and scaling factor su along the u pixel axis. Du and Dv are given camera characteristics that can be obtained from the known image size in pixels, for example 640x480, and the sensor's chip size expressed in units of length.
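Expressed in code, the forward distortion model reads as follows. This Python sketch simply restates equations (2.9)-(2.11) with two radial coefficients; it is an illustration added here, not taken from the thesis software:

```python
import numpy as np

def distort(u, v, k1, k2, p1, p2, su, Du, Dv, u0, v0):
    """Map ideal Euclidean image coordinates (u, v) (in mm) to distorted
    pixel coordinates, following equations (2.9)-(2.11)."""
    r2 = u**2 + v**2                                 # r_i squared
    radial = k1 * r2 + k2 * r2**2                    # eq. (2.9), two terms
    du_t = 2 * p1 * u * v + p2 * (r2 + 2 * u**2)     # eq. (2.10)
    dv_t = p1 * (r2 + 2 * v**2) + 2 * p2 * u * v
    u_pix = Du * su * (u + u * radial + du_t) + u0   # eq. (2.11)
    v_pix = Dv * (v + v * radial + dv_t) + v0
    return u_pix, v_pix
```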

3. Camera calibration method and calibration results

By executing a geometrical camera calibration we can determine the set of camera parameters that describe the mapping between 3D reference coordinates and 2D image coordinates. Various methods for camera calibration can be found in literature. A very accurate method is presented in the paper "A Four-step Camera Calibration Procedure with Implicit Image Correction" [3], written by Janne Heikkilä and Olli Silvén. The method consists of a four-step procedure that uses explicit calibration methods for mapping 3D coordinates to image coordinates and an implicit approach for image correction.
A calibration of our camera system is executed using a MATLAB camera calibration toolbox that is based on the calibration principles introduced in [3]. The mentioned MATLAB toolbox is freely available on the internet [4]. In this paragraph the basic principles of the performed calibration will be briefly exposed. For a thorough explanation of the calibration steps, the original Heikkilä publication can be consulted.

3.1 Calibration principles

In the first calibration step, the basic intrinsic parameters focal length f and principal point (u0, v0) are calculated in a non-iterative, least squares fashion, ignoring the nonlinear radial and tangential pixel distortion. The scaling factor su is assumed to be 1. For a number of N control points we can write down the linear transformation between the homogeneous reference coordinates $\tilde P_w$ of the control points and their homogeneous image pixel coordinates $[\tilde u_i \;\; \tilde v_i \;\; \tilde w_i]^T$, according to equation (2.8):

$$\begin{bmatrix} \tilde u_i \\ \tilde v_i \\ \tilde w_i \end{bmatrix} = M\,\tilde P_w \tag{2.8}$$

The projection matrix M is unknown and contains the intrinsic parameters in an implicit, closed form. Knowing the image and reference coordinates of a selected set of N grid points, we can construct a set of 2xN equations with the 12 elements of M as unknown values [3]. Using constraints proposed in [3], this overdetermined set of equations can be solved for the 12 elements of M. A technique to decompose matrix M into a subset of matrices containing f and (u0, v0) in a direct way is presented. Using the software, the set of N control points is created by indicating manually the four extreme grid corners of the calibration pattern in every picture. The used calibration pattern has the form of a checkerboard with white and black adjacent squares of fixed and equal dimension. In order to reconstruct the position of all grid corners, the square size has to be entered as a function variable.
No iterations are required for this first direct and linear method. However, lens distortion effects cannot be incorporated. Therefore, a nonlinear parameter estimation is introduced in the second calibration step that incorporates lens distortion. In [3], it is suggested to estimate the camera parameters simultaneously, minimizing the residual between the model (distorted pixel coordinates $[\tilde u_i' \;\; \tilde v_i']^T$) and N observations $[U_i \;\; V_i]^T$. The function F to minimize is then expressed as a sum of squared residuals:

$$F = \sum_{i=1}^{N} (U_i - \tilde u_i')^2 + \sum_{i=1}^{N} (V_i - \tilde v_i')^2 \tag{2.12}$$

Since estimation occurs simultaneously, this method involves an iterative algorithm. As initial parameter values, the parameters obtained from the direct linear method are used. In this model we use the projection model presented before in equations (2.10) and (2.11). Execution of the iteration results in 8 intrinsic parameters: f, (u0, v0), k1, k2, p1, p2 and su.
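The first, linear step can be illustrated with a short sketch. The thesis follows the constraints of [3] to solve the overdetermined system; the fragment below uses the common unit-norm SVD solution instead, a standard alternative, so read it as an illustration of the idea rather than as the toolbox's exact method:

```python
import numpy as np

def estimate_projection_matrix(P_w, p_img):
    """Estimate the 12 elements of M (up to scale) from N >= 6 point
    correspondences: each control point contributes two linear equations,
    and the stacked homogeneous system A m = 0 is solved by taking the
    right singular vector of A with the smallest singular value."""
    rows = []
    for (X, Y, Z), (u, v) in zip(P_w, p_img):
        Pt = np.array([X, Y, Z, 1.0])
        rows.append(np.concatenate((Pt, np.zeros(4), -u * Pt)))
        rows.append(np.concatenate((np.zeros(4), Pt, -v * Pt)))
    A = np.vstack(rows)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```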
Perspective projection is generally not a shape preserving transformation. Two and three dimensional objects with a nonzero projection area are distorted if they are not coplanar with the image plane [3]. Heikkilä and Silvén present a third calibration step to compensate for this perspective projection distortion by calculating a distortion formula for the pixel coordinates. The distortion formula and coefficients are elaborated based on considerations for the distortion of circular objects, which tend to be depicted in the image plane as ellipses, depending on the angle and displacement between the object surface and the image plane.
In the fourth step of their calibration procedure, Heikkilä and Silvén present how the distortion of the pixel coordinates can be undone. This is a very important step in the inverse mapping problem that will be presented in paragraph 4 of this chapter. We will explain the principle of undoing the pixel distortion in that paragraph further on.
Applying the calibration procedure provided in the MATLAB toolbox, we use calibration patterns as described before. These patterns are printed and fixed to a flat, planar structure, e.g. a wooden board. The fact that the scheme of calibration points is coplanar introduces a singularity and causes the number of parameters that can be estimated from a single view to be limited [3]. Therefore, multiple views are required in order to solve for all the intrinsic parameters. Typically, a set of 20 pictures of the calibration pattern, in different positions with respect to the camera, is used. A reference coordinate system is fixed to the pattern in each picture, by selecting the four most outer grid corners, as described before. For each of these reference coordinate systems, a rotation matrix R and a translation vector T, relating the reference system to the camera coordinate system, will be calculated during the calibration procedure. As a result of the execution of the calibration functions we get the set of 8 calibration parameters and as many sets of extrinsic camera parameters (matrix R and vector T) as there are pictures and thus reference frames.

3.2 Calibration results

The cameras of our vision system are Axis 205 network cameras. The optical characteristics of these cameras are provided by the fabricant:
- nominal focal length = 4 mm
- effective CCD chip size = 0.25 inch

The nominal focal length is a theoretic value. The effective focal length of each camera will in practice slightly differ from the nominal focal length. It can only be obtained through camera calibration and will also differ from one camera to another. This is due to the unique lens manufacturing of every camera.
The effective CCD chip size is given as the diagonal distance of the rectangular chip and is assumed to be equal for all cameras. We need the value of this characteristic to obtain the conversion rate between pixel units and length units. These conversion rates are determined by first calculating the effective CCD chip size in horizontal and vertical direction and then dividing the number of image pixels in horizontal, respectively vertical, direction by these chip sizes. The resolution of the images worked with in our vision system is 480x640 pixels. Denoting the conversion rate in horizontal direction as Du and the conversion rate in vertical direction as Dv, we obtain for our vision cameras the values according to table 2.1.

Du [pixels/mm]    Dv [pixels/mm]
125.98            125.98

Table 2.1

As can be seen, camera resolutions are chosen in a way that allows us to work with equal conversion rates in horizontal and vertical pixel direction.
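The value 125.98 pixels/mm can be reproduced from the data sheet figures, assuming (an assumption of this calculation, consistent with the equal Du and Dv above) that the chip and the image share the same 4:3 aspect ratio:

```python
import numpy as np

chip_diagonal_mm = 0.25 * 25.4                             # 6.35 mm
pixel_diagonal = np.hypot(640, 480)                        # 800 pixels
chip_width_mm = chip_diagonal_mm * 640 / pixel_diagonal    # 5.08 mm
chip_height_mm = chip_diagonal_mm * 480 / pixel_diagonal   # 3.81 mm
Du = 640 / chip_width_mm                                   # pixels/mm, horizontal
Dv = 480 / chip_height_mm                                  # pixels/mm, vertical
print(round(Du, 2), round(Dv, 2))                          # 125.98 125.98
```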

From the calibration process we obtain two sets of camera parameters. The first set consists of the intrinsic camera parameters: effective focal length f, scaling factor su along the u pixel axis, image center point (u0, v0), tangential distortion coefficients p1 and p2 and radial distortion coefficients k1 and k2. For each one of the three calibrated cameras, this set of intrinsic parameters is indicated in table 2.2. These parameters are stated in conformity with the formulations of Heikkilä. For the reader who is planning to use the MATLAB camera calibration toolbox, we mention that the Heikkilä parameters differ from the output of the toolbox. Conversion conventions can be found on the camera calibration webpage [4].

            Camera 1        Camera 2        Camera 3
f           5.5899 mm       5.574 mm        5.6109 mm
su          1.0001          1.0012          0.9989
(u0, v0)    (354, 203)      (355, 244)      (359, 262)
p1          2.5476x10^-3    2.5554x10^-3    2.5227x10^-3
p2          4.325x10^-5     4.367x10^-5     4.3771x10^-5
k1          4.1428x10^-5    1.1523x10^-4    8.3651x10^-5
k2          3.4789x10^-5    6.2488x10^-5    1.4069x10^-4

Table 2.2

To give the reader an idea of the mentioned pixel coordinate displacements due to lens imperfections, the typical view of pixel distortions is represented in figures 2.3 and 2.4. Figure 2.3 displays the radial distortions. The arrows indicate the magnitude of the radial displacements in pixel units.

Figure 2.3: Visualisation of radial distortion components

Figure 2.4: Visualisation of tangential distortion components

The arrows in figure 2.4 indicate how pixels are deformed due to tangential distortions. The tangential character can be identified from the tangential direction of the arrows in the upper right and the lower left corner of figure 2.4.
In both figures, the displacement of the image center point (u0, v0) away from the theoretic center point (320, 240) is depicted. The calculated and theoretic center points are represented as a circular spot and a cross respectively.

4. Stereo vision methods to reconstruct three dimensional positions

Once the calibration of a camera is performed, the perspective projection of a camera system is fully determined, for the projection matrix M can be constructed using intrinsic and extrinsic camera parameters. However, the projection of object points is not what we intend. What we aim to do is reconstruct three dimensional positions. This problem is denoted as the inverse mapping. We start from known image pixel coordinates, which tend to be distorted due to lens imperfections. In a first step of the inverse mapping, pixel coordinates will have to be undistorted. Next, calculation of the profundity lost in 2D pictures can be realized once we have two or three sets of corrected pixel coordinates of corresponding image points.
To obtain the corresponding pixel sets, an object first has to be detected in a pair or triplet of camera images taken at the same moment by different calibrated cameras. In the next paragraph, we will focus on the identification of objects in images and on methods to determine the correspondence between characteristic pixels in different images.
Not considering the prior step of searching for pixel correspondences, we can separate the inverse mapping of pixel coordinates to reference frame coordinates into two subproblems:
- undoing the distortion of pixel coordinates
- calculation of the profundity

4.1 Inversion of pixel distortion

As can be seen from the projection model (2.11), the expressions for the distorted pixel coordinates $[\tilde u_i' \;\; \tilde v_i']^T$ are fifth order nonlinear polynomials in ui and vi, the Euclidean image coordinates. This implies that there is no explicit analytic solution to the inverse mapping when both radial and tangential distortion components are considered [3]. Heikkilä and Silvén present an implicit method to recover the undistorted pixel coordinates $[\tilde u_i \;\; \tilde v_i]^T$, knowing the distorted coordinates $[\tilde u_i' \;\; \tilde v_i']^T$, starting from the physical camera parameters obtained from the calibration process. With the results of the three calibration steps described in paragraph 3.1, it is possible to solve for the unknown parameters of an inverse model by generating a dense grid of NxN Euclidean image points $(u_i, v_i)$ and calculating the corresponding distorted image coordinates $(\tilde u_i', \tilde v_i')$ by using the camera model (2.11). Subsequently, an implicit inverse camera model is introduced, describing the undistorted image coordinates as a function of the distorted coordinates. The parameters of this inverse model are then solved for iteratively using a least squares technique. Typically, a set of 1000-2000 created grid points, e.g. 40x40, is enough to obtain satisfying results. For an extended explanation of this method, we refer to [3].
In our artificial vision application, pixel coordinates are undistorted using the MATLAB functions invmodel and imcorr provided by Heikkilä. As entrance to invmodel we give the intrinsic parameters that describe the projection model obtained by camera calibration. The function invmodel then returns the parameters of an inverse camera model that can be used to undistort pixel coordinates with the function imcorr. This function allows us to convert a set of N pixel coordinates, entered as an Nx2 vector, which will support us during on-line conversion of pixel coordinates.
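For readers without access to invmodel and imcorr, the need for an iterative inverse can be illustrated with a simple fixed-point scheme on the Euclidean image coordinates. This is not the implicit inverse model of [3], just a minimal alternative sketch under the distortion model (2.9)-(2.10):

```python
import numpy as np

def undistort(u_d, v_d, k1, k2, p1, p2, iterations=10):
    """Approximately invert the distortion of equations (2.9)-(2.10) by
    fixed-point iteration; u_d, v_d are distorted Euclidean image
    coordinates (in mm), and the undistorted pair is returned."""
    u, v = u_d, v_d                      # start from the distorted point
    for _ in range(iterations):
        r2 = u**2 + v**2
        radial = 1.0 + k1 * r2 + k2 * r2**2
        du_t = 2 * p1 * u * v + p2 * (r2 + 2 * u**2)
        dv_t = p1 * (r2 + 2 * v**2) + 2 * p2 * u * v
        u = (u_d - du_t) / radial        # re-estimate the ideal coordinates
        v = (v_d - dv_t) / radial
    return u, v
```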

4.2 Calculation of the profundity

A general case of image projection into an image plane is presented in figure 2.5. The same object point P is projected into the image planes of a left and a right positioned camera. These two camera systems are fully identified by the projection matrices Ml and Mr of the left and right camera respectively. The optical centers of both projection schemes are depicted as Cl and Cr, while the projections of P in both image planes are pl and pr.

Figure 2.5: Object point projection in two image planes

According to the notations of equation (2.8) we can calculate the projections of the object point P in the image planes:

$$\tilde p_l = M_l\,\tilde P \;\Rightarrow\; \begin{bmatrix} \tilde u_l \\ \tilde v_l \\ \tilde w_l \end{bmatrix} = \begin{bmatrix} m_{1l} \\ m_{2l} \\ m_{3l} \end{bmatrix} \tilde P, \qquad \tilde p_r = M_r\,\tilde P \;\Rightarrow\; \begin{bmatrix} \tilde u_r \\ \tilde v_r \\ \tilde w_r \end{bmatrix} = \begin{bmatrix} m_{1r} \\ m_{2r} \\ m_{3r} \end{bmatrix} \tilde P \tag{2.13}$$


where $m_{kl}$ and $m_{kr}$ (k = 1, 2, 3) are the rows of matrices Ml and Mr respectively.
Having the pixel coordinates of pl and pr as given, we can calculate the associated homogeneous coordinates by taking into account a scaling factor λ:

$$(\tilde u_l,\; \tilde v_l,\; \tilde w_l) = (\lambda u_l,\; \lambda v_l,\; \lambda) \tag{2.14}$$

Equation (2.14) shows the homogeneous coordinates of pl. We can now rewrite equation (2.13) for the left projection pl:

$$\lambda u_l = m_{1l}\,\tilde P, \qquad \lambda v_l = m_{2l}\,\tilde P, \qquad \lambda = m_{3l}\,\tilde P \tag{2.15}$$

Substitution of the scaling factor in the first two equations and rearranging terms gives us:

$$m_{3l}\tilde P\,u_l = m_{1l}\tilde P, \quad m_{3l}\tilde P\,v_l = m_{2l}\tilde P \;\;\Longrightarrow\;\; (u_l\,m_{3l} - m_{1l})\,\tilde P = 0, \quad (v_l\,m_{3l} - m_{2l})\,\tilde P = 0 \tag{2.16}$$

For pr we can compose similar equations. Eventually we get:

$$\begin{bmatrix} u_l\,m_{3l} - m_{1l} \\ v_l\,m_{3l} - m_{2l} \\ u_r\,m_{3r} - m_{1r} \\ v_r\,m_{3r} - m_{2r} \end{bmatrix} \tilde P = A\,\tilde P = 0 \tag{2.17}$$

The solution $\tilde P$ of this equation gives us the homogeneous coordinates of the three dimensional object point in the world reference system:

$$\tilde P = \begin{bmatrix} \tilde X \\ \tilde Y \\ \tilde Z \\ \lambda \end{bmatrix}, \qquad X = \frac{\tilde X}{\lambda},\; Y = \frac{\tilde Y}{\lambda},\; Z = \frac{\tilde Z}{\lambda} \tag{2.18}$$

We identify matrix A as a 4x4 matrix. The solution $\tilde P$ of (2.17) is the one that minimizes the squared distance norm $\|A\,\tilde P\|^2$. The solution to this minimization problem can be identified as the unit norm eigenvector of the matrix $A^T A$ that corresponds to its smallest eigenvalue [5].
Dividing the first three coordinates by the scaling factor λ, we obtain the Euclidean 3D coordinates of the object point P:

$$P = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \tag{2.19}$$
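The reconstruction of equations (2.17)-(2.19) translates directly into code. The following Python sketch builds the 4x4 matrix A from two projection matrices and one corresponding (undistorted) pixel pair, and extracts the eigenvector of A^T A with the smallest eigenvalue:

```python
import numpy as np

def triangulate(M_l, M_r, ul, vl, ur, vr):
    """Reconstruct the 3D point P from corresponding pixel coordinates
    (ul, vl) and (ur, vr), given the 3x4 projection matrices M_l and M_r,
    following equations (2.17)-(2.19)."""
    A = np.vstack((ul * M_l[2] - M_l[0],         # rows of eq. (2.17)
                   vl * M_l[2] - M_l[1],
                   ur * M_r[2] - M_r[0],
                   vr * M_r[2] - M_r[1]))
    eigvals, eigvecs = np.linalg.eigh(A.T @ A)   # ascending eigenvalues
    P_h = eigvecs[:, 0]                          # unit-norm minimizer of ||A P||^2
    return P_h[:3] / P_h[3]                      # divide by the scaling factor
```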

5. Pixel correspondence and object identification methods

If a desk corner is detected in one image at location (u1, v1) in pixel coordinates, we need to find out which pixel coordinates (u2, v2) in a second image correspond to the same desk corner. We denominate this problem the search for pixel correspondences. Before any 3D positional reconstruction can be performed, the correspondence of characteristic image points has to be searched for in all images involved in the reconstruction process. Typically, geometrical restrictions in the considered image planes will be used. We will focus on epipolar lines, for they form the key to the solution of many correspondence problems. We will introduce the concept of epipolar lines and illustrate their power through the introduction of epipolar line equations in pixel coordinates. A trinocular vision algorithm will be introduced to solve the pixel correspondence problem.
Often used in combination with epipolar lines, specific detection methods are employed to identify objects that have certain characteristics. E.g. an object that is constituted of clearly separated surfaces will be easy to detect using edge detection methods. Because separated surfaces are touched by the light in a different way, regions with different color intensity will be displayed in the object's image. Gradient operators are used to detect the transition between regions with different pixel color intensity. The transition zones are denoted as the image edges and can form the contours of an object to detect.
More advanced algorithms for characteristics detection, such as corner detection, exist and can be found in image processing form as MATLAB functions on the support website of MATLAB [6].
At the end of this paragraph it will be explained how methods based on epipolar lines and methods for image characteristics detection can be used in an integrated way to solve the general object localization problem.

5.1 Epipolar lines

The idea of epipolar lines can be explained looking at figure 2.6. The projection of an object point P1 in the left image plane is denoted as pl. The projection line passes through the optical center Cl. All points in space that lie on this projection line Cl-pl are projected onto the same image point pl. P2 is an example of such a point in space. In the right image plane the projection of P1 is denoted as pr1. The point P2, which had the same projection in the left image plane as the point P1, is in the right plane however projected onto the image point pr2, which differs from pr1.
The epipolar line associated to the image point pl is now introduced as the projection in the right image of the set of points in space that are projected through the optical center Cl of the left image onto the image point pl. Analogically, the epipolar line associated to the image point pr can be constructed. It can be seen in figure 2.6 that the construction of the two epipolar lines associated to the point in space P1 is fully symmetric.

Figure 2.6: Epipolar lines principle

To the point P1 we can associate a plane, denominated the epipolar plane, which is formed by pl and the optical centers Cl and Cr. The epipolar plane intersects with the image planes along both epipolar lines. All other points in space have associated epipolar planes that also contain the line Cl-Cr. This causes all epipolar lines of an image set to intersect in one and the same point in each plane. These special points are denominated epipoles. The epipoles drawn in figure 2.6 are denoted as El and Er for left and right image plane respectively.
The geometric restriction based on epipolar geometry can now be stated in the following sentence:

The image point pr in the right image that corresponds to the point pl in the left image has to be situated on the epipolar line associated to pl.

In order to use this restriction in the design of a vision method, we will have to construct the equations of epipolar lines. This equation will be introduced in the next paragraph. After that, an algorithm based on epipolar lines will be introduced to solve pixel correspondences in a trinocular vision algorithm.

5.1.1 Pixel equation of epipolar lines

A point P in 3D space can be represented with respect to each of two camera coordinate systems. The origins of the two image planes are the optical centers of the camera systems, denoted in figure 2.7 as Cl and Cr. The vectors that connect P with Cl and Cr are denoted as Pl and Pr respectively. One camera coordinate system can be transformed into the other by applying a rotation and a translation. In the calibration procedure, we obtained the extrinsic parameters of each camera system. Those extrinsic parameters described the relation between each camera coordinate system and a chosen world reference system. Since we know how to transform each camera frame into the reference frame, we can also transform one camera frame into the other.

Figure 2.7: Equation of epipolar lines

Let us denominate the rotation matrix of this transformation as Rc and the translation vector as Tc = Cr - Cl. This way, the relation between Pr and Pl can be written as:

$$P_r = R_c\,(P_l - T_c) \tag{2.20}$$

Applying perspective projection to Pl and Pr we obtain the image coordinates (in mm) of the projected image points pl and pr:

$$p_l = \frac{f_l}{Z_l}\,P_l, \qquad p_r = \frac{f_r}{Z_r}\,P_r \tag{2.21}$$

Let us consider the vectors Pl, Tc and Pl - Tc in the epipolar plane. Since the vectors Tc and Pl are coplanar, the vector product Tc x Pl results in a vector perpendicular to the epipolar plane. Therefore, the scalar product of Pl - Tc and Tc x Pl is zero. Taking into account equation (2.20) we obtain:

$$(R_c^T P_r) \cdot (T_c \times P_l) = 0 \tag{2.22}$$

We now introduce a matrix S so that $T_c \times P_l = S(T_c)\,P_l$:

$$S(T_c) = \begin{bmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{bmatrix}, \quad \text{with } T_c = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix} \tag{2.23}$$

Equation (2.22) can now be written as follows:

$$P_r^T\,R_c\,S(T_c)\,P_l = 0 \tag{2.24}$$

The points Pr and Pl in (2.24) are expressed in the camera coordinate system. In equation (2.5) of paragraph 2.3, we composed an affine transformation between the camera reference frame and the two dimensional image plane, using homogeneous pixel coordinates:

$$\tilde u = K \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}, \quad \text{where } P_c = \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} \tag{2.5}$$

Pc is a point in space expressed in camera coordinates (in mm) and K represents the matrix containing the intrinsic camera calibration parameters. Using the inverse transformation we can write the points Pr and Pl of equation (2.24) as follows:

$$P_l = K_l^{-1}\,\tilde p_l, \qquad P_r = K_r^{-1}\,\tilde p_r \tag{2.25}$$

The respective pixel coordinates of Pl and Pr are denominated $\tilde p_l$ and $\tilde p_r$. The calibration matrices of the left and right camera are denominated Kl and Kr respectively. Equation (2.24) can now be written as:

$$\tilde p_r^T\,(K_r^{-1})^T\,R_c\,S(T_c)\,K_l^{-1}\,\tilde p_l = 0 \quad\Longrightarrow\quad \tilde p_r^T\,F_1\,\tilde p_l = 0 \tag{2.26}$$

$$\text{where } F_1 = (K_r^{-1})^T\,R_c\,S(T_c)\,K_l^{-1} \tag{2.27}$$

The matrix F1 is called a fundamental matrix. In equation (2.26), $F_1\,\tilde p_l$ can be considered as the projection line in the right image plane that passes through pr and the epipole Er. If we write the projection line $F_1\,\tilde p_l$ as $\tilde u_r$, equation (2.26) becomes:

$$\tilde p_r^T\,\tilde u_r = 0 \tag{2.28}$$

This is the equation of the epipolar line in the right image plane, associated to the projection pl of a point P in space in the left image plane. Analogically, the equation of the epipolar line in the left image associated to the projection pr in the right image can be expressed as:

$$\tilde p_l^T\,\tilde u_l = 0 \tag{2.29}$$

$$\text{where } \tilde u_l = F_2\,\tilde p_r \text{ and } F_2 = F_1^T = (K_l^{-1})^T\,S(T_c)^T\,R_c^T\,K_r^{-1} \tag{2.30}$$

The fundamental matrices F1 and F2 define the relation between an image point, in left and right image respectively, and its associated epipolar line in the conjugate image, expressed in pixel coordinates. The construction of the epipolar line equation was taken from the Master's Thesis "Reconstrucción de piezas en 3D mediante técnicas basadas en visión estereoscópica" by Carlos Torre Ferrero [5].
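A compact Python sketch of equations (2.23) and (2.27); given a homogeneous pixel in the left image, the returned matrix yields its epipolar line in the right image as in equation (2.28):

```python
import numpy as np

def skew(t):
    """The matrix S(Tc) of equation (2.23): skew(t) @ p equals np.cross(t, p)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_matrix(K_l, K_r, R_c, T_c):
    """F1 of equation (2.27); the epipolar line of a homogeneous left-image
    pixel p_l in the right image is the 3-vector F1 @ p_l, whose components
    (a, b, c) describe the line a*u + b*v + c = 0."""
    return np.linalg.inv(K_r).T @ R_c @ skew(T_c) @ np.linalg.inv(K_l)
```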

5.1.2 Trinocular stereo algorithm based on epipolar lines

Applying the epipolar restriction to a pair of images only restricts the candidate corresponding pixels in the conjugate image to a set of points along a line. Adding the image of a third camera view will make it possible to solve the pixel correspondence problem in a unique way. This will now be explained using figure 2.8. Suppose we used an identification method (see paragraph 5.2 further on) to identify in every camera view a set of characteristic pixels. These characteristic pixels can e.g. be the corners of a desk. Due to imperfections in the corner detection method, pixels that actually don't belong to the set of corner pixels might have been detected. The goal of this algorithm is to obtain, for every true corner pixel in the left image, the location of the corresponding two corner pixels in the central and right image plane.
The designed method will now be explained for the pixel pl that lies in the left image plane Il, and that is the projection of the object point P through the optical center Cl. The actual corresponding projections in the right and central image planes Ir and Ic, with optical centers Cr and Cc, are denoted pr and pc respectively.

Figure 2.8: Trinocular correspondence algorithm based on epipolar lines

Knowing the intrinsic and extrinsic parameters of the camera triplet, the epipolar lines corresponding to the projection pl of P in the left image can be constructed in the right and central image planes. These epipolar lines are denoted Rr and Rc for the right and central image plane respectively. In the right image plane we now consider the pixels that have been previously detected as characteristic ones (e.g. possible desk corner pixels) and select those that lie on the epipolar line Rr or sufficiently close to it. We can call these pixels the candidate pixels of Ir; they are denoted in figure 2.8 as Pri, i = 1...m.
In the central image plane we can now construct the epipolar lines that correspond to the pixels Pri. This set of epipolar lines is denoted as {Rci, i = 1...m}. The correct pixel correspondence is now found by intersecting Rc with the epipolar lines of the set {Rci} and selecting the central image pixel that lies on the intersection of Rc and a line Rcj in the set {Rci}. Once this pixel is detected, the unique corresponding pixel triplet {pl, pc, pr} is found.
In practice, corresponding pixels will never lie perfectly on the intersection of the epipolar lines constructed in the third image. Therefore, we have to define what pixel distance can be considered sufficiently small to conclude a pixel correspondence. Furthermore, extra attention has to be paid to the noise effect in images, which tends to promote the detection of untrue characteristic pixels. In the ideal case, no pixel correspondence will be detected for an untrue characteristic pixel, because it hasn't been detected in the other images and its epipolar line doesn't come close to one of the true or untrue characteristic pixels in the other images. If a correspondence has nevertheless been originated by one or more untrue characteristic pixels, a correspondent pixel triplet will result at the end of the algorithm. However, it will be possible to discard the untrue correspondence after reconstructing its 3D location, for most probably this 3D position will lie far from the 3D workspace in which the object was supposed to be detected.
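The selection logic can be summarized in a short sketch. All names below (the fundamental matrices F_lr, F_lc, F_rc mapping a pixel of one image to its epipolar line in another, and the tolerance tol) are hypothetical helpers introduced for illustration; instead of literally intersecting Rc and Rcj, the sketch equivalently accepts the central candidate that lies close to both lines:

```python
import numpy as np

def line_distance(pixel, line):
    """Perpendicular pixel distance from a homogeneous pixel [u, v, 1]
    to a line [a, b, c] with equation a*u + b*v + c = 0."""
    return abs(line @ pixel) / np.hypot(line[0], line[1])

def trinocular_match(p_l, cand_r, cand_c, F_lr, F_lc, F_rc, tol=1.5):
    """For a left-image pixel p_l, keep the right candidates near its
    epipolar line Rr, then accept the central candidate close to both
    Rc (from p_l) and Rcj (from the right candidate)."""
    R_r, R_c = F_lr @ p_l, F_lc @ p_l
    for p_r in (p for p in cand_r if line_distance(p, R_r) < tol):
        R_cj = F_rc @ p_r
        for p_c in cand_c:
            if line_distance(p_c, R_c) < tol and line_distance(p_c, R_cj) < tol:
                return p_l, p_c, p_r     # unique corresponding triplet
    return None                          # no correspondence concluded
```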

5.2Edgedetection

Edgedetectionisapowerfulmethodthatcanbeusedtoidentifyincameraimagesthose
objects that have surfaces that are clearly separated in color, orientation or in both color
andorientation.Lighteffectsondifferentlyorientedsurfacesorcolordifferenceswillcause
adjacent image regions to possess significant differences in gray scale level. The image
edges are defined as the transition zones between two regions with significantly distinct
grayscaleproperties.
Becausegrayscalediscontinuitieshavetobedetected,thecommonlyusededgedetection
operatorsarederivativeoperators.Typically,twoderivativeoperatorsappearinliterature.
ThefirstoneistheGradientoperatorandisbasedonthefirstderivative.Itlooksforthe
maximumgrayscaletransitionsinanimage.ThesecondoperatoristheLaplaceoperator
andisbasedonthesecondderivative.Itlooksforzerocrossingsinthegrayscaleimage.In
this introduction on edge detection we will focus on the Gradient operator, for this
operator forms the basis of the Canny operator applied on discrete images. The Canny
operatoristheoperatorwewillapplyinourvisionsystemtodetecttheedgesofobjects.
ForathoroughexplanationoftheLaplaceoperatorforedgedetection,werefertoliterature
[7].
In the detection of edges an important disturbance factor is image noise. This noise can be caused by the subsequent processes of image capturing, digitalization of the image and its subsequent transmission [7]. Filter operations on the discrete images are applied to reduce this noise. We will introduce a filter based on the Gaussian function that will lead to a specific edge detection operator: the derivative of the Gaussian function, the so-called DroG operator.
An edge detection operator can be considered to perform well when it gives a good edge detection and localization and it doesn't result in multiple responses. A good detection means that the chance to detect an edge is high where there really is an edge and low where there are no edges. A good localization implies that the detected edge coincides with the real edge. Multiple responses are produced when various pixels indicate one and the same edge.
We will now resume the principles of edge detection operators for discrete images based on the Gradient operator. An adequate operator that includes a filter will be introduced and finally the principles of the Canny operator will be exposed.

5.2.1 Gradient operators
For a continuous two-dimensional function f(x,y) the gradient is a vector that has as its first coordinate the partial derivative of f(x,y) with respect to x and as its second coordinate the partial derivative of f(x,y) with respect to y. This is symbolically presented in equation (2.31):

    ∇f(x,y) = [ ∂f(x,y)/∂x ; ∂f(x,y)/∂y ] = [ fx(x,y) ; fy(x,y) ]    (2.31)

The gradient ∇f indicates the direction of maximum variation of f(x,y) and its modulus is proportional to this variation.
We can think of images as two-dimensional discrete functions. Every pixel in an image F is characterized by a row and a column coordinate i, respectively j. One general image pixel is thus symbolically presented as F(i,j). For discrete images, a lot of gradient operator approximations can be introduced. They are all based on the difference between gray scale levels of adjacent or separated pixels. Two examples, one of a row gradient and one of a column gradient of a discrete image F, are given by equations (2.31a) and (2.31b):

    fx(x,y) ≈ GR(i,j) = (F(i,j) − F(i,j−1)) / T    (2.31a)

    fy(x,y) ≈ GC(i,j) = (F(i+1,j) − F(i−1,j)) / (2T)    (2.31b)


Gradient and filter operations on images are often represented using convolutions. Before introducing the convolution operations on discrete images, we recall the definition of a convolution product between a discrete signal f and a filter function h:

    g(i,j) = f ∗ h = Σm Σn f(i−m, j−n) h(m,n)    (2.32)

For image processing, the function h is often thought of as a mask function. This mask has a certain size, e.g. 3x3 pixels. The result of a convolution operation as in (2.32) is a gray scale level that is the weighted sum of the gray scale levels of pixel (i,j) and its neighbor pixels. Which neighbor pixels are incorporated in the sum is determined by the size of the mask. The weight of each pixel in the sum is determined by the values of the mask h(m,n). The gray scale that results is often scaled by the number of pixels involved in the sum and then stored in the pixel with index (i,j).
Returning to expressions (2.31a) and (2.31b), we can now write row and column gradients as the convolutions of the image F with mask functions HR and HC:

    GR(i,j) = F(i,j) ∗ HR(i,j)
    GC(i,j) = F(i,j) ∗ HC(i,j)    (2.33)

Correspondent to signal processing theory, the masks HR and HC can be seen as the impulse responses of the row gradient and the column gradient respectively. Convolution masks can be represented in a graphical form to clarify the idea of the weighted sum operation. Figure 2.9 visualizes the generic forms of two 3x3 masks for the calculation of row and column gradients.

Figure 2.9: Structure of 3x3 gradient convolution masks HR(i,j) and HC(i,j) (each mask carries a scale factor 1/(2+K))


Different values for the parameter K in the masks of figure 2.9 can be used. Common values are K = 1, K = 2 and K = √2. We refer to literature for more details about these gradient operators.
Once the row and column operators have been calculated for the position (i,j) in the considered image, the gradient magnitude |G(i,j)| and direction θ(i,j) can be calculated as in equation (2.34):

    |G(i,j)| = √(GR(i,j)² + GC(i,j)²) ≈ |GR(i,j)| + |GC(i,j)|
    θ(i,j) = atan( GC(i,j) / GR(i,j) )    (2.34)

The approximation in the formula of |G(i,j)| has been considered for computational simplicity.
A decision criterion to decide whether or not a pixel belongs to an image edge can be stated as in figure 2.10. An important parameter in the decision criterion is the threshold value. A too low threshold value can result in the detection of image fluctuations that are due to image noise and therefore gives multiple responses in the edge detection. A too high threshold value can cause the loss of edge information in the image.

Figure 2.10: Edge detection decision criterion (the image F(i,j) is convolved with HR(i,j) and HC(i,j); if |G(i,j)| = |GR(i,j)| + |GC(i,j)| exceeds the threshold in (i0,j0), then (i0,j0) is an edge pixel, otherwise it is not)
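
A minimal MATLAB sketch of this criterion is given below, assuming a gray scale image F in double format; the choice K = 2 and the threshold value are illustrative assumptions to be tuned experimentally.

    % Sketch of the decision criterion of figure 2.10.
    % F is assumed to be a gray scale image stored as a double matrix.
    K  = 2;
    HR = [-1 0 1; -K 0 K; -1 0 1] / (2 + K);   % row gradient mask
    HC = HR';                                   % column gradient mask

    GR = conv2(F, HR, 'same');                  % row gradient image
    GC = conv2(F, HC, 'same');                  % column gradient image

    G     = abs(GR) + abs(GC);                  % magnitude approximation of (2.34)
    theta = atan2(GC, GR);                      % gradient direction

    threshold = 0.2 * max(G(:));                % assumed threshold value
    edges = G > threshold;                      % logical map of edge pixels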

Gradient operators of the types introduced here are extremely sensitive to image noise. A commonly used procedure to lower the influence of image noise on the gradient operator is softening the image before processing it. An appropriate filter operator to perform this operation will be introduced in the next paragraph.


5.2.2 DroG operator
An operator that efficiently combines image softening operations and gradient operations is the Derivative of the Gaussian function (DroG). The standard deviation of the Gaussian function can be adjusted to control the level of desired filtering.
A two-dimensional Gaussian function with equal standard deviation σ in both coordinates is defined as in equation (2.35):

    g(x,y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)) = g(x) g(y)    (2.35)

Taking into account the linearity of gradient and convolution operators, the combined filter and gradient operation on a two-dimensional function f(x,y) can be written as:

    ∇[f(x,y) ∗ g(x,y)] = f(x,y) ∗ ∇g(x,y) = f(x,y) ∗ DroG(x,y)    (2.36)

The operator DroG(x,y) is a vector that can be written as:

    DroG(x,y) = ∇(g(x) g(y)) = [ g(y) ∂g(x)/∂x ; g(x) ∂g(y)/∂y ]
              = [ −(x/σ²) g(x) g(y) ; −(y/σ²) g(x) g(y) ]    (2.37)

To be able to use the DroG operator on images, the components x and y of the continuous DroG function have to be discretized. To perform a discretization of DroG(x,y) we take into account the chosen standard deviation σ and we construct two masks DroGR(i,j) and DroGC(i,j) of predetermined size W:

    W ≥ 3c  where  c = 2√2 σ    (2.39)

The parameter c is a measure for the width of the bell of the Gaussian function.
The first mask DroGR(i,j) will correspond to the first element of the DroG vector and can be seen as a filtering row gradient operator. The second mask DroGC(i,j) is then a column operator formed by evaluating the second element of the DroG vector.
Operating both discretized components of the DroG operator on a discrete image F(i,j), we obtain a filtered row and column gradient FR, respectively FC:

    FR(i,j) = F(i,j) ∗ DroGR(i,j)
    FC(i,j) = F(i,j) ∗ DroGC(i,j)    (2.40)


Note that, although we perform a combined filter and derivative operation on the discrete image, we only have to execute one convolution operation per mask to obtain the gradient images FR and FC.
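
A compact MATLAB sketch of this construction follows; the value of sigma is illustrative, and F is again an assumed gray scale image in double format.

    % Sketch: build discretized DroG masks for a chosen sigma.
    sigma = 1.0;
    W = ceil(3 * 2 * sqrt(2) * sigma);          % W >= 3c with c = 2*sqrt(2)*sigma, cf. (2.39)
    [x, y] = meshgrid(-W:W, -W:W);

    g = exp(-(x.^2 + y.^2) / (2 * sigma^2)) / (2 * pi * sigma^2);
    DroGR = -(x / sigma^2) .* g;                % first component of (2.37)
    DroGC = -(y / sigma^2) .* g;                % second component of (2.37)

    FR = conv2(F, DroGR, 'same');               % filtered row gradient, cf. (2.40)
    FC = conv2(F, DroGC, 'same');               % filtered column gradient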

5.2.3 Canny operator
In this paragraph we will comment on the principles of the Canny edge detection operator, for it will be used in the design of our vision system. The Canny edge detector is based on an analytical optimization procedure and can briefly be described as follows.
Suppose we want to detect edges in a one-dimensional function f(x). Edges will be detected in f(x) by searching for local maxima in the function's gradient. This gradient is obtained through the convolution product operation between f(x) and an antisymmetric impulse response function h(x) that holds non-zero values in an interval [−W, +W]. Three analytical criteria are introduced to determine the function h(x). A first criterion is defined based upon the need for good edge detection and maximizes the amplitude of the signal-to-noise relation. A second criterion is introduced to assure correct localization of edges and the third criterion is defined to avoid multiple responses. For more details about the used optimization functions and procedure, the reader is referred to literature [7].
No analytical solution to the problem stated by Canny can be derived, due to the complexity of the minimization procedure. However, for digital two-dimensional and discrete images, the h(x) operator proposed by Canny can be approximated by the derivative of the Gaussian function in the direction perpendicular to the edge, which corresponds to the known DroG operator.
According to the philosophy of figure 2.10, a decision criterion for the detection of edge pixels can also be stated for the Canny operator. The steps to undertake are:

1. Forming of a DroG(i,j) operator with a desired standard deviation σ. This consists in the calculation of two convolution masks DroGR(i,j) and DroGC(i,j) of adequate size W;
2. Convolution of every image pixel F(i,j) with each one of the masks, obtaining image row and column gradients FR(i,j) and FC(i,j);
3. Calculation of gradient magnitude M and direction θ for every pixel (i,j);
4. In the gradient's direction, elimination of those pixels that don't have a magnitude that can be considered a local maximum;
5. Detection of edge pixels based on a hysteresis function with threshold values T1 and T2 (T2 > T1). A pixel belongs to an edge when the magnitude M of its gradient exceeds T2. When M doesn't exceed T2, the considered pixel still belongs to the edge as long as M > T1 and one of its neighbor pixels has an M exceeding T2.
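
These steps are implemented by the edge function of the MATLAB Image Processing Toolbox, as sketched below; the file name, the hysteresis thresholds and sigma are illustrative placeholders, not our tuned values.

    % Sketch: Canny edge detection with the Image Processing Toolbox.
    I  = rgb2gray(imread('frame.jpg'));      % 'frame.jpg' is a placeholder file name
    BW = edge(I, 'canny', [0.1 0.3], 1.0);   % hysteresis thresholds [T1 T2] and sigma
    imshow(BW);                              % logical edge map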

5.3 Object identification algorithm
In the previous paragraph we obtained the equations of epipolar lines, which have proven to be a strong tool to determine pixel correspondences. Edge detection methods have been introduced and their use in image edge detection has been commented on. Both techniques will now be applied in the construction of an object identification algorithm.
The identification procedure is case specific for our robot active security application. It can therefore be designed with a pair of restrictions that will make object identification easier. A first restriction is proposed as the restriction of the search area in each image to the area that represents the actual workspace of the robot. By actual workspace, we mean the overall area in which the robot will be programmed to move. Following this criterion, image zones that depict e.g. the part of space showing the robot's body can be discarded. Another way of restricting the search area in the images can consist of discarding image areas that are not visible in each one of the three camera views. Efficient restriction of the full image size of 480x640 pixels to smaller sizes can considerably reduce calculation times. This can be understood by noting that the picture zone around the robot's workspace often contains a lot of edge information (robot body, security fence, walls, ...) and that edge detection is a time consuming process.
A second restriction is stated as the prior knowledge of the object we want to detect. The more information we have about this object, the more restrictions we can implement in our detection algorithm. As our application is a prototype of an active security system, we want to detect unexpected obstacles in the robot's workspace. This unexpected obstacle can e.g. be a human operator. To develop our experiments in a safe way, we work with non-living obstacles. As working obstacle, we have chosen a rectangular parallelepiped, composed of foam material and of dimension 0.80 x 0.50 x 0.50 m. Because all surfaces of this obstacle are of rectangular form, some specific identification methods can be applied. Cube edges can be detected using a Canny operator. More advanced techniques can then be applied to fill up the gaps in the reconstructed object contours. Continuing from the reconstructed object contours, we can perform two identification techniques:


1. Identification of squared object sides
2. Identification of object corners

The first technique consists of calculating the surface surrounded by an identified contour. If this area has a specific relation to the contour's perimeter, the contour can be considered a square. For a square, this area-perimeter based criterion can be stated as indicated in table 2.3. A lower and an upper threshold have to be set to account for e.g. shadow effects that can cause the real object shapes to appear slightly deformed, which results in deviations of the contour's area and perimeter.

Area    Perimeter    Criterion
L²      4L           14 < Perimeter² / Area < 18

Table 2.3: Criterion for squared contour detection
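
A minimal sketch of this criterion in MATLAB is given below, assuming a binary image BW of filled contours; the property names are those of the Image Processing Toolbox function regionprops, and note that the ratio equals 16 for an ideal square.

    % Sketch of the area-perimeter criterion of table 2.3.
    stats = regionprops(BW, 'Area', 'Perimeter');
    isSquare = false(1, numel(stats));
    for k = 1:numel(stats)
        ratio = stats(k).Perimeter^2 / stats(k).Area;   % 16 for an ideal square
        isSquare(k) = ratio > 14 && ratio < 18;
    end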

Identification of squared object sides can be used to decide quickly whether an object has entered the robot's workspace. Once the object is signalled, the second identification method can be activated.
In this second identification method, the curvature of identified contours along their lengths is computed. Local maxima of the curvature are considered corner candidates. After discarding rounded corners and corners due to boundary noise and details, the true image corners remain. Because of the computational complexity of this method, we don't discuss it in detail. A fully elaborated MATLAB function, composed by He Xiaochen and freely distributed on the support website of MATLAB, is used to detect object corners in our vision system.

6. A real-time camera implementation
The Axis 205 cameras used in our vision system are network cameras. Each camera has its own IP address and is connected as a device to the local area network of the TEISA laboratory. Entering the IP address of a camera in a web browser allows us to see the registered camera view.
Fast access to the network cameras is of utmost importance. To obtain correct pixel correspondences, we need a picture frame from each one of the three cameras, ideally taken at the very same moment. When registering moving objects, even the smallest time interval between grabbing the three picture frames will lead to incorrect pixel correspondences and therefore incorrect 3D position calculations.
In chapter 4 on active security, we will explain how an Ethernet switch is installed to connect the robot, the three cameras and a pc in an isolated network in which all collision of data packages is avoided. The speed at which devices can be accessed hereby increases.
Because camera images are processed in MATLAB, the process that has to be optimized in time consumption is the storage of the picture matrix of dimension 480x640x3 to the MATLAB workspace. The MATLAB function imread allows us to save a picture matrix to the workspace by giving the camera's IP address as an argument. However, internally in the function imread, each time a function call is made, a connection between the pc and the camera is set up, and after the picture has been saved, this connection is closed. Because these processes are very time consuming, even with a Fast Ethernet at 100 Mbps, using the imread function to save the image of a network camera to the MATLAB workspace sometimes takes up to 0.5 seconds.
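
For reference, this slow approach looks as follows; the IP address and URL path are placeholders, not the actual addresses of our cameras.

    % Slow approach: each call opens and closes an HTTP connection.
    img = imread('http://192.168.0.90/jpg/image.jpg');   % up to ~0.5 s per call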
From this observation we understood that the key to faster picture storage is keeping the connection between pc and camera open for as long as we want new images to be processed in MATLAB. Thanks to the efforts of Carlos Torre we found a way to achieve faster picture storage, by using the AXIS Camera Control software, developed by Axis. The ActiveX component of this software makes it possible to view Motion JPEG video streams from an Axis Network Video product directly in Microsoft Development Tools.

6.1 Principles of camera image processing
The Graphical User Interfaces (GUIs) of MATLAB support ActiveX components. A picture in a GUI can be declared as an ActiveX component and the URL property of this element can be set to the camera's IP address. Upon creation of a MATLAB GUI, an associated m-file is also created. When this m-file is invoked in the command window, the video streams of the ActiveX components are automatically initiated and the images in the GUI's figure are continuously refreshed. For more details, we refer to the MATLAB help files on the GUI functions OpeningFcn and OnNewImage. The AXIS Camera Control software supports a number of operations on the JPEG images in the video stream. The command GetBMP([],[]) is used to perform the critical action of saving the picture matrix to the MATLAB workspace. However, the GetBMP function saves the image as one raw vector that contains a header followed by the information that describes the image in the red-green-blue image space. A reshape operation has to be performed to obtain the wanted matrix of dimension 480x640x3.
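
A sketch of such a reshape step is given below; the header length, the channel interleaving and the row order of the raw vector returned by GetBMP are all assumptions here, to be verified against the actual byte layout.

    % Sketch: raw vector -> 480x640x3 picture matrix.
    headerLen = 54;                              % assumed header length [bytes]
    rawPixels = raw(headerLen+1:end);            % strip header from raw vector
    img = reshape(rawPixels, [3, 640, 480]);     % assumed interleaved RGB layout
    img = permute(img, [3 2 1]);                 % reorder to 480x640x3
    % depending on the actual format, a vertical flip may also be needed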


In the m-file associated to the GUI, named CamAxis.m and included in Appendix A, the function activex_OnNewImage is triggered every time a new JPEG frame is generated in the video stream. When this happens, the reshape operation is performed and the generated picture matrix can be stored to the MATLAB workspace. We refer to Appendix A for programming details of this method. Using the ActiveX components, subsequent picture images are stored to the MATLAB workspace in times never exceeding 80 milliseconds. If smaller times are needed in future projects of the TEISA research group, more robust cameras that are equipped with frame grabbers can always be installed.

6.2 Principles of the real-time detection system
Based on the principles of the previous paragraph, a MATLAB GUI CamAxis was designed that contains an ActiveX component for each camera. The goal of the GUI is to watch the robot's workspace for obstacles of rectangular form.
Every time a new image is launched by the video stream, the instructions in the OnNewImage functions are executed. However, as long as no quiet object has been detected in the workspace, only the image of one camera is taken out of the video stream, subsequently reshaped and processed. The camera image that is registered is the one of GICcam 3, which is the camera that has the central view of the workspace. After storage to the workspace, this image is processed. The detection action is only performed on one image to save time. Two operations are performed on the image:

- Detection of an object of rectangular form, following the criterion of section 5.3;
- Detection of image motion with respect to the previous image.

The very same moment that a moving obstacle is detected for the first time, a detection signal will be sent to the robot controller. This detection signal is sent using the socket messaging communication system that will be introduced in chapter 4. In the OpeningFcn function call of the GUI CamAxis.m we connect the pc to the robot controller that is waiting to be connected. Once the object has been detected, the detection signal is sent over the opened socket connection. The socket communication functions used to perform these operations are msconnect, to establish the socket connection, and mssend, to send the detection signal. For programming details, we refer to chapter 4 and Appendix A.
To determine the 3D position of the obstacle, this obstacle needs to be in a still position. Therefore, we detect whether the obstacle is moving by using a simple criterion based on pixel differences in two subsequent images. Once the obstacle is found quiet in image 3, the functions to grab, reshape and store the images of cameras GICcam 1 and GICcam 3 are triggered. To realize this, we make use of Boolean flags. The implementation of the detection GUI CamAxis can be schematically described as follows:

1. Establish a socket connection to the robot controller;
2. Start all three video streams;
3. Grab image n3 out of the video stream of GICcam 3;
4. Reshape and store the image matrix n3;
5. Detect motion in image n3 (see the sketch after this list);
6. Detect an object of rectangular form in image n3;
   a. If no object is detected, return to step 2;
   b. If an object is detected for the first time, either moving or quiet, send a detection signal over the socket connection to the robot controller;
   c. If an object in motion is detected, return to step 2;
   d. If an object is detected and no motion is registered, do:
      i. Grab images n1 and n2 of GICcam 1 and GICcam 3;
      ii. Reshape and store image matrices n1 and n2;
      iii. Return the 3 stored image matrices as function outputs, together with the created socket variable for subsequent communication with the robot controller.
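
The motion test of step 5 can be sketched in MATLAB as follows, assuming two subsequent 480x640x3 frames imgOld and imgNew of GICcam 3; both threshold values are assumptions to be tuned on real images.

    % Sketch of the simple motion criterion based on pixel differences.
    D = abs(double(rgb2gray(imgNew)) - double(rgb2gray(imgOld)));
    movingPixels = sum(D(:) > 25);          % pixels with a noticeable change
    motionDetected = movingPixels > 500;    % assumed pixel-count threshold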

Once the GUI has returned the images that contain the obstacle, the vision methods developed earlier are used to obtain pixel correspondences and to calculate the object's 3D position. To conclude this section on the design of a real-time detection system, we resume the previously seen vision methods in the form of an algorithm.
Prior to execution of the detection GUI and the vision algorithm, the calibration parameters of all cameras are loaded. Calibration matrices Ki, projection matrices Mi and fundamental matrices Fi are constructed for camera i = 1, 2, 3. Starting with the images returned by the detection GUI, the following steps are undertaken:


1. Application of a corner detection function to detect corner candidates in all 3 images;
2. For every assumed corner pixel p1i in image n1, execution of the following steps:
   a. Construction of the associated epipolar lines R12i and R13i in image n2 and image n3;
   b. Search for corner pixels {p2i} in image n2 that lie close to the epipolar line R12i;
   c. Construction in image n3 of the epipolar lines {R23i} that correspond to {p2i};
   d. Calculation of intersections between R13i and {R23i};
   e. Detection of corner pixels {p3i} in image n3 that lie sufficiently close to the calculated intersections;
   f. Formation of triplets {(p1 p2 p3)i} of pixel correspondences;
3. Application of the inverse camera projection model to undo pixel distortions of all pixel correspondences;
4. Reconstruction of 3D positions using the obtained one-to-one pixel correspondences;
5. Elimination of false pixel correspondences by discarding 3D positions that lie outside the expected 3D range of the obstacle;
6. Ordering the 3D positions into a structured set that describes the location of the obstacle in the robot's workspace.

The proposed vision method has been completely elaborated and tested in MATLAB. The mathematical design of the vision system wouldn't have been possible without the MATLAB functions developed by Janne Heikkilä, researcher at the University of Oulu (Finland), and Carlos Torre Ferrero, researcher at the University of Cantabria (Spain).
In general, the functions of Heikkilä can be described as functions related to the camera intrinsic parameters. His functions are used to calculate the parameters of an inverse camera projection model that takes into account image distortions, to correct distorted image pixel coordinates and to distort pixel coordinates before reprojection in the image plane.
The functions of Carlos Torre are related to 3D position reconstructions and epipolar lines.


All functions related to camera calibration were taken from the MATLAB toolbox for camera calibration that is freely available on the internet [4] and designed by Jean-Yves Bouguet, researcher at the California Institute of Technology.
No functions designed by one of these three researchers have been added to the appendices. Among others, functions that were added to Appendix A are the GUI CamAxis.m, a function correspondence.m that calculates pixel correspondences and a function Pos3D.m that uses pixel correspondences to calculate the obstacle's 3D position in space.

6.3 Time evaluation of the vision system
To conclude this section on the design of a real-time vision system, we will state the experimentally obtained upper time limits in the execution of the subsystems of the vision algorithm. The steps of the vision system algorithm that incorporate image processing are rather time consuming. Strictly mathematical operations such as pixel correspondence search and 3D reconstructions tend to be less time consuming. The processing times are given in table 2.4. The total picture package time is the time needed to store the images of all three cameras when the object has been detected in the image of GICcam 3.

Image processing task               Upper time limit [sec]
Detect moving object in image 3     0.220
Total picture package time          0.350
Corner detection in 3 images        2.5
Find pixel correspondences          0.016
Reconstruct 3D positions            0.016

Table 2.4: Time consumption of image processing tasks

Given the tools available for this project, the obtained processing times are acceptable. If the robot moves at a reasonable speed, object detection can be signaled fast enough to avoid a collision between the robot's End Effector and the obstacle. Only the corner detection process is extremely time consuming, which can be understood taking into account the exhaustive curvature calculation procedure that is used in this algorithm. Because robot motion has already been halted when the vision system starts calculating the object's corners, this is easily countered by pausing the robot. Once the robot has received the first alternative position calculated by the artificial intelligence system, it will start moving again.


7. References

[3] Janne Heikkilä and Olli Silvén, A Four-step Camera Calibration Procedure with Implicit Image Correction, Infotech Oulu and Department of Electrical Engineering, University of Oulu, Finland.
[4] http://www.vision.caltech.edu/bouguetj/calib_doc: Camera Calibration Toolbox for MATLAB, Jean-Yves Bouguet, California Institute of Technology.
[5] Carlos Torre Ferrero, Reconstrucción de piezas en 3D mediante técnicas basadas en visión estereoscópica, March 2002.
[6] http://www.mathworks.com: support website of The MathWorks, Inc.
[7] Javier González Jiménez, Visión por computador, Paraninfo, 1999.
[8] Carlos Torre Ferrero, Visión 3D, Curso SA.3.5, Visión Artificial, Grupo de Ingeniería de Control, Departamento TEISA, Universidad de Cantabria.
[9] Carlos Torre Ferrero, Detección de bordes (I), Curso SA.3.5, Visión Artificial, Grupo de Ingeniería de Control, Departamento TEISA, Universidad de Cantabria.
[10] Visión tridimensional, Algoritmos basados en estéreo trinocular, Universidad Politécnica de Madrid, UPM-DISAM/UMH, pp. 104-108.


Chapter 3

Artificial Intelligence

1. Introduction

Industrial robots often operate in environments that contain static or dynamic obstacles. Motion planning that guarantees a collision-free path for robot movement is therefore an important step in the operational control of industrial robots.
In the case of static environments the planning of motion from a start to a goal configuration can be realized offline, taking into account the static position of one or more obstacles. However, robots don't always work in static environments. An example of a continuously changing environment is one in which several robots cooperate in the same industrial cell. A robot's environment can also change when a human operator enters the robot's workspace. In these cases, the robot has to adapt quickly to the unexpected changes in its environment, planning the robot motion in an online manner. To do so, it highly depends on a continuous flow of information about events occurring in the robot's environment. For our application, this information is supplied by the artificial vision system designed in the previous chapter.
In this chapter we will introduce a fuzzy logic based technique to solve the obstacle avoidance problem for an industrial robot with six degrees of freedom. The introduced artificial intelligence system controls translational motion in the Cartesian space along x, y and z directions and also introduces rotational avoidance actions with respect to the x and y axes. The fuzzy rule base combines a repelling influence, which is related to the distance between the Tool Center Point and the obstacle, and an attracting influence based on the difference between the actual x, y and z configuration and the final position coordinates. In the designed system, the obstacle is modeled as a rectangular parallelepiped of variable dimension, but with a predetermined orientation in the Cartesian space. The idea of implementing repelling and attracting influences for the design of a fuzzy rule base has been taken from an article by the authors P.G. Zavlangas and S.G. Tzafestas [11].
As main reasons for implementing an obstacle avoidance strategy based on fuzzy logic we can indicate that a fuzzy algorithm and its rule base can be constructed relatively easily and in an intuitive, experimental way. Furthermore, the fuzzy operators that are used to link the inputs of the fuzzy system to its output can be chosen as basic operators such as sum, product, min and max. This will turn out to be of utmost importance when alternative trajectories have to be calculated as quickly as possible in real-time motion applications.

In the following paragraphs, the basic principles of Fuzzy Inference Systems will be introduced and applied to the design of the proposed fuzzy avoidance strategy.

2. Design of a Fuzzy Inference System

2.1 Introduction
In fuzzy logic systems, linguistic labels are assigned to physical properties. The distance of the Tool Center Point of the industrial manipulator to an obstacle can be described as Close, Far or Very Far. Each of the linguistic labels introduced in a fuzzy system is characterized by its membership function. In a certain domain [a, b] of numerical distance values x, the membership function of the label Far can hold non-zero values, describing with what grade a numerical distance value can be considered to be Far from the obstacle. In figure 3.1 this idea is depicted for membership functions of triangular form. From this figure it can be understood that x1 can satisfy the criterion Far with any value in the interval [0, 1]. Depending on the application and the human interpretation, a distance of 1 m to the object can be considered as Far with a value of 1, and a distance of 60 cm can be considered as Far with a value of 0.6 and as absolutely not Very Far. These considerations, typical for fuzzy logic reasoning, contrast with classical logic, where a statement is either true or false, indicated by a numerical value that is either 1 or 0.

Figure 3.1: Principle of fuzzy membership functions (the distance x1 belongs to the label Far with grade 0.6; the label Very Far is also shown)

When a fuzzy logic system is constructed, the basic elements are formed by so-called fuzzy sets. A fuzzy set A consists of:

- A universe U of the linguistic variable x that is considered, e.g. x = distance to the obstacle, so that x ∈ U = [Umin, Umax], e.g. U = [0 m, 5 m].
- A linguistic label that can be associated to the linguistic variable, e.g. Far or Very Far.
- A membership function MF, denoted as μA(x), that defines the certainty with which the discrete values of the linguistic variable x ∈ U, e.g. distances x to the obstacle, belong to the fuzzy set A = Far.

A fuzzy set is symbolically represented as:

    A ≜ {(x, μA(x)) | x ∈ U}    (3.1)

Not only system inputs, e.g. distance descriptions for our application, are structured in fuzzy sets. System outputs are also categorized as fuzzy sets. An example of an output fuzzy set can be given by a motion action in the z direction. Within a discrete universe [−100 mm, +100 mm] of distance increments in z direction, the linguistic label Big and Positive can be characterized by a membership function, typically denoted as μB(y) for fuzzy output sets, that has as its mean value a distance increment of +80 mm.
The discrete entrances of the fuzzy system, e.g. distance descriptions, are converted to fuzzy entrance sets before entering the Fuzzy Inference System, shortly denoted as FIS. This process is called the fuzzification of entrance variables and is rather an artificial operation that makes it possible to use fuzzy operators on the entrances of the FIS. The easiest to calculate with is the singleton fuzzificator. This operator associates an MF μA'(x) of the singleton type to the discrete entrance, so that this MF only differs from zero where its entrance x equals the entrance x* that is being fuzzificated. Figure 3.2 represents this idea. A mathematical notation for the fuzzificator is given by equation (3.2). The fuzzy singleton set associated to the discrete entrance x* is denoted as A'.

    μA'(x) = 1 if x = x*, 0 otherwise    (3.2)


Figure 3.2: Fuzzification of a discrete input variable of a FIS (a singleton of height 1 at x* = 3 m in the universe [0 m, 5 m] of distances x)

Starting from the defined linguistic labels, a set of IF-THEN rules, the so-called rule base, is constructed. This rule base can be seen as an artificial intelligence organ, for it represents the human reasoning in the process of taking control decisions. The decisions that can be undertaken are also described in linguistic terms, allowing the designer of the fuzzy logic system to create a clearly understandable rule base. An example of a linguistic IF-THEN rule could be given by:

    IF x to obstacle is Close THEN Avoidance action in Positive z direction is Big    (3.3)

The part of a rule after IF and before THEN is the premise and can be composed of several input variables connected with AND or OR operators, or a combination of both. Fuzzy operators, T-norms associated to the AND operator and S-norms associated to the OR operator, are used to compose and solve the premise of a rule. For a thorough description of T- and S-norms, we refer to the PhD thesis Fuzzy sets, reasoning and control by Dr. ir. R. Jager [12].
Once the premise is resolved, it implicates with a certain amount, calculated using T-norms, on the consequent of the rule, which is the part after THEN. Rules that influence the same output variable are then aggregated using S-norms. Finally, the result of the aggregation operation is defuzzificated. The objective of this final process is to obtain one numerical output value for the controlled variable. To defuzzificate the output of a rule base, a number of techniques are described in literature. Calculating the center of mass of an aggregated output function is an example of a defuzzification method.
Using a simple product operator to solve the implication in rule (3.3), if the premise "x to obstacle is Close" is true with a value of 0.7, a control action in positive z direction will also be undertaken with a grade of 0.7. Supposing the output MF of "Avoidance action in Positive z direction is Big" is a singleton with value +80 mm, rule (3.3) will return an action of +56 mm in positive z direction. When other rules of the rule base also result in an action for the positive z direction, the results of the rules can be aggregated by using as an S-norm e.g. the max operator.
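
The numerical example above can be written out as a few lines of MATLAB; the second rule and its premise grade are hypothetical additions, included only to show the max aggregation.

    % Sketch: product implication and max aggregation on the z output.
    wClose  = 0.7;               % grade of premise "x to obstacle is Close"
    zBigPos = 80;                % singleton of "Big Positive z" [mm]

    action1 = wClose * zBigPos;  % product implication: +56 mm
    action2 = 0.3 * zBigPos;     % a second, hypothetical rule acting on z
    zAction = max(action1, action2);   % max aggregation of both rules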
The block diagram in figure 3.3 resumes the operation of Fuzzy Inference Systems. A set of discrete entrances x* is first fuzzificated in the Fuzzification block. The block depicted as Inference Motor is the heart of the FIS. Solving for the premises of the rules stored in the Rule Base, this unit calculates the inference towards the defined actions or fuzzy outputs. To determine the fuzzy implication and aggregation, it uses the defined fuzzy operators that are stored in the Database of the system, which also contains information about the membership functions of all inputs and outputs. After the final Defuzzification process, a control action vector y* leaves the FIS. Information about fuzzification and defuzzification operators is also stored in the Database block.

Figure 3.3: Block diagram of a Fuzzy Inference System (the discrete entrances x* pass through the Fuzzification block, the Inference Motor with its Rule Base and Database, and the Defuzzification block, which returns the control action vector y*)

2.2 Description of the avoidance problem

The industrial robot FANUC Robot Arc Mate 100iB of this project possesses six degrees of freedom, which allows it to move its Tool Center Point (TCP) towards any position in the Cartesian space within the robot's range in a fully defined orientation, hereby only restricted by the rotational limits of its axes. The location of the TCP and the orientation of the End Effector in the robot's workspace are fully described by its x, y and z coordinates together with three rotation angles along the x, y and z axis of the End Effector, respectively called roll, pitch and yaw angle. A representation of a simplified End Effector in a Cartesian coordinate system is depicted in figure 3.4. Roll, pitch and yaw angles are denoted by α, β and γ respectively. The TCP is located at the end point of the End Effector.


Figure 3.4: Position (x, y, z) of the TCP and orientation of an End Effector

The robot can be commanded in terms of joint angles, which means the operator directly imposes, both in amplitude and in sense, the angle along which every axis of the manipulator has to rotate. Designing an obstacle avoidance strategy that directly imposes the value of every separate joint angle would require the calculation of the inverse robot kinematics. This is a process of very high cost, in calculation complexity as well as in calculation time. Because the robot can also be commanded in terms of position and roll-pitch-yaw orientation vectors, we will therefore develop a strategy that controls the position and orientation of the TCP in the Cartesian space. After being calculated, position and orientation vectors will be sent to the robot controller that makes the TCP move along the calculated alternative trajectory. For more details on this transmission and execution procedure we refer to chapter 4 on robot active security.
To generate an avoidance strategy in Cartesian space, a three-dimensional obstacle of rectangular form turned out to be well suited to design logical and structured distance descriptions and avoidance rules, although obstacles of 3D rectangular form are generalizations of real-world obstacles. Nevertheless, once an obstacle is identified and situated in space by the designed artificial vision system, a cube or parallelepiped can be constructed around this obstacle, allowing us to employ the generally developed avoidance strategy.
To keep the designed algorithm from becoming too complex, the obstacle is assumed to have no inclination in the Cartesian space, which means the edges of the obstacle are parallel to the x, y and z axes.
The 3D obstacle limits a zone in the Cartesian space which the TCP cannot enter, even though the shortest path towards the final destination goes through this zone. Whenever the TCP closes up to one of the sides of the parallelepiped, an avoidance action has to be undertaken, temporarily canceling the direct movement of the TCP towards its final destination. Moreover, it is not sufficient that only the Tool Center Point stays out of the obstacle zone: any collision between the End Effector and the obstacle has to be avoided at all times during the undertaken avoidance action.

2.3 Basic philosophy of the avoidance strategy
Fuzzy sets of distance descriptions will form the basic elements of our fuzzy logic avoidance system. A first set of fuzzy sets will describe the distance of the Tool Center Point to the obstacle. For example, a linguistic label Close Positive in y will be used to indicate that the TCP is close to one of the sides of the parallelepiped, as shown in figure 3.5. When the TCP closes up to a surface of the obstacle, two basic actions will have to be undertaken. The first one is stopping the movement along the considered direction and starting to move in a predefined avoidance direction. When an obstacle is encountered in y direction, the avoidance direction can be programmed as being the z direction. The other basic action is the rotation of the End Effector, so that the End Effector is always in a perpendicular position with respect to the surface it is closing up to. Once an edge of the obstacle is encountered, the TCP is no longer considered to be Close Positive in y. The avoidance action is stopped and the TCP can continue moving closer to its final position. The End Effector is again rotated until it is in a perpendicular position to the obstacle surface.

Figure 3.5: Basic avoidance strategy (intermediate TCP positions 1-3 between the start position and the final position)

During the movement of TCP and End Effector, two influences will be active. The first influence is the attracting influence formed by the difference in x, y and z coordinates between the actual position and the destination position. These coordinate differences form one-dimensional entrances of the fuzzy system and will be referred to as follows:


    Δx1 = xfinal − xactual
    Δy1 = yfinal − yactual    (3.4)
    Δz1 = zfinal − zactual

The attracting influences for each of the directions x, y and z will be characterized by a set of fuzzy sets with linguistic labels as indicated in table 3.1. These labels are defined for x, y and z direction independently, which means that the attracting force in x direction is not influenced by the distance differences in y or z direction, and analogously for y and z. When the designed rule base is introduced further on in this chapter, the linguistic labels for every direction will be referred to according to the short notations of table 3.1.
One-dimensional membership functions will be introduced in the next paragraph to characterize the fuzzy sets of these attracting influences.

Linguistic label            x direction   y direction   z direction
Goal Far Negative           GFarNegx      GFarNegy      GFarNegz
Goal Close Negative         GClNegx       GClNegy       GClNegz
Goal Very Close Negative    GVClNegx      GVClNegy      GVClNegz
Goal Reached                Goalx         Goaly         Goalz
Goal Very Close Positive    GVClPosx      GVClPosy      GVClPosz
Goal Close Positive         GClPosx       GClPosy       GClPosz
Goal Far Positive           GFarPosx      GFarPosy      GFarPosz

Table 3.1: Linguistic labels for attracting influence

The second influence participating in the avoidance algorithm is a repelling influence. The linguistic labels that describe this influence express how close the TCP is to each of the six sides of the obstacle of 3D rectangular form. The six sides of the parallelepiped that models the obstacle are labeled Positive x, Negative x, Positive y, Negative y, Positive z and Negative z. The convention used to assign the marks Positive and Negative is as follows: if the distance difference robstacle − rTCP in the considered coordinate x, y or z is positive, then the adjective Positive is assigned to the linguistic label. For example, in figure 3.5 the End Effector is closing up to the side of the cube with label Positive y. Furthermore, different levels of closeness are introduced: Very Close, Close, Not Close and Far. The short notations for every direction are introduced in table 3.2.


Linguistic label       x direction   y direction   z direction
Far Negative           FarNegx       FarNegy       FarNegz
Not Close Negative     NClNegx       NClNegy       NClNegz
Close Negative         ClNegx        ClNegy        ClNegz
Very Close Negative    VClNegx       VClNegy       VClNegz
Contact                Contactx      Contacty      Contactz
Very Close Positive    VClPosx       VClPosy       VClPosz
Close Positive         ClPosx        ClPosy        ClPosz
Not Close Positive     NClPosx       NClPosy       NClPosz
Far Positive           FarPosx       FarPosy       FarPosz

Table 3.2: Linguistic labels for repelling influence

For the repelling forces, the linguistic labels of one direction are determined by the distance differences in all three directions x, y and z. To explain this, let us first introduce the one-dimensional distance differences for x, y and z direction:

    Δx2 = f(xobstacle, xactual)
    Δy2 = f(yobstacle, yactual)    (3.5a)
    Δz2 = f(zobstacle, zactual)

The function f will be determined further on. Since the obstacle is a three-dimensional object, we can determine within what range of x, y and z coordinates the obstacle will be encountered. Generally we can state:

    xobstacle ∈ [xbound,left, xbound,right]
    yobstacle ∈ [ybound,left, ybound,right]    (3.5b)
    zobstacle ∈ [zbound,left, zbound,right]

The indices bound indicate the coordinate limits within which the obstacle is present. When the coordinate xactual of the TCP is smaller than xbound,left, Δx2 is positive and equal to the difference xbound,left − xactual. When xactual is within the range [xbound,left, xbound,right], Δx2 can be set to 0, indicating it is within the x range of the obstacle. Analogous reasoning for an xactual of the TCP bigger than xbound,right leads to the following expression for the function f of (3.5a):

    f(ractual) = { rbound,left − ractual ,   ractual < rbound,left
                 { 0 ,                       rbound,left ≤ ractual ≤ rbound,right    (3.5c)
                 { rbound,right − ractual ,  rbound,right < ractual
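
A direct MATLAB transcription of this piecewise function, for one coordinate at a time, could look as follows; the function name is ours, chosen for illustration.

    % Sketch of the piecewise distance function f of (3.5c).
    function d = distToObstacle(rActual, rBoundLeft, rBoundRight)
        if rActual < rBoundLeft
            d = rBoundLeft - rActual;      % positive: TCP before the obstacle range
        elseif rActual <= rBoundRight
            d = 0;                         % inside the obstacle's coordinate range
        else
            d = rBoundRight - rActual;     % negative: TCP beyond the obstacle range
        end
    end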

As for the distance differences of the attracting force, a set of linguistic labels is associated with every one of the 1D distance differences in (3.5a). These linguistic labels are equal in terminology to the labels of table 3.2 and, as for the labels of the attracting influence, they can be evaluated independently of the differences in the other two directions.
However, it is very important to understand that this independency does not hold for the linguistic labels of table 3.2 that describe the distance to each side of the obstacle. For example, when Δx2 is Very Close, but either Δy2 or Δz2 is not Very Close at the same time, the TCP will not be in a dangerously close position to the obstacle. To understand what combinations of 1D distance differences are dangerous in terms of being Very Close to the obstacle, let us construct the repelling influence label Very Close Positive x of table 3.2. Consider a section along the x axis where Δx2 is characterized by the label Very Close Positive. In figure 3.6 such a YZ plane is depicted. The square represents a section of the rectangular obstacle. As shown in the figure, nine combinations of one-dimensional labels for Δy2 and Δz2 appear.

Figure 3.6: Construction of the repelling influence label Very Close Positive x (for VClPos Δx2, the nine combinations of {VClPos, Contact, VClNeg} in Δy2 and Δz2 form a 3x3 grid around the obstacle section)

The label Very Close Positive x can be considered as a function of the three variables Δx2, Δy2 and Δz2. Supported by the representation of figure 3.6, Very Close Positive x can be symbolically written as stated in expression (3.6).

    Very Close Positive x =  VClPosx2 AND VClPosy2  AND VClNegz2
                          OR VClPosx2 AND Contacty2 AND VClNegz2
                          OR VClPosx2 AND VClNegy2  AND VClNegz2
                          OR VClPosx2 AND VClPosy2  AND Contactz2
                          OR VClPosx2 AND Contacty2 AND Contactz2
                          OR VClPosx2 AND VClNegy2  AND Contactz2
                          OR VClPosx2 AND VClPosy2  AND VClPosz2
                          OR VClPosx2 AND Contacty2 AND VClPosz2
                          OR VClPosx2 AND VClNegy2  AND VClPosz2    (3.6)



The function Very Close Positive x will only take high values when one of the nine combinations in (3.6) is fulfilled to a considerable amount.
One-dimensional membership functions will be introduced in the next paragraph to characterize the fuzzy sets of the 1D distance differences of (3.5a). Considering that the linguistic labels of the repelling force are constructed with combinations of expressions in Δx2, Δy2 and Δz2, the MFs of the repelling influence labels are three-variable functions of Δx2, Δy2 and Δz2.
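
Using min for AND and max for OR, expression (3.6) can be evaluated compactly in MATLAB as sketched below; the triangular MF parameters and the example distance differences are placeholders, not the values of Appendix B.

    % Sketch: three-variable MF of (3.6) from 1D membership functions.
    triMF = @(x,a,b,c) max(min((x-a)./(b-a), (c-x)./(c-b)), 0);
    muVClPos  = @(d) triMF(d,    0,  50, 100);   % placeholder parameters [mm]
    muContact = @(d) triMF(d,  -50,   0,  50);
    muVClNeg  = @(d) triMF(d, -100, -50,   0);
    dx2 = 40; dy2 = 20; dz2 = -10;               % example distance differences [mm]

    muY = [muVClPos(dy2), muContact(dy2), muVClNeg(dy2)];
    muZ = [muVClNeg(dz2), muContact(dz2), muVClPos(dz2)];
    combos = min(repmat(muY', 1, 3), repmat(muZ, 3, 1));  % nine AND terms in y and z
    VClPosX = min(muVClPos(dx2), max(combos(:)));         % AND with x, OR over combos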
Once six labels of one same type are constructed, e.g. Very Close, they can actually be used to construct an imaginary 3D object of rectangular form around the obstacle. The zone between the obstacle and this imaginary border then wears the label Very Close. Bigger parallelepipeds can be constructed to border the zones with the labels Close and Not Close. Finally, the complement of the outer zone Not Close can be denoted as the zone Far. Figure 3.7 shows the obstacle and the imaginary zones created around it: Very Close, Close and Not Close. The entire outer zone is indicated as Far.

Figure 3.7: Construction of imaginary safety cubes around the obstacle (nested zones Very Close, Close and Not Close; the entire outer zone is Far)


The construction of the three-variable functions of types Close and Not Close is similar to the implementation of Very Close in (3.6). Attention has to be paid to the fact that a TCP being in the zone Very Close is also in the zone Close. This explains the max operator in expression (3.7) for Close Positive x:

    Close Positive x = max(ClPosx2, VClPosx2)
        AND (    max(ClPosy2, VClPosy2)  AND max(ClNegz2, VClNegz2)
             OR  Contacty2               AND max(ClNegz2, VClNegz2)
             OR  max(ClNegy2, VClNegy2)  AND max(ClNegz2, VClNegz2)
             OR  max(ClPosy2, VClPosy2)  AND Contactz2
             OR  Contacty2               AND Contactz2
             OR  max(ClNegy2, VClNegy2)  AND Contactz2
             OR  max(ClPosy2, VClPosy2)  AND max(ClPosz2, VClPosz2)
             OR  Contacty2               AND max(ClPosz2, VClPosz2)
             OR  max(ClNegy2, VClNegy2)  AND max(ClPosz2, VClPosz2) )    (3.7)

The implementation of the label Far incorporates the use of the complement operator (3.8):

    compl(A) = 1 − A    (3.8)

An example of a Far label is given by expression (3.9) for Far Positive x.

    Far Positive x = 1 − max( (1 − FarPosx2)·(1 − FarNegy2)·(1 − FarPosz2),
                              (1 − FarPosx2)·(1 − FarNegy2)·(1 − FarNegz2),
                              (1 − FarPosx2)·(1 − FarPosy2)·(1 − FarNegz2),
                              (1 − FarPosx2)·(1 − FarPosy2)·(1 − FarPosz2),
                              (1 − FarPosx2)·(1 − Contacty2)·(1 − FarPosz2),
                              (1 − FarPosx2)·(1 − Contacty2)·(1 − FarNegz2),
                              (1 − FarPosx2)·(1 − FarNegy2)·(1 − Contactz2),
                              (1 − FarPosx2)·(1 − FarPosy2)·(1 − Contactz2),
                              (1 − FarPosx2)·(1 − Contacty2)·(1 − Contactz2) )    (3.9)

For an explicit elaboration of all labels, we refer to Appendix B.
A rule base for the repelling influence can easily be constructed thinking in terms of the imaginary safety zones around the obstacle. For example, whenever the TCP enters the zone indicated as Close, an avoidance action with label Big has to be executed.
Finally, we mention that repelling and attracting influences cannot be activated at the same time, for they invoke contradictory control actions. When the TCP moves outside of the three imaginary cubes around the obstacle, the repelling force is automatically deactivated. The attracting forces active at that specific moment will cause the TCP to move towards its final position. On the other hand, when the repelling force is active because the TCP entered one of the imaginary cubes around the obstacle, the attracting influence will have to be deactivated.


This deactivation of the attracting influence will have to be incorporated in the rule base of the artificial intelligence system.

2.4 Membership functions of the Fuzzy Inference System
2.4.1 Membership functions of entrance fuzzy sets
The first entrance set is formed by the distance differences Δx1, Δy1 and Δz1 between the actual and final position of the TCP. These inputs have been introduced in equations (3.4) and will determine the value of the attracting influences that actuate in the rule base of the FIS. As membership functions for the fuzzy sets of the attracting influence, MFs of triangular form have been chosen, for they are easy to implement and calculate with. Figure 3.8 represents the set of one-dimensional MFs. The center triangle is the one of the label Goal Reached. The MFs of the labels Goal Far Positive and Goal Far Negative are of trapezoidal form and hold their value for respectively big positive and big negative entrance values. The same set of MFs is used for Δx1, Δy1 and Δz1.

Figure 3.8: MFs for fuzzy sets of attracting influence (from left to right: GFarNeg, GClNeg, GVClNeg, Goal Reached, GVClPos, GClPos, GFarPos over the universe of Δr1)

The mathematical equation of an MF f of triangular form can be written compactly using the expression:

    f(x; a, b, c) = max( min( (x − a)/(b − a), (c − x)/(c − b) ), 0 )    (3.10)

The trapezoidal functions with an open segment to the left, respectively to the right, are mathematically described as follows:

    g(x; d, e) = max( min( (x − d)/(e − d), 1 ), 0 )    (3.11)
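
Both expressions translate directly into MATLAB anonymous functions, as sketched below; the parameter values in the usage line are illustrative only.

    % Direct transcriptions of (3.10) and (3.11).
    triMF  = @(x, a, b, c) max(min((x - a)./(b - a), (c - x)./(c - b)), 0);
    trapMF = @(x, d, e)    max(min((x - d)./(e - d), 1), 0);   % open to the right

    mu = triMF(-20:5:20, -10, 0, 10);   % triangular MF evaluated on a grid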


The second entrance set consists of the distance differences Δx2, Δy2 and Δz2 that characterize the distance of the TCP to the object as defined in equations (3.5a), (3.5b) and (3.5c). With each difference Δr2 (Δr2 = Δx2, Δy2, Δz2), a set of linguistic labels is associated (see table 3.2). One-dimensional MFs can be introduced to define these labels. This representation is given in figure 3.9. The triangle in the center represents the label Contact. According to the idea of figure 3.7, the MFs of Not Close need to be implemented as MFs of trapezoidal form, because the zone Not Close of figure 3.7 needs an inner and an outer limit to be constructed.

Figure 3.9: One-dimensional MFs for fuzzy sets of repelling influence (from left to right: FarNeg, NClNeg, ClNeg, VClNeg, Contact, VClPos, ClPos, NClPos, FarPos over the universe of Δr2)

As explained in section 2.3, the actual labels of the repelling influence are composed of expressions in Δx2, Δy2 and Δz2. The consequence is that the membership function of every fuzzy set of the repelling influence is a three-variable function which doesn't have a graphical representation.
As introduced in the example of figure 3.5, rotations of the End Effector of −90° or +90° will have to be implemented to assure the End Effector doesn't hit the obstacle. To realize this implementation, one extra set of system entrances will be introduced. This entrance is the actual rotational configuration of the End Effector. This information will be used in the construction of the premises of the rule base: when closing up to a surface of the obstacle, the system has to know the present configuration to be able to decide what the avoiding action in rotation will be. This will be clearly explained when the rule base is designed further on. Since only turns of −90° and +90° will be needed to program an accurate avoidance, three labels for the actual angle configuration will be enough for our application. These labels are introduced in table 3.3 for rotational angles with respect to the x and y axis.


Linguistic label   x-axis angle   y-axis angle
90 Negative        −90°           −90°
0                  0°             0°
90 Positive        +90°           +90°

Table 3.3: Linguistic labels for actual orientation of the End Effector

The MFs that characterize each fuzzy set of the actual angle configuration are of traditional triangular form.
Values for the parameters of all MFs can be found in Appendix B.

2.4.2 Membership functions of system outputs
In a rule base of the Sugeno type, no output fuzzy sets have to be defined because the output of a rule is a function of the rule's input variables. A Sugeno oriented rule base allows the designer of a rule base to create an output that is directly related to the premise of the rule. A second type of fuzzy rule base is the Mamdani type. For more details about these rule bases we refer to [12]. For our application we considered a rule base of the Sugeno type to be the most appropriate one. We will construct Sugeno output functions of order 0, which means the output function is constant over its entire range of input values. Moreover, we chose singleton functions, which means a certain output will only be activated when the rule's premise takes one specific value. This idea was already presented in figure 3.2 of section 2.1.
For our industrial robot application, we consider as system outputs increments in x, y and z position of the TCP and rotations α and β of the End Effector with respect to the x and y axis respectively. Linguistic labels to characterize control actions in x, y and z direction can be defined according to table 3.4.

Linguistic label        x, y, z translational action [mm]
Big Negative            −30
Small Negative          −5
Very Small Negative     −3
Nil                     0
Very Small Positive     +3
Small Positive          +5
Big Positive            +30

Table 3.4: Linguistic labels for output actions

The associated singleton output functions for translational movement in the Cartesian space are illustrated in figure 3.10.

Figure 3.10: Singleton output functions for translational action (Big Neg, S Neg, VS Neg, Nil, VS Pos, S Pos, Big Pos over the action axis [mm])

As already mentioned, rotational control actions of −90° and +90° with respect to the x and y axis will be implemented to prevent the End Effector from hitting the obstacle. For every rotational direction, the actions indicated in table 3.5 can be executed. We would like to stress that these rotational actions are rotations with respect to the x and y axis of the fixed user coordinate system. The angles α and β do therefore not represent roll and pitch angles, which are defined with respect to the moving x and y axis of the End Effector.

Linguistic label   α and β rotational action [°]
Minus90            −90
Zero               0
Plus90             +90

Table 3.5: Linguistic labels for rotational action

The associated singleton output functions for rotational action can be depicted similarly to the functions for action in Cartesian space, see figure 3.11.

Figure 3.11: Singleton output functions for rotational action (Minus90, Nil, Plus90 over the action axis [°])


2.5 Construction of the rule base
In this paragraph the rule base of the avoidance strategy will be designed. There are two types of rules. One type is related to translational action in x, y and z direction, while the other is related to rotational action with respect to the x and y axis. Moreover, the rule base consists of two parts. One part incorporates the rules related to the repelling influence, the other takes into account the attracting influence.

2.5.1 Rules related to translational action
Because the attracting rules of the translational type cannot be active whenever a repelling influence in either x, y or z direction is, three variables are introduced to link the rules of both parts. These variables are reset to 0 whenever the output actions of the repelling influence rules are Nil actions, and set to 1 whenever these same outputs differ from the Nil action. The linguistic notations for these variables are:

Avoidance in x
Avoidance in y
Avoidance in z

2.5.1.1 Repelling rules
The overall guideline to follow is that, whenever continuing motion in a certain direction is no longer possible, an alternative direction has to be selected to continue motion, and this for as long as the End Effector is too close to the obstacle. The alternative direction can be freely chosen by the designer of the rule base. Here, the positive z direction is chosen as the alternative direction whenever motion in x or y direction, in positive or negative sense, is obstructed. By implementing this choice, a movement of the End Effector above the obstacle is assured.
Whenever further motion in the z direction is obstructed, we chose to follow a criterion based on Δx1 and Δy1 to decide on the direction of the avoidance action. These distance differences were introduced in equation (3.4) of paragraph 2.3 as the differences between final and actual coordinate in the considered direction. The criterion is schematically introduced in table 3.6. The variables xlength and ylength are the dimensions of the obstacle in x and y direction respectively.


Criterion                                     Avoidance direction
|Δx1| − |Δy1| > (xlength + ylength)/2         x
|Δy1| − |Δx1| > (xlength + ylength)/2         y
otherwise                                     x and y

Table 3.6: Decision criterion for avoidance direction when z direction is obstructed

The sense of the avoidance action depends on the sign of the coordinates of the initial and final position of the TCP. The criterion to determine the sign of x and y avoidance is illustrated in table 3.7.

Criterion    Sign of avoidance
Δx1 > 0      Positive x
Δx1 < 0      Negative x
Δy1 > 0      Positive y
Δy1 < 0      Negative y

Table 3.7: Decision criterion for sign of x and y avoidance direction
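
The decision criteria of tables 3.6 and 3.7 can be sketched together in a few lines of MATLAB; note that the use of absolute differences in table 3.6 is our reading of the criterion, and the example values are placeholders.

    % Sketch of the avoidance-direction decision when z is obstructed.
    dx1 = 600; dy1 = 120;            % example goal differences of (3.4) [mm]
    xlength = 500; ylength = 800;    % example obstacle dimensions [mm]

    if abs(dx1) - abs(dy1) > (xlength + ylength) / 2
        avoidDir = 'x';
    elseif abs(dy1) - abs(dx1) > (xlength + ylength) / 2
        avoidDir = 'y';
    else
        avoidDir = 'xy';             % avoid in both x and y
    end
    signX = sign(dx1);               % table 3.7: sense follows the sign of dx1
    signY = sign(dy1);               % and dy1 respectively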

Finally, we mention that repelling avoidance actions will always be chosen of the output type Big. With this knowledge, the part of the rule base related to repelling actions in x, y and z direction can be constructed. Avoidance actions are undertaken when the TCP enters the imaginary parallelepipeds with labels Close and Very Close.

R1:  IF VClPosx                           THEN Big Positive z
R2:  IF VClPosy                           THEN Big Positive z
R3:  IF ClPosx                            THEN Big Positive z
R4:  IF ClPosy                            THEN Big Positive z
R5:  IF VClNegx                           THEN Big Positive z
R6:  IF VClNegy                           THEN Big Positive z
R7:  IF ClNegx                            THEN Big Positive z
R8:  IF ClNegy                            THEN Big Positive z

R9:  IF VClNegz AND Avoidance y Positive  THEN Big Positive y
R10: IF VClNegz AND Avoidance y Negative  THEN Big Negative y
R11: IF ClNegz  AND Avoidance y Positive  THEN Big Positive y
R12: IF ClNegz  AND Avoidance y Negative  THEN Big Negative y

R13: IF VClNegz AND Avoidance x Positive  THEN Big Positive x
R14: IF VClNegz AND Avoidance x Negative  THEN Big Negative x
R15: IF ClNegz  AND Avoidance x Positive  THEN Big Positive x
R16: IF ClNegz  AND Avoidance x Negative  THEN Big Negative x


Considering that the first element of the premise in rules R9-R16 is only stated as Negz, we only take into account those cases where the TCP closes up to the obstacle from above, i.e. when the TCP is moving in negative z direction. However, similar rules can be implemented with terms in Posz to model the cases where the TCP is closing up to the obstacle from below.

2.5.1.2 Attracting rules
These rules are all composed of three types of elements. The first element is a description of the distance to the final position. The second element checks if the TCP is sufficiently far removed from the obstacle. The third element checks if any avoidance actions are active. The rules R17-R23 illustrate the rules for closing up in x direction. For the construction of the analogous rules for y and z direction, we refer to Appendix B.

R17: IF GFPosx   AND FarPositivex AND NOT Avoidancex AND NOT Avoidancey  THEN Big Positive x
R18: IF GFNegx   AND FarNegativex AND NOT Avoidancex AND NOT Avoidancey  THEN Big Negative x
R19: IF GClPosx  AND FarPositivex AND NOT Avoidancex AND NOT Avoidancey  THEN Small Positive x
R20: IF GClNegx  AND FarNegativex AND NOT Avoidancex AND NOT Avoidancey  THEN Small Negative x
R21: IF GVClPosx AND FarPositivex AND NOT Avoidancex AND NOT Avoidancey  THEN Very Small Positive x
R22: IF GVClNegx AND FarNegativex AND NOT Avoidancex AND NOT Avoidancey  THEN Very Small Negative x
R23: IF Goalx                                                            THEN Nil x

2.5.2 Rules related to rotational action
Whenever the TCP enters the zone NClose constructed around the obstacle, a change in the End Effector's orientation will be demanded, so that the End Effector is in a perpendicular position with respect to a side of the obstacle. This change in orientation should, however, only be demanded and executed if the End Effector is in an orientation that is inconvenient. To illustrate the idea of this kind of avoidance rules, we look at the left part of figure 3.12. When the TCP closes up to the obstacle in positive y direction, at a certain moment the label Not Close Positive y will be activated for the position of the TCP. One arm of the End Effector is already in the zone NClose and a rotation of +90° with respect to the x axis will be demanded to assure a perpendicular position of the End Effector to the side of the obstacle. One moment later, the TCP is still in the zone NClose. However, no rotation should be demanded, for the End Effector is already in a correct orientation with respect to the side of the obstacle. This is illustrated in the right part of figure 3.12, indicated as a convenient orientation.
When the End Effector has moved around the obstacle and is closing up to its final destination, a return to its original orientation will be requested. To identify the moment when the return in orientation can be initiated, a flag CloseToFinal is introduced. CloseToFinal is initialized as false and is set to true when the TCP is in the zone Far and closer than a predetermined distance to the final destination.

Figure 3.12: Illustration of the repelling actions in orientation (left: 1. inconvenient orientation; right: 2. convenient orientation; the zone NClose is drawn around the obstacle in the x-y-z frame)

The following rules R24-R27 illustrate the actions to undertake with respect to avoidance rotations about the x axis.

R24: IF NOT CloseToFinal AND NClosePositivey AND 0 AND 0 AND 0 THEN Minus90
R25: IF NOT CloseToFinal AND NCloseNegativey AND 0 AND 0 AND 0 THEN Plus90
R26: IF CloseToFinal AND FarPositivey AND +90 AND 0 AND 0 THEN Zero
R27: IF CloseToFinal AND FarNegativey AND -90 AND 0 AND 0 THEN Zero

The rules for avoidance rotational actions about the y axis can be constructed analogously (see Appendix B).
In the literature, three properties of rule bases are classically considered: continuity, consistency and completeness of the fuzzy rule base. We will now shortly comment on the first two properties and illustrate them for our rule base.

2.5.3 Continuity of the rule base
A rule base is said to be continuous when the premises of the rules are adjacent as well as the consequents. We can speak in terms of adjacent fuzzy sets when an order is detected in the fuzzy set. In the rules R17-R23 we can detect an order in the sequence of premises. This order was already introduced in table 3.1 and can be written as:

GoalFarNegativex < GCloseNegx < ... < Goalx < ... < GCPosx < GoalFarPositivex   (3.12)

Looking at the consequents of the rules R17-R23 we see that the demanded actions can be ordered in a continuous way:

BigNegativex < SmallNegx < ... < Goalx < ... < SmallPosx < BigPositivex   (3.13)

The part R17-R23 of our rule base is now continuous due to the fact that the MFs of the linguistic terms in (3.12) and (3.13) have a certain overlap, which is illustrated by the triangular MFs of figure 3.8. This overlap guarantees that no neighbour rules in R17-R23 will have a consequent with zero intersection. This way, small variations in the entrance set x* of the FIS won't provoke any unexpected variations in the output y* of the FIS. The continuity of the attracting rules guarantees that the TCP will move from its initial to its final position in a continuous way, especially during the deceleration process, where transitions between the fuzzy sets of distance to the final location occur.

2.5.4 Consistency of the rule base
A rule base is consistent when no rules with the same premise and a different consequent appear. On first inspection, our rule base doesn't contain any contradictions of this type. However, in the way the labels for distance description are defined, certain labels intersect with each other. This is the case for the following labels of the type NClose:


NClosePositivex and NClosePositivey
NClosePositivex and NCloseNegativey
NCloseNegativex and NClosePositivey
NCloseNegativex and NCloseNegativey

Inconsistencies can occur in the part of the rule base related to rotational avoidance action: at the same moment, a rotational avoidance about the x and the y axis can be demanded. To avoid this, priority is assigned to one of both rotations. The criterion used is based on the distance differences x2 and y2, and is denoted in table 3.8. The first criterion can be explained as follows. When the obstacle is closer in x than in y, the TCP will be at the Positive x or Negative x side of the obstacle in a few moments from now, when the translational action is activated. Therefore, the correct rotational action to command is the one that prevents the future collision of the End Effector with the obstacle side Positive x or Negative x. The adequate rotation to avoid this collision is a rotation about the y axis. The second criterion can be explained analogously.

Criterion        Rotation axis
x1 > y1          x
x1 < y1          y

Table 3.8: Criterion for rotation axis when TCP is in intersection zone

2.6 The fuzzy inference mechanism
The inference process of a fuzzy logic algorithm can be considered as the evaluation of all rules in the rule base for a certain entrance set x*. The result of the inference process is a fuzzy relation for the system output y*.
For the inference of one rule for an entrance x*, we will use a generalized modus ponens. This method is based on an if-then relation and can be symbolically written as:

IF x is A THEN y is B
x* is A'
--------------------------   (3.14)
y* is B'

The generalized modus ponens can be interpreted as follows. The first line of (3.14) indicates the rule that is located in the designed rule base. If a real system entrance x* appears, characterized by its fuzzy set A' that is not totally equal to A, the system output y* will result in B'. Hereby, B' resembles B as much as A' resembles A.

The overall inference process can be divided into three steps. The first one is the resolution of the composed premises of all rules. Next, the implication of every premise on the consequent will be determined using the introduced generalized modus ponens. Finally, the resolved fuzzy rules are aggregated to determine a unique fuzzy relation for the considered variable.

2.6.1 Resolution of rule premises
A general rule Rj, composed of premise statements connected by the AND operator, can be written as follows:

R_j: IF x_1 is A_{1j} AND ... AND x_i is A_{ij} AND ... AND x_n is A_{nj} THEN y is B_j   (3.15)

The part before the word THEN is the rule's premise. As explained in the introduction of paragraph 2.1, an input vector x* is characterized by its fuzzy set \mu_{A'}(x), generated using a fuzzificator of the singleton type (3.2). To resolve the premise of (3.15), the inference of the entrance x* with the fuzzy sets of the corresponding labels will have to be determined. With the n membership functions of the linguistic entrance labels A_{ij} (i = 1, ..., n) compactly written as \mu_{A_j}(x), the implication to infer is:

\mu_{A'}(x) \circ \mu_{A_j}(x) = \bigcap_{i=1}^{n} \mu_{A'_i}(x_i) \;\cap\; \bigcap_{i=1}^{n} \mu_{A_{ij}}(x_i) = \bigcap_{i=1}^{n} \left( \mu_{A'_i}(x_i) \cap \mu_{A_{ij}}(x_i) \right)   (3.16)

Using the singleton fuzzificator of equation (3.2), the terms in the right-hand side of the last assignment in (3.16) simplify to:

\mu_{A'_i}(x_i) \cap \mu_{A_{ij}}(x_i) = \mu_{A_{ij}}(x^*_i)   (3.17)

For the intersection operator we will use a T-norm of the product type. The resolution of the premise of (3.15) can therefore be written as a product:

\mu_{A'}(x) \circ \mu_{A_j}(x) = \prod_{i=1}^{n} \mu_{A_{ij}}(x^*_i)   (3.18)

2.6.2 Implication of rule premises on rule consequents
We consider again the general rule Rj of statement (3.15). We now want to obtain the membership function \mu_{B'_j}(y) of the considered one-dimensional output variable y. This can be symbolically written as:

\mu_{B'_j}(y) = \left( \bigcap_{i=1}^{n} \mu_{A_{ij}}(x^*_i) \right) \cap \mu_{B_j}(y)   (3.19)

Taking as T-norm for the intersection the product operator, expression (3.19) results in:

\mu_{B'_j}(y) = \left( \prod_{i=1}^{n} \mu_{A_{ij}}(x^*_i) \right) \cdot \mu_{B_j}(y)   (3.20)

Reminding the fact that singleton output functions were chosen, an easy programming implementation can be realized for the inference of every rule. For a programming example, we refer to Appendix B.

2.6.3 Aggregation of rule consequents
When a set of m rules imposes actions for a certain output variable y, the fuzzy aggregation operation is the fuzzy union. This can be represented as follows:

\mu_{B'}(y) = \bigcup_{j=1}^{m} \mu_{B'_j}(y)   (3.21)

A fuzzy union can be calculated using the S-norm maximum or algebraic sum operators. For our application we implemented the union operator as the maximum operator. This results for \mu_{B'}(y) in:

\mu_{B'}(y) = \max_{j=1,...,m} \left( \mu_{B'_j}(y) \right)   (3.22)
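
To make the chain (3.18)-(3.22) concrete, the following MATLAB fragment infers one output variable from a small group of rules with singleton consequents. This is a minimal sketch, not the thesis code of Appendix B; all values and names are illustrative, and the selection of the winning singleton is one common reading of the aggregation result described in paragraph 2.7:

    % Each row of premise_mu holds the memberships mu_Aij(xi*) of one
    % rule's premise terms; singletons holds the rules' output values.
    premise_mu = [0.8 0.6;           % rule 1
                  0.4 0.9;           % rule 2
                  0.1 1.0];          % rule 3
    singletons = [30 -30 3];         % singleton output values (mm)

    strengths = prod(premise_mu, 2); % premise resolution, product T-norm (3.18)
    % With singleton consequents, the implication (3.20) scales each
    % singleton by its rule strength; aggregation (3.22) keeps the maximum.
    [w_max, j] = max(strengths);     % winning rule after max aggregation
    y_star = singletons(j);          % crisp output action, here in mm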

2.7 Defuzzification process
For a fuzzy entrance set A', the result of the inference process described in the previous section is a fuzzy output set B'. When the output membership functions \mu_{B_j}(y) are not of the singleton type, this fuzzy set B' is characterized by a MF \mu_{B'}(y) that holds non-zero values in a certain range of output values y. The goal of the defuzzification process is to determine from the fuzzy set B' one unique output value y* of the considered output variable. A commonly used operator is the center of area operator.
However, because we chose the MFs of the output variables as singletons, no defuzzification process is required in our FIS. As can readily be understood from the explanation in the previous sections, the output of the rule aggregation process is a discrete value equal to the maximum result of the rule inferences of all rules related to the same output variable.


2.8 Algorithm of the Fuzzy Inference System
To conclude the elaboration about Fuzzy Inference Systems, we summarize the steps that have to be undertaken to calculate one output action for our robot control application. These steps are repeated in a while loop that is conditioned by a Boolean flag that becomes false if the 3D distance between the TCP and the final position is smaller than a certain value. This value can be chosen depending on the desired accuracy with which we want to reach the final position.
For all control actions (movements in x, y, z direction and rotations about the x or y axis), a general loop execution protocol can be stated as follows:

1. Measuring of the actual Tool Center Point position and End Effector orientation;
2. Determination of the 1D distance differences x1, y1, z1, x2, y2 and z2;
3. Calculation of the distance descriptions related to the attracting and repelling influences, using the one-dimensional MFs;
4. Evaluation of diverse criteria to determine:
   a. Translational avoidance direction and sense;
   b. Rotation axis if the Tool Center Point is in an intersection zone;
5. Inference of the rule base. For every group of rules related to the same output variable, the actions to undertake are:
   a. Resolution of rule premises;
   b. Implication of premises on rule consequents;
   c. Aggregation of rule consequents;
6. Defuzzification of the aggregation results;
7. Increment, decrement or maintenance of the TCP's position and the End Effector's orientation depending on the defuzzification result.

The fuzzy avoidance algorithm was implemented in MATLAB. The program code of fuzzy_comm.m given in Appendix B calculates an alternative trajectory and generates a 3D animation of the TCP and End Effector movements. A skeleton of this loop is sketched below.
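
The following MATLAB skeleton is illustrative only: every helper function named here is a hypothetical placeholder, and the real implementation is fuzzy_comm.m in Appendix B. It shows how the seven steps fit into the while loop with the stop condition discussed in paragraph 2.9.2:

    % Skeleton of the fuzzy avoidance loop; p_initial, p_final, orient_initial
    % and obstacle are assumed given, and all helper functions are hypothetical.
    r_accuracy = 10;                          % stop tolerance in mm
    p = p_initial;  orient = orient_initial;  % current TCP pose
    while norm(p - p_final) > r_accuracy      % stop condition, cf. (3.23)
        [d1, d2] = distance_differences(p, p_final, obstacle);  % steps 1-2
        mu       = distance_memberships(d1, d2);                % step 3
        crit     = avoidance_criteria(d1, d2, obstacle);        % step 4
        actions  = infer_rule_base(mu, crit);                   % step 5 (a-c)
        delta    = defuzzify(actions);                          % step 6
        p        = p + delta.translation;                       % step 7
        orient   = orient + delta.rotation;
    end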

2.9 Real-time performance of the fuzzy system
The Fuzzy Inference System designed in this chapter is an online trajectory planning method for robot motion. To be able to perform motion, the robot controller depends on the information calculated in a pc and sent over a communication medium.


A real-time data generation and subsequent transfer to the robot controller need to be guaranteed. If the robot hasn't received the next position before it starts to decelerate towards the last position available, no continuous motion can be executed.
One aspect of the real-time issue is the transfer of the information along the communication medium. In the next chapter, we will introduce Ethernet as the used communication channel. Real-time considerations about this medium will then be clearly elaborated.
The real-time aspect that we want to consider in this paragraph is the generation of the positional data, focusing on the time needed to calculate positions and the frequency at which the alternative positions are sent to the robot controller.

2.9.1 Time consumption of the Fuzzy Inference System
A considerable number of linguistic distance descriptions were used in the design of the inference system. These labels were needed to allow us to construct imaginary cubes around the obstacle and hereby divide the area around the obstacle into different security zones. To evaluate one linguistic label, a set of one-dimensional membership functions had to be evaluated. These evaluations, together with the large number of program lines that have to pass the MATLAB interpreter in the execution of every while loop, cause the trajectory calculation to be a time consuming process. After executing the MATLAB fuzzy application a number of times, an upper limit of 20 ms was obtained as the mean time needed to calculate one alternative position.
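
Such timings can be obtained with MATLAB's stopwatch functions; a minimal sketch (illustrative only, not the actual measurement procedure of the thesis):

    % Time one iteration of the fuzzy calculation loop with tic/toc.
    tic;
    % ... one execution of the loop body of paragraph 2.8 ...
    t_loop = toc;                              % elapsed time in seconds
    fprintf('loop time: %.1f ms\n', 1000 * t_loop);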

2.9.2 Size of the distance increments and decrements
Since the calculation of one Fuzzy Inference position and rotation is a time-consuming process, we had to focus on the size of the distance increments and decrements of the output actions. That this choice is actually a cost analysis can be understood as follows.
Large position increments guarantee a fast closing up to the final destination. However, they also cause a fast closing up to the obstacle. To assure that the Tool Center Point doesn't hit the obstacle, the safety zones around the obstacle need to be designed proportionally to the distance increments. And this means that the membership functions need to be amplified in width.
For large distance increments, if an obstacle is encountered in the robot's workspace, the Tool Center Point will have to start moving along an alternative path when it is still relatively far away from the obstacle. During the entire alternative path motion, the TCP will stay far away from the obstacle. A lot of extra distance will have to be covered and this is also time consuming.
In this optimization design, the two sets of parameters to adjust are the different distance increments and the width of the one-dimensional membership functions. In the end, as for the design of the entire Fuzzy Inference System, the values of these parameters have been chosen rather intuitively. For the one-dimensional membership functions of figure 3.9 that are used to form the distance descriptions to the obstacle, the parameters a, b and c have been chosen as follows:

a = -40
b = 0
c = +40

This choice results in membership functions of triangular form with a base of 80 mm. In relation to the security zones around the obstacle (see figure 3.7), these values result in a zone Very Close with an outer edge at 4 cm from the obstacle and a zone Close with an outer edge at 8 cm from the obstacle.
The width e - d of the small base of the trapezoidal membership function of the linguistic labels Not Close (see figure 3.9) is chosen as 120 mm. This results in a zone Not Close around the obstacle with an inner edge at 12 cm and an outer edge at 24 cm from the obstacle. To refresh the idea of the fuzzy avoidance strategy: avoidance actions are undertaken as soon as the Tool Center Point enters the zone Not Close of figure 3.7.
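
These one-dimensional membership functions can be evaluated with a few lines of MATLAB. The sketch below writes the triangular and trapezoidal forms out explicitly; the trapezoid corner values are an illustrative reading of figure 3.9, not taken from the thesis code:

    % Triangular MF with feet a, c and peak b (here a = -40, b = 0, c = +40 mm).
    tri  = @(d, a, b, c) max(min((d - a) ./ (b - a), (c - d) ./ (c - b)), 0);
    % Trapezoidal MF with feet a, e and plateau [b, c] (plateau width 120 mm).
    trap = @(d, a, b, c, e) max(min(min((d - a) ./ (b - a), 1), (e - d) ./ (e - c)), 0);

    d = 0:5:300;                          % distance to the obstacle face (mm)
    mu_very_close = tri(d, -40, 0, 40);   % zone Very Close, outer edge at 4 cm
    mu_not_close  = trap(d, 80, 120, 240, 280);  % zone Not Close, assumed corners
    plot(d, mu_very_close, d, mu_not_close);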
With these safety zones constructed around the obstacle, distance increments that realize satisfying results are the ones already stated in table 3.4. The Big Negative and Big Positive increments are -30 and +30 mm respectively. Smaller positional actions are required to realize the finer adjustments when closing up to the final position. We found increments and decrements of 3 and 5 mm to be satisfying values for our application.
The condition that expresses the end of the alternative path calculation can be stated as follows:

\sqrt{ (x_{TCP} - x_{final})^2 + (y_{TCP} - y_{final})^2 + (z_{TCP} - z_{final})^2 } \le r_{accuracy}   (3.23)

We chose an accuracy of 10 mm to trigger the stop condition of the while loop that calculates the subsequent alternative positions.


In figure 3.13 the result of an alternative motion simulation is displayed. The cube represents the obstacle, the circles the initial and final positions of the Tool Center Point. The sequence of crosses represents the alternative path followed by the robot's End Effector, which is also represented in a simplified way. During the design of the Fuzzy Inference System, visual representations like the ones of figure 3.13 have been of indispensable importance to debug and improve the artificial intelligence system.

Figure 3.13: Representation of End Effector and alternative path around the obstacle.

2.9.3 Communication between the Fuzzy Inference System and robot controller
In the next chapter, an Ethernet communication protocol will be introduced to establish data exchanges between a MATLAB session and the robot controller. The idea is that a server-client connection is set up between a pc and the robot controller. This connection is identified as a so-called socket connection between two physical Ethernet ports on each of the connected devices. Through this socket, data packages are sent and received using the Transmission Control Protocol.
The data packages contain a set of positional and rotational actions to be undertaken by the robot and are prepared to be sent from MATLAB over Ethernet. The packages arrive in the data input buffer of the robot's system, where they are read by a robot communication program. Details of this communication will be discussed in the next chapter. In this paragraph we want to comment on how many alternative positions are sent to the robot in one package.
If time performance were not one of our concerns, all alternative positions could first be calculated and then be sent to the robot in one package. However, considering that every calculation loop involves a cost of 20 ms, this one-package method can easily lead to calculation times of 1.5 s for typical alternative paths of 75 positions.
The robot program that reads data in from the system's buffer needs up to 240 ms between taking the first and the last byte out of the buffer. In this program, which will be commented in the next chapter, an empty buffer is absolutely necessary to perform a successful next reading operation. We therefore chose a package size of data that needs a calculation time slightly bigger than the 240 ms needed to get the data out of the robot system's buffer.
Another consideration to make is that we do not need to send every calculated position to the robot. Since we only perform increments of 30 mm in one translational action, subsequent positions often lie close together and in the same direction. Because we didn't want to force the robot to move towards positions that lie very close together, we decided to send only every fourth alternative position. This strategy has proven to give smoother motion than the one obtained by sending every position to the robot.
Taking all this into account, packages of four positions, each of which is in its turn the fourth of a series of subsequent positions, were formed and sent from MATLAB to the robot's system. The typical time needed to form one such package is 320 ms, since 16 alternative positions actually have to be calculated to form the package.
For programming details, the reader is referred to the program fuzzy_comm.m in Appendix B.
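
The decimate-and-package strategy could be sketched in MATLAB as follows. This is illustrative only: the host address, port and text delimiter are assumptions, the tcpip object belongs to the Instrument Control Toolbox, and the actual socket handling of fuzzy_comm.m is discussed in the next chapter:

    % Keep every fourth calculated position and send packets of four
    % positions over a TCP socket.
    positions = rand(75, 6) * 100;        % example path: rows of [x y z w p r]
    kept = positions(4:4:end, :);         % every fourth alternative position
    t = tcpip('192.168.0.10', 5000);      % example controller address and port
    fopen(t);
    for k = 1:4:size(kept, 1) - 3
        packet = kept(k:k+3, :)';         % one package of four positions
        fwrite(t, sprintf('%f ', packet(:)));   % send the 24 reals as text
    end
    fclose(t);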

3. References
[11] P. G. Zavlangas and S. G. Tzafestas, "Industrial Robot Navigation and Obstacle Avoidance Employing Fuzzy Logic", Journal of Intelligent and Robotic Systems 27: pp. 85-97, 2000.
[12] Dr. ir. R. Jager, Fuzzy sets, reasoning and control, Delft University of Technology, Department of Electrical Engineering, Systems and Control Group, 1996.
[13] J. R. Llata García and E. González Sarabia, Introducción a las Técnicas de Inteligencia Artificial, Grupo de Ingeniería de Control, Departamento de Tecnología Electrónica e Ingeniería de Sistemas y Automática, Universidad de Cantabria, 2003.


Chapter 4

Active Security

1. Introduction
In the previous chapters we designed the tools to develop an active security system. The general operation principles of this security system can be described as follows. During the execution of a normal robot task, the artificial vision system provides us with information about the status of the robot's workspace. When a predefined obstacle is detected, this is signaled to the robot controller, which pauses its movement. With the vision system, the characteristic positions of the obstacle are calculated and passed on to the artificial intelligence system. Starting from the current robot position, the fuzzy inference system produces the subsequent End Effector positions along a safe alternative path. The alternative positions are passed on to the robot controller and the robot continues motion towards its final position, hereby avoiding the obstacle.
In order to realize the active security application described above, we need to focus on two specific items. The first one is the setup of a high-performance communication system to exchange information between a pc that is active in a MATLAB session and the robot controller that is running a KAREL program. KAREL is the programming language developed by FANUC Robotics for advanced user applications. In this chapter, a communication over Ethernet between the robot controller and a pc will be introduced.
Once a communication system is set up, the second step is the implementation of the active security problem in KAREL. A multitasking design that consists of a communication task, a safety task and a security task will be introduced.
In this chapter, basic principles of KAREL programming are introduced, as well as multitasking and semaphore issues, for these are used in the solution to the active security problem.
Some practical aspects of the FANUC Robot Arc Mate 100iB that were used in the investigation will also be highlighted. As the most important ones, we mention the ways to define a user coordinate system and to configure Ethernet communication.
Hardware and software aspects of the used Ethernet communication, as well as KAREL and MATLAB implementation issues, will be highlighted.
Once all tools are prepared, the active security solution will be developed. Experimental results, with special attention for real-time performance, will be discussed.

2. FANUC Robot Arc Mate 100iB
The FANUC Robot Arc Mate 100iB (figure 1.1) of the research group TEISA is an industrial robot of FANUC Robotics, Inc. The robot has 6 axes and the Tool Center Point (TCP) can therefore reach every position within a range of 1800 mm and in a specific roll-pitch-yaw orientation of the End Effector that is only restricted by the axes' rotational limits.
The robot is controlled by a controller (figure 4.1). For mechanical and electrical details of the robot and its controller, as well as for safety instructions, we refer to the manual FANUC Robotics Curso de Programación TPE Nivel B [14].

Figure 4.1: Controller and Teach Pendant of Robot Arc Mate 100iB

2.1 Basic principles of motion manipulations
Robot motion manipulations are performed using a teach pendant. The teach pendant (see figure 4.1) is a touch-key instrument in hand-held format that is used by the robot operator to perform all robot and controller related manipulations: moving robot axes, teaching positions, writing TPE programs, changing system variables, setting Ethernet communication settings, running TPE and KAREL programs, consulting the actual TCP position, etc.
TPE is the programming language that is used to design normal robot programs, e.g. to perform specific welding tasks. Program positions can be taught by manually jogging the robot axes and recording the positions in position registers. The TPE system allows logic programming structures, moving to taught positions with different speeds, precisions or motion types (joint, linear, circular), monitoring of input signals, calling of programs, etc.
TPE also provides a very large number of specially designed functions that are useful in welding or manipulation tasks. Nevertheless, for advanced user applications, TPE isn't sufficient. In paragraph 3, we will introduce the principles of the KAREL high-level programming language of FANUC Robotics that allows us to develop more advanced applications.
We can state that getting to know the robot by working with the teach pendant and programming in TPE is an important first step before starting to program in KAREL.

2.2 Defining a user and tool coordinate system
The controller allows us to move the robot in different coordinate systems. If the Joint coordinate system is activated, every axis can be moved separately. More interesting for our application are the Cartesian coordinate systems. Especially three Cartesian coordinate systems interest us:

- World coordinate system
- User coordinate system
- Tool coordinate system

The World coordinate system is software defined and cannot be changed. It can differ for every robot type. If the World coordinate system is selected, the TCP moves linearly along the x, y or z axis. Rotations with respect to the x, y and z axes can be performed.
The User coordinate system is defined relative to the World coordinate system. It is defined by the position of its origin in World coordinates and a roll-pitch-yaw orientation of the x, y and z axes. Motions and rotations equivalent to the ones in the World reference system can be performed in the User coordinate system.
The Tool coordinate system is fixed to the tool that is mounted on the sixth axis. The origin of the Tool coordinate system is taught to coincide with the tool's center point, the so-called Tool Center Point (TCP). A Tool coordinate system is defined by an orientation vector and the position of the TCP relative to the End Effector's faceplate (see figure 4.3 further on).
Tool and User frames can be taught by the operator with the teach pendant. For this, we use the three-point method as described in the TPE manual [14]. The principles of the three-point method can shortly be described as follows.
To define a Tool frame we manually jog the robot into three different axis configurations and touch the same physical point in space that coincides with the new desired TCP. We record each of these three positions. What the controller actually registers are the three positions of the faceplate's center. With this information, the controller calculates the new TCP position. The orientation of the new Tool frame is equal to the one of the originally defined Tool frame.

To define a User frame we subsequently touch three characteristic points of the new Cartesian coordinate system: the origin, a point along the x axis and a point along the y axis. The z axis is parallel to the z axis of the World frame.
For our robot motion and vision coordinate frames to be equal, we taught a User frame that coincides with the reference frame of the first calibration image of the three cameras (see section 3.1 of chapter 2).
The validity of the coincidence of these two frames was tested by jogging the robot axes to a set of points in the camera covered area and consulting the User coordinates of these points with the Teach Pendant. Then the selected points were detected with the vision system in all three cameras. Using the obtained pixel correspondences, 3D positions were reconstructed. The squared sum error (4.1) can give us an idea of the errors accumulated by the vision system in finding pixel correspondences and 3D reconstruction, and by the precision of the User frame taught with the teach pendant.
\sqrt{ (x_{user} - x_{vision})^2 + (y_{user} - y_{vision})^2 + (z_{user} - z_{vision})^2 }   (4.1)

Errors not bigger than 10 mm were obtained. This error can be explained by the relatively big distances between the cameras and the workspace, causing small pixel errors to result in relatively big 3D positional errors. Because no fine positional accuracy is needed for our application, an error of 10 mm is considered sufficiently small.
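
In MATLAB this check amounts to one line per test point; a minimal sketch with example coordinates (not measured values from the thesis):

    % 3D error between Teach Pendant (User frame) and vision reconstruction.
    p_user   = [412.3 105.7 250.1];     % example User coordinates (mm)
    p_vision = [409.8 108.2 247.6];     % example reconstructed position (mm)
    err = norm(p_user - p_vision);      % equation (4.1); expected below ~10 mm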

2.3 Memory space of the controller
To understand how the robot controller works, it is of high importance that we know how the controller memory space is composed. There are three kinds of controller memory and we will now highlight their use.

2.3.1 Flash Programmable Read Only Memory (FROM)
The FROM is used for permanent storage of the system software. This system software is loaded into the controller in the format of 32 FROM images (FROM00.IMG to FROM31.IMG). To perform this operation an external storage device, such as a memory card, is used. For FANUC robots, a PCMCIA adaptor with Compact Flash memory card is typically used to store the system software. The FROM is also available for user storage as the FROM device (FR:).


2.3.2 Dynamic Random Access Memory (DRAM)
DRAM is temporary or volatile, which means that its contents do not retain their stored values when power is removed. System software, as well as KAREL programs and most KAREL variables, are loaded into and executed from DRAM.

2.3.3 Battery-backed static/random access memory (SRAM)
SRAM is permanent or non-volatile. Typically, Teach Pendant programs are stored in SRAM. KAREL programs can have variables that are stored in SRAM. A portion of the SRAM can be defined as a storage device called RAM Disk (RD:). On system software setup, two SRAM images (SRAM00.IMG and SRAM01.IMG) are loaded into the controller together with the FROM images.

2.3.4 Memory backups
As for all electronic equipment that works with memory devices, taking backups of created programs or system software is of utmost importance. A backup of the system software allocated in FROM can be executed by writing 32 FROM images of 1 MB (FROM00.IMG to FROM31.IMG) to an external device, i.e. the memory card of the type PCMCIA with flash memory. A backup of all stored teach pendant and KAREL programs, setup of Tool and User coordinate systems, variable files, settings of system variables, etc. can be taken by executing an "All of above" backup to an external device, such as a USB pen drive (UD1:). For more details on these operations, we refer to the manual FANUC Robotics Curso de Programación TPE Nivel B [14], chapter 21, Gestión de ficheros.

3. KAREL programming
KAREL is the high-level programming language developed by FANUC Robotics, Inc. KAREL incorporates the structures and conventions common to high-level languages such as Pascal, as well as features developed especially for robotics applications. Among these features we note motion control statements, condition handlers, input and output operations, procedure and function routines and multiprogramming. In this section, we will try to highlight the basics of KAREL programming. For a more thorough explanation of the KAREL language, we refer to the FANUC Robotics SYSTEM R-J3iC Controller KAREL Reference Manual [15].


3.1 Programming principles
A KAREL program is made up of declarations and executable statements stored in a source code file. This source code is written and stored in a text editor such as WordPad. A KAREL source code is saved with the extension .kl and compiled using the offline software WinOLPC+. If no syntax or program structure errors are detected, the offline compiler returns a p-code binary file that has a .pc extension. This p-code file is subsequently loaded into the RAM memory of the robot controller. Upon loading of this binary file, the KAREL controller creates in RAM a variable file .vr that contains the uninitialized program data, corresponding to all variables declared in the original .kl source file. It is important to state that program logic, i.e. all executable statements, and program data are separate in KAREL. This way, the same data can be used and referenced by more than one program. If we want to share variable data among multiple programs, the KAREL language FROM clause must be specified in the section where the variable is declared, so that the system can perform the link to the other .vr file when the program is loaded.
Once the p-code of a program is loaded, the KAREL program can be executed with the teach pendant by pushing the SHIFT + FWD key combination while holding the dead man switch pressed. The KAREL interpreter executes all executable statements in the binary code. Changes to data variables are stored in RAM and an execution history is saved.

3.2 Program structure
The structure of a KAREL program can roughly be sketched as follows:

PROGRAM prog_name
  Translator Directives
  CONST, TYPE, and/or VAR Declarations
  ROUTINE Declarations
BEGIN
  Executable Statements
END prog_name
ROUTINE Declarations

3.2.1 Variable declarations
The program sections announced by CONST, TYPE and VAR are used to declare constants, user defined data types and existent KAREL data types respectively. Besides the classical data types (integer, real, string, array, etc.) of high-level languages, KAREL also provides motion related variable types and FILE variable types. We will come back to this in the sections on motion and read/write operations.

3.2.2 Routine declarations
A KAREL routine is a program section that can be implemented to serve e.g. as a function routine or as an interrupt routine in a condition handler (see section 3.3). A routine is defined by declaring its local constant and variable data types and executable statements. Routines that are local to the program can be defined after the executable section if the routine is declared using a FROM clause with the program name of the program it is used in. A routine can also be called from routine libraries.
Routines can be invoked by the program in which they are declared and by any other routine declared in the same main program. Also recursive routine calls are possible.
It is important to state that no FILE variable types can be declared in a routine. As will be seen further on, this makes it impossible to perform read/write operations in routines. The number of regular and recursive routine calls is only restricted by the stack size allocated to the invoking program.
Variables of the invoking program can be passed to a routine by reference or by value. If an argument is passed by reference, the corresponding parameter shares the same memory location as the argument. Therefore, changing the value of the parameter changes the value of the corresponding argument. If passed by value, operations on the variable inside the routine don't affect the value of the variable in the main program. If a routine returns a calculated value, it is referred to as a function routine. If it doesn't return a value, it is called a procedure routine.
For examples of routine declarations and calls we refer to Appendix C and to chapter 5 of the KAREL Reference Manual [15].

3.2.3 Executable statements
All executable statements of the program have to be stated in the program part between the reserved words BEGIN and END. Apart from routine calls, read/write operations and motion manipulations, multitasking possibilities are offered because a main program can invoke other ones to be started up, paused or aborted. The generation of so-called child tasks from within a parent task and also semaphore use will be commented in the section on multitasking.
A strong tool offered by the KAREL controller is the use of condition handlers. They respond to external conditions more efficiently than conventional program control structures allow. Condition handlers have proven to be of high importance in our active security system, since the robot controller needs to react as quickly as possible when an obstacle is detected by the vision system. In the next section, we elaborate on the use of condition handlers.

3.3 Condition handlers
Condition handlers scan the system environment where the program is running for certain conditions to be fulfilled. One or more specified conditions are monitored in parallel with normal program execution and, if the conditions occur, corresponding actions are taken in response.

3.3.1 Basic principles
Condition handlers are defined in the executable section of the program. After being defined, they also need to be enabled. Once triggered, they are deactivated, unless the ENABLE action is included in the list of actions to undertake upon triggering of the condition handler. A condition handler is erased from the program with a PURGE operation.
A condition handler can be defined global to the program or local to a move statement. When defined as a global condition handler, the condition handler is monitored throughout all program execution once it has been enabled. A local condition handler is declared and enabled within a MOVE statement. If the monitored condition is fulfilled during the specified motion, the corresponding action is undertaken and the condition handler is purged. Once the motion is completed and the condition handler wasn't triggered, it is automatically purged.

3.3.2 Condition handler conditions
Among the possible conditions to which a condition handler listens, we point out the relational conditions, e.g. a Boolean becoming TRUE, and program event conditions, e.g. the ABORT condition that monitors if a program is called to be aborted. If a program is aborted and a condition handler is triggered by this condition, only the actions within the condition handler can be executed. A routine declared in the aborted program can no longer be executed.
Local condition handlers can monitor when the End Effector has reached a specific path node (AT NODE condition) or when the End Effector is at a specific time t (in milliseconds) before a path node (TIME BEFORE condition).


3.3.3 Condition handler actions
When a monitored condition is fulfilled, actions are undertaken. As motion related actions we can state STOP to stop all current motion (the stopped motion is put on a stopped motion stack), RESUME to resume the last stopped motion (the previously stopped motion is taken from the stopped motion stack) and CANCEL to cancel current motion (motion is stopped, but not put on a stopped motion stack).
Routines can be called as actions of condition handlers. They then function as interrupt routines. It is important to know that no arguments can be passed to these interrupt routines and that no READ statements can be executed on files that are opened in the interrupted program.
Other actions are the ABORT, PAUSE and CONTINUE actions to abort, pause or continue a paused program. Also, a semaphore can be signaled with the SIGNAL SEMAPHORE[n] builtin.

For a more thorough explanation of the use of condition handlers we refer to the KAREL Reference Manual [15], for practical examples to Appendix C.

3.4 Motion related programming features

3.4.1 Positional and motion data types
The KAREL language supports different positional data types: VECTOR, POSITION, JOINTPOS and XYZWPR.
The VECTOR data type is a structure with three fields that contain three REAL values representing a location or direction in three-dimensional Cartesian coordinates.
The POSITION data type is used to store the Tool Center Point's position, the End Effector's orientation and the present robot configuration in a structure containing a normal, an orient, an approach and a location VECTOR. One field of the POSITION structure contains a CONFIG variable that indicates the joint placements and multiple turns of the robot when it is at a particular position.
The JOINTPOS data type represents in REAL variables the position of each robot axis, expressed in degrees.
The XYZWPR data type is a data structure with seven fields. Three REAL components specify the Tool Center Point's Cartesian location (x, y, z), another three REAL components specify the End Effector's roll-pitch-yaw orientation (w, p, r), and in the seventh field the CONFIG data type specifies the current robot configuration.
An important motion related data type is the PATH data type. A PATH is a varying length list of elements called path nodes, numbered from 1 to the number of nodes in the PATH. Positional data type variables such as XYZWPR or JOINTPOS variables can be added, removed or inserted in a PATH with the builtins APPEND_NODE, DELETE_NODE and INSERT_NODE respectively. The length of the path and the size of the path nodes can be consulted as well. The PATH data type is introduced because it makes it possible to move along all nodes in a path by using one instruction: MOVE ALONG path_var. Knowing all subsequent positions in the path, the controller will be able to plan the foreseen motion and interpolate in a more efficient way, allowing higher motion speeds.

3.4.2 Position related operators
KAREL provides us with a number of builtin functions that are extremely useful in motion operations and manipulations. Among others, we can mention the following ones: CURPOS, CURJPOS, UNPOS and POS.
The CURPOS builtin returns the current Cartesian position of the Tool Center Point (TCP), expressed as an XYZWPR variable. CURJPOS performs the equivalent action for the JOINTPOS return variable type.
The UNPOS builtin decomposes an XYZWPR variable, assigning the values of the structure's fields to separate variables x, y, z, w, p, r and config_data, that are 6 REAL variables and 1 CONFIG variable respectively.
POS is used to create an XYZWPR composed of the specified location arguments (x, y, z), orientation arguments (w, p, r) and configuration argument (config_data).

3.4.3 Coordinate frames
As described in section 2.2 of this chapter, there are different references for the TCP's motion. The three reference frames are:

- WORLD: predefined
- UFRAME: determined by the user
- UTOOL: defined by the installed tool

The world frame is predefined for each robot and is used as the default frame of reference. The location of the world frame is different for each robot model.

Using the teaching method of section 2.2, the user can define his Cartesian coordinate system relative to the world frame. When recording a user coordinate system, the controller system actually assigns a value of the data type POSITION to the system variable $UFRAME. If you want to move in a specific user frame in the executable part of a KAREL program, you need to activate this frame as follows:

$MNUFRAMENUM[1] = 1
$GROUP[1].$UFRAME = $MNUFRAME[1, $MNUFRAMENUM[1]]

$GROUP[1] indicates that we are setting the user frame for motion group 1. A motion group is an activated set of robot axes. For safety reasons, only one group of axes, i.e. all 6 axes together, is defined and activated on the controller. The system variable $MNUFRAMENUM[1] indicates the number of the user coordinate system as it has been taught with the teach pendant, e.g. number 1 in the example stated here.
A UTOOL reference frame has the Tool Center Point (TCP) as its origin. When teaching a User Tool frame, the programmer defines the position of the TCP relative to the faceplate of the robot by assigning a value to the system variable $UTOOL. The system sets a location and an orientation with respect to the faceplate (see figure 4.1).
The three different reference frames WORLD, UTOOL and UFRAME are represented in figure 4.1.

Figure 4.1: Referencing positions in KAREL

3.4.4 Motion instructions
In robotic applications, motion is the movement of the TCP from an initial position to a desired destination position. The MOVE statement directs the motion of the TCP. The KAREL system provides us with a set of MOVE statements that are used for different motional manipulations. Among others we mention the MOVE ALONG statement that causes motion along the nodes defined for a PATH variable. Statements that have proven to be very useful for our application are the regular MOVE TO and the MOVE ABOUT statements. The MOVE TO statement specifies the destination of the move as a particular position or individual path node. MOVE ABOUT allows you to specify a destination that is an angular distance in degrees about a specified VECTOR in tool coordinates. The MOVE ABOUT statement is used to perform the rotational avoidance actions determined by the artificial intelligence system. MOVE TO is used to perform the translational actions.
The use of a MOVE ALONG statement requires the preliminary knowledge of all path nodes in a trajectory. As, at the beginning of the alternative motion, we don't know all alternative positions yet, we cannot use MOVE ALONG statements in the implementation of the alternative trajectory motion.

3.4.5 Motion types
The system variable $MOTYPE determines how locations are interpolated during TCP motion. There are three possible values of $MOTYPE:

- JOINT
- LINEAR
- CIRCULAR

The different motion types are depicted in figure 4.3. For our active security application, the LINEAR motion type is preferred, for this motion type assures that the TCP moves in a straight line, from the initial position to the final position, at the programmed speed.
For JOINT motion, the controller interpolates each axis in a way that guarantees the smallest time possible to reach the final position. The followed trajectory is not a simple geometric form such as a straight line. Because the motion of the robot arm cannot be predicted, this motion type is unfit for our active security application.


Figure 4.3: Location interpolation of the TCP for different motion types

3.4.6 Termination types
Robot motion can be divided into motion intervals. One motion interval is defined to be that part of motion generated by a single motion statement. Motion intervals can be terminated in several ways. Depending on the value of the system variable $TERMTYPE, the program environment uses different criteria to decide when a motion interval has terminated and program execution can be continued by the interpreter. The different values of $TERMTYPE are:

- FINE
- COARSE
- NOSETTLE
- NODECEL
- VARDECEL

For FINE and COARSE the robot moves to the path node with a precision determined by a certain system variable and stops in this path node before beginning a next motion. NOSETTLE causes the motion environment to signal interval completion as soon as the TCP has decelerated. The TCP won't settle precisely in the destination position.
None of these three termination types will serve us well, because they will all cause unsmooth and even non-continuous motion when a loop of MOVE TO statements is executed.
NODECEL permits continuous motion near the taught positions. As soon as the axes begin to decelerate, interval termination is signaled to the interpreter and acceleration towards the next position is initiated. VARDECEL allows the programmer to set a variable deceleration to be performed.

Finally, we would like to mention the abnormal interval termination statements CANCEL and STOP. These builtins can be used in global or local condition handlers.
When a CANCEL statement is invoked in a condition handler, the motion interval is considered completed immediately. The motion environment signals the interpreter and the MOVE statement is terminated. The robot's axes are decelerated to a halt.
A STOP causes the motion of the axes to stop. However, it does not cause interval termination. The robot stops moving, but the unexecuted motion is put on a stopped motion stack and the motion can be resumed. The interval will not terminate until the motion has been resumed or terminated normally. If no NOWAIT clause (see further on) is used, program execution is thus halted, because the interpreter is waiting for the motion interval to complete.
In our active security application, upon detection of an obstacle, normal trajectory execution needs to be aborted immediately. We therefore use a CANCEL statement in a global condition handler. We refer to Appendix C for programming details.

3.4.7 Motion clauses
Optional clauses included in a MOVE statement can specify motion characteristics for a single move or conditions to be monitored during a move (i.e. a local condition handler). Clauses that have proven to be very useful in our application are the WITH clause and the NOWAIT clause.
The WITH clause allows you to specify a temporary value for a system variable, e.g. a motion or termination type. The motion environment uses this value while executing this MOVE only. At termination of the MOVE, the temporary value ceases to exist.
The NOWAIT clause allows the KAREL interpreter to continue program execution beyond the current MOVE statement while the motion environment carries out the motion.
If used in combination with the termination type NODECEL, the NOWAIT clause has proven to realize a satisfying continuous motion if consecutive MOVE TO statements are executed on a series of XYZWPR variables that are not gathered in a PATH variable. This guaranteed us that continuous motion along the alternative positions could be started up before knowing all of these alternative positions. Implementing MOVE TO statements with NOWAIT clauses in a loop results in motion in which no deceleration or discontinuity is noticed.


Figure 4.2 depicts the effect of $TERMTYPE and the NOWAIT clause on motion execution.

Figure 4.2: Effect of $TERMTYPE on path motion

As can be seen from figure 4.2, we have to bear in mind that combining the NODECEL termination type with NOWAIT clauses causes the TCP not to pass through the exact destination location. In practice, this effect has proven to be very small for low motion speeds.

To terminate the paragraph on motion control, we state a KAREL program line of a MOVE ABOUT statement with a $TERMTYPE clause:

WITH $TERMTYPE = NOSETTLE MOVE ABOUT x_axis BY alpha

The variables x_axis and alpha have to be previously declared and initialized as VECTOR and REAL data types respectively.
We would like to point out that we only highlighted some of the numerous aspects of KAREL motion. For more information, e.g. on acceleration and deceleration mechanisms, system variables related to speed, acceleration and deceleration, controller interpolation methods, etc., we refer to the KAREL Reference Manual [15].


3.5 Read and write operations
A KAREL program can perform serial I/O operations on data files residing in the KAREL file system and on serial communication ports associated with connectors on the KAREL controller.
In paragraph 4 on real-time communication we will elaborate how information can be sent from inside a MATLAB session to a running KAREL program over Ethernet. This is possible because the KAREL system provides us with the necessary tools to activate Ethernet sockets and perform file input and output operations through these sockets. Sockets can be seen as Ethernet activated communication ports that enable data exchange between an external device and the robot controller. Details on this communication will be given in paragraph 4.
Fundamental mechanisms in the established socket communication are file input and output operations. Data files or communication ports can be opened, read from or written to, and closed. We will now highlight the elements that are involved in this process: file variables, file operations and the input/output buffer.

3.5.1 File variables
A FILE variable is used to indicate the file or communication port on which you want to perform serial I/O operations. A FILE is declared in the VAR section of the program and initiated by associating it with a specific file or communication port in the OPEN FILE statement. The CLOSE FILE statement disconnects the FILE and makes it possible to use the variable to read or write from another file or communication port.

3.5.2 File operations
Besides the OPEN FILE and CLOSE FILE operations, two basic file operations are available: READ and WRITE. The data items that have to be read or written are stated as parameters of the specific function.
More builtins are provided by the KAREL system. E.g., file attributes can be set using the SET_FILE_ATTR builtin.
For details about file operation builtins, we refer to the KAREL Reference Manual [15].


3.5.3 Input/output buffer
An area in RAM, called a buffer, is used to hold up to 256 bytes of data that have not yet been transmitted during a READ or WRITE operation. Especially for the online data exchange between a pc and the controller, this buffer will be of importance. The moment data is sent over a socket and received by the controller, this data is temporarily stored in the buffer before a READ operation is performed on the FILE associated with the socket. To check if data has already been received from a serial port, the KAREL builtin BYTES_AHEAD is of indispensable importance. BYTES_AHEAD checks the state of the buffer associated with a specific FILE and returns the number of buffered bytes that are ready to be read in.

3.5.4 Example: reading in an array of XYZWPR variables
For our application, reading in positional variables is of high importance. Alternative positions are calculated in MATLAB and subsequently need to be transmitted to the controller, read in and stored in KAREL positional variables. With the following example, we want to illustrate how an ARRAY OF XYZWPR pos_array is read in from a FILE called data_file. Hereby, it is important to know that arrays need to be read in element by element in a for loop and that for XYZWPR variables, every field needs to be read in as a separate REAL variable. The extension .dt is the typical KAREL format specifier for data files that are stored in the KAREL system. The file data.dt contains num_nodes rows of 6 REAL variables that are separated by a backspace delimiter.

OPEN FILE data_file ('RW', 'data.dt')
FOR i = 1 TO num_nodes DO
  READ data_file (pos_array[i].x)
  READ data_file (pos_array[i].y)
  READ data_file (pos_array[i].z)
  READ data_file (pos_array[i].w)
  READ data_file (pos_array[i].p)
  READ data_file (pos_array[i].r)
ENDFOR
CLOSE FILE data_file
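
On the MATLAB side, a file of this format could be produced as follows (an illustrative sketch only; in the actual application the positions are written to the Ethernet socket rather than to a file, as discussed in paragraph 4):

    % Write num_nodes rows of 6 reals [x y z w p r], whitespace separated,
    % matching the KAREL reading loop above.
    positions = rand(5, 6) * 100;          % example positional data (mm, deg)
    fid = fopen('data.dt', 'w');
    for i = 1:size(positions, 1)
        fprintf(fid, '%f ', positions(i, :));
        fprintf(fid, '\n');
    end
    fclose(fid);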

3.6 Multitasking
Multitasking allows more than one program to run on the controller on a time-sharing basis, so that multiple programs appear to run simultaneously. For our active security application, we will design a set of processes of which each one has its specific functionality in the overall system. The multitasking mechanisms of the KAREL system will be used to introduce a certain interaction between these processes.
A term commonly used in multitasking is a task. A task is a user program that is running or paused. A task is executed by an interpreter. Interpreters are system components to which a task is assigned when it is started. The system interpreters are capable of concurrently executing tasks.

3.6.1 Task scheduling
It is important to be aware that although multiple tasks seem to operate at the same time, they are sharing use of the same processor. At any instant only one task is really being executed.
Once execution of a statement is started, it must complete before statements from another task can be executed. The only exceptions to this rule are formed by the interruptible statements: MOVE, READ, DELAY, WAIT and WAIT FOR. While waiting for a motion or read operation to complete, a task is in a hold condition. The DELAY, WAIT and WAIT FOR statements are also hold conditions.
A task that is currently running will execute statements until one of the following events occurs:

- A hold condition occurs;
- A higher priority program becomes ready to run;
- The task time slice expires;
- The program aborts or pauses.

A task is ready to run when it is in the running state and has no hold conditions. Only one task is actually executed at a time. Task priority and timeslicing determine which task will be executed when more than one task is ready to run.

3.6.1.1 Priority scheduling
If two or more tasks with different priorities are ready to run, the task with the highest priority will run first. In KAREL, a default priority of 50 is assigned to each user task; 8 is the highest priority, 143 the lowest.
A task's priority can be declared upon translation by using the %PRIORITY translator directive after the first line in the .kl code. With the SET_TSK_ATTR builtin the current priority of a task can be set and thus changed wherever that is needed in the program execution.

3.6.1.2 Timeslicing
If two or more tasks of the same priority are ready to run, they will share the system resources by timeslicing, which is a mechanism of alternating use of the system. A timeslice permits other tasks of the same priority to execute, but not lower priority tasks. For more details on priority scheduling and timeslicing we refer to the KAREL Reference Manual [15].

3.6.2 Parent and child tasks
Execution of a task can be invoked by another, already running task. The invoking program is called the parent task and creates a new task, the so-called child task. In KAREL programs new tasks are created using the RUN_TASK builtin. Once created, a child task runs independently of the parent task.
A child task can use variables declared in the parent task if the variable is declared in the child task with the same variable name and using the FROM clause. A BOOLEAN named detection that is declared in a parent task parent.kl is declared in a child task of parent.kl as follows:

VAR
  detection FROM parent : BOOLEAN

Besides the RUN_TASK builtin, there are a number of builtins used to control and monitor other tasks in KAREL: PAUSE_TASK (to pause a task), CONT_TASK (to resume a paused task), ABORT_TASK (to abort a task) and GET_TSK_INFO (to obtain the current task state).
As task related condition handler actions we mention the PAUSE, CONTINUE and ABORT builtins to respectively pause, resume and abort the task in which the condition handler is defined and enabled.

3.6.3 Semaphores
As previously seen, the KAREL controller allows us to create program structures that are task oriented. A child task can be invoked from a parent task at a specific moment of program execution. This allows us to attribute the execution of subprocesses to tasks that are programmed in separate .kl files. This way, a clear programming structure of the parent task can be maintained and the functionality of the application can be amplified.


Previously we have seen, as possibilities to pass on program execution control from one task to another, the priority assignment translator directive and the builtin functions for creating, pausing and resuming child tasks.
In nearly all program applications (and not only in the field of robot programming) that incorporate the passing on of execution control between processes, semaphores have proven to be of indispensable importance. Ever since Dijkstra introduced the use of semaphores in 1968, semaphores have been widely recognized and implemented in software systems.
The KAREL controller supports the use of counting semaphores and disposes of a set of builtin functions to perform operations on semaphores. For the design of our active security system, we gratefully used semaphores to pass on program control between tasks or routines at specific moments, e.g. when a condition handler is triggered.
The basic principles of semaphores and the possible operations on semaphores will now be highlighted, as well as the KAREL builtin instructions that correspond to the semaphore operations.

3.6.3.1 Basic principles of semaphores
Semaphores are special integer variables in the universe where processes are declared and running. Before parallel running processes are started up, semaphores are initialized at a specific value, often being 0 or 1, depending on their application. After it has been initialized, the value or count of a semaphore can be changed by using two so-called primitives: the wait and the signal operation.

3.6.3.1.1 Wait operation
When a process, i.e. a KAREL program, task or routine, executes a wait operation on a semaphore, the semaphore count is decremented by 1. The resulting semaphore count is then evaluated on being non-negative (>= 0) or negative (< 0). If the count is non-negative, the process that executed the wait operation can continue its normal execution. If the count is negative, the process is paused and put in a buffer that is associated with the same semaphore.

3.6.3.1.2 Signal operation
A process that executes a signal operation on a semaphore causes the count of this semaphore to be incremented by 1. If the resulting count is positive (> 0), then the signal operation doesn't have any effect. If the resulting semaphore count is non-positive (<= 0), one of the processes that was previously put on the buffer of the semaphore is released and becomes ready to run again. Whether the task that executed the signal operation continues executing depends on its priority relation to the released task.

Tasks are released from the buffer on a first-in/first-out basis. Every semaphore has its own buffer.

It is important to state that a semaphore needs to be declared as a global variable in the program tasks that use it. This way, the semaphore's count is visible and accessible for all tasks using it. In KAREL, a semaphore declared as an INTEGER in a parent task PARENT.KL needs to be declared in the parent's child tasks CHILD_A.KL and CHILD_B.KL with a FROM PARENT clause to make sure the child tasks can access the semaphore. See Appendix C for a programming example.
Semaphores are software mechanisms. For the user, only the initialization, wait and signal operations are visible and executable. However, in the kernel of the system the primitive operations are backed up by a computer algorithm that implements the semaphore mechanism described above.
In robot programming applications, semaphores are used to assure the mutual exclusion of program tasks. This means that two or more tasks are programmed in such a way that only one of the set can be active at the same moment. Mutual exclusion semaphores are initialized at 1 and the first line of each program is a wait operation on the semaphore. The first task that is executed decrements the semaphore count to 0 and can thus continue its execution. When another task is run, it decrements the count to -1 and the task is put on the semaphore's buffer. When a signal operation is executed on the semaphore, e.g. by the first program upon its termination, or within a condition handler, the buffered task is released and run.
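In KAREL terms, and assuming that the INTEGER mutex identifies the semaphore, that timed_out is a BOOLEAN and that -1 is accepted as an infinite time limit (the exact parameter lists are given in the KAREL Reference Manual [15]), the pattern could look like this:

  -- initialization, before the competing tasks are started:
  CLEAR_SEMA(mutex)                -- reset the semaphore count to 0
  POST_SEMA(mutex)                 -- raise the count to 1, the mutual exclusion value

  -- first line of each competing task (wait operation):
  PEND_SEMA(mutex, -1, timed_out)  -- blocks while the other task is in its critical section
  -- ... critical section: statements only one task may execute at a time ...
  POST_SEMA(mutex)                 -- signal operation: releases a buffered task, if any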
In table 4.1 the semaphore related KAREL builtins and a description of their use are stated. For the input parameters of each builtin, we refer to the KAREL Reference Manual [15]. It is important to mention that only SIGNAL SEMAPHORE can be used as an action within a condition handler.


KAREL builtin        Description
SIGNAL SEMAPHORE     Signals the specified semaphore
CLEAR_SEMA           Resets the semaphore count to 0
POST_SEMA            Adds 1 to the semaphore count
PEND_SEMA            Subtracts 1 from the semaphore count
SEMA_COUNT           Returns the semaphore's current value

Table 4.1: Semaphore related KAREL builtins

3.6.4 Motion control
An important restriction in multitasking lies in the control of the various motion groups. Only one task can have control or use of a group of axes. A task requires the use of the motion group if at least one MOVE statement is executed in the program.
The first task that is run and that contains MOVE statements gets control of the motion group. If a second motion executing task is attempted to be run, this results in an error, for the task attempts to get control of a motion group that has already been assigned to the first task.
This can be avoided by imposing that the task doesn't request control of the motion group upon activation. This is realized by using the translator directive %NOLOCKGROUP. When the task needs motion control at a certain point during its execution, the LOCK_GROUP builtin can be used to assign the motion group to the task. Two tasks competing for control of a motion group can be provided with a semaphore based mutual exclusion mechanism so that no task can execute a LOCK_GROUP statement when the other task has locked the group and hasn't released it yet with an UNLOCK_GROUP statement. For an example of this strategy, see the KAREL Reference Manual [15].
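A hedged sketch of this strategy (gp_mask and status are placeholder INTEGERs; the exact parameter lists of LOCK_GROUP and UNLOCK_GROUP are given in the KAREL Reference Manual [15]):

  %NOLOCKGROUP                       -- do not request the motion group at task activation
  -- ...
  PEND_SEMA(mutex, -1, timed_out)    -- wait until the group is free
  LOCK_GROUP(gp_mask, status)        -- assign the motion group to this task
  MOVE TO destination
  UNLOCK_GROUP(gp_mask, status)      -- release the motion group
  POST_SEMA(mutex)                   -- let the competing task lock the group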
Another solution is to execute all motion in the same task, but to implement the alternative motion in an interrupt routine of the task. For our active security application, we chose this second solution.

4. Realtime communication
As seen in the introduction chapter of this thesis, in a realtime communication system information needs to be transferred at a rate that is sufficiently high for the given environment. As important as the speed of transfer is the guarantee that data transfer will complete within a determined time. If transmission time is guaranteed, we call the communication system deterministic.
In industrial settings, the traditional communication systems are fieldbus networks. A commonly used fieldbus is the Profibus (Process Field Bus) that allows transmission speeds up to 500 Kbit/sec. For Profibus, the realtime condition is met because bus admission is controlled by a token passing mechanism. A device can only start broadcasting when it is in possession of a token. Sending devices (masters) pass the token on to each other and it is guaranteed that a master will come into possession of the token within a certain time. Deterministic communication is guaranteed for the Profibus types DP (Decentral Periphery) and PA (Process Automation). For more details on fieldbuses, we refer to literature [21].
Nowadays, the demand for Ethernet as a realtime control network is increasing. Among the reasons we can mention the lower product costs in comparison to fieldbuses, the possibility to monitor data transmissions and the easy connectivity of office pcs in order to perform control operations on the industrial equipment. However, we cannot accept Ethernet as a realtime communication medium without making some important considerations.
Ethernet, as defined in IEEE 802.3, is unsuitable for strict realtime industrial applications because its communication is nondeterministic. This is due to the definition of the network's media access control protocol, which is based on Carrier Sense Multiple Access with Collision Detection (CSMA/CD). For a thorough explanation of CSMA/CD mechanisms we refer to literature on Ethernet. What it boils down to is that nodes connected to the same network can start transmitting data packages at the same moment. When this happens, a collision of the data packages occurs and information is destroyed. The involved network nodes stop transmitting and wait a random time that is determined by an exponential backoff algorithm. When its waiting time has passed, each one of the involved nodes tries to transmit again. A new collision is highly unlikely, but can never be fully excluded. Network nodes that can cause collisions of data packages when they transmit are said to be in the same collision domain.
In modern motion control applications such as robotics, cycle times are typically hundreds of microseconds ([16]). Traditional Ethernet and fieldbus systems are not capable of meeting such tight cycle time requirements.
Despite all this, highspeed Ethernet switches, full duplex Fast Ethernet (100 Mbps) and Gigabit Ethernet (1 Gbps) have made it possible to use TCP/IP Ethernet connections to transmit control signals and data.


4.1 Full duplex Ethernet
Modern Ethernet is designed as full duplex. This means that a network device can simultaneously send and receive data. The Ethernet card installed in the device can begin sending data while the process of receiving data is still going on, or the other way around: it can begin receiving while sending of data is still in progress.

4.2 Fast Ethernet switches
A switch is a specialized junction box that has multiple builtin Ethernet ports and its own processor. Only one device is connected to each port of the Ethernet switch. This causes every connected device to be isolated onto its own collision domain, whereby the chance for collisions to occur is totally eliminated. Since it is the collisions that cause the nondeterministic character of Ethernet, the use of an Ethernet switch can render an Ethernet Local Area Network (LAN) deterministic.
Typically for industrial applications, key control equipment is isolated into collision domains with a switch. For our application, five devices are connected to the switch: the three Axis network cameras, the robot controller and the pc from which control actions are monitored. This setup is presented in figure 4.5.

[Figure 4.5: TEISA Local Area Network with Ethernet switch. The three Axis 205 cameras GICcam1 (193.144.187.122), GICcam2 (193.144.187.123) and GICcam3 (193.144.187.17), the FANUC Arc Mate 100iB controller (193.144.187.156) and the monitoring pc (193.144.187.250) are each connected to a port of the switch.]

Unlike a classical repeating hub that passes on sent data frames from the sending device to all devices connected in the LAN, a switch can be trained to send data only to the destination device. Data frames received by the switch are initially forwarded to each of the switch's ports, but as the switch learns which MAC address is associated with which port, it forwards data frames only to the port associated with the frame's destination address.

This allows several systems to communicate simultaneously, full duplex and at full speed, for a switch offers the full bandwidth to every connected device, where a repeating hub divides the bandwidth between all connected devices.
The highspeed Ethernet switch installed in the TEISA lab establishes a Fast Ethernet (100 Mbps) connection between the connected devices of figure 4.5.

Using the OSI Reference Model as the international standard for protocol layers in communication systems, we can situate Ethernet in the Data Link layer of OSI, for Ethernet provides functional and procedural means to transfer data between connected network entities. On top of the Ethernet protocol, other protocols are used to exchange data packages between connected devices.
A commonly used connection oriented protocol is TCP, which stands for Transmission Control Protocol. TCP works in the Transport layer of the OSI model. Using TCP, data can be sent in a stream with the guarantee that the data will arrive in the order it was sent. If communication errors occur, in the data itself as well as in the data order, this can be corrected because of the way the protocol is implemented. TCP packages need to be formed according to strict rules on the format of package headers. Headers are bit packages with specific functions. Among others we can mention the communication port number of sender and receiver, the header length, the acknowledge receipt flag and the checksum. For more details, we refer to section 4.3.3 of this chapter and to literature on TCP.
TCP connections are initiated following a specific connection procedure. A client needs to send a request for connection to a server, which accepts the connection with the client by returning an acknowledgement package. Subsequently, the client answers the server with an acknowledgement for connection acceptance. From then on, client and server are connected and synchronized and can start exchanging data packages.
Due to the extensive and time consuming activities in the exchange of data, such as checking if data is received in the correct order, sending of acknowledgements, retries in transmission if errors occurred, etc., the TCP protocol isn't very suitable for hard realtime communication where data needs to arrive with a 100% guarantee and within a determined time interval. However, combined with Fast Ethernet and switches, TCP has proven to be suitable for industrial realtime communications. For robotics applications, FANUC Robotics provides TCP communication options, such as Socket Messaging and the File Transfer Protocol. We will now introduce the principles of TCP data exchanges through socket connections between two network devices. After having elaborated the principles of socket messaging, we will explain how the socket messaging option is used in KAREL programs to set up a communication with a MATLAB session.

4.3 Socket messaging
What is a socket?
"A socket is a software endpoint that establishes bidirectional communication between a server program and one or more client programs. The socket associates the server program with a specific hardware port on the machine where it runs so any client program anywhere in the network with a socket associated with that same port can communicate with the server program" [17].
As this definition says, a socket is the identification of a machine, identified by its IP address, with a corresponding hardware port. To set up the connection between a server and a client through a socket, the IP address of the server needs to be specified together with the physical communication port, which must be the same on the client and server device.
Once the connection between two entities is established, both of them can communicate according to a certain protocol, such as TCP. When a socket is created, a 32 bit number is generated to identify the socket. A receiving socket is typically identified by an even number and a sending socket by an odd number.
The communication ports between 0 and 1023 are reserved ports. They are used for well known server applications (e.g. 80 for http, 20 and 21 for ftp, 23 for telnet). The port range between 1024 and 49151 can be used for user defined server applications. Typically, a port between 1024 and 5000 is used.
The connection and communication sequence to set up and use a socket connection is illustrated in the example of figure 4.6. A server is put in the listen state. When the client requests a connection, the server accepts the connection. Once connected, both client and server can send and receive data. Either the server or the client can close the connection.


[Figure 4.6: Socket connection and communication sequence. The FANUC server (193.144.187.156, port 2007) starts waiting for communication (1. listen); the pc client initiates communication on port 2007 (2a./2b. connect); both sides then exchange data through the socket communication service, TCP on top of Ethernet (3a. send / 3b. receive, 4a. send / 4b. receive); either side ends the session (5. close).]

Software to set up a socket communication can be developed in C, C++, Java, Perl or other languages. For details on these software applications we refer to literature. In the following sections, we will see how we can use preprogrammed socket functions to set up a connection, to send and receive data and to disconnect the socket.

4.3.1 KAREL socket messaging option
The KAREL system fully supports the above described communication mechanisms. In order to use the User Socket Mesg option R636 of the FANUC Handling Tool System Software in KAREL programs, client or server tags first need to be set up in the system. A tag is a label to which the parameters needed to set up a socket communication are assigned. If it concerns a client tag, the remote server's IP address and the physical communication port need to be configured. For a server tag, only the communication port needs to be set.
Since the KAREL system provides methods to instantly detect when the server accepts a client's connection request, we designed the robot controller to be the server. This means we have to set up a server tag 'S1'. Using the teach pendant, this is done following the key sequence MENUS > SETUP > HOST COMM > F4 [SHOW] > SERVERS. After having set up the server tag 'S1', the system variable $HOSTS_CFG[1].$SERVER_PORT needs to be given the value of the chosen communication port, e.g. 2007 for a user defined server application. The server tag is now ready for use in a KAREL program. For more info on this procedure we refer to the FANUC Robotics SYSTEM R-J3iC Controller Internet Options Setup and Operations Manual [18].


Three socket message related builtins are provided by the KAREL system: MSG_DISCO, MSG_CONNECT and MSG_PING. They are used to respectively disconnect, connect and ping a client or server socket. For more details we refer to the KAREL Reference Manual [15]. The use of a socket in a KAREL program is illustrated by the following .kl source code:

PROGRAM socket
VAR
  ComFile : FILE
  Status : INTEGER
BEGIN
  MSG_DISCO('S1:', Status)         -- closing communication port before start
  MSG_CONNECT('S1:', Status)       -- open port for communication
  WAIT FOR Status = 0              -- wait until client device connects
  OPEN FILE ComFile ('RW', 'S1:')  -- open FILE variable associated to socket
  -- [read and write operations on ComFile]
  CLOSE FILE ComFile               -- close FILE variable
  MSG_DISCO('S1:', Status)         -- disconnect server socket
END socket

If waiting for data to arrive through the communication port, a useful KAREL builtin is BYTES_AHEAD. With this function, we can detect if new data has been received over an opened communication file and is ready to be read in by the system. This method will be used to watch if the obstacle detection signal has been sent from the pc.
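A minimal polling sketch of this strategy, assuming ComFile is the FILE variable opened on the server tag and BufBytes and Status are INTEGER variables (the one character READ format is an assumption of ours):

  BufBytes = 0
  WHILE BufBytes = 0 DO
    BYTES_AHEAD(ComFile, BufBytes, Status)  -- number of bytes waiting in the input buffer
    DELAY 20                                -- yield processing time to other tasks
  ENDWHILE
  READ ComFile(detect_str::1)               -- the detection signal is now ready to be read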
For the programming applications of KAREL sockets, we refer to the program comm.kl in Appendix C.

4.3.2 Socket messaging and MATLAB
Since all positional calculations and vision related operations are performed in MATLAB, it is desirable that we are able to exchange data between MATLAB and the robot controller in an elegant and time efficient way. The KAREL controller offers us the option of socket messaging on the robot's side. On the support website of MATLAB we found a toolbox that is designed to set up a socket communication between MATLAB sessions of two network connected pcs. The toolbox MSocket is designed by Steven Michael of the MIT Lincoln Laboratory. The package contains a series of functions written in C++ and compiled to MATLAB executable .dll function files. The toolbox offers the following functions:

- mslisten: makes the server listen to connection requests by a client on a specific port;
- msconnect: a client requests connection to the server, specifying the server's IP address and communication port;
- msaccept: the server accepts the connection to the client;
- mssend: sends a single variable over the socket;
- msclose: closes the socket connection.
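As a minimal client side sketch of how these functions combine (the IP address and port are those of the robot server configured in section 4.3.1; the same calls appear in the vision program of Appendix A):

  sock = msconnect('193.144.187.156', 2007);  % request a connection to the FANUC server
  if sock > 0
      success = mssend(sock, '1');            % send the obstacle detection signal as a string
  end
  msclose(sock);                              % release the socket when done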

The compatibility of KAREL and MATLAB functions for creating socket connections and sending variables had to be tested. It was found that string variables can be successfully sent from MATLAB over the socket to a receiving KAREL program. In the next paragraph we will comment in some more detail on the format of the sent data packages and on some specific communication issues.
Thanks to the socket toolbox of Steven Michael it was made possible to directly send data from a MATLAB session to a KAREL program in an elegant and time efficient way.
For our active security application, the actual position of the Tool Center Point needs to be sent from the KAREL system to MATLAB, because the fuzzy system needs this position as a start point for the calculation of alternative path positions. The socket package for MATLAB however doesn't support a communication in this direction. We therefore used a simple client socket application written in Perl, which is based on the good Perl program examples that can be found in the Perl Black Book [19], written by Steven Holzner. The code of the program Client.pl is added to Appendix C. The program establishes a socket connection with a server that is waiting for a client to connect at a specific communication port. Upon connection and creation of the socket, the client receives all information that is sent by the server, as long as the socket is not disconnected. Perl scripts can be called in MATLAB and we successfully tested this to receive the actual Tool Center Point's position from the KAREL system.

4.3.3 Format of data packages
TCP data packages consist of a header section and a data section. The package header contains all information that is needed to successfully perform all steps in the transmission protocol. Figure 4.7 illustrates the structure of a TCP package. Without wanting to give a thorough explanation of the function of each field in the TCP package, we can mention the source and destination port of the data package, the sequence number, the acknowledgment number and the checksum. A sequence number is assigned to each package to make sure no packages are lost and that the data is delivered in the right order. The acknowledgment number is used in the procedure that confirms whether packages have been successfully received. The checksum allows the Transmission Control Protocol to check that no bytes have been corrupted. For the functions of the other fields, we refer to literature on the Transmission Control Protocol.

Bytes    Field Description
0-4      Source Port, Destination Port
5-8      Sequence Number
9-12     Acknowledgment Number
13-16    Data Offset, Reserved, Flags, Window
17-20    Checksum, Urgent Pointer
21-24    Options
24+      Data

Figure 4.7: Header fields of a TCP data package

When a data package is sent from MATLAB to a receiving KAREL program, the incoming bytes are stored in the KAREL system buffer, which can contain up to 256 bytes. In the KAREL program comm.kl of Appendix C, it is illustrated how incoming bytes can be stored one by one in an ARRAY OF STRING variable. When testing the set up communication between MATLAB and KAREL, it was found that some of the first 24 elements of the ARRAY variable contained meaningless information, originating from the information in the package headers. After reading in a new data package in KAREL, the first 24 characters of the read in ARRAY OF STRING need to be discarded. For us, useful information starts at position 25 in the ARRAY. For the application written in Perl, no characters corresponding to the package headers were found after receiving the data in MATLAB. Depending on the different software languages used for the socket applications and the compatibility between these applications, the package header bytes may or may not need to be discarded manually.
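A hedged KAREL sketch of this manual discard, assuming the incoming characters are read one by one as one character strings into a dummy variable (see comm.kl in Appendix C for the actual read strategy):

  FOR i = 1 TO 24 DO
    READ ComFile(dummy::1)   -- consume one meaningless header character
  ENDFOR
  -- useful information starts here, at position 25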


4.3.4 KAREL communication program comm.kl
In section 5 of this chapter a solution to the active security problem will be proposed. In this solution, a robot application written in KAREL will be introduced. The key program of the application is a task in which all communicational interactions between the KAREL system and our pc are situated. The three main communication tasks of the KAREL program comm.kl (see Appendix C) can be described as follows:
- Receiving an obstacle detection signal through a socket connection;
- Sending back the actual Tool Center Point's position through a socket;
- Receiving alternative positional and rotational data, sent in packages over the socket.

As stated earlier, we have chosen the FANUC robot to act as the server in all communication processes. The reason is that the KAREL system allows us to set a WAIT FOR action after a MSG_CONNECT is called to connect a server tag. Only when a client is effectively connected to the specified port of the server in the listen state is the WAIT FOR statement triggered and program execution continued.
For the receiving communication actions, one KAREL server tag 'S4' is used. At the beginning of program execution, a MSG_CONNECT action is performed to put the server in the listen state. As soon as a MATLAB socket is created and connected to the FANUC robot, the program comm.kl starts checking if new data has entered the system's input buffer.
For the sending communication action, we designed the Perl script Client.pl. Because a new socket is created for this application, we had to set up another KAREL server tag 'S3' that listens for connections to another physical port of the FANUC device. Server tag 'S4' listens for connections to port 2007, while 'S3' listens for connections to port 2008.
For more details about the three communication steps, we refer to Appendix C. To understand the program code, we would like to highlight the strategy used to read in the alternative positional data.
As explained in paragraph 2.9.3 of chapter 3, we send data packages from MATLAB that contain four alternative positions and rotational configurations. In order to be read in successfully by the KAREL system, these numeric data have to be converted to the string format. This operation can be symbolically written as in expression (4.2). The conversion int2string consists in the transformation of the positions x, y and z (expressed in integer millimeters) and the two angles (with values in degrees of either -90, 0 or +90) to a string in which s indicates the sign of each numeric value. If a numeric value is smaller than 1000, extra '0' characters are padded to obtain a field of five characters for each numeric position value and three characters for every angle:

  [x, y, z, angle1, angle2]  --int2string-->  sxxxxsyyyyszzzzsaasbb    (4.2)
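A minimal MATLAB sketch of this conversion (the use of sprintf and the variable names are assumptions of ours; the actual strings are built in fuzzy_comm.m of Appendix B):

  % signed, zero padded fields: five characters per position, three per angle
  pkg = sprintf('%+05d%+05d%+05d%+03d%+03d', x, y, z, angle1, angle2);

For example, x = 418, y = -27, z = 1035 with angles 90 and 0 yields the string '+0418-0027+1035+90+00'.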

In fuzzy_comm.m of Appendix B these strings are created after the calculation of every fourth alternative position. Four of these positional strings are concatenated and sent to the robot controller. Because every alternative position and angular configuration is created in a uniform way, the KAREL program comm.kl is capable of extracting all information and converting the STRING variables to the INTEGER data type.
When the last alternative path node is calculated in MATLAB, a special string value 'S' is padded to the last data package to indicate the termination of the alternative path. The KAREL system is programmed so that this 'S' is detected and all read operations are stopped. For program implementation details, we refer to comm.kl in Appendix C.
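A hedged sketch of this stop condition, assuming the characters of the last package are available in a STRING pkg (SUB_STR and STR_LEN are KAREL string builtins; the variable names are ours):

  IF SUB_STR(pkg, STR_LEN(pkg), 1) = 'S' THEN  -- inspect the last character of the package
    halt = 1                                   -- signal that all read operations can stop
  ENDIF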

4.4 Realtime performance of the Ethernet communication
When testing the communication system, very satisfying results were obtained. The times needed to execute socket communication actions were measured using the available timing features of MATLAB (tic & toc, clock, etime) and of the KAREL system (the CONNECT TIMER and DISCONNECT TIMER statements).
The upper time limits of some actions are stated in tables 4.2 and 4.3. Especially the execution time of the Perl script Client.pl gives us a good idea about the time needed to perform communicational actions. Client.pl establishes a connection to the server tag 'S3' listening at port 2008, receives the actual Tool Center Point position and is ended upon closing of the server tag 'S3' in the KAREL program comm.kl. It therefore incorporates connection time, sending time and disconnection time. In table 4.3, tcp_x is an integer variable containing the x position of the Tool Center Point. The zero times indicated with a (*) need to be interpreted as times smaller than the counting step of the timer feature. In the KAREL system, the resolution of the TIMER operations is determined by the system variable $SCR.$COND_TIME, which indicates the step with which the TIMER is incremented. For our robot this system variable has a minimum value of 4 milliseconds. The execution times of the statements MSG_DISCO, MSG_CONNECT and WRITE are therefore smaller than 4 milliseconds, but not equal to 0.
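A hedged sketch of how such an upper limit can be measured on the MATLAB side, timing the complete Client.pl round trip with the clock and etime features (MATLAB's perl function launches the script):

  t0 = clock;                     % start a wall clock measurement
  out = perl('Client.pl');        % connect to 'S3', receive the TCP position, disconnect
  dt = 1000*etime(clock, t0);     % elapsed time in milliseconds
  fprintf('Client.pl round trip: %.0f ms\n', dt);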


MATLAB socket action    Upper time limit [msec]
msconnect               0 (*)
mssend                  0 (*)
perl('Client.pl')       160
msclose                 0 (*)

Table 4.2: Socket communication times for MATLAB actions

KAREL socket action               Upper time limit [msec]
MSG_DISCO('S3:', Status)          0 (*)
MSG_CONNECT('S3:', Status)        0 (*)
WRITE ComFile(tcp_x)              0 (*)
READ 110 bytes in input buffer    240

Table 4.3: Socket communication times for KAREL actions

5. The active security problem
Having solved the socket communication issue and knowing the possibilities of the KAREL programming language, we now have all the tools available that are needed to solve the active security problem in a KAREL application.
A robot active security system can be seen as a safety backup for the normal robot task. When an unexpected event occurs, e.g. a foreign object enters the robot's workspace, this has to be signaled to the robot. Motion along the regular trajectory is then cancelled, and thanks to an input stream of information, motion along an alternative path is started up to guarantee that the robot will reach its original destination.
Tools such as condition handlers, semaphores, the system's input buffer, routine structures, priority scheduling and the sharing of variables among different KAREL programs were especially created by FANUC to allow the implementation of complex user applications. Although a lot of thinking work preceded the implementation of a solution to the active security problem, a solution was relatively easy to create thanks to the mentioned programming features.

5.1 A solution in KAREL
We designed a solution to the active security problem in a task oriented way. A parent task called secure.kl is created to run two child tasks and to initialize some important shared variables used by both child tasks. One child task is in charge of executing all robot motion and is called moves.kl. The other child task performs all communicational tasks and is called comm.kl. We will first give a short description of each program.

5.1.1 Secure.kl
A counting semaphore mutex for mutual exclusion is created in the VAR section and initialized in a ROUTINE. The following shared variables for the child tasks are created in the VAR section:
- detect: an INTEGER data type that becomes 1 upon detection of the obstacle;
- halt: an INTEGER data type that indicates if reading operations have ended;
- count: an INTEGER data type that indicates the number of read in alternative positions;
- xyz_data: an ARRAY[num_nodes, 5] OF INTEGER that will contain the alternative path positions and rotational configurations.

The ARRAY OF INTEGER xyz_data will be filled up with data by comm.kl and used by moves.kl to execute the alternative motion. The INTEGERs are initialized at 0 in a ROUTINE of secure.kl. In the MAIN section of secure.kl, the ROUTINEs to initialize the semaphore and the shared variables are called. Subsequently, the two child tasks are run. Note that, once the two tasks are run with the RUN_TASK builtin, the program secure.kl ceases to exist: it is aborted. The child tasks now operate independently of their parent. However, the .vr file of secure.kl stays accessible for use by moves.kl and comm.kl.
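As a condensed sketch of secure.kl, with the initializations written inline for brevity (the value of num_nodes, the semaphore index and the RUN_TASK parameter list are assumptions of ours; the real program and its ROUTINEs are in Appendix C):

  PROGRAM secure
  CONST
    num_nodes = 40                    -- hypothetical number of alternative path nodes
  VAR
    mutex, detect, halt, count : INTEGER
    xyz_data : ARRAY[num_nodes, 5] OF INTEGER
    status : INTEGER
  BEGIN
    detect = 0                        -- shared variables initialized at 0
    halt = 0
    count = 0
    mutex = 1                         -- index of the system semaphore used by both children
    CLEAR_SEMA(mutex)                 -- set the initial semaphore count (see Appendix C)
    RUN_TASK('moves', 0, FALSE, FALSE, 1, status)  -- start the motion child task
    RUN_TASK('comm', 0, FALSE, FALSE, 1, status)   -- start the communication child task
  END secure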

5.1.2 Moves.kl
The program moves.kl executes the normal motion trajectory in its MAIN section. Upon the triggering of a condition handler by the INTEGER detect becoming 1, this motion is cancelled and an interrupt routine is invoked. In this interrupt routine, motion along the alternative path is executed.
The shared variables detect, count, mutex and xyz_data are declared in the VAR section with a FROM secure clause.

5.1.3 Comm.kl
As already introduced in section 4.3.4, comm.kl receives the detection signal from MATLAB, sends the current Tool Center Point position back to MATLAB, receives the positions and rotations along the alternative path and stores them in xyz_data. The same shared variables are declared in its VAR section.

5.1.4 Task priorities
We chose to give the communication task comm.kl a higher priority than moves.kl. The philosophy followed in making this decision is that important information entering the KAREL system needs to have precedence over the execution of motion statements. For example, it is not acceptable that a '1' for a detection signal has already entered the system, but cannot be read out of the buffer because moves.kl is evaluating a MOVE statement.
However, we mention that assigning the higher priority to moves.kl wouldn't be impossible either, because evaluating MOVE statements only consumes a few milliseconds of processing time. MOVE statements are interruptible statements. This means that during the motion of a MOVE, the program moves.kl is in a hold state, and comm.kl then gets the full attention of the Central Processing Unit.
Nevertheless, we made the choice to assign the higher priority of 49 to comm.kl, while moves.kl has the default priority of 50. Therefore comm.kl is constantly checking the system input buffer, unless it is in a hold state. If comm.kl would never come into a hold state, moves.kl would never get an opportunity to execute, for comm.kl has a higher priority. To give moves.kl a chance to be executed, we included a DELAY of 20 ms between two BYTES_AHEAD operations that check if any bytes of the detection signal have arrived in the input buffer. We realize that including a DELAY statement in a higher priority task is a form of busy waiting. However, for the reason of absolute priority of communication information, we implemented the priorities in this way.

The function and main properties of the tasks secure.kl, moves.kl and comm.kl are qualitatively illustrated in figure 4.8.


[Figure 4.8: Architecture of the KAREL active security solution. The parent secure.kl declares the shared variables (detect, mutex, halt, count : INTEGER and xyz_data : ARRAY OF INTEGER) and starts both children with RUN_TASK(moves) and RUN_TASK(comm). The child comm.kl runs with %PRIORITY = 49, the child moves.kl with %PRIORITY = 50; both declare detect, mutex, halt, count and xyz_data with a FROM secure clause.]

5.1.5 Semaphore use and program execution dynamics
In this section we want to comment on the dynamical cooperation between the two tasks comm.kl and moves.kl. To help understand the following explanation, the execution dynamics of both tasks are schematically represented in figure 4.9. An axis indicates the evolution of time during program execution. The condition next to each arrow triggers the following step in program execution.
When both tasks are started up by secure.kl with the RUN_TASK statement, the first task that receives execution time of the Central Processing Unit (CPU) is comm.kl, for it has a higher priority. In the WHILE loop that checks if data has been received in the system's input buffer, a DELAY statement of 20 ms is incorporated to give moves.kl execution time. The robot starts moving between a Start and a Final position (XYZWPR variables). DELAY statements of 1000 ms are introduced to simulate the time of a manipulation action in the end points of the trajectory. This motional action is repeated as long as no detect = 1 signal has been read from the system's input buffer. The BYTES_AHEAD operation on the buffer is repeated as long as no bytes have arrived in the input buffer (BufBytes = 0). When a detect = 1 signal is detected, comm.kl suspends itself by pending the semaphore mutex with the PEND_SEMA builtin. Subsequently, a global condition handler in moves.kl is triggered. The normal motion loop is instantaneously cancelled and the
interrupt routine alternative for execution of the alternative path is invoked from the condition handler. The first action of the routine alternative is signaling the semaphore mutex that was earlier pended by comm.kl. Comm.kl takes over the CPU control. It uses this control to put the server tag 'S3' in a listen state at port 2008. At this port, the FANUC robot will receive a connect request from the Perl script Client.pl. As long as this Perl script is not executed in MATLAB, the task comm.kl is in a hold state, for we implemented a WAIT FOR statement that suspends program execution until the connection of the socket has been successfully established. Moves.kl takes over CPU control when comm.kl is in this hold state. However, moving along the alternative path isn't possible yet, because no alternative positions have been read in. Therefore, moves.kl suspends itself by invoking a PEND_SEMA action on the semaphore mutex.
As soon as the Perl script is executed in MATLAB, a socket connection is established. Comm.kl registers the current Tool Center Point position and sends it over the socket at port 2008 to MATLAB. There, the program fuzzy_comm.m is invoked, using as arguments the current Tool Center Point position and the obstacle position calculated with the artificial vision system. As soon as the first data package is calculated, it is sent over the socket connection to the FANUC server tag 'S4' at port 2007, earlier opened to receive the detection signal. When the first positions are received by comm.kl, the semaphore mutex is signaled by comm.kl. This way, execution of the interrupt routine in moves.kl is resumed and motion along the alternative path is initiated. With a delay of 80 ms, needed to give moves.kl processing time, the next data packages sent from MATLAB are read in by comm.kl. When the stop string 'S' is encountered in the data packages, it means that all alternative positions have been calculated and sent in MATLAB. The task comm.kl then aborts, for all executable statements in its MAIN section have been executed. When the final destination of the alternative path is reached, moves.kl returns from the interrupt routine alternative and is also aborted.

We would like to state that this solution is one that works, but that it surely isn't the only solution to the active security problem. Experienced KAREL programmers might be able to implement a solution that has a more solid structure and that consumes CPU time in a more efficient way.


[Figure 4.9: Execution dynamics of moves.kl and comm.kl, from t0 (RUN_TASK of both tasks) to tfinal (both tasks abort). moves.kl loops MOVE TO Start / DELAY 1000 / MOVE TO Final / DELAY 1000 while detect = 0, with a global condition handler CONDITION[1]: WHEN detect = 1 that cancels the motion, calls the interrupt routine alternative and signals the semaphore with POST_SEMA. comm.kl polls the system input buffer (BYTES_AHEAD, DELAY 20) while BufBytes = 0; on detect = 1 it pends the semaphore with PEND_SEMA, waits for the Client.pl connection to 'S3' on port 2008, sends the actual TCP position, receives the first data package, performs a POST_SEMA and continues the read operations until the string 'S' is received. moves.kl then executes the alternative trajectory until the final position is reached.]


We end this paragraph on the KAREL solution of the active security problem with the remark that the designed security system is not a cyclic one. Once an alternative path is executed, all programs come to a natural end: they are aborted. In a more automated design, it would be desirable that robot motion continues when the final position is reached. A more thorough interaction between the KAREL communication task and the vision system would be required to signal whether the obstacle is still present in the workspace or not. If it is still present, a new alternative trajectory has to be followed. If it is not present anymore, the robot can return to executing its normal activity. The programming of such a KAREL application is however left for future investigators.

6. Experimental results
The presented KAREL solution for the active security problem was tested in an integrated way with the fuzzy artificial intelligence system. Very fast reaction times to a sent obstacle detection signal were observed. Subsequently, the current Tool Center Point position is quasi instantaneously sent over a socket. Alternative position and rotation packages were created in the string format elaborated in section 4.3.4 and sent over a socket in the function fuzzy_comm.m of Appendix B. Upon receiving the first data package (within 400 milliseconds), the motors of the robot's axes immediately start accelerating and motion is initiated. Even before motion has visually started, all data packages have been received by the system.
In debugging and testing KAREL programs, KAREL statements that write messages to the screen of the Teach Pendant have proven to be very useful to follow the program execution steps. We refer to Appendix C for programming details. We also have to state that the homepage of the robot (http://robot or http://193.144.187.156 in the LAN of the TEISA research group) is a very useful tool, especially to follow the current state of activated programs and to consult the values of program data after program execution has terminated.
In a second phase, the artificial vision system was integrated into the overall system. Upon detection of the obstacle, the obstacle detection signal is sent quickly over the socket and robot motion is halted. Between the sending of the obstacle detection signal and the asking for the Tool Center Point's current position, the vision system calculates the obstacle's position. During this time, the KAREL programs comm.kl and moves.kl are both in a suspended state, for comm.kl is waiting for a connection to 'S3' at port 2008 and moves.kl is suspended by a PEND_SEMA action. Since the calculation of the obstacle's position in the work space is the most time consuming process in the whole active security system (see table 2.4), the robot's End Effector is noticeably paused at this moment, up to a time of three seconds.

In the programmed application, a translational motion speed of 200 mm/sec and a rotational speed of 90°/sec were used.
A MATLAB script to execute the active security system with the vision and fuzzy systems incorporated is added to Appendix C.
Readers that want to see the robot and the active security system in action can request a movie by sending an email to the author of this thesis (Brecht.Fevery@UGent.be).

7. Acknowledgments
Special thanks goes to Hugo Herrero of FANUC España for his support in KAREL programming issues and for proposing the socket messaging communication.
In a critical moment of this project, Dieter Debaillie of FANUC Belgium helped us out to establish the User Socket Msg option in the robot's system software. He also provided us with a key KAREL example for socket communication.
Thanks goes to Antonio Jenaro for his knowledge transfer in Ethernet configurations of network devices and for installing and configuring the Fast Ethernet switch in the TEISA laboratory.
Finally, most special thanks goes to professor José Ramón Llata for his intensive support during the last busy months of this project and for proposing the Perl socket communication, hereby solving the last missing link.

8. References
[14] FANUC Robotics España, FANUC Robotics Curso de Programación TPE, Nivel B.
[15] FANUC Robotics America, Inc., FANUC Robotics SYSTEM R-J3iC Controller KAREL Reference Manual, 2006.
[16] Real time Ethernet I, Introduction to Real-Time Electronic Control Systems, http://industrialethernetu.com/courses/
[17] http://java.sun.com/developer/onlineTraining/Programming/BasicJava2/socket for the definition of an Ethernet socket.
[18] FANUC Robotics America, Inc., SYSTEM R-J3iC Controller Internet Options Setup and Operations Manual.
[19] Steven Holzner, Perl Black Book, The Coriolis Group, 1999.
[20] What makes Ethernet industrial?, www.neteon.net.
[21] L.J.M. Van Moergestel, Computersystemen en Embedded Systemen, 2e herziene druk, Academic Service, Den Haag, 2007.
[22] www.wikipedia.com for information on Ethernet and the Transmission Control Protocol.


Chapter 5

Conclusion

In this Master's Thesis, an active security system for an industrial FANUC robot was introduced. The fundaments of the security system are formed by an artificial intelligence and an artificial vision system.
Known stereo vision techniques were introduced to design a vision method that can identify and localize obstacles of certain characteristics in the robot's workspace. Given the vision tools available in this project, a user friendly and time critical method was developed to check the robot's workspace for obstacles. The key element of this solution was the use of ActiveX components that allow us to draw camera images out of a video stream of images.
If camera images need to be available at a higher time rate, cameras equipped with frame grabbers can always be installed for future projects.
Basic methods for object recognition were employed. In future investigation work, advanced identification methods can be used, e.g. to distinguish the robot's End Effector from foreign objects.
In the proposed realtime setting, the time needed to calculate the characteristic positions of a cubic obstacle was rather high, in some cases up to 2.5 seconds. A better technique can be developed to calculate the obstacle corners.
The artificial intelligence technique Fuzzy Logic was successfully applied for the design of a 3D obstacle avoidance strategy. Following all basic steps in the design of a Fuzzy Inference System, this strategy was developed and simulated in MATLAB. Thanks to the designed communication system, alternative path positions and rotational configurations could be transferred to the robot's system at a time critical rate.
Finally, using Ethernet as communication medium for the robot control, a realtime solution to the active security problem was developed in the programming language KAREL of FANUC Robotics.
As this project has been the first one on the FANUC Robot Arc Mate 100iB of the research group TEISA, a great deal of the investigation time was consumed in getting to know the robot's programming language KAREL and in setting up a communication system through Ethernet sockets. However, we believe that the security application presented in the previous chapter is a strong solution to the given problem and a very good example of the KAREL system's possibilities. Moreover, socket communication is an elegant and fast way to exchange data between control pcs and robot controllers. And, as Ethernet is working its way up to the

industrial work floor, socket communication is a valid industrial solution today and will probably maintain its strong position in the years to come.
As mentioned at the end of the previous chapter, a more automated KAREL application can be designed in which robot motion continues when the final position of the alternative path is reached. For this application, a more thorough interaction between the KAREL communication task and the vision system would be required to signal the presence or absence of an obstacle in the robot's workspace. Subsequently, the decision to return to the normal robot task or to follow a new alternative path has to be taken. The programming of such a cyclic KAREL application is left as a challenge for future investigators.


Appendix A

function varargout = CamAxis(varargin)


% CamAxis M-file for CamAxis.fig
%      CamAxis, by itself, creates a new CamAxis or raises the existing
%      singleton*.
%
%      H = CamAxis returns the handle to a new CamAxis or the handle to
%      the existing singleton*.
%
%      CamAxis('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in CamAxis.M with the given input arguments.
%
%      CamAxis('Property','Value',...) creates a new CamAxis or raises the
%      existing singleton*. Starting from the left, property value pairs are
%      applied to the GUI before CamAxis_OpeningFunction gets called. An
%      unrecognized property name or invalid value makes property application
%      stop. All inputs are passed to CamAxis_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help CamAxis
% Last Modified by GUIDE v2.5 22-May-2007 18:07:49

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @CamAxis_OpeningFcn, ...
                   'gui_OutputFcn',  @CamAxis_OutputFcn, ...
                   'gui_LayoutFcn',  [] , ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before CamAxis is made visible.


function CamAxis_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% varargin command line arguments to CamAxis (see VARARGIN)
% Choose default command line output for CamAxis
handles.output = hObject;
% Counter of image packages
handles.cont=0;
% Synchronization flags
handles.flag1=false;
handles.flag2=false;
handles.flag3=false;
handles.flag3_ok=false; % to release the image capture in giccam1 and giccam2
handles.quiet=false;
handles.detect=false;
% Start of the timers
handles.t=clock;
handles.t1=handles.t;
handles.t2=handles.t;
handles.t3=handles.t;

% IP addresses of the cameras
set(handles.activex1,'URL','http://giccam1/mjpg/video.mjpg');
set(handles.activex2,'URL','http://giccam2/mjpg/video.mjpg');
set(handles.activex3,'URL','http://giccam3/mjpg/video.mjpg');
% images
handles.I1=[];
handles.I2=[];
handles.I3=[];
handles.I3_old=uint8(zeros(480,640));
% ----- for SOCKET COMMUNICATION -----
handles.socket=msconnect('193.144.187.156',2007);
if handles.socket>0
    disp(sprintf('Socket successfully connected to FANUC!'));
else
    disp(sprintf('Socket could not be connected to FANUC'));
end
handles.sent=0;     % indicates if a signal has been sent
handles.success=-1; % becomes positive after successful sending operation
% ----- END Socket Communication -----
% Update handles structure
guidata(hObject, handles);
% UIWAIT makes CamAxis wait for user response (see UIRESUME)
uiwait(handles.figure1);
% --- Outputs from this function are returned to the command line.
function varargout = CamAxis_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% Get default command line output from handles structure
varargout{1} = handles.I1;
varargout{2} = handles.I2;
varargout{3} = handles.I3;
varargout{4} = handles.socket;
varargout{5} = handles.success;
delete(handles.figure1);

% --------------------------------------------------------------------
function activex3_OnNewImage(hObject, eventdata, handles)
% hObject handle to activex3 (see GCBO)
% eventdata structure with parameters passed to COM event listener
% handles structure with handles and user data (see GUIDATA)
% If the image from GICCAM3 has not been read yet
if ~handles.flag3
    handles.flag3=true;
    guidata(handles.figure1, handles);
    % READ CAPTURED IMAGE
    % Take the BMP image
    [DatBMP,sb]=handles.activex3.GetBMP([],[]);
    % Store the BMP matrix
    % Header data
    bfOffBits=sum(double(DatBMP(11:14)).*(256.^[0:3]));
    biWidth=sum(double(DatBMP(19:22)).*(256.^[0:3]));
    biHeight=sum(double(DatBMP(23:26)).*(256.^[0:3]));
    % Create the image matrix
    if (handles.cont>0)
        handles.I3_old=handles.I3;
    end
    I3=uint8(zeros(biHeight,biWidth,3));
    % Assign each color matrix to the image
    % RED
    I3(end:-1:1,:,1)=reshape(DatBMP(bfOffBits+3:3:end)',biWidth,biHeight)';
    % GREEN
    I3(end:-1:1,:,2)=reshape(DatBMP(bfOffBits+2:3:end)',biWidth,biHeight)';
    % BLUE
    I3(end:-1:1,:,3)=reshape(DatBMP(bfOffBits+1:3:end)',biWidth,biHeight)';
    % Convert the image to gray scale
    handles.I3=rgb2gray(I3);
end;
% Update handles structure
guidata(handles.figure1, handles);
% Once image 3 has been read, check whether a motionless obstacle is
% present. If not, set flag3 to 'false' to capture new images
if ~handles.flag3_ok
    % Detect movements in workspace
    I3_old=handles.I3_old;
    I3=handles.I3;
    handles.quiet=movimiento(I3_old, I3);
    % Detect obstacles
    handles.detect=check(I3,0.1);
    % Evaluate flags
    if handles.detect & handles.quiet
        handles.flag3_ok=true;
    elseif handles.detect & ~handles.quiet
        tf=clock;
        disp(sprintf('Moving object detected. Detection time = %.4f',etime(tf,handles.t)));
        disp(sprintf('...'));
        % ----- SOCKET COMMUNICATION -----
        if (handles.sent == 0)
            handles.sent=1;
            handles.success=mssend(handles.socket,'1');
            if handles.success>0
                disp(sprintf('Detection signal successfully sent to FANUC!'));
                disp(sprintf('...'));
            else
                disp(sprintf('Detection signal could NOT be sent to FANUC'));
            end
        end
        % Update handles structure
        guidata(handles.figure1, handles);
        % ----- END Socket Communication -----
        handles.t=tf;
        handles.flag3=false;
    elseif ~handles.detect
        handles.flag3=false;
        handles.cont=handles.cont+1;
        tf=clock;
        disp(sprintf('No object detected. Detection time = %.4f',etime(tf,handles.t)));
        disp(sprintf('...'));
        if handles.cont==300 % to force a close after a certain time has passed
            figure1_CloseRequestFcn(hObject, eventdata, handles);
        end;
        handles.t=tf;
    end;
end;
% Update handles structure
guidata(handles.figure1, handles);

% --------------------------------------------------------------------
function activex1_OnNewImage(hObject, eventdata, handles)
% hObject handle to activex1 (see GCBO)
% eventdata structure with parameters passed to COM event listener
% handles structure with handles and user data (see GUIDATA)
% If the image from GICCAM1 has not been read yet
if handles.flag3_ok & ~handles.flag1
    handles.flag1=true;
    guidata(handles.figure1, handles);
    % READ CAPTURED IMAGE
    % Take the BMP image
    [DatBMP,sb]=handles.activex1.GetBMP([],[]);
    % Store the BMP matrix
    % Header data
    bfOffBits=sum(double(DatBMP(11:14)).*(256.^[0:3]));
    biWidth=sum(double(DatBMP(19:22)).*(256.^[0:3]));
    biHeight=sum(double(DatBMP(23:26)).*(256.^[0:3]));
    % Create the image matrix
    I1=uint8(zeros(biHeight,biWidth,3));
    % Assign each color matrix to the image
    % RED
    I1(end:-1:1,:,1)=reshape(DatBMP(bfOffBits+3:3:end)',biWidth,biHeight)';
    % GREEN
    I1(end:-1:1,:,2)=reshape(DatBMP(bfOffBits+2:3:end)',biWidth,biHeight)';
    % BLUE
    I1(end:-1:1,:,3)=reshape(DatBMP(bfOffBits+1:3:end)',biWidth,biHeight)';
    % Convert the image to gray scale
    handles.I1=rgb2gray(I1);
end;
% Update handles structure
guidata(handles.figure1, handles);
% Once the three images have been read, compute the total capture
% time and return the image matrices
if handles.flag1 & handles.flag2 & handles.flag3_ok
    % Update handles structure
    guidata(handles.figure1, handles);
    tf=clock;
    disp(sprintf('Quiet object detected. Image package time = %.4f',etime(tf,handles.t)));
    disp(sprintf('...'));
    % ----- SOCKET COMMUNICATION -----
    if (handles.sent == 0)
        handles.sent=1;
        handles.success=mssend(handles.socket,'1');
        if handles.success>0
            disp(sprintf('Detection signal successfully sent to FANUC!'));
            disp(sprintf('...'));
        else
            disp(sprintf('Detection signal could NOT be sent to FANUC.'));
        end
    end
    % Update handles structure
    guidata(handles.figure1, handles);
    % ----- END Socket Communication -----
    figure1_CloseRequestFcn(hObject, eventdata, handles);
end;

% --------------------------------------------------------------------
function activex2_OnNewImage(hObject, eventdata, handles)
% hObject handle to activex2 (see GCBO)
% eventdata structure with parameters passed to COM event listener
% handles structure with handles and user data (see GUIDATA)
% If the image from GICCAM2 has not been read yet
if handles.flag3_ok & ~handles.flag2
    handles.flag2=true;
    guidata(handles.figure1, handles);
    % READ CAPTURED IMAGE
    % Take the BMP image
    [DatBMP,sb]=handles.activex2.GetBMP([],[]);
    % Store the BMP matrix
    % Header data
    bfOffBits=sum(double(DatBMP(11:14)).*(256.^[0:3]));
    biWidth=sum(double(DatBMP(19:22)).*(256.^[0:3]));
    biHeight=sum(double(DatBMP(23:26)).*(256.^[0:3]));
    % Create the image matrix
    I2=uint8(zeros(biHeight,biWidth,3));
    % Assign each color matrix to the image
    % RED
    I2(end:-1:1,:,1)=reshape(DatBMP(bfOffBits+3:3:end)',biWidth,biHeight)';
    % GREEN
    I2(end:-1:1,:,2)=reshape(DatBMP(bfOffBits+2:3:end)',biWidth,biHeight)';
    % BLUE
    I2(end:-1:1,:,3)=reshape(DatBMP(bfOffBits+1:3:end)',biWidth,biHeight)';
    % Convert the image to gray scale
    handles.I2=rgb2gray(I2);
end;
% Update handles structure
guidata(handles.figure1, handles);
% Once the three images have been read, compute the total capture
% time and return the image matrices
if handles.flag1 & handles.flag2 & handles.flag3_ok
    % Update handles structure
    guidata(handles.figure1, handles);
    tf=clock;
    disp(sprintf('Quiet object detected. Image package time = %.4f',etime(tf,handles.t)));
    disp(sprintf('...'));
    % ----- SOCKET COMMUNICATION -----
    if (handles.sent == 0)
        handles.sent=1;
        handles.success=mssend(handles.socket,'1');
        if handles.success>0
            disp(sprintf('Detection signal successfully sent to FANUC!'));
            disp(sprintf('...'));
        else
            disp(sprintf('Detection signal could NOT be sent to FANUC.'));
        end
    end
    % Update handles structure
    guidata(handles.figure1, handles);
    % ----- END Socket Communication -----
    figure1_CloseRequestFcn(hObject, eventdata, handles);
end;

% --- Executes when user attempts to close figure1.
function figure1_CloseRequestFcn(hObject, eventdata, handles)
% hObject handle to figure1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% Hint: delete(hObject) closes the figure
uiresume;
% ------ END of CamAxis.m ------

function quieto=movimiento(I1, I2)
% Function to detect movement in consecutive images I1 and I2
I1=I1(101:480,151:490); % restrict the images to the work zone
I2=I2(101:480,151:490);
diff=I1-I2;
max_diff=max(max(diff));
if max_diff > 75
    quieto=false;
else
    quieto=true;
end
% ------ END of movimiento.m ------

function detect=check(I,thresh)
% Function to detect an obstacle of cubic form
detect=false;
I=I(201:480,151:490);
threshold = graythresh(I)+thresh;
bw = im2bw(I,threshold);
bw = bwareaopen(bw,10000);
[B,L] = bwboundaries(bw,'noholes');
stats = regionprops(L,'Area');
for k = 1:length(B)
    % obtain (X,Y) coordinates of the edges with label 'k'
    boundary = B{k};
    % compute a simple estimate of the object's perimeter
    delta_sq = diff(boundary).^2;
    perimeter = sum(sqrt(sum(delta_sq,2)));
    % obtain the area corresponding to label 'k'
    area = stats(k).Area;
    % calculation of the metric
    if ~(perimeter == 0)
        metric = perimeter^2/area;
    end
    % application of the criterion
    if (metric>15 & metric<31)
        detect=true;
        k=length(B);
    end
end
% ----- END of check.m -----

% function p_corresp = correspondence(I1,I2,I3)
% Function that computes pixel correspondences in the 3 images.
% INPUTS: images I1, I2, I3
% OUTPUTS: triplets of pixel coordinates corresponding to the upper
%          corners of the obstacle
% Brecht Fevery, April 5, 2007
function p_corresp = correspondence(I1, I2, I3)
% *** DECLARATION OF THE CALIBRATION PARAMETERS AND THE MATRICES OF THE
%     STEREOSCOPIC METHODS ***
% Load the calibration parameters of the cameras
load par_c1
load par_c2
load par_c3
% Extrinsic parameters
parext1=parext_c1;
parext2=parext_c2;
parext3=parext_c3;
% Intrinsic parameters
parint1=[parint_c1;parint_inv_c1'];
parint2=[parint_c2;parint_inv_c2'];
parint3=[parint_c3;parint_inv_c3'];
% Obtain the fundamental matrices between each pair of cameras
[M1,K1,R1,t1]=descomp(M_c1);
[M2,K2,R2,t2]=descomp(M_c2);
[M3,K3,R3,t3]=descomp(M_c3);
F12=matfund(K1,K2,R1,R2,t1,t2);
F23=matfund(K2,K3,R2,R3,t2,t3);
% F31=matfund(K3,K1,R3,R1,t3,t1);
F13=matfund(K1,K3,R1,R3,t1,t3);
% Restrict the images to a search zone of interest

I11=I1(71:480,51:350,:);
I22=I2(41:480,271:580,:);
I33=I3(201:480,51:590,:);
% *** CANNY PARAMETERS ***
% Assign the values of the high and low thresholds
umbralto=0.25;
umbrbajo=0.0;
% *** CALCULATION OF THE CORNERS IN THE RESTRICTED IMAGES ***
% corner: function by He Xiaochen, www.mathworks.com
[corner1,I1_marcado]=corner(I11,[],160,3.5,umbralto,umbrbajo,[],[]);
[corner2,I2_marcado]=corner(I22,[],160,3.5,umbralto,umbrbajo,[],[]);
[corner3,I3_marcado]=corner(I33,[],160,3.5,umbralto,umbrbajo,[],[]);
% Correct the identified pixels to undo the restriction of the image
corner1=[corner1(:,2)+50, corner1(:,1)+70];
corner2=[corner2(:,2)+270, corner2(:,1)+40];
corner3=[corner3(:,2)+50, corner3(:,1)+200];
% Plot the edge pixels in the images
% figure(1);
% imshow(I1);
% hold on
% for i=1:size(corner1,1)
%     plot(corner1(i,1), corner1(i,2), 'rx');
% end
%
% figure(2);
% imshow(I2);
% hold on
% for i=1:size(corner2,1)
%     plot(corner2(i,1), corner2(i,2), 'bx');
% end
%
% figure(3);
% imshow(I3);
% hold on
% for i=1:size(corner3,1)
%     plot(corner3(i,1), corner3(i,2), 'gx');
% end
% *** SEARCH FOR CORRESPONDENCES ***
% Principle: for each corner pixel in the first image, search for the
% corresponding pixels in the second and the third image
p1=[];
p2=[];
p3=[];
umbral_1=10;
umbral_2=20;
for i=1:size(corner1,1)
    % 1. Calculation of the epipolar lines in I2 and I3 corresponding to
    %    the corner pixel in I1
    % Epipolar line in I2 corresponding to corner1(i):
    rd12=linepip(F12,corner1(i,1),corner1(i,2));
    % Epipolar line in I3 corresponding to corner1(i):
    rd13=linepip(F13,corner1(i,1),corner1(i,2));
    % Drawing of the epipolar lines in I2 and I3
    % u1=0;
    % u2=640;
    % v1_2=u1*rd12(1)+rd12(2);
    % v2_2=u2*rd12(1)+rd12(2);
    % v1_3=u1*rd13(1)+rd13(2);
    % v2_3=u2*rd13(1)+rd13(2);
    % figure(2);
    % l1=line([u1 u2],[v1_2 v2_2], 'Color', [0 0 1], 'LineWidth', 2);
    % pause
    % figure(3);
    % l2=line([u1 u2],[v1_3 v2_3], 'Color', [0 1 0], 'LineWidth', 2);
    % pause

    % 2. Eliminate corners that are not close to the epipolar line
    %    associated with the corner corner1(i,:) of I1
    % 2.1 Find the corners in I2 with the smallest distance to rd12
    p2_cand=[];
    for k=1:size(corner2,1)
        dist_perp=distancia_perp(corner2(k,1),corner2(k,2),rd12);
        if dist_perp < umbral_1
            p2_cand=[p2_cand; corner2(k,1) corner2(k,2)];
        end
    end
    % Do not fill the correspondence vectors when no candidate has been
    % detected in I2:
    if ~isempty(p2_cand)
        % 3. Calculation of the epipolar lines in I3 associated with the
        %    candidates p2_cand
        rd23=linepip(F23,p2_cand(:,1),p2_cand(:,2));
%
%
%
%
%
%
%
%
%
%
%

%Dibujo de las lneas epipolares rd23


figure(3);
for j=1:size(rd23,2)
u1=0;
u2=640;
v1_3=u1*rd23(1,j)+rd23(2,j);
v2_3=u2*rd23(1,j)+rd23(2,j);
l3=line([u1 u2], [v1_3 v2_3], 'Color', [1 0 0], 'LineWidth', 2);
pause
set(l3,'visible','off');
end

% 5. Computation of the intersections between the line rd13 and the set of
% lines {rd23} to find the pixel correspondence
for j=1:size(rd23,2)
% Computation of the intersection
u=(rd13(2)-rd23(2,j))/(rd23(1,j)-rd13(1));
v=u*rd13(1)+rd13(2);
% Find the corners in I3 that are close to the intersection
for m=1:size(corner3,1)
dist= sqrt((u-corner3(m,1))^2+(v-corner3(m,2))^2);
if dist < umbral_2
% Fill the correspondence vectors
p1=[p1;corner1(i,1) corner1(i,2)];
p2=[p2;p2_cand(j,1) p2_cand(j,2)];
p3=[p3;corner3(m,1) corner3(m,2)];
end
end
end
end
% set(l1,'visible','off');
% set(l2,'visible','off');
end
p_corresp=[p1, p2, p3];
toc
% Plot the correspondence pixels on the images step by step

close all;
figure(1);
imshow(I1);
hold on
figure(2);
imshow(I2);
hold on
figure(3);
imshow(I3);
hold on
for i=1:size(p1,1)
figure(1);
plot(p1(i,1), p1(i,2), 'ro');
figure(2);
plot(p2(i,1), p2(i,2), 'bo');
figure(3);
plot(p3(i,1), p3(i,2), 'go');
end
% ----- END of correspondence.m -----
function dist=distancia_perp(u, v, rd)
% Function that computes the perpendicular distance between a pixel and a line
dist=abs(rd(1)*u+rd(2)-v)/sqrt(rd(1)^2+1);
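% Added note on the formula: writing the line as v = m*u + b, i.e. rd = [m; b],
% the perpendicular distance from a point (u0,v0) to the line m*u - v + b = 0
% is |m*u0 + b - v0|/sqrt(m^2 + 1), which is exactly the expression above.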
% ----- END of distancia_perp.m -----
function rd=linepip(F,ui,vi)
% Returns the parameters of the epipolar line
% INPUTS:
%   fundamental matrix F
%   pixel vectors in the left image: ui and vi
% OUTPUT:
%   rd: vector containing the slope and the intercept of the epipolar line
% (c) Carlos Torre
% We compute the corresponding epipolar line in the right image.
rd=F*[ui';vi';ones(size(ui'))];
md=-(rd(1,:)+sqrt(eps))./(rd(2,:)+sqrt(eps)); % slope
bd=-rd(3,:)./(rd(2,:)+sqrt(eps)); % intercept
rd=[md; bd]; % matrix (2 x n)
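% Added note: for a homogeneous pixel p = [u; v; 1] in the left image, the
% epipolar line in the right image is l = F*p = [a; b; c], whose points
% satisfy a*u' + b*v' + c = 0. Rewriting this as v' = m*u' + b0 gives the
% slope m = -a/b and the intercept b0 = -c/b computed above; the sqrt(eps)
% terms guard against division by zero for (nearly) horizontal lines.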
% ----- END of linepip.m -----
function Pos=Pos_3D(p_corresp)
% Function that computes the 3D positions of the correspondences detected
% by the corner search, taking the pixel distortions into account
% INPUT: correspondence pixels
% OUTPUT: vector of 3D positions of the upper corners of the obstacle
for i=1:size(p_corresp,1)
    for j=1:size(p_corresp,2)
        p_corresp(i,j)=p_corresp(i,j)-1; % by convention of the calibration method
    end
end
% Load the camera calibration parameters
load par_c1
load par_c2

load par_c3
% Extrinsic parameters
parext1=parext_c1;
parext2=parext_c2;
parext3=parext_c3;
% Intrinsic parameters
parint1=[parint_c1;parint_inv_c1'];
parint2=[parint_c2;parint_inv_c2'];
parint3=[parint_c3;parint_inv_c3'];
sys=configc('axis205');
% 1. 3D computation from the correspondences in I1 and I3
% 1.1 Preparation of the projection matrices
% camera a: giccam 1
% camera b: giccam 3
Mi=[parint1(1,1)*parint1(1,2)*sys(1)/sys(3) 0 parint1(1,3);
0 parint1(1,2)*sys(2)/sys(4) parint1(1,4);
0 0 1]*parext1;
Md=[parint3(1,1)*parint3(1,2)*sys(1)/sys(3) 0 parint3(1,3);
0 parint3(1,2)*sys(2)/sys(4) parint3(1,4);
0 0 1]*parext3;
% 1.2 Correct the pixels and express them in the camera coordinate system
pi=imcorr('axis205',parint1(2,:)',p_corresp(:,1:2));
pd=imcorr('axis205',parint3(2,:)',p_corresp(:,5:6));
% 1.3 Apply the pin-hole model to compute the 3D points
X=[]; Y=[]; Z=[];
if ~(isempty(pi)|isempty(pd))
ui=pi(:,1);
vi=pi(:,2);
ud=pd(:,1);
vd=pd(:,2);
for p=1:length(ui)
A=[ ui(p)*Mi(3,:)-Mi(1,:);
vi(p)*Mi(3,:)-Mi(2,:);
ud(p)*Md(3,:)-Md(1,:);
vd(p)*Md(3,:)-Md(2,:) ];
% We minimize the squared norm of A*P, with P the sought point in 3D space.
[V,D]=eig(A'*A);
% The homogeneous 3D coordinates of the sought point are the components of
% the unit-norm eigenvector in V corresponding to the smallest eigenvalue.
[i,j]=find(D==min(diag(D)));
i=min(i);
X=[X; V(1,i)/V(4,i)];
Y=[Y; V(2,i)/V(4,i)];
Z=[Z; V(3,i)/V(4,i)];
end
end
Pos_13=[X Y Z];
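% Added note on the triangulation above: for each camera with projection
% matrix M (rows M(1,:), M(2,:), M(3,:)), the pin-hole model gives
%   u = (M(1,:)*P)/(M(3,:)*P)  and  v = (M(2,:)*P)/(M(3,:)*P)
% for the homogeneous 3D point P. Rearranging yields two homogeneous linear
% equations per camera, stacked into A*P = 0 for the two views. Because of
% image noise no exact solution exists, so P is taken as the minimiser of
% ||A*P||^2 subject to ||P|| = 1, i.e. the eigenvector of A'*A associated
% with the smallest eigenvalue, as computed with eig in the loop above.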
% 2. 3D computation from the correspondences in I2 and I3
% 2.1 Preparation of the projection matrices
% camera a: giccam 2
% camera b: giccam 3
Mi=[parint2(1,1)*parint2(1,2)*sys(1)/sys(3) 0 parint2(1,3);
0 parint2(1,2)*sys(2)/sys(4) parint2(1,4);
0 0 1]*parext2;

Md=[parint3(1,1)*parint3(1,2)*sys(1)/sys(3) 0 parint3(1,3);
0 parint3(1,2)*sys(2)/sys(4) parint3(1,4);
0 0 1]*parext3;
% 2.2 Correct the pixels and express them in the camera coordinate system
pi=imcorr('axis205',parint2(2,:)',p_corresp(:,3:4));
pd=imcorr('axis205',parint3(2,:)',p_corresp(:,5:6));
% 2.3 Apply the pin-hole model to compute the 3D points
X=[]; Y=[]; Z=[];
if ~(isempty(pi)|isempty(pd))
ui=pi(:,1);
vi=pi(:,2);
ud=pd(:,1);
vd=pd(:,2);
for p=1:length(ui)
A=[ ui(p)*Mi(3,:)-Mi(1,:);
vi(p)*Mi(3,:)-Mi(2,:);
ud(p)*Md(3,:)-Md(1,:);
vd(p)*Md(3,:)-Md(2,:) ];
[V,D]=eig(A'*A);
[i,j]=find(D==min(diag(D)));
i=min(i);
X=[X; V(1,i)/V(4,i)];
Y=[Y; V(2,i)/V(4,i)];
Z=[Z; V(3,i)/V(4,i)];
end
end
Pos_23=[X Y Z];
% 3. 3D computation from the correspondences in I2 and I1
% 3.1 Preparation of the projection matrices
% camera a: giccam 2
% camera b: giccam 1
Mi=[parint2(1,1)*parint2(1,2)*sys(1)/sys(3) 0 parint2(1,3);
0 parint2(1,2)*sys(2)/sys(4) parint2(1,4);
0 0 1]*parext2;
Md=[parint1(1,1)*parint1(1,2)*sys(1)/sys(3) 0 parint1(1,3);
0 parint1(1,2)*sys(2)/sys(4) parint1(1,4);
0 0 1]*parext1;
% 3.2 Correct the pixels and express them in the camera coordinate system
pi=imcorr('axis205',parint2(2,:)',p_corresp(:,3:4));
pd=imcorr('axis205',parint1(2,:)',p_corresp(:,1:2));
% 3.3 Apply the pin-hole model to compute the 3D points
X=[]; Y=[]; Z=[];
if ~(isempty(pi)|isempty(pd))
ui=pi(:,1);
vi=pi(:,2);
ud=pd(:,1);
vd=pd(:,2);
for p=1:length(ui)
A=[ ui(p)*Mi(3,:)-Mi(1,:);
vi(p)*Mi(3,:)-Mi(2,:);
ud(p)*Md(3,:)-Md(1,:);
vd(p)*Md(3,:)-Md(2,:) ];
[V,D]=eig(A'*A);

[i,j]=find(D==min(diag(D)));
i=min(i);
X=[X; V(1,i)/V(4,i)];
Y=[Y; V(2,i)/V(4,i)];
Z=[Z; V(3,i)/V(4,i)];
end
end
Pos_21=[X Y Z];
Pos=[Pos_13 Pos_23 Pos_21];
% ----- END of Pos_3D.m -----
function p_obst=obst_pos(P)
% Function that determines the obstacle position from the 3D points computed
% with the pixel correspondences, which may include false correspondences.
% INPUT: vector P with the computed 3D positions.
% OUTPUT: estimated position of the obstacle
pos=[0 0 0];
j=0;
% use the 3D correspondences between images 1 and 3:
P=P(:,1:3);
% discard 3D points that are not within the z range of the obstacle
for i=1:size(P,1)
if (P(i,3)< -20)
j=j+1;
pos(j,:)=P(i,:);
end
end
% find maxima and minima
x_min=0;
y_min=0;
x_max=0;
y_max=0;
z_max=-40;
for i=1:size(pos,1)
if pos(i,1)< x_min
x_min=pos(i,1);
end
if pos(i,2)< y_min
y_min=pos(i,2);
end
if pos(i,1)> x_max
x_max=pos(i,1);
end
if pos(i,2)> y_max
y_max=pos(i,2);
end
if pos(i,3)> z_max
z_max=pos(i,3);
end
end
% safety margin of 50 mm
x_min=x_min-50;
y_min=y_min-50;
z_min=-800;
x_max=x_max+50;
y_max=y_max+50;
z_max=z_max+50;
% create the obstacle
p_obst=[x_min y_min z_min;
        x_max y_min z_min;
        x_min y_max z_min;
        x_max y_max z_min;
        x_min y_min z_max;
        x_max y_min z_max;
        x_min y_max z_max;
        x_max y_max z_max];
% draw the obstacle
figure(4)
axis([-1000 1000 -1000 1000 -800 400])
grid
hold on
xlabel('x');ylabel('y');zlabel('z');
plot_obstacle_0(p_obst,x_max-x_min,y_max-y_min,z_max-z_min,0);
% ----- END of obst_pos.m -----
% Matlab script to convert the results of the Calibration toolbox for MATLAB
% to the calibration parameters as used in our vision functions
% extrinsic parameters
R_c1=rodrigues(omc_1);
T_c1=Tc_1;
parext_c1= [R_c1,T_c1];
% calibration matrix and projection matrix
K_c1=[fc(1) 0 cc(1);
0 fc(2) cc(2);
0 0 1];
M_c1=K_c1*[R_c1,T_c1];
% intrinsic parameters
sys=configc('axis205');
NDX=sys(1); NDY=sys(2); Sx=sys(3); Sy=sys(4);
parint_c1=[];
parint_c1(2)=fc(2)*Sy/NDY;
parint_c1(1)=fc(1)*Sx/NDX/parint_c1(2);
parint_c1(3)=cc(1);
parint_c1(4)=cc(2);
parint_c1(5)=kc(1)/parint_c1(2)^3;
parint_c1(6)=kc(2)/parint_c1(2)^5;
parint_c1(7)=kc(3)/parint_c1(2)^2;
parint_c1(8)=kc(4)/parint_c1(2)^2;
parint_inv_c1=invmodel('axis205',parint_c1);
save par_c1 M_c1 parext_c1 parint_c1 parint_inv_c1
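% Note: analogous scripts (assumed, not reproduced here) store the parameter
% files par_c2 and par_c3 for cameras 2 and 3, which are loaded by
% correspondence.m and Pos_3D.m above.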
% ----- END of script store_calib_param_1 -----
function sys=configc(name)
% CONFIGC Loads system configuration information based on given name.
if strcmp(name,'axis205')
sys = [640,   % number of pixels in horizontal direction
       480,   % number of pixels in vertical direction
       5.08,  % effective CCD chip size in horizontal direction
       3.81,  % effective CCD chip size in vertical direction
       4,     % nominal focal length
       0,     % radius of the circular control points
       0,
       0,
       0,
       abs(name)'
      ];
return;
end
% ----- END of configc.m ------

Appendix B

% function socket=fuzzy_comm(P_ini, P_fin, P_obst, sock)


% Function that calculates Fuzzy alternative positions and rotations for robot's End Effector.
% Information is sent over a socket connection to the robot controller.
% INPUTS:
% P_ini: vector containing initial TCP position
% P_fin: vector containing desired final TCP position
% P_obst: matrix containing the coordinates of the 8 obstacle corners
% sock: connected Ethernet socket. Created with msconnect('193.144.187.156',2007)
% OUTPUT:
% socket: list of strings containing alternative positions and rotational configurations
% (c) Brecht Fevery - 17th of May 2007
function socket=fuzzy_comm(P_ini, P_fin, P_obst, sock)
% ----- for SOCKET COMMUNICATION -----
count_0=0; % counts the number of calculated alternative positions.
% If count_0=4, the last position is added to the sending variable
count_1=0; % counts the number of positions added to the sending variable
% If count_1=4, this variable is sent
count_2=0; % count for the total number of sent packages
s=''; % sending variable
t=clock; % timer
socket={}; % sent variables are stored in this list
success=0; % flag for successful sending operations
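% Added note on the packet format: each pose is encoded by int_string (listed
% below) as 21 ASCII characters (5 for each of x, y and z, 3 for each of the
% two angles). Every fourth calculated pose is appended to s, and s is sent
% once it holds 4 poses (84 bytes); the final packet is terminated with an
% 'S' so that the KAREL task can detect the end of the alternative path.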
% ------- END of declaration SOCKET variables -----
% --- Initial and Final Position of Tool Center Point ---
x_ini=P_ini(1);
y_ini=P_ini(2);
z_ini=P_ini(3);
x_fin=P_fin(1);
y_fin=P_fin(2);
z_fin=P_fin(3);
% --- Characteristics of the obstacle ---
% initialization of the obstacle limits in coordinates x, y and z
rbound_x=max(P_obst(:,1));
lbound_x=min(P_obst(:,1));
rbound_y=max(P_obst(:,2));
lbound_y=min(P_obst(:,2));
rbound_z=max(P_obst(:,3));
lbound_z=min(P_obst(:,3));
% Length side in x-direction:
x_length=rbound_x-lbound_x;
% Length side in y-direction:
y_length=rbound_y-lbound_y;
% Length side in z-direction:
z_length=rbound_z-lbound_z;
%inclination of cube in XY-plane is always 0:
incline=0;
%initial rotational orientation of TCP
alpha_ini=0;
beta_ini=0;
% creation of the obstacle
% figure(1)
% axis([-1000 1000 -1000 1000 0 1000])
% grid
% hold on
% xlabel('x');ylabel('y');zlabel('z');
%

% plot_obstacle_0(P_obst,x_length,y_length,z_length,incline);
%
% % creation of the End Effector
% p1=[x_ini; y_ini+40; z_ini];
% p2=[x_ini; y_ini+40; z_ini+100];
% p3=[x_ini; y_ini; z_ini+100];
% p4=[x_ini; y_ini-40; z_ini+100];
% p5=[x_ini; y_ini-40; z_ini];
% p6=[x_ini; y_ini; z_ini+200];
%
% pinza_1=plot3([p2(1);p1(1)],[p2(2);p1(2)],[p2(3);p1(3)], 'LineWidth',3,'color',[1 0 0]);
% pinza_2=plot3([p2(1);p4(1)],[p2(2);p4(2)],[p2(3);p4(3)], 'LineWidth',3,'color',[1 0 0]);
% pinza_3=plot3([p4(1);p5(1)],[p4(2);p5(2)],[p4(3);p5(3)], 'LineWidth',3,'color',[1 0 0]);
% pinza_4=plot3([p6(1);p3(1)],[p6(2);p3(2)],[p6(3);p3(3)], 'LineWidth',3,'color',[1 0 0]);
%
% plot of initial and final position
% plot3(x_ini,y_ini,z_ini,'ro')
% plot3(x_fin,y_fin,z_fin,'ro')
%declaration of storage vectors for position and orientation
pos=[x_ini, y_ini, z_ini];
angle=[alpha_ini, beta_ini];
Pos=[pos];
Angle=[angle];
% initialization of conditions
A= true; % condition of the while loop
Close_final=false; % flag that is set when close to the final position
z_avoidance=0; % avoidance movement in z direction going on
y_avoidance=0; % avoidance movement in y direction going on
x_avoidance=0; % avoidance movement in x direction going on
intersection_NClose=false; % condition for intersection of distance zones
new_alpha_asked=false; % set to true when a new alpha is requested
new_beta_asked=false; % set to true when a new beta is requested
if ( abs(abs(x_ini-x_fin)-abs(y_ini-y_fin)) < (x_length+y_length)/2)
x_avoid=true;
y_avoid=true;
elseif (abs(x_ini-x_fin)> abs(y_ini-y_fin))
x_avoid=true;
y_avoid=false;
elseif (abs(x_ini-x_fin) <= abs(y_ini-y_fin))
y_avoid=true;
x_avoid=false;
end
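% Added note with a worked example (values assumed for illustration): for a
% start position (250,1200,100), a goal (300,-700,150) and an obstacle of
% roughly 200 mm per side, |dx| = 50 and |dy| = 1900 differ by far more than
% (x_length+y_length)/2 and |dx| < |dy|, so only y_avoid is set and the
% avoidance movements are made along y, the main direction of travel.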
%setting of avoidance sense in y direction
if y_avoid == true
if (y_fin-y_ini > 0)
sign_y_avoidance=1;
elseif (y_fin-y_ini < 0)
sign_y_avoidance=-1;
end
end
%setting of avoidance sense in x direction
if x_avoid == true
if (x_fin-x_ini > 0)
sign_x_avoidance=1;
elseif (x_fin-x_ini < 0)
sign_x_avoidance=-1;
end
end
%while loop for moving towards final position
while A
%x1, y1, z1: distance to obstacle: x_obst-x_actual, y_obst-y_actual, z_obst-z_actual

if pos(1)<lbound_x
x1=lbound_x-pos(1);
elseif pos(1)>rbound_x
x1=rbound_x-pos(1);
elseif (pos(1)>=((rbound_x+lbound_x)/2)) & (pos(1)<=(rbound_x))
x1=+0.001;
else
x1=-0.001;
end
if pos(2)<lbound_y
y1=lbound_y-pos(2);
elseif pos(2)>rbound_y
y1=rbound_y-pos(2);
elseif (pos(2)>=((rbound_y+lbound_y)/2)) & (pos(2)<=(rbound_y))
y1=+0.001;
else
y1=-0.001;
end
if pos(3)<lbound_z
z1=lbound_z-pos(3);
elseif pos(3)>rbound_z
z1=rbound_z-pos(3);
elseif (pos(3)>=((rbound_z+lbound_z)/2)) & (pos(3)<=(rbound_z))
z1=+0.001;
else
z1=-0.001;
end
%x2, y2, z2: distance to objective: x_fin-pos(1), y_fin-pos(2), z_fin-pos(3)
x2=x_fin-pos(1);
y2=y_fin-pos(2);
z2=z_fin-pos(3);
%Creation of action vectors
action_x=[0 0 0 0 0 0 0 ];
action_y=[0 0 0 0 0 0 0 ];
action_z=[0 0 0 0 0 0 0 ];
action_alpha=[0 0 0];
action_beta=[0 0 0];
% Distance descriptions to create three imaginary cubes around the obstacle:
% i)   a cube "VeryClose" limited by VClose_pos_x, VClose_neg_x, VClose_pos_y,
%      VClose_neg_y, VClose_pos_z and VClose_neg_z
% ii)  a cube "Close" limited by Close_pos_x, Close_neg_x, Close_pos_y,
%      Close_neg_y, Close_pos_z and Close_neg_z
% iii) a cube "NClose" limited by NClose_pos_x, NClose_neg_x, NClose_pos_y,
%      NClose_neg_y, NClose_pos_z and NClose_neg_z
% note: remark that the cubes "Close" and "NClose" also need the distance
% descriptions VClose and VClose + Close respectively.
% The complements of the outer cube "NClose" form the zones "NClose_compl"
% and "Far". These three-dimensional functions incorporate the distance
% descriptions Far_pos_x, Far_neg_x, Far_pos_y, Far_neg_y, Far_pos_z and
% Far_neg_z + analogous terms in NClose.
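% Added note with a worked example (values assumed for illustration): take
% the TCP 30 mm from the cube face in +x and right next to the cube in y and
% z, so x1 = +30 and y1, z1 = +-0.001. Then VClose_pos(30) = 0.75 while
% Contact(0.001) is close to 1, so the term Contact(y1)*Contact(z1) dominates
% the max below and VClose_pos_x evaluates to about 0.75: the 1D memberships
% are combined so that a zone only fires when the TCP actually faces the cube.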
% x direction
VClose_pos_x=VClose_pos(x1)*max([Contact(y1)*Contact(z1), Contact(y1)*VClose_pos(z1), Contact(y1)*VClose_neg(z1), VClose_pos(y1)*Contact(z1), VClose_pos(y1)*VClose_neg(z1), VClose_pos(y1)*VClose_pos(z1), VClose_neg(y1)*Contact(z1), VClose_neg(y1)*VClose_neg(z1), VClose_neg(y1)*VClose_pos(z1)]);
VClose_neg_x=VClose_neg(x1)*max([Contact(y1)*Contact(z1), Contact(y1)*VClose_pos(z1), Contact(y1)*VClose_neg(z1), VClose_pos(y1)*Contact(z1), VClose_pos(y1)*VClose_neg(z1), VClose_pos(y1)*VClose_pos(z1), VClose_neg(y1)*Contact(z1), VClose_neg(y1)*VClose_neg(z1), VClose_neg(y1)*VClose_pos(z1)]);
Close_pos_x=max([VClose_pos(x1),Close_pos(x1)])*max([Contact(y1)*Contact(z1), Contact(y1)*max([Close_pos(z1),VClose_pos(z1)]), Contact(y1)*max([VClose_neg(z1), Close_neg(z1)]), max([VClose_pos(y1), Close_pos(y1)])*Contact(z1), max([VClose_pos(y1), Close_pos(y1)])*max([VClose_neg(z1), Close_neg(z1)]), max([VClose_pos(y1), Close_pos(y1)])*max([VClose_pos(z1), Close_pos(z1)]), max([VClose_neg(y1), Close_neg(y1)])*Contact(z1), max([VClose_neg(y1), Close_neg(y1)])*max([VClose_neg(z1), Close_neg(z1)]), max([VClose_neg(y1), Close_neg(y1)])*max([VClose_pos(z1), Close_pos(z1)])]);
Close_neg_x=max([VClose_neg(x1),Close_neg(x1)])*max([Contact(y1)*Contact(z1), Contact(y1)*max([Close_pos(z1),VClose_pos(z1)]), Contact(y1)*max([VClose_neg(z1), Close_neg(z1)]), max([VClose_pos(y1), Close_pos(y1)])*Contact(z1), max([VClose_pos(y1), Close_pos(y1)])*max([VClose_neg(z1), Close_neg(z1)]), max([VClose_pos(y1), Close_pos(y1)])*max([VClose_pos(z1), Close_pos(z1)]), max([VClose_neg(y1), Close_neg(y1)])*Contact(z1), max([VClose_neg(y1), Close_neg(y1)])*max([VClose_neg(z1), Close_neg(z1)]), max([VClose_neg(y1), Close_neg(y1)])*max([VClose_pos(z1), Close_pos(z1)])]);
NClose_pos_x=max([NClose_pos(x1),VClose_pos(x1),Close_pos(x1)])*max([Contact(y1)*Contact(z1), Contact(y1)*max([NClose_pos(z1),Close_pos(z1),VClose_pos(z1)]), Contact(y1)*max([NClose_neg(z1),VClose_neg(z1), Close_neg(z1)]), max([NClose_pos(y1),VClose_pos(y1), Close_pos(y1)])*Contact(z1), max([NClose_pos(y1),VClose_pos(y1), Close_pos(y1)])*max([NClose_neg(z1),VClose_neg(z1), Close_neg(z1)]), max([NClose_pos(y1),VClose_pos(y1), Close_pos(y1)])*max([NClose_pos(z1),VClose_pos(z1), Close_pos(z1)]), max([NClose_neg(y1),VClose_neg(y1), Close_neg(y1)])*Contact(z1), max([NClose_neg(y1),VClose_neg(y1), Close_neg(y1)])*max([NClose_neg(z1),VClose_neg(z1), Close_neg(z1)]), max([NClose_neg(y1),VClose_neg(y1), Close_neg(y1)])*max([NClose_pos(z1),VClose_pos(z1), Close_pos(z1)])]);
NClose_neg_x=max([NClose_neg(x1),VClose_neg(x1),Close_neg(x1)])*max([Contact(y1)*Contact(z1), Contact(y1)*max([NClose_pos(z1),Close_pos(z1),VClose_pos(z1)]), Contact(y1)*max([NClose_neg(z1),VClose_neg(z1), Close_neg(z1)]), max([NClose_pos(y1),VClose_pos(y1), Close_pos(y1)])*Contact(z1), max([NClose_pos(y1),VClose_pos(y1), Close_pos(y1)])*max([NClose_neg(z1),VClose_neg(z1), Close_neg(z1)]), max([NClose_pos(y1),VClose_pos(y1), Close_pos(y1)])*max([NClose_pos(z1),VClose_pos(z1), Close_pos(z1)]), max([NClose_neg(y1),VClose_neg(y1), Close_neg(y1)])*Contact(z1), max([NClose_neg(y1),VClose_neg(y1), Close_neg(y1)])*max([NClose_neg(z1),VClose_neg(z1), Close_neg(z1)]), max([NClose_neg(y1),VClose_neg(y1), Close_neg(y1)])*max([NClose_pos(z1),VClose_pos(z1), Close_pos(z1)])]);
NClose_pos_x_compl=1-max([(1-NClose_pos_compl(x1))*(1-NClose_neg_compl(y1))*(1-NClose_pos_compl(z1)), (1-NClose_pos_compl(x1))*(1-NClose_neg_compl(y1))*(1-NClose_neg_compl(z1)), (1-NClose_pos_compl(x1))*(1-NClose_pos_compl(y1))*(1-NClose_neg_compl(z1)), (1-NClose_pos_compl(x1))*(1-NClose_pos_compl(y1))*(1-NClose_pos_compl(z1)), (1-NClose_pos_compl(x1))*(Contact(y1))*(1-NClose_pos_compl(z1)), (1-NClose_pos_compl(x1))*(Contact(y1))*(1-NClose_neg_compl(z1)), (1-NClose_pos_compl(x1))*(1-NClose_neg_compl(y1))*(Contact(z1)), (1-NClose_pos_compl(x1))*(1-NClose_pos_compl(y1))*(Contact(z1)), (1-NClose_pos_compl(x1))*(Contact(y1))*(Contact(z1))]);
NClose_neg_x_compl=1-max([(1-NClose_neg_compl(x1))*(1-NClose_neg_compl(y1))*(1-NClose_pos_compl(z1)), (1-NClose_neg_compl(x1))*(1-NClose_neg_compl(y1))*(1-NClose_neg_compl(z1)), (1-NClose_neg_compl(x1))*(1-NClose_pos_compl(y1))*(1-NClose_neg_compl(z1)), (1-NClose_neg_compl(x1))*(1-NClose_pos_compl(y1))*(1-NClose_pos_compl(z1)), (1-NClose_neg_compl(x1))*(Contact(y1))*(1-NClose_pos_compl(z1)), (1-NClose_neg_compl(x1))*(Contact(y1))*(1-NClose_neg_compl(z1)), (1-NClose_neg_compl(x1))*(1-NClose_neg_compl(y1))*(Contact(z1)), (1-NClose_neg_compl(x1))*(1-NClose_pos_compl(y1))*(Contact(z1)), (1-NClose_neg_compl(x1))*(Contact(y1))*(Contact(z1))]);
Far_pos_x=1-max([(1-Far_pos(x1))*(1-Far_neg(y1))*(1-Far_pos(z1)), (1-Far_pos(x1))*(1-Far_neg(y1))*(1-Far_neg(z1)), (1-Far_pos(x1))*(1-Far_pos(y1))*(1-Far_neg(z1)), (1-Far_pos(x1))*(1-Far_pos(y1))*(1-Far_pos(z1)), (1-Far_pos(x1))*(Contact(y1))*(1-Far_pos(z1)), (1-Far_pos(x1))*(Contact(y1))*(1-Far_neg(z1)), (1-Far_pos(x1))*(1-Far_neg(y1))*(Contact(z1)), (1-Far_pos(x1))*(1-Far_pos(y1))*(Contact(z1)), (1-Far_pos(x1))*(Contact(y1))*(Contact(z1))]);
Far_neg_x=1-max([(1-Far_neg(x1))*(1-Far_neg(y1))*(1-Far_pos(z1)), (1-Far_neg(x1))*(1-Far_neg(y1))*(1-Far_neg(z1)), (1-Far_neg(x1))*(1-Far_pos(y1))*(1-Far_neg(z1)), (1-Far_neg(x1))*(1-Far_pos(y1))*(1-Far_pos(z1)), (1-Far_neg(x1))*(Contact(y1))*(1-Far_pos(z1)), (1-Far_neg(x1))*(Contact(y1))*(1-Far_neg(z1)), (1-Far_neg(x1))*(1-Far_neg(y1))*(Contact(z1)), (1-Far_neg(x1))*(1-Far_pos(y1))*(Contact(z1)), (1-Far_neg(x1))*(Contact(y1))*(Contact(z1))]);
% y direction
VClose_pos_y=VClose_pos(y1)*max([Contact(x1)*Contact(z1), Contact(x1)*VClose_pos(z1), Contact(x1)*VClose_neg(z1), VClose_pos(x1)*Contact(z1), VClose_pos(x1)*VClose_neg(z1), VClose_pos(x1)*VClose_pos(z1), VClose_neg(x1)*Contact(z1), VClose_neg(x1)*VClose_neg(z1), VClose_neg(x1)*VClose_pos(z1)]);
VClose_neg_y=VClose_neg(y1)*max([Contact(x1)*Contact(z1), Contact(x1)*VClose_pos(z1), Contact(x1)*VClose_neg(z1), VClose_pos(x1)*Contact(z1), VClose_pos(x1)*VClose_neg(z1), VClose_pos(x1)*VClose_pos(z1), VClose_neg(x1)*Contact(z1), VClose_neg(x1)*VClose_neg(z1), VClose_neg(x1)*VClose_pos(z1)]);
Close_pos_y=max([VClose_pos(y1),Close_pos(y1)])*max([Contact(x1)*Contact(z1), Contact(x1)*max([Close_pos(z1),VClose_pos(z1)]), Contact(x1)*max([VClose_neg(z1), Close_neg(z1)]), max([VClose_pos(x1), Close_pos(x1)])*Contact(z1), max([VClose_pos(x1), Close_pos(x1)])*max([VClose_neg(z1), Close_neg(z1)]), max([VClose_pos(x1), Close_pos(x1)])*max([VClose_pos(z1), Close_pos(z1)]), max([VClose_neg(x1), Close_neg(x1)])*Contact(z1), max([VClose_neg(x1), Close_neg(x1)])*max([VClose_neg(z1), Close_neg(z1)]), max([VClose_neg(x1), Close_neg(x1)])*max([VClose_pos(z1), Close_pos(z1)])]);
Close_neg_y=max([VClose_neg(y1),Close_neg(y1)])*max([Contact(x1)*Contact(z1), Contact(x1)*max([Close_pos(z1),VClose_pos(z1)]), Contact(x1)*max([VClose_neg(z1), Close_neg(z1)]), max([VClose_pos(x1), Close_pos(x1)])*Contact(z1), max([VClose_pos(x1), Close_pos(x1)])*max([VClose_neg(z1), Close_neg(z1)]), max([VClose_pos(x1), Close_pos(x1)])*max([VClose_pos(z1), Close_pos(z1)]), max([VClose_neg(x1), Close_neg(x1)])*Contact(z1), max([VClose_neg(x1), Close_neg(x1)])*max([VClose_neg(z1), Close_neg(z1)]), max([VClose_neg(x1), Close_neg(x1)])*max([VClose_pos(z1), Close_pos(z1)])]);
NClose_pos_y=max([NClose_pos(y1),VClose_pos(y1),Close_pos(y1)])*max([Contact(x1)*Contact(z1), Contact(x1)*max([NClose_pos(z1),Close_pos(z1),VClose_pos(z1)]), Contact(x1)*max([NClose_neg(z1),VClose_neg(z1), Close_neg(z1)]), max([NClose_pos(x1),VClose_pos(x1), Close_pos(x1)])*Contact(z1), max([NClose_pos(x1),VClose_pos(x1), Close_pos(x1)])*max([NClose_neg(z1),VClose_neg(z1), Close_neg(z1)]), max([NClose_pos(x1),VClose_pos(x1), Close_pos(x1)])*max([NClose_pos(z1),VClose_pos(z1), Close_pos(z1)]), max([NClose_neg(x1),VClose_neg(x1), Close_neg(x1)])*Contact(z1), max([NClose_neg(x1),VClose_neg(x1), Close_neg(x1)])*max([NClose_neg(z1),VClose_neg(z1), Close_neg(z1)]), max([NClose_neg(x1),VClose_neg(x1), Close_neg(x1)])*max([NClose_pos(z1),VClose_pos(z1), Close_pos(z1)])]);
NClose_neg_y=max([NClose_neg(y1),VClose_neg(y1),Close_neg(y1)])*max([Contact(x1)*Contact(z1), Contact(x1)*max([NClose_pos(z1),Close_pos(z1),VClose_pos(z1)]), Contact(x1)*max([NClose_neg(z1),VClose_neg(z1), Close_neg(z1)]), max([NClose_pos(x1),VClose_pos(x1), Close_pos(x1)])*Contact(z1), max([NClose_pos(x1),VClose_pos(x1), Close_pos(x1)])*max([NClose_neg(z1),VClose_neg(z1), Close_neg(z1)]), max([NClose_pos(x1),VClose_pos(x1), Close_pos(x1)])*max([NClose_pos(z1),VClose_pos(z1), Close_pos(z1)]), max([NClose_neg(x1),VClose_neg(x1), Close_neg(x1)])*Contact(z1), max([NClose_neg(x1),VClose_neg(x1), Close_neg(x1)])*max([NClose_neg(z1),VClose_neg(z1), Close_neg(z1)]), max([NClose_neg(x1),VClose_neg(x1), Close_neg(x1)])*max([NClose_pos(z1),VClose_pos(z1), Close_pos(z1)])]);
NClose_pos_y_compl=1-max([(1-NClose_pos_compl(y1))*(1-NClose_neg_compl(x1))*(1-NClose_pos_compl(z1)), (1-NClose_pos_compl(y1))*(1-NClose_neg_compl(x1))*(1-NClose_neg_compl(z1)), (1-NClose_pos_compl(y1))*(1-NClose_pos_compl(x1))*(1-NClose_neg_compl(z1)), (1-NClose_pos_compl(y1))*(1-NClose_pos_compl(x1))*(1-NClose_pos_compl(z1)), (1-NClose_pos_compl(y1))*(Contact(x1))*(1-NClose_pos_compl(z1)), (1-NClose_pos_compl(y1))*(Contact(x1))*(1-NClose_neg_compl(z1)), (1-NClose_pos_compl(y1))*(1-NClose_neg_compl(x1))*(Contact(z1)), (1-NClose_pos_compl(y1))*(1-NClose_pos_compl(x1))*(Contact(z1)), (1-NClose_pos_compl(y1))*(Contact(x1))*(Contact(z1))]);
NClose_neg_y_compl=1-max([(1-NClose_neg_compl(y1))*(1-NClose_neg_compl(x1))*(1-NClose_pos_compl(z1)), (1-NClose_neg_compl(y1))*(1-NClose_neg_compl(x1))*(1-NClose_neg_compl(z1)), (1-NClose_neg_compl(y1))*(1-NClose_pos_compl(x1))*(1-NClose_neg_compl(z1)), (1-NClose_neg_compl(y1))*(1-NClose_pos_compl(x1))*(1-NClose_pos_compl(z1)), (1-NClose_neg_compl(y1))*(Contact(x1))*(1-NClose_pos_compl(z1)), (1-NClose_neg_compl(y1))*(Contact(x1))*(1-NClose_neg_compl(z1)), (1-NClose_neg_compl(y1))*(1-NClose_neg_compl(x1))*(Contact(z1)), (1-NClose_neg_compl(y1))*(1-NClose_pos_compl(x1))*(Contact(z1)), (1-NClose_neg_compl(y1))*(Contact(x1))*(Contact(z1))]);
Far_pos_y=1-max([(1-Far_pos(y1))*(1-Far_neg(x1))*(1-Far_pos(z1)), (1-Far_pos(y1))*(1-Far_neg(x1))*(1-Far_neg(z1)), (1-Far_pos(y1))*(1-Far_pos(x1))*(1-Far_neg(z1)), (1-Far_pos(y1))*(1-Far_pos(x1))*(1-Far_pos(z1)), (1-Far_pos(y1))*(Contact(x1))*(1-Far_pos(z1)), (1-Far_pos(y1))*(Contact(x1))*(1-Far_neg(z1)), (1-Far_pos(y1))*(1-Far_neg(x1))*(Contact(z1)), (1-Far_pos(y1))*(1-Far_pos(x1))*(Contact(z1)), (1-Far_pos(y1))*(Contact(x1))*(Contact(z1))]);
Far_neg_y=1-max([(1-Far_neg(y1))*(1-Far_neg(x1))*(1-Far_pos(z1)), (1-Far_neg(y1))*(1-Far_neg(x1))*(1-Far_neg(z1)), (1-Far_neg(y1))*(1-Far_pos(x1))*(1-Far_neg(z1)), (1-Far_neg(y1))*(1-Far_pos(x1))*(1-Far_pos(z1)), (1-Far_neg(y1))*(Contact(x1))*(1-Far_pos(z1)), (1-Far_neg(y1))*(Contact(x1))*(1-Far_neg(z1)), (1-Far_neg(y1))*(1-Far_neg(x1))*(Contact(z1)), (1-Far_neg(y1))*(1-Far_pos(x1))*(Contact(z1)), (1-Far_neg(y1))*(Contact(x1))*(Contact(z1))]);
% z direction
VClose_pos_z=VClose_pos(z1)*max([Contact(y1)*Contact(x1), Contact(y1)*VClose_pos(x1), Contact(y1)*VClose_neg(x1), VClose_pos(y1)*Contact(x1), VClose_pos(y1)*VClose_neg(x1), VClose_pos(y1)*VClose_pos(x1), VClose_neg(y1)*Contact(x1), VClose_neg(y1)*VClose_neg(x1), VClose_neg(y1)*VClose_pos(x1)]);
VClose_neg_z=VClose_neg(z1)*max([Contact(y1)*Contact(x1), Contact(y1)*VClose_pos(x1), Contact(y1)*VClose_neg(x1), VClose_pos(y1)*Contact(x1), VClose_pos(y1)*VClose_neg(x1), VClose_pos(y1)*VClose_pos(x1), VClose_neg(y1)*Contact(x1), VClose_neg(y1)*VClose_neg(x1), VClose_neg(y1)*VClose_pos(x1)]);
Close_pos_z=max([VClose_pos(z1),Close_pos(z1)])*max([Contact(y1)*Contact(x1), Contact(y1)*max([Close_pos(x1),VClose_pos(x1)]), Contact(y1)*max([VClose_neg(x1), Close_neg(x1)]), max([VClose_pos(y1), Close_pos(y1)])*Contact(x1), max([VClose_pos(y1), Close_pos(y1)])*max([VClose_neg(x1), Close_neg(x1)]), max([VClose_pos(y1), Close_pos(y1)])*max([VClose_pos(x1), Close_pos(x1)]), max([VClose_neg(y1), Close_neg(y1)])*Contact(x1), max([VClose_neg(y1), Close_neg(y1)])*max([VClose_neg(x1), Close_neg(x1)]), max([VClose_neg(y1), Close_neg(y1)])*max([VClose_pos(x1), Close_pos(x1)])]);
Close_neg_z=max([VClose_neg(z1),Close_neg(z1)])*max([Contact(y1)*Contact(x1), Contact(y1)*max([Close_pos(x1),VClose_pos(x1)]), Contact(y1)*max([VClose_neg(x1), Close_neg(x1)]), max([VClose_pos(y1), Close_pos(y1)])*Contact(x1), max([VClose_pos(y1), Close_pos(y1)])*max([VClose_neg(x1), Close_neg(x1)]), max([VClose_pos(y1), Close_pos(y1)])*max([VClose_pos(x1), Close_pos(x1)]), max([VClose_neg(y1), Close_neg(y1)])*Contact(x1), max([VClose_neg(y1), Close_neg(y1)])*max([VClose_neg(x1), Close_neg(x1)]), max([VClose_neg(y1), Close_neg(y1)])*max([VClose_pos(x1), Close_pos(x1)])]);
NClose_pos_z=max([NClose_pos(z1),VClose_pos(z1),Close_pos(z1)])*max([Contact(y1)*Contact(x1), Contact(y1)*max([NClose_pos(x1),Close_pos(x1),VClose_pos(x1)]), Contact(y1)*max([NClose_neg(x1),VClose_neg(x1), Close_neg(x1)]), max([NClose_pos(y1),VClose_pos(y1), Close_pos(y1)])*Contact(x1), max([NClose_pos(y1),VClose_pos(y1), Close_pos(y1)])*max([NClose_neg(x1),VClose_neg(x1), Close_neg(x1)]), max([NClose_pos(y1),VClose_pos(y1), Close_pos(y1)])*max([NClose_pos(x1),VClose_pos(x1), Close_pos(x1)]), max([NClose_neg(y1),VClose_neg(y1), Close_neg(y1)])*Contact(x1), max([NClose_neg(y1),VClose_neg(y1), Close_neg(y1)])*max([NClose_neg(x1),VClose_neg(x1), Close_neg(x1)]), max([NClose_neg(y1),VClose_neg(y1), Close_neg(y1)])*max([NClose_pos(x1),VClose_pos(x1), Close_pos(x1)])]);
NClose_neg_z=max([NClose_neg(z1),VClose_neg(z1),Close_neg(z1)])*max([Contact(y1)*Contact(x1), Contact(y1)*max([NClose_pos(x1),Close_pos(x1),VClose_pos(x1)]), Contact(y1)*max([NClose_neg(x1),VClose_neg(x1), Close_neg(x1)]), max([NClose_pos(y1),VClose_pos(y1), Close_pos(y1)])*Contact(x1), max([NClose_pos(y1),VClose_pos(y1), Close_pos(y1)])*max([NClose_neg(x1),VClose_neg(x1), Close_neg(x1)]), max([NClose_pos(y1),VClose_pos(y1), Close_pos(y1)])*max([NClose_pos(x1),VClose_pos(x1), Close_pos(x1)]), max([NClose_neg(y1),VClose_neg(y1), Close_neg(y1)])*Contact(x1), max([NClose_neg(y1),VClose_neg(y1), Close_neg(y1)])*max([NClose_neg(x1),VClose_neg(x1), Close_neg(x1)]), max([NClose_neg(y1),VClose_neg(y1), Close_neg(y1)])*max([NClose_pos(x1),VClose_pos(x1), Close_pos(x1)])]);
NClose_pos_z_compl=1-max([(1-NClose_pos_compl(z1))*(1-NClose_neg_compl(y1))*(1-NClose_pos_compl(x1)), (1-NClose_pos_compl(z1))*(1-NClose_neg_compl(y1))*(1-NClose_neg_compl(x1)), (1-NClose_pos_compl(z1))*(1-NClose_pos_compl(y1))*(1-NClose_neg_compl(x1)), (1-NClose_pos_compl(z1))*(1-NClose_pos_compl(y1))*(1-NClose_pos_compl(x1)), (1-NClose_pos_compl(z1))*(Contact(y1))*(1-NClose_pos_compl(x1)), (1-NClose_pos_compl(z1))*(Contact(y1))*(1-NClose_neg_compl(x1)), (1-NClose_pos_compl(z1))*(1-NClose_neg_compl(y1))*(Contact(x1)), (1-NClose_pos_compl(z1))*(1-NClose_pos_compl(y1))*(Contact(x1)), (1-NClose_pos_compl(z1))*(Contact(y1))*(Contact(x1))]);
NClose_neg_z_compl=1-max([(1-NClose_neg_compl(z1))*(1-NClose_neg_compl(y1))*(1-NClose_pos_compl(x1)), (1-NClose_neg_compl(z1))*(1-NClose_neg_compl(y1))*(1-NClose_neg_compl(x1)), (1-NClose_neg_compl(z1))*(1-NClose_pos_compl(y1))*(1-NClose_neg_compl(x1)), (1-NClose_neg_compl(z1))*(1-NClose_pos_compl(y1))*(1-NClose_pos_compl(x1)), (1-NClose_neg_compl(z1))*(Contact(y1))*(1-NClose_pos_compl(x1)), (1-NClose_neg_compl(z1))*(Contact(y1))*(1-NClose_neg_compl(x1)), (1-NClose_neg_compl(z1))*(1-NClose_neg_compl(y1))*(Contact(x1)), (1-NClose_neg_compl(z1))*(1-NClose_pos_compl(y1))*(Contact(x1)), (1-NClose_neg_compl(z1))*(Contact(y1))*(Contact(x1))]);
Far_pos_z=1-max([(1-Far_pos(z1))*(1-Far_neg(y1))*(1-Far_pos(x1)), (1-Far_pos(z1))*(1-Far_neg(y1))*(1-Far_neg(x1)), (1-Far_pos(z1))*(1-Far_pos(y1))*(1-Far_neg(x1)), (1-Far_pos(z1))*(1-Far_pos(y1))*(1-Far_pos(x1)), (1-Far_pos(z1))*(Contact(y1))*(1-Far_pos(x1)), (1-Far_pos(z1))*(Contact(y1))*(1-Far_neg(x1)), (1-Far_pos(z1))*(1-Far_neg(y1))*(Contact(x1)), (1-Far_pos(z1))*(1-Far_pos(y1))*(Contact(x1)), (1-Far_pos(z1))*(Contact(y1))*(Contact(x1))]);
Far_neg_z=1-max([(1-Far_neg(z1))*(1-Far_neg(y1))*(1-Far_pos(x1)), (1-Far_neg(z1))*(1-Far_neg(y1))*(1-Far_neg(x1)), (1-Far_neg(z1))*(1-Far_pos(y1))*(1-Far_neg(x1)), (1-Far_neg(z1))*(1-Far_pos(y1))*(1-Far_pos(x1)), (1-Far_neg(z1))*(Contact(y1))*(1-Far_pos(x1)), (1-Far_neg(z1))*(Contact(y1))*(1-Far_neg(x1)), (1-Far_neg(z1))*(1-Far_neg(y1))*(Contact(x1)), (1-Far_neg(z1))*(1-Far_pos(y1))*(Contact(x1)), (1-Far_neg(z1))*(Contact(y1))*(Contact(x1))]);
%Repeller rules in position:
action_z(1,1:7)=VClose_pos_x*[0 0 0 0 0 0 1];
action_z(2,1:7)=VClose_pos_y*[0 0 0 0 0 0 1];
action_z(3,1:7)=Close_pos_x*[0 0 0 0 0 0 1];
action_z(4,1:7)=Close_pos_y*[0 0 0 0 0 0 1];
action_z(5,1:7)=VClose_neg_x*[0 0 0 0 0 0 1];
action_z(6,1:7)=VClose_neg_y*[0 0 0 0 0 0 1];
action_z(7,1:7)=Close_neg_x*[0 0 0 0 0 0 1];
action_z(8,1:7)=Close_neg_y*[0 0 0 0 0 0 1];
if max(max(action_z))<= 0.5
z_avoidance=0;
else
z_avoidance=1;
end
if y_avoid == true
if (sign_y_avoidance == 1)
action_y(1,1:7)=VClose_neg_z*[0 0 0 0 0 0 1];
action_y(2,1:7)=Close_neg_z*[0 0 0 0 0 0 1];
action_y(3,1:7)=NClose_neg_z*[0 0 0 0 0 0 1];
end
if (sign_y_avoidance == -1)
action_y(1,1:7)=VClose_neg_z*[1 0 0 0 0 0 0];
action_y(2,1:7)=Close_neg_z*[1 0 0 0 0 0 0];
action_y(3,1:7)=NClose_neg_z*[1 0 0 0 0 0 0];
end
if max(max(action_y))<= 0.5
y_avoidance=0;
else
y_avoidance=1;
end
end

if x_avoid == true
if (sign_x_avoidance == 1)
action_x(1,1:7)=VClose_neg_z*[0 0 0 0 0 0 1];
action_x(2,1:7)=Close_neg_z*[0 0 0 0 0 0 1];
action_x(3,1:7)=NClose_neg_z*[0 0 0 0 0 0 1];
end
if (sign_x_avoidance == -1)
action_x(1,1:7)=VClose_neg_z*[1 0 0 0 0 0 0];
action_x(2,1:7)=Close_neg_z*[1 0 0 0 0 0 0];
action_x(3,1:7)=NClose_neg_z*[1 0 0 0 0 0 0];
end
if max(max(action_x))<= 0.5
x_avoidance=0;
else
x_avoidance=1;
end
end
%Attractor rules in position
% Output functions (Sugeno type) form a set of 7 singletons corresponding to velocities...
% of predetermined level
action_x(5,1:7)=GF_pos(x2)*algebraic_sum([NClose_pos_x_compl, Far_pos_x])*(1-y_avoidance)*...
(1-x_avoidance)*[0 0 0 0 0 0 1];
action_x(6,1:7)=GF_neg(x2)*algebraic_sum([NClose_neg_x_compl, Far_neg_x])*(1-y_avoidance)*...
(1-x_avoidance)*[1 0 0 0 0 0 0];
action_x(7,1:7)=GC_pos(x2)*Far_pos_x*(1-y_avoidance)*(1-x_avoidance)*[0 0 0 0 0 1 0];
action_x(8,1:7)=GC_neg(x2)*Far_neg_x*(1-y_avoidance)*(1-x_avoidance)*[0 1 0 0 0 0 0];
action_x(9,1:7)=GVC_pos(x2)*Far_pos_x*(1-y_avoidance)*(1-x_avoidance)*[0 0 0 0 1 0 0];
action_x(10,1:7)=GVC_neg(x2)*Far_neg_x*(1-y_avoidance)*(1-x_avoidance)*[0 0 1 0 0 0 0];
action_x(11,1:7)=Goal(x2)*[0 0 0 1 0 0 0];
action_z(9,1:7)=GF_pos(z2)*algebraic_sum([NClose_pos_z_compl, Far_pos_z])*(1-z_avoidance)*...
[0 0 0 0 0 0 1];
action_z(10,1:7)=GF_neg(z2)*algebraic_sum([NClose_neg_z_compl, Far_neg_z])*(1-z_avoidance)*...
[1 0 0 0 0 0 0];
action_z(11,1:7)=GC_pos(z2)*Far_pos_z*(1-z_avoidance)*[0 0 0 0 0 1 0];
action_z(12,1:7)=GC_neg(z2)*Far_neg_z*(1-z_avoidance)*[0 1 0 0 0 0 0];
action_z(13,1:7)=GVC_pos(z2)*Far_pos_z*(1-z_avoidance)*[0 0 0 0 1 0 0];
action_z(14,1:7)=GVC_neg(z2)*Far_neg_z*(1-z_avoidance)*[0 0 1 0 0 0 0];
action_z(15,1:7)=Goal(z2)*(1-z_avoidance)*[0 0 0 1 0 0 0];
action_y(5,1:7)=GF_pos(y2)*algebraic_sum([NClose_pos_y_compl, Far_pos_y])*(1-y_avoidance)*...
(1-x_avoidance)*[0 0 0 0 0 0 1];
action_y(6,1:7)=GF_neg(y2)*algebraic_sum([NClose_neg_y_compl, Far_neg_y])*(1-y_avoidance)*...
(1-x_avoidance)*[1 0 0 0 0 0 0];
action_y(7,1:7)=GC_pos(y2)*Far_pos_y*(1-y_avoidance)*(1-x_avoidance)*[0 0 0 0 0 1 0];
action_y(8,1:7)=GC_neg(y2)*Far_neg_y*(1-y_avoidance)*(1-x_avoidance)*[0 1 0 0 0 0 0];
action_y(9,1:7)=GVC_pos(y2)*Far_pos_y*(1-y_avoidance)*(1-x_avoidance)*[0 0 0 0 1 0 0];
action_y(10,1:7)=GVC_neg(y2)*Far_neg_y*(1-y_avoidance)*(1-x_avoidance)*[0 0 1 0 0 0 0];
action_y(11,1:7)=Goal(y2)*(1-y_avoidance)*[0 0 0 1 0 0 0];
%Repeller rules in orientation
%Detection of intersection of zones type NClose:
intersection_NClose= (NClose_pos_x*NClose_pos_y > 0.05 | NClose_pos_x*NClose_neg_y > 0.05 | ...
NClose_neg_x*NClose_pos_y > 0.05 | NClose_neg_x*NClose_neg_y > 0.05) ;
% Step 1 of repeller rules: detection of the accurate avoidance angle
if (Close_final== false & new_alpha_asked == false & new_beta_asked == false )
action_alpha(1,1:3)=max([NClose_pos_y, Close_pos_y, VClose_pos_y])*angle_0(angle(1))*...
angle_0(angle(2))*angle_0(angle(3))*[0 0 1];
% a value of "1" in a certain position of alpha_desired corresponds to an angle of [-90 0 +90]
action_alpha(2,1:3)=max([NClose_neg_y, Close_neg_y, VClose_neg_y])*angle_0(angle(1))*...
angle_0(angle(2))*angle_0(angle(3))*[1 0 0];
action_beta(1,1:3)=max([NClose_pos_x, Close_pos_x, VClose_pos_x])*angle_0(angle(1))*...

angle_0(angle(2))*angle_0(angle(3))*[1 0 0];
action_beta(2,1:3)=max([NClose_neg_x, Close_neg_x, VClose_neg_x])*angle_0(angle(1))*...
angle_0(angle(2))*angle_0(angle(3))*[0 0 1];
action_alpha(3,1:3)=max([NClose_neg_z, Close_neg_z, VClose_neg_z])*max([angle_90_pos(angle(1))*...
angle_0(angle(2))*angle_0(angle(3)),angle_90_neg(angle(1))*angle_0(angle(2))*...
angle_0(angle(3))])*[0 1 0];
action_beta(3,1:3)=max([NClose_neg_z, Close_neg_z, VClose_neg_z])*max([angle_0(angle(1))*...
angle_90_pos(angle(2))*angle_0(angle(3)),angle_0(angle(1))*angle_90_neg(angle(2))*...
angle_0(angle(3))])*[0 1 0];
if intersection_NClose == true
if (abs(y1) > abs(x1) & max(max(action_alpha)) >= 0.9 )
new_alpha_asked = true;
elseif (abs(x1) > abs(y1) & max(max(action_beta)) >= 0.9 )
new_beta_asked = true;
end
elseif max(max(action_alpha)) >= 0.9
new_alpha_asked = true;
elseif max(max(action_beta)) >=0.9
new_beta_asked = true;
end
end
%Step 2 in repeller rules: setting of flag to move towards final orientation
if Close_final==false
if (sqrt((x_fin-pos(1))^2+(y_fin-pos(2))^2+(z_fin-pos(3))^2)<=250)
Close_final=true;
end
end
%Step 3 in repeller rules: return to zero orientation of End Effector
if (Close_final == true & new_alpha_asked == false & new_beta_asked == false)
action_alpha(1,1:3)=Far_pos_y*angle_90_pos(angle(1))*angle_0(angle(2))*angle_0(angle(3))*[0 1 0];
action_alpha(2,1:3)=Far_neg_y*angle_90_neg(angle(1))*angle_0(angle(2))*angle_0(angle(3))*[0 1 0];
action_beta(1,1:3)=Far_pos_x*angle_0(angle(1))*angle_90_pos(angle(2))*angle_0(angle(3))*[0 1 0];
action_beta(2,1:3)=Far_neg_x*angle_0(angle(1))*angle_90_neg(angle(2))*angle_0(angle(3))*[0 1 0];
if intersection_NClose == true
if (abs(y1) > abs(x1) & max(max(action_alpha)) >= 0.9 )
new_alpha_asked = true;
elseif (abs(x1) > abs(y1) & max(max(action_beta)) >= 0.9 )
new_beta_asked = true;
end
elseif max(max(action_alpha)) >= 0.9
new_alpha_asked = true;
elseif max(max(action_beta)) >=0.9
new_beta_asked = true;
end
end
%Step 4 in repeller rules: determination of the avoidance action
if (new_alpha_asked==true )
[dummy_1, alpha_level]=max(max(action_alpha));
if alpha_level==1
alpha_goal=-90;
elseif alpha_level==2
alpha_goal=0;
elseif alpha_level==3
alpha_goal=90;
end
alpha_delta=alpha_goal-angle(1);
% % Animate the rotation of the End Effector (visualization only)
% for i=1:10
%     set(pinza_1,'visible','off');
%     set(pinza_2,'visible','off');
%     set(pinza_3,'visible','off');
%     set(pinza_4,'visible','off');
%     % draw position and orientation of End Effector
%     % REMARK: premultiplication of matrices guarantees rotation with respect to fixed X- and Y- axes
%     p1=Rx(angle(1)+alpha_delta*i/10)*[0; +40; 0]+[pos(1); pos(2); pos(3)];
%     p2=Rx(angle(1)+alpha_delta*i/10)*[0; 40; +100]+[pos(1); pos(2); pos(3)];
%     p3=Rx(angle(1)+alpha_delta*i/10)*[0; 0; +100]+[pos(1); pos(2); pos(3)];
%     p4=Rx(angle(1)+alpha_delta*i/10)*[0; -40; +100]+[pos(1); pos(2); pos(3)];
%     p5=Rx(angle(1)+alpha_delta*i/10)*[0; -40; 0]+[pos(1); pos(2); pos(3)];
%     p6=Rx(angle(1)+alpha_delta*i/10)*[0; 0; +200]+[pos(1); pos(2); pos(3)];
%     pinza_1=plot3([p2(1);p1(1)],[p2(2);p1(2)],[p2(3);p1(3)], 'LineWidth',3,'color',[1 0 0]);
%     pinza_2=plot3([p2(1);p4(1)],[p2(2);p4(2)],[p2(3);p4(3)], 'LineWidth',3,'color',[1 0 0]);
%     pinza_3=plot3([p4(1);p5(1)],[p4(2);p5(2)],[p4(3);p5(3)], 'LineWidth',3,'color',[1 0 0]);
%     pinza_4=plot3([p6(1);p3(1)],[p6(2);p3(2)],[p6(3);p3(3)], 'LineWidth',3,'color',[1 0 0]);
%     pause(0.1)
% end
angle(1)=alpha_goal;
Pos=[Pos; Pos(size(Pos,1),:)];
Angle=[Angle; angle(1) angle(2)];
new_alpha_asked=false;
end
if (new_beta_asked==true )
[dummy_1, beta_level]=max(max(action_beta));
if beta_level==1
beta_goal=-90;
elseif beta_level==2
beta_goal=0;
elseif beta_level==3
beta_goal=90;
end
beta_delta=beta_goal-angle(2);

% % Animate the rotation of the End Effector (visualization only)
% for i=1:10
%     set(pinza_1,'visible','off');
%     set(pinza_2,'visible','off');
%     set(pinza_3,'visible','off');
%     set(pinza_4,'visible','off');
%     % draw position and orientation of End Effector
%     % REMARK: premultiplication of matrices guarantees rotation with respect to fixed X- and Y- axes
%     p1=Ry(angle(2)+beta_delta*i/10)*[0; +40; 0]+[pos(1); pos(2); pos(3)];
%     p2=Ry(angle(2)+beta_delta*i/10)*[0; 40; +100]+[pos(1); pos(2); pos(3)];
%     p3=Ry(angle(2)+beta_delta*i/10)*[0; 0; +100]+[pos(1); pos(2); pos(3)];
%     p4=Ry(angle(2)+beta_delta*i/10)*[0; -40; +100]+[pos(1); pos(2); pos(3)];
%     p5=Ry(angle(2)+beta_delta*i/10)*[0; -40; 0]+[pos(1); pos(2); pos(3)];
%     p6=Ry(angle(2)+beta_delta*i/10)*[0; 0; +200]+[pos(1); pos(2); pos(3)];
%     pinza_1=plot3([p2(1);p1(1)],[p2(2);p1(2)],[p2(3);p1(3)], 'LineWidth',3,'color',[1 0 0]);
%     pinza_2=plot3([p2(1);p4(1)],[p2(2);p4(2)],[p2(3);p4(3)], 'LineWidth',3,'color',[1 0 0]);
%     pinza_3=plot3([p4(1);p5(1)],[p4(2);p5(2)],[p4(3);p5(3)], 'LineWidth',3,'color',[1 0 0]);
%     pinza_4=plot3([p6(1);p3(1)],[p6(2);p3(2)],[p6(3);p3(3)], 'LineWidth',3,'color',[1 0 0]);
%     pause(0.1)
% end

angle(2)=beta_goal;
Pos=[Pos; Pos(size(Pos,1),:)];
Angle=[Angle; angle(1) angle(2)];
new_beta_asked=false;
end
% Aggregation and defuzzification of the x, y and z actions
x_agregate=max(action_x);
y_agregate=max(action_y);
z_agregate=max(action_z);
% % delete previous drawing of the End Effector
% if A
%     set(pinza_1,'visible','off');
%     set(pinza_2,'visible','off');
%     set(pinza_3,'visible','off');
%     set(pinza_4,'visible','off');
% end
%Actions
[x_level, speed_x]=max(x_agregate);
if speed_x==1
D_pos_1=x_level*(-30);
elseif speed_x==2
D_pos_1=x_level*(-5);
elseif speed_x==3
D_pos_1=x_level*(-3);
elseif speed_x==4
D_pos_1=x_level*(0);
elseif speed_x==5
D_pos_1=x_level*(3);
elseif speed_x==6
D_pos_1=x_level*(5);
elseif speed_x==7
D_pos_1=x_level*(30);
end
[y_level, speed_y]=max(y_agregate);
if speed_y==1
D_pos_2=y_level*(-30);
elseif speed_y==2
D_pos_2=y_level*(-5);
elseif speed_y==3
D_pos_2=y_level*(-3);
elseif speed_y==4
D_pos_2=y_level*(0);
elseif speed_y==5
D_pos_2=y_level*(3);
elseif speed_y==6
D_pos_2=y_level*(5);
elseif speed_y==7
D_pos_2=y_level*(30);
end
[z_level, speed_z]=max(z_agregate);
if speed_z==1
D_pos_3=z_level*(-30);
elseif speed_z==2
D_pos_3=z_level*(-5);
elseif speed_z==3
D_pos_3=z_level*(-3);
elseif speed_z==4
D_pos_3=z_level*(0);
elseif speed_z==5

D_pos_3=z_level*(3);
elseif speed_z==6
D_pos_3=z_level*(5);
elseif speed_z==7
D_pos_3=z_level*(30);
end
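% Added note with a worked example (values assumed for illustration): the
% seven singleton outputs correspond to steps of [-30 -5 -3 0 +3 +5 +30] mm.
% If, say, z_agregate = [0 0 0 0.2 0 0 0.8], then max() yields z_level = 0.8
% at index speed_z = 7, so the TCP is shifted by D_pos_3 = 0.8*30 = 24 mm in
% +z during this iteration: the firing strength of the winning rule scales
% the predefined singleton velocity.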
pos=pos+[D_pos_1, D_pos_2, D_pos_3];
% % draw position of the Tool Center Point
% plot3(pos(1), pos(2), pos(3),'bo')
% % draw position and orientation of End Effector
% p1=p1+[D_pos_1; D_pos_2; D_pos_3];
% p2=p2+[D_pos_1; D_pos_2; D_pos_3];
% p3=p3+[D_pos_1; D_pos_2; D_pos_3];
% p4=p4+[D_pos_1; D_pos_2; D_pos_3];
% p5=p5+[D_pos_1; D_pos_2; D_pos_3];
% p6=p6+[D_pos_1; D_pos_2; D_pos_3];
% pinza_1=plot3([p2(1);p1(1)],[p2(2);p1(2)],[p2(3);p1(3)], 'LineWidth',3,'color',[1 0 0]);
% pinza_2=plot3([p2(1);p4(1)],[p2(2);p4(2)],[p2(3);p4(3)], 'LineWidth',3,'color',[1 0 0]);
% pinza_3=plot3([p4(1);p5(1)],[p4(2);p5(2)],[p4(3);p5(3)], 'LineWidth',3,'color',[1 0 0]);
% pinza_4=plot3([p6(1);p3(1)],[p6(2);p3(2)],[p6(3);p3(3)], 'LineWidth',3,'color',[1 0 0]);
%updating of position and rotation angles
Pos=[Pos; pos];
Angle=[Angle; angle];

% ----- for SOCKET COMMUNICATION -----
% 1. update of while condition
A= (sqrt((x_fin-pos(1))^2+(y_fin-pos(2))^2+(z_fin-pos(3))^2)>=10);
% 2. counts update and sending position variable over socket
count_0=count_0+1;
if ((count_0 == 4) | ~A)
count_0=0;
count_1=count_1+1;
%convert last position and angles to string and add to sending
%variable
s=strcat(s,int_string([pos angle]));
if ((count_1==4) | ~A)
count_1=0;
if ~A
s=strcat(s,'S'); % Add an 'S' to indicate the final position was reached
pause(0.15) % to make sure KAREL system buffer is empty before sending...
end
success=mssend(sock,s);
count_2=count_2+1;
socket{count_2}=s;
s='';
success=1; % force the success flag (the raw mssend status above is not used)
if success>0
if ~A
disp(sprintf('Last package was successfully sent over the socket!'));
else
disp(sprintf(strcat('Package n ',num2str(count_2),' was successfully sent over the socket!')));
end
disp(sprintf(strcat('Calculation and sending time: ',num2str(etime(clock,t)),'seconds.')));
disp(sprintf('...'));
elseif (success == -1)
disp(sprintf('Package could not be sent over the socket!!!'));
end
t=clock;
end
end

% 3. Resetting sending flag
if (success>0)
success=0;
end
% ------ END of SOCKET COMMUNICATION section -----
end % end of the big while loop triggered by condition A
%Final representation of obstacle and every fourth calculated position
figure(1)
axis([-1000 1000 -1000 1000 0 1000])
grid
hold on
xlabel('x');ylabel('y');zlabel('z');
% Plot obstacle
plot_obstacle_0(P_obst,x_length,y_length,z_length,incline);
% Draw positions that were sent to the controller
for i=1:size(Pos,1)
if mod(i,4)==0
plot3(Pos((i),1), Pos((i),2), Pos((i),3),'rx')
end
end
plot3(Pos((size(Pos,1)),1), Pos((size(Pos,1)),2), Pos((size(Pos,1)),3),'rx')
%plot of initial and final position
plot3(x_ini,y_ini,z_ini,'bo')
plot3(x_fin,y_fin,z_fin,'bo')
% ------ MEMBERSHIP FUNCTIONS -------
% MFs for rotational angles
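% Added note: all MFs below follow the standard triangular or trapezoidal
% form. value = max(min([(x-a)/(b-a),(c-x)/(c-b)]),0) is a triangle with
% feet a and c and peak b; the four-parameter variant adds a plateau on
% [b,c] and so describes a trapezoid. The *_compl and Far variants return 1
% when the coordinate lies on the opposite side of the obstacle face under
% consideration.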
function value=angle_0(x)
a=-45;
b=0;
c=45;
value=max([min([(x-a)/(b-a),(c-x)/(c-b)]),0]);
function value=angle_90_neg(x)
c=-45;
b=-90;
a=-135;
value=max([min([(x-a)/(b-a),(c-x)/(c-b)]),0]);
function value=angle_90_pos(x)
a=45;
b=90;
c=135;
value=max([min([(x-a)/(b-a),(c-x)/(c-b)]),0]);
% MFs for distance to the obstacle
function value=Contact(x)
a=-40;
b=0;
c=0;
d=40;
value=max([min([(x-a)/(b-a),1,(d-x)/(d-c)]),0]);
function value=VClose_neg(x)
a=-80;
b=-40;
c=0;
value=max([min([(x-a)/(b-a),(c-x)/(c-b)]),0]);
function value=VClose_pos(x)
a=0;
b=40;
c=80;

value=max([min([(x-a)/(b-a),(c-x)/(c-b)]),0]);
function value=Close_neg(x)
a=-120;
b=-80;
c=-40;
value=max([min([(x-a)/(b-a),(c-x)/(c-b)]),0]);
function value=Close_pos(x)
a=40;
b=80;
c=120;
value=max([min([(x-a)/(b-a),(c-x)/(c-b)]),0]);
function value=NClose_neg(x)
a=-280;
b=-240;
c=-120;
d=-80;
value=max([min([(x-a)/(b-a),1,(d-x)/(d-c)]),0]);
function value=NClose_neg_compl(x)
a=-280;
b=-240;
c=-120;
d=-80;
if x<=-0.001
value=max([min([(x-a)/(b-a),1,(d-x)/(d-c)]),0]);
else
value=1;
end
function value=NClose_pos(x)
a=80;
b=120;
c=240;
d=280;
value=max([min([(x-a)/(b-a),1,(d-x)/(d-c)]),0]);
function value=NClose_pos_compl(x)
a=80;
b=120;
c=240;
d=280;
if x>=+0.001
value=max([min([(x-a)/(b-a),1,(d-x)/(d-c)]),0]);
else
value=1;
end
function value=Far_neg(x)
a=-240;
b=-280;
if x<=-0.001
value= max([min([1,(x-a)/(b-a)]),0]);
else
value=1;
end
function value=Far_pos(x)
a=240;
b=280;
if x>=+0.001
value= max([min([1,(x-a)/(b-a)]),0]);
else
value=1;
end
% MFs for distance to final destination = Goal
function value=Goal(x)

a=-2;
b=0;
c=0;
d=2;
value=max([min([(x-a)/(b-a),1,(d-x)/(d-c)]),0]);
function value=GVC_neg(x)
a=-6;
b=-4;
c=-2;
d=-1;
value=max([min([(x-a)/(b-a),1,(d-x)/(d-c)]),0]);
function value=GVC_pos(x)
a=1;
b=2;
c=4;
d=6;
value=max([min([(x-a)/(b-a),1,(d-x)/(d-c)]),0]);
function value=GC_neg(x)
a=-10;
b=-8;
c=-6;
d=-5;
value=max([min([(x-a)/(b-a),1,(d-x)/(d-c)]),0]);
function value=GC_pos(x)
a=5;
b=6;
c=8;
d=10;
value=max([min([(x-a)/(b-a),1,(d-x)/(d-c)]),0]);
function value=GF_neg(x)
a=-8;
b=-11;
value= max([min([1,(x-a)/(b-a)]),0]);
function value=GF_pos(x)
a=8;
b=11;
value= max([min([1,(x-a)/(b-a)]),0]);
% ------- END of MEMBERSHIP FUNCTIONS -------
function s=int_string(Pos)
% Function that converts a numerical position and rotational configuration to a string.
% Format of the created string:
% 'sxxxx syyyy szzzz saa sbb' where:
% s: sign of the numerical value
% xxxx, yyyy, zzzz: 4 characters for positions x, y and z
% aa, bb: 2 characters for the angles
% INPUT:
% Pos: numerical values of positions and angles
% OUTPUT:
% s: string in the format described above
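% Added note with a worked example: int_string([250 1200 100 0 -90]) returns
% '+0250+1200+0100+00-90' (21 characters). An equivalent, more compact
% implementation (assuming MATLAB's sprintf) would be
%   s = sprintf('%+05d%+05d%+05d%+03d%+03d', round(Pos));
% the explicit branches below achieve the same result.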
s_x='';s_y='';s_z='';s_w='';s_p='';
for i=1:size(Pos,2)
Pos(i)=round(Pos(i)) ;
end
% convert x position to string
if abs(Pos(1))<10
if Pos(1)>=0

s_x=strcat('+',num2str(0),num2str(0),num2str(0),num2str(abs(Pos(1)))) ;
elseif Pos(1)<0
s_x=strcat('-',num2str(0),num2str(0),num2str(0),num2str(abs(Pos(1)))) ;
end
elseif abs(Pos(1))<100
if Pos(1)>=0
s_x=strcat('+',num2str(0),num2str(0),num2str(abs(Pos(1)))) ;
elseif Pos(1)<0
s_x=strcat('-',num2str(0),num2str(0),num2str(abs(Pos(1)))) ;
end
elseif abs(Pos(1))<1000
if Pos(1)>=0
s_x=strcat('+',num2str(0),num2str(abs(Pos(1)))) ;
elseif Pos(1)<0
s_x=strcat('-',num2str(0),num2str(abs(Pos(1)))) ;
end
elseif abs(Pos(1))>=1000
if Pos(1)>=0
s_x=strcat('+',num2str(abs(Pos(1)))) ;
elseif Pos(1)<0
s_x=strcat('-',num2str(abs(Pos(1)))) ;
end
end
% convert y position to string
if abs(Pos(2))<10
if Pos(2)>=0
s_y=strcat('+',num2str(0),num2str(0),num2str(0),num2str(abs(Pos(2)))) ;
elseif Pos(2)<0
s_y=strcat('-',num2str(0),num2str(0),num2str(0),num2str(abs(Pos(2)))) ;
end
elseif abs(Pos(2))<100
if Pos(2)>=0
s_y=strcat('+',num2str(0),num2str(0),num2str(abs(Pos(2)))) ;
elseif Pos(2)<0
s_y=strcat('-',num2str(0),num2str(0),num2str(abs(Pos(2)))) ;
end
elseif abs(Pos(2))<1000
if Pos(2)>=0
s_y=strcat('+',num2str(0),num2str(abs(Pos(2)))) ;
elseif Pos(2)<0
s_y=strcat('-',num2str(0),num2str(abs(Pos(2)))) ;
end
elseif abs(Pos(2))>=1000
if Pos(2)>=0
s_y=strcat('+',num2str(abs(Pos(2)))) ;
elseif Pos(2)<0
s_y=strcat('-',num2str(abs(Pos(2)))) ;
end
end
% convert z position to string
if abs(Pos(3))<10
if Pos(3)>=0
s_z=strcat('+',num2str(0),num2str(0),num2str(0),num2str(abs(Pos(3)))) ;
elseif Pos(3)<0
s_z=strcat('-',num2str(0),num2str(0),num2str(0),num2str(abs(Pos(3)))) ;
end
elseif abs(Pos(3))<100
if Pos(3)>=0
s_z=strcat('+',num2str(0),num2str(0),num2str(abs(Pos(3)))) ;
elseif Pos(3)<0
s_z=strcat('-',num2str(0),num2str(0),num2str(abs(Pos(3)))) ;
end
elseif abs(Pos(3))<1000
if Pos(3)>=0
s_z=strcat('+',num2str(0),num2str(abs(Pos(3)))) ;
elseif Pos(3)<0
s_z=strcat('-',num2str(0),num2str(abs(Pos(3)))) ;

end
elseif abs(Pos(3))>=1000
if Pos(3)>=0
s_z=strcat('+',num2str(abs(Pos(3)))) ;
elseif Pos(3)<0
s_z=strcat('-',num2str(abs(Pos(3)))) ;
end
end
%convert x rotation angle to string
if Pos(4)==90
s_w=strcat('+',num2str(90));
elseif Pos(4)==-90
s_w=num2str(-90);
elseif Pos(4)==0
s_w=strcat('+',num2str(0),num2str(0));
end
%convert y rotation angle to string
if Pos(5)==90
s_p=strcat('+',num2str(90));
elseif Pos(5)==-90
s_p=num2str(-90);
elseif Pos(5)==0
s_p=strcat('+',num2str(0),num2str(0));
end
%Final concatenation
s=strcat(s_x,s_y,s_z,s_w,s_p);
% ---- END of int_string.m -----

Appendix C

PROGRAM secure
-- Main program of the FANUC active security application
%NOLOCKGROUP
%NOPAUSE= COMMAND + TPENABLE + ERROR
CONST
num_nodes = 80
VAR
detect: INTEGER -- indicates if obstacle has been detected
mutex: INTEGER -- mutual exclusion semaphore
status_1, status_2: INTEGER -- status for task running
xyz_data: ARRAY[num_nodes, 5] OF INTEGER
-- for reading in positional and rotational data
halt: INTEGER -- indicates end of reading in
-- alternative path
count: INTEGER -- indicates number of read in
-- positions and rotations
-- defining semaphore and flag initialization routines
ROUTINE init_lock
BEGIN
CLEAR_SEMA(mutex) -- initialize semaphore at count = 0
END init_lock
ROUTINE init_flags
BEGIN
detect = 0
count = 0
halt = 0
END init_flags
BEGIN
--initializing semaphores and flags
mutex=1
init_lock
init_flags
--running tasks
RUN_TASK('comm',1,false,false,1,status_1)
RUN_TASK('moves',1,false,true,1,status_2)
END secure

----------------------------------------------------------

PROGRAM moves
-- Child task of secure.kl:
-- Executes a normal working trajectory and cancels all movement upon
-- detection of an obstacle. An interrupt routine moving along the
-- alternative trajectory is then invoked from a condition handler.
%NOPAUSE= COMMAND + TPENABLE + ERROR
%STACKSIZE=5000 -- needed to be able to invoke routine alternative
CONST
num_nodes = 80
VAR
detect FROM secure: INTEGER
mutex FROM secure: INTEGER
halt FROM secure: INTEGER
count FROM secure: INTEGER
xyz_data FROM secure: ARRAY[num_nodes,5] OF INTEGER
time_out: BOOLEAN -- for the PEND_SEMA built-in
start, final: XYZWPR
x, y, z, w, p, r: REAL
c: CONFIG
i: INTEGER
-- ROUTINE DECLARATION --
ROUTINE alternative FROM moves -- is defined at the END of program moves
-- MAIN of moves
BEGIN
-- Definition of condition handler -----------------------------------
CONDITION[1]: WITH $SCAN_TIME = 30
WHEN detect = 1 DO
CANCEL --cancels current motion
alternative --execution of alternative trajectory
ENDCONDITION
ENABLE CONDITION[1] -- activates condition handler
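-- Note: the condition is scanned every $SCAN_TIME = 30 ms. As soon as the
-- comm task sets detect to 1, the running motion is cancelled and the
-- routine alternative takes over to execute the evasive trajectory.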
----------------------------------------------------------------------
-- setting up motion characteristics
--setting of UFrame and UTool coordinate systems
$MNUFRAMENUM[1] = 2 -- Teach Pendant User Frame Number
$GROUP[1].$UFRAME = $MNUFRAME[1,$MNUFRAMENUM[1]]
$MNUTOOLNUM[1] = 1 -- Teach Pendant User Tool Number
$GROUP[1].$UTOOL = $MNUTOOL[1,$MNUTOOLNUM[1]]
-- setting of motion and termination type
$MOTYPE = LINEAR
$TERMTYPE = NODECEL
-- setting of motion speed
$GROUP[1].$speed= 200 --mm/sec
$GROUP[1].$rotspeed= 90 --deg/sec
-- Declare start and final XYZWPR variables
UNPOS(CURPOS(0,0),x,y,z,w,p,r,c)

start.x=250
start.y=1200
start.z=100
start.w=w
start.p=p
start.r=r
start.config_data=c
final.x=300
final.y=-700
final.z=150
final.w=w
final.p=p
final.r=r
final.config_data=c
-- motion loop between the start and final positions
WHILE detect = 0 DO
MOVE TO start,
WHEN detect = 1 DO CANCEL
ENDMOVE
IF detect = 0 THEN
DELAY 1000 -- simulate operation time
-- in start position
ENDIF
IF detect = 0 THEN
MOVE TO final,
WHEN detect = 1 DO CANCEL
ENDMOVE
ENDIF
IF detect = 0 THEN
DELAY 1000 -- simulate operation time
-- in final position
ENDIF
ENDWHILE
DELAY 100 -- to give moves.kl time to enter in condition handler
-- before aborting program execution.
END moves
-- end MAIN of moves -------------------------------------------------
-- Routine definition
ROUTINE alternative
CONST
num_nodes = 50
VAR
xyz_array: ARRAY[num_nodes] OF XYZWPR
m: INTEGER
x, y, z, w, p, r: REAL
x_incr, y_incr, z_incr: REAL
c: CONFIG
x_axis, y_axis: VECTOR
alpha_diff, beta_diff: REAL
alpha_old, beta_old: REAL
BEGIN
-- Definition of x, y and z axis directions for rotational actions
-- MOVE ABOUT actions are defined with respect to vectors
-- in the activated TOOL coordinate system!!!
-- The z axis of TOOL is the x axis of the USER Frame;
-- due to the TOOL's declaration, the y axis of TOOL is the
-- negative y axis of the USER Frame.
y_axis.x=0
y_axis.y=-100
y_axis.z=0
x_axis.x=0
x_axis.y=0
x_axis.z=100
-- initialize old angle variables
alpha_old=0
beta_old=0
-- Semaphore actions
POST_SEMA(mutex) -- releases comm.kl to send the current TCP position to MATLAB
PEND_SEMA(mutex,-1, time_out) -- suspends alternative until
-- first alternative positions and orientations are available
-- move along alternative positions
FOR m=1 TO num_nodes DO
-- Rotational actions
alpha_diff = xyz_data[m,4] - alpha_old
beta_diff = xyz_data[m,5] - beta_old
IF (alpha_diff <> 0.00) THEN
WITH $TERMTYPE = NOSETTLE MOVE ABOUT x_axis BY alpha_diff
alpha_old = xyz_data[m,4]
--update of actual angular position
ENDIF
IF (beta_diff <> 0.00) THEN
$TERMTYPE = NOSETTLE
WITH $TERMTYPE = NOSETTLE MOVE ABOUT y_axis BY beta_diff
beta_old = xyz_data[m,5]
-- update of actual angular position
ENDIF
-- Translational actions
UNPOS(CURPOS(0,0),x,y,z,w,p,r,c)
xyz_array[m].x = xyz_data[m,1]
xyz_array[m].y = xyz_data[m,2]
xyz_array[m].z = xyz_data[m,3]
xyz_array[m].w = w
xyz_array[m].p = p
xyz_array[m].r = r
xyz_array[m].config_data = c
MOVE TO xyz_array[m] NOWAIT
IF (halt = 1) AND (m = count) THEN
m=num_nodes -- to force a termination of the FOR loop when
-- all (=count) alternative positions and rotations have been executed.
ENDIF
ENDFOR
END alternative
----------------------------------------------------------------------

PROGRAM comm
-- Child task of secure.kl, executes the following communication steps:
-- 1. Receive an obstacle detection signal from MATLAB
-- 2. Send the current robot position to MATLAB.
-- 3. Receive positional and rotational data from MATLAB.
%NOLOCKGROUP
%NOPAUSE= COMMAND + TPENABLE + ERROR
%PRIORITY=49
CONST
num_nodes = 80
VAR
ComFile:FILE -- communication file for reading and writing
StatCon:INTEGER -- status from Ethernet connection
StatBuf:INTEGER -- status from Buffer
StatCur:INTEGER -- status of set_cursor build-in
StatFile: INTEGER -- status from read operations
entry:INTEGER
time_out: BOOLEAN -- for semaphore use
BufBytes_0, BufBytes_1:INTEGER
halt FROM secure: INTEGER
count FROM secure: INTEGER
detect FROM secure: INTEGER
mutex FROM secure: INTEGER
i, j, k: INTEGER
num_pos: INTEGER
clock_var: INTEGER
x, y, z, w, p, r: REAL
c: CONFIG
str_x, str_y, str_z, str_w, str_p: STRING[8]
index_1, index_2, index_3, index_4, index_5, index_6, index_7, index_8,
index_9, index_10, index_11, index_12, index_13, index_14, index_15,
index_16, index_17, index_18, index_19, index_20, index_21: INTEGER
index_row: INTEGER
ReadData: ARRAY[120] OF STRING[8]
xyz_data FROM secure: ARRAY[num_nodes, 5] OF INTEGER
-- ROUTINE DECLARATIONS ----------------------------------------------
ROUTINE start_1
BEGIN
--CLOSING PORT BEFORE START
MSG_DISCO('S3:',StatCon)
WRITE TPDISPLAY('Disconnect Status',StatCon, CR)
--SETTING FILE AND PORT ATTRIBUTES--
SET_FILE_ATR(ComFile,ATR_IA) -- force reads to completion
SET_VAR(entry,'*SYSTEM*','$HOSTS_CFG[3].$SERVER_PORT',2007,StatCon)
--OPEN PORT--
WRITE TPDISPLAY('Opening port 2007',CR)
WRITE TPDISPLAY('Listening....',CR)
MSG_CONNECT('S3:',StatCon)
WAIT FOR StatCon=0

WRITE TPDISPLAY('Connected',CR)
--OPEN FILE FOR READ AND WRITE--
OPEN FILE ComFile ('RW','S3:')
StatFile = IO_STATUS(ComFile)
IF StatFile = 0 THEN
WRITE TPDISPLAY('FILE=OPEN',CR)
ELSE
MSG_DISCO('S3:',StatCon)
WRITE TPDISPLAY('Socket disconnected',StatCon,CR)
ENDIF
END start_1
ROUTINE start_2
BEGIN
--CLOSING PORT BEFORE START
MSG_DISCO('S4:',StatCon)
WRITE TPDISPLAY('Disconnect Status',StatCon, CR)
--SETTING FILE AND PORT ATTRIBUTES--
SET_FILE_ATR(ComFile,ATR_IA) -- force reads to completion
SET_VAR(entry,'*SYSTEM*','$HOSTS_CFG[4].$SERVER_PORT',2008,StatCon)
--OPEN PORT--
WRITE TPDISPLAY('Opening port 2008',CR)
WRITE TPDISPLAY('Listening....',CR)
MSG_CONNECT('S4:',StatCon)
WAIT FOR StatCon=0
WRITE TPDISPLAY('Connected',CR)
--OPEN FILE FOR READ AND WRITE--
OPEN FILE ComFile ('RW','S4:')
StatFile = IO_STATUS(ComFile)
IF StatFile = 0 THEN
WRITE TPDISPLAY('FILE=OPEN',CR)
ELSE
MSG_DISCO('S4:',StatCon)
WRITE TPDISPLAY('Socket disconnected',StatCon,CR)
ENDIF
END start_2
-- END ROUTINE DECLARATIONS ------------------------------------------
-- MAIN of comm.kl ---------------------------------------------------
BEGIN
WRITE(CHR(128),CR) -- clear user screen
FORCE_SPMENU(TP_PANEL,SPI_TPUSER,1) -- to activate user screen

-- STEP 1: Receive detection signal ----------------------------------
start_1 -- call routine connect for server tag S3
StatFile = IO_STATUS(ComFile)
clock_var=0
FOR i=1 TO 30 DO
ReadData[i]=''
ENDFOR
BufBytes_1=0
BufBytes_0=0

--Main loop reading as long as the file status StatFile is 0


WHILE StatFile =0 DO
BufBytes_0=BufBytes_0+BufBytes_1 --adjust count of
-- ReadData ARRAY
--check if new data in buffer
BYTES_AHEAD(ComFile,BufBytes_1,StatBuf)
IF (BufBytes_1 > 0) THEN
SET_CURSOR(TPDISPLAY,9,5,StatCur)
WRITE TPDISPLAY('Bytes in next package = ',BufBytes_1,CR)
ENDIF
--wait for first data in buffer
WHILE (BufBytes_1 = 0) AND (BufBytes_0 = 0) AND (StatBuf=0) DO
DISCONNECT TIMER clock_var
clock_var=0
CONNECT TIMER TO clock_var
BYTES_AHEAD(ComFile,BufBytes_1,StatBuf)
IF (BufBytes_1=0) THEN
SET_CURSOR(TPDISPLAY,7,5,StatCur)
WRITE TPDISPLAY('Buffer is empty',CR)
DELAY 40 -- give time to task normal to be executed
ELSE
SET_CURSOR(TPDISPLAY,8,5,StatCur)
WRITE TPDISPLAY('Bytes in first package = ',
BufBytes_1,CR)
ENDIF
ENDWHILE
--Reading Data out of buffer
FOR i = 1 TO BufBytes_1 DO
READ ComFile(ReadData[BufBytes_0+i]::1) -- ::1 forces to read
-- in one byte at a time
ENDFOR
StatFile=IO_STATUS(ComFile)
IF (ReadData[25]='0') OR (ReadData[25]='1') THEN
StatFile=1 -- forces termination of the while loop if the
-- detection signal has entered
ENDIF
ENDWHILE
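-- Note: the detection flag is read from character 25 of the incoming
-- stream; the 24 characters in front of it are assumed to be the same
-- fixed-width preamble that precedes the position packages of Step 3.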
CNV_STR_INT(ReadData[25],detect)
DISCONNECT TIMER clock_var
WRITE TPDISPLAY('.......',CR)
WRITE TPDISPLAY('Detection = ',detect,CR)
WRITE TPDISPLAY('Reading time = ',clock_var,' milliseconds',CR)
-- closing communication file
CLOSE FILE ComFile
WRITE TPDISPLAY('Closing communication file',CR)
-- END of Step 1. ----------------------------------------------------

-- STEP 2: Send current robot position to MATLAB ---------------------
-- Suspending communication task until routine ALTERNATIVE in moves
-- is started up:
PEND_SEMA(mutex, -1,time_out)
start_2
-- call routine connect for server tag S4
StatFile = IO_STATUS(ComFile)
clock_var=0
CONNECT TIMER TO clock_var -- start the write timer whose value is displayed below
WRITE(CHR(128),CR) -- clear user screen
-- Register current position of Tool Center Point:
UNPOS(CURPOS(0,0),x, y, z, w, p, r, c)
-- Write Current position to communication port
WRITE ComFile(ROUND(x))
WRITE ComFile(ROUND(y))
WRITE ComFile(ROUND(z))
StatFile=IO_STATUS(ComFile)
IF StatFile=0 THEN
WRITE TPDISPLAY('Writing was successful!',CR)
ELSE
WRITE TPDISPLAY('Writing error...',CR)
ENDIF
-- Disconnect timer
DISCONNECT TIMER clock_var
WRITE TPDISPLAY('Writing time = ',clock_var,' milliseconds',CR)
-- closing communication file
CLOSE FILE ComFile
WRITE TPDISPLAY('Closing communication file',CR)
-- disconnecting socket
MSG_DISCO('S4:',StatCon)
WRITE TPDISPLAY('Disconnecting socket',CR)
-- END of Step 2. ----------------------------------------------------
-- STEP 3: Read in Positional data from MATLAB -----------------------
WRITE(CHR(128),CR) -- clear user screen
-- Connect ComFile to server tag S3, that is still connected:
OPEN FILE ComFile ('RW','S3:')
StatFile = IO_STATUS(ComFile)
IF StatFile = 0 THEN
WRITE TPDISPLAY('FILE=OPEN',CR)
ELSE
MSG_DISCO('S3:',StatCon)
WRITE TPDISPLAY('ComFile not opened. Socket disconnected',
StatCon,CR)
ENDIF
-- Reset the counter for total number of read in positions
count=0
-- Reset the halt variable of the read operations
halt=0

-- Perform read operations for every package sent over socket S3.
FOR i=1 TO num_nodes DO
IF (i=2) THEN
POST_SEMA(mutex) -- liberate moves.kl for moving
-- along alternative path
ENDIF
-- initializing working variables
StatFile = IO_STATUS(ComFile)
clock_var=0
FOR j=1 TO 120 DO
ReadData[j]=''
ENDFOR
BufBytes_1=0
BufBytes_0=0
num_pos=0 -- indicates the number of positions in new package
--Main loop reading as long as the file status StatFile is 0
WHILE StatFile =0 DO
BufBytes_0=BufBytes_0+BufBytes_1 -- adjust count of ReadData
-- ARRAY
--check if new data in buffer
BYTES_AHEAD(ComFile,BufBytes_1,StatBuf)
--wait for first data in buffer
WHILE (BufBytes_1 = 0) AND (BufBytes_0 = 0) AND (StatBuf=0) DO
DISCONNECT TIMER clock_var
clock_var=0
CONNECT TIMER TO clock_var
BYTES_AHEAD(ComFile,BufBytes_1,StatBuf)
IF (BufBytes_1=0) AND (i>1) THEN
DELAY 40 -- once the first package has been read,
-- give execution time to moves.kl
ENDIF
ENDWHILE
--Reading Data out of buffer
FOR j = 1 TO BufBytes_1 DO
READ ComFile(ReadData[BufBytes_0+j]::1)
ENDFOR
StatFile=IO_STATUS(ComFile)
IF (ReadData[46]='S') OR (ReadData[67]='S')
OR (ReadData[88]='S') OR (ReadData[109]='S') THEN
halt=1
ENDIF
IF (ReadData[108]='0') OR (halt=1) THEN
StatFile=1
ENDIF
ENDWHILE
-- Update the number of positions in the package:
IF (halt=0) THEN
num_pos=4
count=count+4
ELSE IF (ReadData[109]='S') THEN
num_pos=4
count=count+4
ELSE IF (ReadData[67]='S') THEN
num_pos=2
count=count+2
ELSE IF (ReadData[88]='S') THEN
num_pos=3
count=count+3
ELSE IF (ReadData[46]='S') THEN
num_pos=1
count=count+1
ENDIF
ENDIF
ENDIF
ENDIF
ENDIF
IF (num_pos <> 0) THEN
FOR k=1 TO num_pos DO
index_1=24+(k-1)*21+1
index_2=24+(k-1)*21+2
index_3=24+(k-1)*21+3
index_4=24+(k-1)*21+4
index_5=24+(k-1)*21+5
index_6=24+(k-1)*21+6
index_7=24+(k-1)*21+7
index_8=24+(k-1)*21+8
index_9=24+(k-1)*21+9
index_10=24+(k-1)*21+10
index_11=24+(k-1)*21+11
index_12=24+(k-1)*21+12
index_13=24+(k-1)*21+13
index_14=24+(k-1)*21+14
index_15=24+(k-1)*21+15
index_16=24+(k-1)*21+16
index_17=24+(k-1)*21+17
index_18=24+(k-1)*21+18
index_19=24+(k-1)*21+19
index_20=24+(k-1)*21+20
index_21=24+(k-1)*21+21
index_row=(i-1)*4+k
str_x=ReadData[index_1]+ReadData[index_2]+ReadData[index_3]+
ReadData[index_4]+ReadData[index_5]
str_y=ReadData[index_6]+ReadData[index_7]+ReadData[index_8]+
ReadData[index_9]+ReadData[index_10]
str_z=ReadData[index_11]+ReadData[index_12]+ReadData[index_13]+
ReadData[index_14]+ReadData[index_15]
str_w=ReadData[index_16]+ReadData[index_17]+ReadData[index_18]
str_p=ReadData[index_19]+ReadData[index_20]+ReadData[index_21]
CNV_STR_INT(str_x,xyz_data[index_row,1])
CNV_STR_INT(str_y,xyz_data[index_row,2])
CNV_STR_INT(str_z,xyz_data[index_row,3])
CNV_STR_INT(str_w,xyz_data[index_row,4])
CNV_STR_INT(str_p,xyz_data[index_row,5])
ENDFOR
DISCONNECT TIMER clock_var

WRITE TPDISPLAY('.......',CR)
WRITE TPDISPLAY('Position package no. ',i,' read in.',CR)
WRITE TPDISPLAY('Reading time = ',clock_var,' milliseconds',CR)
ENDIF
IF (halt=1) THEN
i=num_nodes -- force the FOR loop to an end if string S
-- has been detected in input package
ENDIF
ENDFOR -- all positions read in
-- closing communication file
CLOSE FILE ComFile
WRITE TPDISPLAY('Closing FUZZY communication file',CR)
-- END of Step 3. ----------------------------------------------------
WRITE TPDISPLAY('...',CR,'End of communication operations',CR)
END comm
----------------------------------------------------------------------
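Before the client and MATLAB listings, the package layout that comm.kl expects is worth spelling out. From the index arithmetic above (index_1 = 24+(k-1)*21+1), each package consists of a 24-character preamble followed by up to four positions of 21 characters each: 5 for x, 5 for y, 5 for z, 3 for w and 3 for p, with the character 'S' placed directly behind the last position of the path. A minimal MATLAB sketch of the sender side is given below; the blank preamble and the right-justified integer fields are assumptions read off the parsing code, not a verified protocol specification:

% Sketch: build one fixed-width position package for comm.kl.
% Assumed layout: 24-char preamble, then 21 chars per [x y z w p]
% node (5+5+5+3+3), and a terminating 'S' after the last node.
nodes = [400 -200 150 10 0;
         420 -150 180 25 5];          % hypothetical [x y z w p] rows
pkg = blanks(24);                     % preamble content is assumed blank
for k = 1:size(nodes,1)
    pkg = [pkg sprintf('%5d%5d%5d%3d%3d', nodes(k,:))];
end
pkg = [pkg 'S'];                      % assumed end-of-path marker
disp(pkg)

With one node, the 'S' lands on character 46; with two, three and four nodes it lands on characters 67, 88 and 109, which are exactly the indices tested in the read loop of Step 3.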

# Script of client.pl -------------------------------------------------
use IO::Socket;


$socket = IO::Socket::INET->new(
    PeerAddr => '193.144.187.156',
    PeerPort => 2008,
    Proto    => "tcp",
    Type     => SOCK_STREAM
) or die("Cannot create the client socket, CLOSING... \n");

while($line=<$socket>)
{
print "$line";
}

# end of client.pl -----------------------------------------------------
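The detour over Perl is presumably a pragmatic workaround: the MATLAB release used here has no native TCP client, but MATLAB ships with a Perl interpreter, so the perl('Client.pl') call in the script below returns everything Client.pl printed to standard output as one string, which string_int then converts to numerical coordinates.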

% MATLAB script for the execution of the FANUC active security system
% Step 1: starting up the GUI CamAxis.
% Return variables:
% 1) Image matrices I1, I2, I3 of GICcam1, GICcam2 and GICcam3 of the detected object at rest
% 2) Ethernet socket "sock" of the connection with the FANUC robot (IP address: 193.144.187.156) at port 2007
% 3) success: positive if the detection signal was successfully sent to the FANUC robot
[I1,I2,I3,sock,success]=CamAxis;
% Step 2: Calculating the obstacle's exact 3D position
% 2.a: Search for pixel correspondences:
p_corresp=correspondence(I1,I2,I3);
% 2.b: Calculating 3D position of all correspondences:
Pos=Pos_3D(p_corresp);
% 2.c: Discarding false correspondences and reconstructing the obstacle's position:
P_obst=obst_pos(Pos);
% Step 3: Receiving current Tool Center Point position
string_TCP=perl('Client.pl');
P_ini=string_int(string_TCP);
disp(sprintf('Current TCP position successfully received.'));
% Step 4: Determination of final position
P_fin_1=[300 -700 150];
P_fin_2=[250 1200 100];
dist_1=sqrt((P_ini(1)-P_fin_1(1))^2+(P_ini(2)-P_fin_1(2))^2+(P_ini(3)-P_fin_1(3))^2);
dist_2=sqrt((P_ini(1)-P_fin_2(1))^2+(P_ini(2)-P_fin_2(2))^2+(P_ini(3)-P_fin_2(3))^2);
if dist_1 > dist_2
P_fin=P_fin_1;
else
P_fin=P_fin_2;
end
% Step 5: Calculating alternative fuzzy path and sending it over the socket
socket=fuzzy_comm(P_ini,P_fin,P_obst,sock);
% ---- END of script -----

function tcp=string_int(s_tcp)
% Function that converts the TCP position in string format to the integer format
% ENTRY:
% s_tcp: TCP position in string format
% OUTPUT:
% tcp: numerical TCP position
tcp_x='';
tcp_y='';
tcp_z='';
num_space=0;
for i=1:length(s_tcp)
if s_tcp(i)==' '
num_space=num_space+1;
elseif num_space==1
tcp_x=strcat(tcp_x,s_tcp(i));
elseif num_space==2
tcp_y=strcat(tcp_y,s_tcp(i));
else
tcp_z=strcat(tcp_z,s_tcp(i));
end
end
x=str2num(tcp_x);
y=str2num(tcp_y);
z=str2num(tcp_z);
tcp=[x y z];
% ----- END of string_int.m -----
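For illustration, a hedged usage example (the exact spacing depends on how the KAREL WRITE statements format the integers; single separating spaces are assumed here, which is what the space-counting logic of string_int requires):

% Illustrative call with an assumed single-space-separated string:
tcp = string_int(' 400 -200 150');
% tcp now equals [400 -200 150]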
