
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/259557862

Robot Control System Design Exemplified by Multi-Camera Visual Servoing

Article in Journal of Intelligent and Robotic Systems · March 2015
DOI: 10.1007/s10846-013-9883-x

Authors: Tomasz Kornuta (IBM Research, Almaden) · Cezary Zieliński (Warsaw University of Technology)

All content following this page was uploaded by Tomasz Kornuta on 04 February 2016.


Robot control system design exemplified by multi-camera visual servoing

Tomasz Kornuta · Cezary Zieliński

Abstract The article investigates the problem of designing robot control systems. It starts with a brief overview of general control architectures, followed by the presentation of the design methodology based on the concept of an embodied agent. Multi-level decomposition of the system structure enables the formulation of the operation of each of the resulting subsystems in terms of transition functions, specifying their state evolution. The paper also introduces mathematical and graphical notations, complementing each other, formally defining the control system behaviour. As an example, the specification of a control system of a robot utilizing multi-camera visual servoing for object following is presented.

Keywords Robotics · Control Systems · Agents · Machine Vision · Visual Servoing

T. Kornuta (B) · C. Zieliński
Institute of Control and Computation Engineering
Warsaw University of Technology
Nowowiejska 15/19
00-665 Warsaw, Poland
Tel.: (+48-22) 825 09 95, 234 73 97
Fax: (+48-22) 825 37 19
E-mail: tkornuta@ia.pw.edu.pl

C. Zieliński
E-mail: C.Zielinski@ia.pw.edu.pl

1 Introduction

Two extreme approaches to the design of robot control systems have been distinguished: deliberative and reactive. The deliberative approach is derived directly from the work on artificial intelligence. It is assumed that the system should periodically perform the following activities: Sense (retrieve the state of the environment through sensors), Plan (based on an updated model of the environment and the chosen strategy, determine the required action) and Act (execute the planned action); thus the approach is also known under the SPA acronym.

In the mid-1980s R. Brooks [6] proposed an alternative approach called reactive control. It postulated that the robot action (reaction) should be directly related to stimuli received from the environment, without deliberative planning or a model of the environment. The idea was to express the operation of the system in terms of a hierarchy of simple behaviours triggered as responses to received stimuli (Sense-Act). An example is the Subsumption Architecture [5,6], an architecture of a behavioural generalization hierarchy, in which the stimulus may come either from the environment or from other modules implementing diverse behaviours. Higher level modules are activated and take control when the lower level modules cannot handle the existing situation. Two basic mechanisms of interaction between modules were proposed: Inhibition (stopping) of incoming stimuli and Suppression (replacement) of the resulting reaction. All modules were arranged into layers (e.g. avoiding obstacles, exploration). The system evolved by adding new layers and modules, thus introducing new behaviours.

Soon it became evident that neither of the two extreme approaches suffices: SPA tends to be too slow, while the reactive approach is unable to express logical reasoning. However, the advantages of these two extreme approaches were combined in the hybrid architecture, in which one can distinguish both elements related to knowledge representation and reasoning, as well as low-level Sensorimotor Primitives [2]. A classic example of a hybrid architecture is ATLANTIS [11], which distinguishes three layers: a Controller implementing strict coupling between receptors and effectors, a Sequencer responsible for the selection of the performed behaviours, and a Deliberator responsible for high-level reasoning and task planning. Over the years a plethora of specific hybrid robot control architectures has emerged, in which the bias shifted either towards deliberation or towards reactivity. The deliberative aspect was fostered by the BDI architecture, distinguishing three mental states: Belief (possessed information), Desire (motivation) and Intention (choice of action based on a plan) [22,25]. Belief corresponds to knowledge, however it differs by the fact that knowledge is certainly true, while beliefs might not be so. Beliefs encompass a body of information regarding the agent itself and its environment, including other agents, as well as the inference rules for creating new beliefs. In this case an agent is understood as an entity that perceives its environment in order to decide on its further action upon the environment (fig. 1). Desires represent objectives that the agent wants to attain either by its own actions or that can be enforced by some external factors. Goals are the desires that are actively pursued by the agent. Intentions are desires that the agent has committed itself to, by choosing and executing a plan that leads to their fulfillment. Reactive behaviour of the agent is caused by events, both externally and internally generated. The BDI architecture relies on an interpreter that is triggered by events, updates beliefs and desires, and as a consequence pursues new intentions.

Fig. 1 Interaction between an agent and its environment

Currently a shift from general and descriptive definitions of architectures to a more formal presentation, based on either linguistic or mathematical formulation, can be observed. A formal approach to the design of hierarchical embedded intelligence system architectures is presented in [10]. A formalization measure is introduced that qualifies how well a formalism (architecture description language) supports expression, construction and reuse of intelligent systems. Systematica 2D is presented as an example of such a formalization language. It describes the system in terms of units (processes) having input and output ports. The input and output ports are connected together and the transfer of data between them can be realized either by a pull or a push operation. Inputs are of three kinds: driving (data reception is mandatory), driving optional (data reception is optional) and modulatory (data reception is optional and infrequent; it deals with parameter modification). System design is incremental. Systematica 2D provides units and input-output connectors and uses its patterns to enforce constraints on the architecture, so that the resulting system will operate without deadlocks. It does not provide mechanisms for the definition of how the units operate. It is inter-process communication biased. This formalism was used for the specification of several different architectures, including the three-tier architecture mentioned earlier [11]. Although application of the formalism leads to the specification of modular systems (from the set of units any complex system may emerge), it lacks the mechanisms for expressing the features of the domain to which it was applied. Each domain has in fact some specific features that should be considered during system development, thus this omission may potentially lead to a suboptimal system structure. Robotic systems contain devices serving extremely different purposes: effectors, through which the robot affects its own environment, receptors, gathering information about the state of both the robot itself (proprioceptors) and the environment (exteroceptors), as well as the control system responsible for decision-making. Those elements are specific to the robot domain, however one may notice an analogy to living organisms, whose study led to the theory describing the operation of individual components in terms of patterns called Schemas. In [1] the author distinguished Perceptual Schemas (PS), associated with the aggregation of data received from the receptors, and Motor Schemas (MS), which are patterns of motoric actions. Those animal-specific mechanisms were later applied to the robot domain and used to determine the structure of the robot control system, which was specified as a distributed system of cooperating computational units called Robot Schemas (RS) [18,19]. The concept of Robot Schemas was later extended in [29] to multirobot systems, where each robot, besides the Perceptual and Motor Schemas, additionally possessed Communication Schemas (CS), utilized for the exchange of information between robots. It should be noted that the structure of the created systems emerged from the juxtaposition of the Perceptual and Motor Schemas and assumed a one-way flow of information from percepts to motors. This one-way information flow should be regarded as a model of the postulated structure of the control system. As a consequence, the solution lacks the possibility of information flow in the opposite direction, i.e. from the effectors to the receptors, which may be of importance for the proper functioning of the resulting robotic system.

The quest for an ultimate robot control architecture follows diverse approaches. One of them is the definition and subsequent implementation of robot programming frameworks. Programming frameworks are libraries of software modules for programming, together with a set of patterns used as a guide for assembling those modules into a ready control system [15,20]. Reuse of software is at their focus, as it not only simplifies but also speeds up the development of reliable software. The overview [8] singles out two approaches to software reuse that frameworks follow:
– white box, where the framework defines an architecture of classes enabling the creation of cooperating objects, which can be extended by the mechanisms of inheritance and dynamic binding, thus providing a reusable basic design for a domain of applications,
– black box, where high-level modules are delivered, which can be customized by changing parameters or plugging in user-defined components, as a result creating a specific system.
However, both approaches can coexist in a single framework. As frameworks mature, they evolve from the white to the black box stage. Robot programming frameworks try to solve distinct implementation problems. The following exemplary types of frameworks were identified in [8]:
– frameworks for device access and control (sensor and effector access),
– high-level communication frameworks (solving the inter-component communication problem),
– frameworks for functionality packaging (e.g. focused on vision for robotics, but including the interface to the other components of a robot system, so that work on vision is not hindered by the work on other components of the system, while enabling testing of the whole system, not just its parts),
– frameworks for legacy systems evolution (enabling reuse of well-tested components implemented in old technologies).
The majority of robot programming frameworks utilizes the computer science approach to their definition, rather than the domain-based (i.e. robotics) one, thus the method of defining interfaces between components prevails over the methods of defining the robotics-based inner workings of those components. Although inter-component communication and synchronization of component operation is very important for the implementation of a reliable control system, it pertains to the realization of the system rather than to its conception. In our view the question "What should be the structure of the system and how should it work?" should prevail over the question "How to implement such a system?", thus tools for the creation of the system structure and the description of its inner workings have been the subject of this research. Those specification tools should be neutral with respect to the implementation technology subsequently used, but of course the mapping from abstract concepts to the implementation should remain simple.

A common feature of the presented architectures is modularization, which results in independent units (changes in one of them do not require the modification of the others) and facilitates understanding of their role and functioning in the developed system (each module can be considered separately) [23]. Decomposition also shortens the overall system creation time by allowing independent testing and verification of modules, thus increasing reusability of finished modules in diverse conjunctions and configurations, resulting in different systems. Therefore, modularity is a desirable feature in the design of all complex systems, including robot control systems. However, it should be noted that in the case of robots ad hoc decomposition may not be sufficient for an effective development of the control system. The decomposition has to be guided by some principles. Those principles have to be rooted in the application domain. In robotics the predominant flow of information is from the receptors to the effectors, through the control system. However, a secondary flow of information from the effectors to the control system and from the control system to the receptors also exists and should be taken into account. The former creates proprioceptive information, while the latter is associated with the configuration capability. It should be noted that the receptors fulfil two general functions: proprioceptive (delivering the information about the state of the effectors (limbs) of the robot) and exteroceptive (gathering information about the state of the environment). This distinction is important not only from the point of view of the source of information, but also from the subsequent implementation point of view. Proprioceptive data is usually created at a rate between 200 Hz and 2 kHz, while exteroceptive data is usually produced at a rate between 5 Hz and 100 Hz. In the former case inter-module communication delays have to be reduced to the utmost, while in the latter case they are of secondary concern. Moreover, hardware-dependent elements of the control structure should be separated from those that are task-dependent. Last but not least, the discussion of the structure of the system should be conducted using concepts from the domain of interest (i.e. robotics) and not from the implementation realm. The universal language of such a discourse is mathematics, but if it is used without any supplementary tools, although it tends to be exact, it is usually difficult to follow. In engineering some forms of diagrams are used to simplify understanding, thus here they will also be used. The presented formalism pertains to multi-robot systems described in terms of agents, where each agent has an inner structure composed of effectors, receptors and the control subsystem. The functioning (behaviour) of the components of the system is described mathematically in terms of transition functions, supplemented by predicates singling out the behaviour to be executed and determining conditions for the termination of their activity. As the mathematical definitions of those functions tend to be complex, the involved computations are also described in terms of data flow diagrams. The general approach will be exemplified by creating a single-agent system relying on servovision using two cameras – one static and one mobile. On the one hand this example is very relevant to current robotics research, but on the other hand its complexity is limited, thus illustrating the presented approach in an intelligible way.

2 Embodied Agent

The presented specification method is based on three concepts rooted in computer science: Agent, Behaviour and Transition Function. The former is the central tenet expressing the architecture of the system, while the latter two are supporting concepts describing its activities. In [24] the authors define an Agent as an entity that perceives and acts. However, right after that they introduce a Rational Agent, being an agent that possesses an additional imperative for reaching the desired goal. This imperative influences the undertaken actions in order to maximize the agent's expected utility. In [34] we adapted the concept of Embodiment [7] and refined it, which resulted in the definition of an Embodied Agent, underscoring that robotic systems possess a corporeal body in the form of physical effectors and receptors.

The structure of an Embodied Agent emerged from our previous works on multi-robot control systems [31], which distinguished the control subsystem, an effector and a set of receptors. Additionally, such an agent utilized Virtual Sensors, which aggregated the information obtained from the exteroceptors into a form acceptable to the control subsystem. Proprioceptors were associated with the effector. In more recent work [37] we refined the agent's inner structure by the introduction of five general types of internal subsystems: Receptor and Effector, forming the agent's physical body, as well as Virtual Receptor, Virtual Effector and Control Subsystem, forming its control system. Such a decomposition of the Embodied Agent structure imposes the general architectural pattern of the control system. This pattern enables both directions of information flow: from the effectors to the receptors and vice versa. It also associates proprioceptors with the effectors, leaving exteroceptors as independent entities. Above all, this architecture does not determine the data abstraction level on which the control subsystem operates, thus enabling the designer to work with the domain concepts of choice, i.e. no ontological commitment has been made at the stage of architecture definition.

Besides that, a consistent method for the description of the actions of each subsystem was proposed. For this purpose attention was focused on Behaviour-based Control, which employs a set of Behaviours in order to achieve the required goal [2,21]. Behaviour-based Control in [2] was influenced by Brooks, thus behaviour there was treated as a form of reaction to stimuli. Nevertheless, Behaviours can be treated as general patterns of robot activities arising from the interactions between the robot and its environment [33]. Hence they can be applied to all kinds of robotic systems, including the Deliberative and Hybrid ones. Later [35] associated Transition Functions with an Embodied Agent Behaviour. The idea evolved from the earlier works [31,32], in which the operation of the control software modules was expressed in terms of mathematical functions. Having decomposed the agent structure and following the operational semantics approach to the description of programming languages [26], the agent's actions were later described by a set of Transition Functions governing their Behaviours [35]. In [37] the authors introduced the description of the Behaviours of all subsystems forming the control system in terms of Transition Functions. Having defined the actions of all subsystems, the behaviour of the whole Embodied Agent stems from the interoperation of its subsystems. Interoperation means that they work together effectively to achieve the goal, whereas the main goal is known only to the Control Subsystem, which coordinates the work of the associated Virtual Effectors and Virtual Receptors.

2.1 General inner structure of an Embodied Agent

In general, the j-th agent $a_j$ contains: effectors $E_j$, that enable it to influence the environment, receptors (exteroceptors) $R_j$, to collect data from the environment, and the control subsystem $c_j$, that is responsible for the execution of the required behaviours. The exteroceptors of the agent $a_j$ are numbered (or named), hence $R_{j,l}$. Both the receptor readings and the effector commands undergo transformations. Those transformations present to the control subsystem both the state of the effectors and the receptor readings in a form that is convenient from the point of view of control, hence the concepts of virtual effectors $e_j$ and virtual receptors $r_j$, responsible for those transformations, have been introduced. Sometimes they are termed views or images, because through them the control subsystem, and thus its designer, perceives the real effectors and receptors.

There can be several different virtual receptors in the system, hence they are indexed too: $r_{j,k}$. Information aggregation can consist in the composition of readings obtained from several exteroceptors, or in the extraction of the required information from one, but complex, sensor; moreover, the readings obtained from the same exteroceptors may be processed in different ways, so many virtual receptors can be formed. Virtual receptor readings are transmitted to the control subsystem $c_j$, which generates commands influencing the virtual effectors $e_j$. The Control Subsystem must be able both to reconfigure the exteroceptors $R_j$ and to influence the method by which the Virtual Receptors $r_j$ aggregate readings. Hence the reverse connections with those subsystems within the agent exist.

Usually one agent $a_j$ controls a single effector $E_j$, but it can be presented to the control subsystem in different forms, hence the multiplicity of virtual effectors $e_{j,n}$. Each virtual effector influences the electromechanical elements of the effector $E_j$. For instance, assuming that the effector is a manipulator, the control subsystem $c_j$ may produce commands containing arguments designating the desired end-effector poses expressed in the operational space (Cartesian coordinates of the origin of that frame, supplemented by one of the representations of its orientation, e.g. angle and axis or Euler angles). In such a case two virtual effectors can be created, one using inverse kinematics to compute the positions in the configuration space (joint coordinates) and another one using the inverse Jacobian to compute either the configuration space velocities or joint position increments. The control subsystem must be able to establish the current state of the effector $E_j$. Thus a reverse connection between the two must exist. Hence the readings produced by the proprioceptors (e.g. motor encoders) are processed by the virtual effector $e_j$ to produce a form acceptable to the control subsystem $c_j$.

An agent is also able to establish two-way communication with other agents $a_{j'}$, $j \neq j'$. The resulting structure is presented in fig. 2.

Fig. 2 General structure of an agent $a_j$

2.2 Notation

Even from this superficial description of an agent it is evident that the number of different components of the agent is significant and, moreover, each of those components has its own elements (described further on in the paper). To make the description of such a system comprehensible, a consistent denotation method is necessary. However, to simplify the notation, no distinction is being made between the denotation of a component and its state – the context is sufficient in this respect. In the assumed notation a one-letter symbol located in the centre (i.e. $E$, $R$, $e$, $r$, $c$) designates the component. To reference its subcomponents, or to single out the state of this component at a certain instant of time, we place extra indices around this central symbol. The left superscript designates the referenced buffer of the component or, in the case of a function, its type. The right superscript designates the time instant at which the state is being considered. The left subscript tells us whether this is an input ($x$) or an output ($y$) buffer. If there is no left subscript, we refer to the internal memory of the subsystem. The right subscript may be complex, where its elements are separated by commas. They refer to the ordinal numbers of: the agent, its component and subcomponent, or the ordinal number of the function. For instance ${}^{e}_{x}c^{i}_{j}$ denotes the contents of the Control Subsystem input buffer of the agent $a_j$ acquired from the Virtual Effector at instant $i$. Obviously one can use names instead of numbers, however here we number the entities.

Coordinate transformations are denoted as ${}^{X}_{Y}T_{W}$, where $X, Y \in \{0, E, C, G\}$ and $W \in \{c, d, p\}$. ${}^{X}_{Y}T$ denotes the location of frame $Y$ with respect to frame $X$, where: $0$ – global reference frame affixed to the base of the robot, $E$ – end-effector frame, $C$ – camera frame, $G$ – frame associated with the goal, i.e. an object of interest (e.g. ${}^{0}_{G}T$ is the pose of the object with respect to the global coordinate frame). The right subscript refers to: $c$ – current, $p$ – previous, $d$ – desired values. The tilde symbol ($\tilde{\cdot}$) denotes the place holder for a given variable in memory, in contrast to the value of the variable stored in it (the same symbol, but without the tilde). If the inner structure of the buffer has to be specified, then square brackets are used, e.g. ${}^{e}_{x}c_{j}[{}^{0}_{E}\tilde{T}_{c}]$, i.e. the above-mentioned buffer contains a homogeneous matrix representing the current pose of the end-effector w.r.t. the global coordinate frame.

2.3 General subsystem behaviour

The general work-cycle of any subsystem $s$, where $s \in \{c, e, r\}$, of any agent $a_j$ is as follows:
– acquire data from the associated subsystems through the input buffers,
– compute the next subsystem state by evaluating the transition function ${}^{s}f_{j}$,
– dispatch the results via the output buffers to the associated subsystems.
Considering the functioning of the subsystem $s$ of the $j$-th agent, let us assume that it possesses a single function that processes the data contained in its input buffers ${}_{x}s_{j}$ and its internal memory ${}^{s}s_{j}$, in order to produce the output buffer values ${}_{y}s_{j}$ and the updated value ${}^{s}s_{j}$ inserted into its internal memory. Hence the subsystem behaviour can be described by the Transition Function ${}^{s}f_{j}$ defined as:

$\left( {}^{s}s^{i+1}_{j},\ {}_{y}s^{i+1}_{j} \right) := {}^{s}f_{j}\left( {}^{s}s^{i}_{j},\ {}_{x}s^{i}_{j} \right)$   (1)

where $i$ and $i+1$ are the consecutive discrete time stamps. Function (1) describes the evolution of the agent's subsystem state. As the function (1) should be useful throughout the whole life of an agent, it is usually convenient to decompose it into a set of partial functions:

$\left( {}^{s}s^{i+1}_{j},\ {}_{y}s^{i+1}_{j} \right) := {}^{s}f_{j,u}\left( {}^{s}s^{i}_{j},\ {}_{x}s^{i}_{j} \right)$   (2)

where $u = 0, \ldots, n_{f^{s}}$. The capabilities of the agent lie in the multiplicity and diversity of the partial functions of its subsystems. Such a prescription requires rules of switching between different partial transition functions of a subsystem, thus two additional Boolean valued functions (predicates) are required:
– ${}^{s}f^{\sigma}$ defining the Initial Condition and
– ${}^{s}f^{\tau}$ representing the Terminal Condition.
The former selects the transition function for cyclic execution, while the latter determines when this cyclic execution should terminate. This enables the introduction of the multi-step evolution of the subsystem in the form of a Behaviour ${}^{s}\mathcal{B}_{j,u}$ defined as:

${}^{s}\mathcal{B}_{j,u} \triangleq {}^{s}\mathcal{B}_{j,u}\left( {}^{s}f_{j,u},\ {}^{s}f^{\tau}_{j,u} \right)$   (3)

Additionally, the execution pattern presented in fig. 3 is proposed. The $s_{j^{\bullet}}$, where $j^{\bullet} \in \{j, j'\}$, denotes all subsystems associated with $s_j$ (in the case of the Control Subsystem some of those subsystems may not even belong to the same agent, hence $j'$ appears). Besides that, the Behaviours along with the Initial Conditions enable the presentation of the general behaviour selection (activation) mechanism in the form of a state graph, as in fig. 4. The ${}^{s}\mathcal{B}_{j,0}$ Behaviour is the default (idle) behaviour, activated when no other Behaviour is required. Behaviour selection is executed as a stateless switch.

Fig. 3 General flow chart of a subsystem behaviour ${}^{s}\mathcal{B}_{j,u}$ (update the internal state and compute the values of the output buffers by evaluating ${}^{s}f_{j,u}$; dispatch the results to the associated subsystems; wait, $i := i+1$; retrieve the current state from the associated subsystems; repeat until the terminal condition ${}^{s}f^{\tau}_{j,u}({}^{s}s^{i}_{j}, {}_{x}s^{i}_{j})$ holds)

Fig. 4 State graph of the behaviour selection automaton
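The work-cycle and behaviour selection mechanism described above (eqs. (1)–(3), figs. 3 and 4) can be rendered as a minimal Python sketch. This is not from the paper: all names (`Behaviour`, `run_subsystem`, the buffer dictionaries) are illustrative assumptions, and data buffers are simplified to plain dictionaries.

```python
# Illustrative sketch (not from the paper): a generic subsystem work-cycle.
# Partial transition functions f_u are guarded by initial conditions f_sigma
# and terminated by terminal conditions f_tau, as in eqs. (1)-(3).

from dataclasses import dataclass
from typing import Callable, Dict, Tuple

State = Dict[str, float]      # internal memory of the subsystem
Buffers = Dict[str, float]    # input/output buffer contents

@dataclass
class Behaviour:
    f: Callable[[State, Buffers], Tuple[State, Buffers]]  # transition function, eq. (2)
    f_sigma: Callable[[State, Buffers], bool]             # initial condition
    f_tau: Callable[[State, Buffers], bool]               # terminal condition

def run_subsystem(state: State, behaviours: list, read_inputs, write_outputs,
                  steps: int) -> State:
    """Cyclically select a behaviour (stateless switch, fig. 4) and execute
    its inner work-cycle (fig. 3) until its terminal condition holds."""
    for _ in range(steps):
        inputs = read_inputs()
        # Behaviour selection: first non-default behaviour whose initial
        # condition holds; behaviours[0] plays the role of the idle behaviour.
        active = next((b for b in behaviours[1:] if b.f_sigma(state, inputs)),
                      behaviours[0])
        # Inner work-cycle: compute, dispatch, retrieve, test termination.
        while True:
            state, outputs = active.f(state, inputs)   # eq. (2)
            write_outputs(outputs)                     # dispatch results
            inputs = read_inputs()                     # retrieve current state
            if active.f_tau(state, inputs):            # terminal condition
                break
    return state
```

A behaviour here is just the pair of a transition function and its predicates, mirroring definition (3); the outer loop is the stateless switch of fig. 4.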

2.4 Control Subsystem (s = c) All of them are computed using the data contained
in the internal memory c cj and input buffers xe cj , xr cj
The Control Subsystem is primarily responsible for task and Tx cj . Hence the resulting compound formula for the
execution. The data types it operates on form the high- Control Subsystem state evolution can be defined as
est level ontology. It is up to the designer of the sys- follows:
tem to establish this ontology. Once this is done those
cj := c,cf j,m (c cij , xe cij , xr cij , Tx cij )
 i+1
c
data types are used to construct the inner structure of 


the Control Subsystem buffers. Three types of input  e ci+1 := c,ef (c ci , e ci , r ci , T ci )

and output buffers connecting the Control Subsystem to three different associated subsystems can be distinguished: buffers to the Virtual Effectors, Virtual Receptors and Transmitters to other agents (fig. 5). Hence the general Control Subsystem Transition Function can be formulated as:

^{c}c_j^{i+1}, ^{e}_{y}c_j^{i+1}, ^{r}_{y}c_j^{i+1}, ^{T}_{y}c_j^{i+1} := ^{c}f_j(^{c}c_j^{i}, ^{e}_{x}c_j^{i}, ^{r}_{x}c_j^{i}, ^{T}_{x}c_j^{i}).   (4)

[Fig. 5: Inner structure of the control subsystem c_j]

Having in mind that function (4) can be decomposed into a set of partial functions, the evolution of the Control Subsystem state can be described by the following formula:

^{c}c_j^{i+1}, ^{e}_{y}c_j^{i+1}, ^{r}_{y}c_j^{i+1}, ^{T}_{y}c_j^{i+1} := ^{c}f_{j,m}(^{c}c_j^{i}, ^{e}_{x}c_j^{i}, ^{r}_{x}c_j^{i}, ^{T}_{x}c_j^{i}),   (5)

where m = 1, ..., n_{fc}. On the other hand, taking into account that the Control Subsystem Transition Function has to create values for different buffers basing on the same inputs, the function can be further decomposed with respect to its outputs. Hence it can compute values for:

– the internal memory ^{c}c_j of the Control Subsystem by using ^{c,c}f_j,
– the output buffer of the Virtual Effectors ^{e}_{y}c_j using ^{c,e}f_j,
– the output buffer of the Virtual Receptors ^{r}_{y}c_j computing ^{c,r}f_j and
– the output buffer of the Transmitter ^{T}_{y}c_j utilising ^{c,T}f_j.

This results in:

^{c}c_j^{i+1} := ^{c,c}f_{j,m}(^{c}c_j^{i}, ^{e}_{x}c_j^{i}, ^{r}_{x}c_j^{i}, ^{T}_{x}c_j^{i})
^{e}_{y}c_j^{i+1} := ^{c,e}f_{j,m}(^{c}c_j^{i}, ^{e}_{x}c_j^{i}, ^{r}_{x}c_j^{i}, ^{T}_{x}c_j^{i})
^{r}_{y}c_j^{i+1} := ^{c,r}f_{j,m}(^{c}c_j^{i}, ^{e}_{x}c_j^{i}, ^{r}_{x}c_j^{i}, ^{T}_{x}c_j^{i})   (6)
^{T}_{y}c_j^{i+1} := ^{c,T}f_{j,m}(^{c}c_j^{i}, ^{e}_{x}c_j^{i}, ^{r}_{x}c_j^{i}, ^{T}_{x}c_j^{i})

where m = 1, ..., n_{fc}. The presented decomposition enables the definition of the general Control Subsystem Behaviour:

^{c}B_{j,m} ≜ ^{c}B_{j,m}(^{c,c}f_{j,m}, ^{c,e}f_{j,m}, ^{c,r}f_{j,m}, ^{c,T}f_{j,m}, ^{c}f^{τ}_{j,m}).   (7)

The rules for behaviour selection and switching remain identical to the ones presented earlier, hence there is no need to explain them once again.

2.5 Virtual Effector (s = e)

The responsibility of the Virtual Effector is to present the real effector to the control subsystem in a form that facilitates the computation of control values. To do this it transforms the proprioceptive information obtained directly from the effector into values accepted by the control algorithm (e.g. it transforms encoder readings into a homogeneous matrix representing the pose of the end-effector with respect to the world coordinate frame). As the control values will also appear as, for instance, homogeneous matrices, those too must be transformed into values acceptable to the effector control hardware (e.g. PWM ratios). This also is the responsibility of the Virtual Effector. Thus the Virtual Effector operates on values delivered to it through buffers and in the same way it dispatches the values produced by it to the cooperating subsystems. Some computations might need values from previous control cycles, thus internal memory is involved. The mentioned transformations usually change the ontology – the sets of concepts that are used by neighbouring subsystems.

The inner structure of the n-th Virtual Effector of the j-th agent is presented in fig. 6. The state evolution of e_{j,n} can be described by a general Virtual Effector Function:

^{e}e_{j,n}^{ι+1}, ^{c}_{y}e_{j,n}^{ι+1}, ^{E}_{y}e_{j,n}^{ι+1} := ^{e}f_{j,n}(^{e}e_{j,n}^{ι}, ^{c}_{x}e_{j,n}^{ι}, ^{E}_{x}e_{j,n}^{ι}),   (8)

where ι denotes the discrete time.
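The output-oriented decomposition (6) can be illustrated with a short sketch: a single computation step reads the internal memory and all input buffers once, and each partial function produces one of the outputs from those same inputs. All names below (`Inputs`, `control_step`, the toy payloads) are illustrative assumptions, not part of the formalism.

```python
from dataclasses import dataclass

# Snapshot of the control subsystem c_j at instant i: memory plus input buffers.
@dataclass
class Inputs:
    mem: dict      # ^c c_j   - internal memory
    from_e: dict   # ^e_x c_j - input buffer from the virtual effector
    from_r: dict   # ^r_x c_j - input buffer from the virtual receptor
    from_T: dict   # ^T_x c_j - transmitter input buffer

# Partial functions in the spirit of eq. (6): each maps the SAME inputs
# to a different output (hypothetical payloads, for illustration only).
def f_mem(i: Inputs) -> dict:            # ^{c,c}f_{j,m}
    return {**i.mem, "last_pose": i.from_e.get("pose")}

def f_to_effector(i: Inputs) -> dict:    # ^{c,e}f_{j,m}
    return {"cmd": i.from_r.get("goal")}

def f_to_receptor(i: Inputs) -> dict:    # ^{c,r}f_{j,m}
    return {"config": None}

def f_to_transmitter(i: Inputs) -> dict: # ^{c,T}f_{j,m}
    return {"msg": i.mem.get("status")}

def control_step(i: Inputs):
    """One evaluation of the decomposed transition function (6)."""
    return f_mem(i), f_to_effector(i), f_to_receptor(i), f_to_transmitter(i)

i = Inputs(mem={"status": "ok"}, from_e={"pose": (0, 0, 0)},
           from_r={"goal": (1, 2, 3)}, from_T={})
mem1, y_e, y_r, y_T = control_step(i)
```

The point of the sketch is structural: none of the partial functions mutates the inputs, so all four outputs are computed from one consistent snapshot, exactly as (6) requires.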
We introduce a new symbol due to the fact that the Control Subsystem and the Virtual Effector have their own, typically different, frequencies of operation. The function (8) can also be decomposed into partial functions, which results in a set of n_{fe} partial functions:

^{e}e_{j,n}^{ι+1}, ^{c}_{y}e_{j,n}^{ι+1}, ^{E}_{y}e_{j,n}^{ι+1} := ^{e}f_{j,n,p}(^{e}e_{j,n}^{ι}, ^{c}_{x}e_{j,n}^{ι}, ^{E}_{x}e_{j,n}^{ι}),   (9)

where p = 1, ..., n_{fe}. Considering further the decomposition taking into account the output buffers, the functioning of a virtual effector can be described by using the Virtual Effector Memory Function ^{e,e}f_{j,n,p}, the Proprioceptive Function ^{e,c}f_{j,n,p} and the Real Effector Control Function ^{e,E}f_{j,n,p}:

^{e}e_{j,n}^{ι+1} := ^{e,e}f_{j,n,p}(^{e}e_{j,n}^{ι}, ^{c}_{x}e_{j,n}^{ι}, ^{E}_{x}e_{j,n}^{ι})
^{c}_{y}e_{j,n}^{ι+1} := ^{e,c}f_{j,n,p}(^{e}e_{j,n}^{ι}, ^{c}_{x}e_{j,n}^{ι}, ^{E}_{x}e_{j,n}^{ι})   (10)
^{E}_{y}e_{j,n}^{ι+1} := ^{e,E}f_{j,n,p}(^{e}e_{j,n}^{ι}, ^{c}_{x}e_{j,n}^{ι}, ^{E}_{x}e_{j,n}^{ι})

The Behaviour ^{e}B_{j,n,p} is defined as:

^{e}B_{j,n,p} ≜ ^{e}B_{j,n,p}(^{e,e}f_{j,n,p}, ^{e,c}f_{j,n,p}, ^{e,E}f_{j,n,p}, ^{e}f^{τ}_{j,n,p}).   (11)

[Fig. 6: Inner structure of the virtual effector e_{j,n}]

2.6 Virtual Receptor (s = r)

The Virtual Receptor primarily aggregates the sensor readings into a form acceptable to the control subsystem. On the other hand, the control subsystem might want to configure or change the mode of operation of the sensors. Thus the Virtual Receptor operates on values delivered to it through buffers, and through the buffers it dispatches the values produced by it to the cooperating subsystems. Some computations might need values of previous aggregations (e.g. for filtering or averaging), thus internal memory is involved. The general inner structure of the k-th Virtual Receptor of the j-th agent r_{j,k} is presented in fig. 7. The Transition Function of the Virtual Receptor can be defined as:

^{r}r_{j,k}^{ι+1}, ^{c}_{y}r_{j,k}^{ι+1}, ^{R}_{y}r_{j,k}^{ι+1} := ^{r}f_{j,k}(^{r}r_{j,k}^{ι}, ^{c}_{x}r_{j,k}^{ι}, ^{R}_{x}r_{j,k}^{ι}),   (12)

where ι is the discrete time; however, it should be noted that it is distinct from the one associated with the Virtual Effector. Analogically to the cases of the previously described subsystems, the Virtual Receptor Function can also be subsequently decomposed into a set of n_{fr} partial functions, each defined as:

^{r}r_{j,k}^{ι+1}, ^{c}_{y}r_{j,k}^{ι+1}, ^{R}_{y}r_{j,k}^{ι+1} := ^{r}f_{j,k,t}(^{r}r_{j,k}^{ι}, ^{c}_{x}r_{j,k}^{ι}, ^{R}_{x}r_{j,k}^{ι}),   (13)

where t = 1, ..., n_{fr}. The following further decomposition of each partial function takes into account the outputs, and thus the Sensoric Memory Functions ^{r,r}f_{j,k,t}, the Receptor Reading Aggregation Functions ^{r,c}f_{j,k,t} and the Receptor Configuration Functions ^{r,R}f_{j,k,t} are distinguished, which results in:

^{r}r_{j,k}^{ι+1} := ^{r,r}f_{j,k,t}(^{r}r_{j,k}^{ι}, ^{R}_{x}r_{j,k}^{ι}, ^{c}_{x}r_{j,k}^{ι})
^{c}_{y}r_{j,k}^{ι+1} := ^{r,c}f_{j,k,t}(^{r}r_{j,k}^{ι}, ^{R}_{x}r_{j,k}^{ι}, ^{c}_{x}r_{j,k}^{ι})   (14)
^{R}_{y}r_{j,k}^{ι+1} := ^{r,R}f_{j,k,t}(^{r}r_{j,k}^{ι}, ^{R}_{x}r_{j,k}^{ι}, ^{c}_{x}r_{j,k}^{ι})

where t = 1, ..., n_{fr}. Having specified both the inner structure as well as the functions describing the state evolution of the Virtual Receptor, we can define the Behaviour ^{r}B_{j,k,t} as:

^{r}B_{j,k,t} ≜ ^{r}B_{j,k,t}(^{r,r}f_{j,k,t}, ^{r,c}f_{j,k,t}, ^{r,R}f_{j,k,t}, ^{r}f^{τ}_{j,k,t}).   (15)

[Fig. 7: Inner structure of the virtual receptor r_{j,k}]

Having defined the actions of all subsystems, the behaviour of the whole Embodied Agent stems from the interoperation of its subsystems.
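A Behaviour such as (11) or (15) pairs transition functions with a terminal condition ^{s}f^{τ}. Its execution can be sketched as a generic loop; the names below are illustrative, and the real cycle additionally synchronizes with the subsystem's own period and dispatches the buffer contents on every step.

```python
def run_behaviour(state, step_fn, terminal, max_steps=1000):
    """Generic behaviour loop: repeat the transition function, advancing the
    discrete time iota, until the terminal condition f^tau holds.
    iota counts steps since activation (so iota_sigma corresponds to 0)."""
    iota = 0
    while not terminal(state, iota) and iota < max_steps:
        state = step_fn(state, iota)
        iota += 1
    return state, iota

# Toy instance: the transition function increments a counter and the
# terminal condition fires after 5 steps.
final, steps = run_behaviour(
    state=0,
    step_fn=lambda s, i: s + 1,
    terminal=lambda s, i: i >= 5,
)
```

On activation the behaviour selection automaton picks which `(step_fn, terminal)` pair to run, which is exactly the role of the initial conditions ^{s}f^{σ} described earlier.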
Interoperation means that they work together effectively to achieve the goal, whereas the main goal is known only to the Control Subsystem, which coordinates the work of the associated Virtual Effectors and Virtual Receptors.

It should be pointed out that the right subscript designating the agent, its subsystem and a particular behaviour respectively is quite complex. If this is a problem, we postulate that it should be reduced when possible, e.g. when there is only one subsystem of a given type present in the system, the associated index can be omitted. However, in the following example all indices have been retained for clarity.

3 Example: multi-camera visual servoing

Out of the five human senses, sight is the one that undoubtedly delivers the most information about our surroundings. The creation of an equivalent sense in robots has always been at the focus of interest of the robotics community, e.g. [13] describes one of the first robots visually guided in real time. Since then cameras have been utilized successfully in numerous industrial and service robotics applications: from the disassembly of used car batteries by a robot with its end-effector-mounted tool observed by an independently moving camera [30], through a dual-arm robotic system solving a Rubik's cube puzzle [38], up to the state-of-the-art task of the DLR Justin catching balls thrown in its direction [3]. One of the most commonly used techniques for the utilization of sight in manipulator control is visual servoing [9,14]. In this approach the camera is used to provide data for the computation of the error between the current and desired object pose (or values of image features), enabling the generation of such motion of the end-effector that in consequence reduces the error.

There are diverse classifications of visual servos [9,27]; however, in this paper we focus on two criteria: the space in which the error is calculated and the location of the camera. The former considers the space in which the error is calculated: PB (pose-based), where the goal and current end-effector poses are expressed in operational space, and IB (image-based), where they are defined in terms of features that can be directly deduced from the image. The second criterion distinguishes two cases (fig. 9): SAC (Stand-Alone Camera), where the camera is fixed to the robot surroundings, or EIH (Eye-in-Hand), where the camera is fixed to the end-effector (thus moves). Both SAC and EIH cases have different advantages. The major advantage of the stand-alone camera located above the scene is a wide field of view, whereas the utilization of a camera integrated with the robot gripper may result in higher accuracy in grasping and manipulation operations. Fig. 9 presents the coordinate frames and the transformations between them for both types of visual servos.

3.1 Description of the scenario

The goal is to utilize two cameras in the task of object tracking. In such cases the availability of new visual information should trigger a dynamic change of the system behaviour. The following example focuses on the problem of aggregation of information obtained from two cameras: one mounted on the end-effector (EIH) and one fixed above the scene (SAC). Because the visibility of the object of interest may change during the execution of the task, four situations may be distinguished:

– the object is not visible to either camera,
– the object is perceived only by the stationary camera,
– both cameras can see the object,
– only the camera fixed to the end-effector can spot the object.

The listed cases form a typical scenario, where the system waits until an object appears in the scene. The first to perceive it is the stationary camera, which is fixed above the scene, so it observes a larger fragment of the work space. After several steps of SAC-based visual servoing the object will become visible also to the EIH camera, hence information received from both cameras should be utilized for motion generation. In the last motion phase it is quite probable that the end-effector will obscure the object, so it will disappear from the image obtained by the SAC camera.

The agent's goal is to localize and approach the object of interest with its end-effector in order to subsequently grasp it, thus the utilisation of visual servoing is necessary. However, it should be stressed that for the sake of brevity grasping is not in the scope of the presented example – just object following.

3.2 Determination of the system structure

The general structure of the system is presented in fig. 8. Three control loops were distinguished: the inner joint loop controlling the manipulator joints and two outer visual loops recognizing the object of interest and estimating its pose on the basis of images incoming from the associated cameras.

The agent a_1 is responsible for the control of a modified IRp-6 manipulator (E_1) endowed with a sense of sight in the form of two cameras: a SAC camera fixed to the surroundings and an EIH camera integrated with the manipulator end-effector, denoted as R_{1,1} and R_{1,2} respectively.
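The pose-based error computation recalled in the classification above can be sketched as follows. This is a generic PB servoing step over homogeneous matrices, not the authors' exact control law; the gain value and the omission of rotational interpolation are simplifying assumptions made for the sketch.

```python
import numpy as np

def pb_servo_step(T_current, T_desired, gain=0.1):
    """One pose-based (PB) visual-servoing step: the error is the relative
    pose between the current and desired end-effector poses (4x4 homogeneous
    matrices, both expressed in the base frame); a fraction of its translation
    is commanded each cycle. Rotational interpolation is omitted here."""
    T_err = np.linalg.inv(T_current) @ T_desired   # error in the end-effector frame
    T_cmd = np.eye(4)
    T_cmd[:3, 3] = gain * T_err[:3, 3]             # command a fraction of the error
    return T_cmd

T_c = np.eye(4)                                    # current pose: at the origin
T_d = np.eye(4); T_d[:3, 3] = [1.0, 0.0, 0.0]      # desired pose: 1 m along x
cmd = pb_servo_step(T_c, T_d, gain=0.1)            # relative command for this cycle
```

Iterating such steps drives the relative error toward identity, which is the closed-loop property that both the SAC and EIH variants rely on.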
This suggests that the agent control system should be decomposed into the Control Subsystem c_1, responsible for the visual servoing task, supplemented by the Virtual Effector e_1 controlling E_1 and two Virtual Receptors r_{1,1} and r_{1,2}, each aggregating the information obtained by the camera associated with it (i.e. R_{1,1}, R_{1,2}). The latter three subsystems (i.e. e_1, r_{1,1}, r_{1,2}) form the agent hardware abstraction layer. The designed agent's inner structure is presented in fig. 10.

[Fig. 8: General structure of the controller: task execution and trajectory generation, motion control closed by joint feedback (500 Hz), and two object pose estimation modules closed by visual feedback (25 Hz) delivering the object pose w.r.t. the EIH and SAC camera frames]
[Fig. 9: Coordinate frames and transformations for both types of visual servos]
[Fig. 10: Structure of the designed agent a_1]

3.3 Virtual Effector

The Virtual Effector is clocked by the low-level controllers of the effector, which work with the frequency of 500 Hz. On the other hand, we assume that the Control Subsystem can produce new commands for the Virtual Effector as soon as the Virtual Receptors process the images retrieved from the cameras. Hence commands will arrive with the frequency of 25 Hz, i.e. 20 times less frequently. The requirement for the generation of a smooth motion of the end-effector imposes the design of a behaviour that will plan and execute multi-step trajectories, reaching the desired pose in their final step, at the moment of the new command arrival. This is the role of ^{e}B_{1,1}.

Additionally, if there are no commands waiting to be executed, we assume that the Virtual Effector should hold the manipulator still. For this purpose we must design an additional, Idle Behaviour, i.e. ^{e}B_{1,0}.

3.3.1 Internal structure/interfaces

The structure of the Virtual Effector e_1 is presented in fig. 11. Its input buffer connected to the Control Subsystem contains:

^{c}_{x}e_1[^{E}T_{d,c}] – desired relative Cartesian pose w.r.t. the current pose of the end-effector.

The output to the CS contains:

^{c}_{y}e_1[^{0}_{E}T_c] – current absolute Cartesian pose of the end-effector w.r.t. the robot base coordinate system.
The buffers associated with the real effector are defined as:

^{E}_{x}e_1[m_m] – currently measured motor positions,
^{E}_{y}e_1[τ_d] – desired values of motor PWM duty cycles.

Finally, the virtual effector has to store in its memory the following variables:

^{e}e_1[m_d] – desired motor absolute positions,
^{e}e_1[Θ_p] – previous joint configuration, which is used for the selection of the appropriate solution of the inverse kinematic problem.

Besides those variables, the Virtual Effector-related computations will utilize several other temporary variables, irrelevant from the point of view of the structure of the subsystem.

3.3.2 Behaviour selection automaton

Having two Behaviours, we must design the appropriate conditions responsible for their switching. The Initial Condition of ^{e}B_{1,1} becomes active when a new command from the Control Subsystem is present:

^{e}f^{σ}_{1,1}(^{c}_{x}e_1) ≜ P(^{c}_{x}e_1),   (16)

where P is a predicate indicating the presence of new data in a given buffer. In every other case the Idle Behaviour is activated:

^{e}f^{σ}_{1,0}(^{c}_{x}e_1) ≜ ¬P(^{c}_{x}e_1).   (17)

It is assumed that ^{e}B_{1,1} should control the motion for the whole time between the capture of consecutive images, hence it should last around 40 ms. Taking into consideration that there might be slight variations in the time of both image acquisition and its analysis, we decided to expand the duration of the motion to 42 ms, i.e. 21 steps, and supplied a termination condition activated when a new command from the Control Subsystem is received at the end of the motion. The resulting formula for the Terminal Condition of ^{e}B_{1,1} is:

^{e}f^{τ}_{1,1}(^{e}e_1, ^{c}_{x}e_1) ≜ [(ι = ι_σ + 21) ∧ ¬P(^{c}_{x}e_1)] ∨ [(ι > ι_σ + 19) ∧ P(^{c}_{x}e_1)],   (18)

where ι_σ indicates the first step of the behaviour. ^{e}B_{1,0} lasts one step, thus its Terminal Condition is defined as:

^{e}f^{τ}_{1,0}(^{e}e_1) ≜ (ι = ι_σ + 1).   (19)

The state graph of the Virtual Effector behaviour selection automaton is presented in fig. 12.

[Fig. 11: Inner structure of the virtual effector e_1]
[Fig. 12: State graph of the behaviour selection automaton of the virtual effector e_1]

3.3.3 Behaviour ^{e}B_{1,1}

The goal of this Behaviour is to execute the multi-step trajectory step by step and reach the desired pose in the final one. Besides the control of the manipulator, the Behaviour should also return the current Cartesian pose of the end-effector to the Control Subsystem. Moreover, at the onset it should also memorize the commanded desired pose. The postulated decomposition requires three functions to be specified: the effector proprioceptive function ^{e,c}f_{1,1} that presents to the control subsystem c_1 the current pose of the effector, the real effector control function ^{e,E}f_{1,1} and the virtual effector memory function ^{e,e}f_{1,1} enabling the memorization of both the commanded (desired) and current effector poses.

[Fig. 13: Virtual effector e_1 proprioceptive function ^{e,c}f_{1,1,1}]
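The switching and termination conditions (16)-(19) of the two effector behaviours reduce to simple predicates over the new-command flag and the step counter. A sketch with illustrative function names:

```python
def initial_motion(cmd_present: bool) -> bool:
    """Initial condition (16) of eB_{1,1}: a new command is present."""
    return cmd_present

def initial_idle(cmd_present: bool) -> bool:
    """Initial condition (17) of eB_{1,0}: no new command."""
    return not cmd_present

def terminal_motion(iota: int, iota_sigma: int, cmd_present: bool) -> bool:
    """Terminal condition (18): stop after the 21st step when no new command
    arrived, or as soon as a new command arrives near the end of the motion."""
    return ((iota == iota_sigma + 21) and not cmd_present) or \
           ((iota > iota_sigma + 19) and cmd_present)

def terminal_idle(iota: int, iota_sigma: int) -> bool:
    """Terminal condition (19): the idle behaviour lasts a single step."""
    return iota == iota_sigma + 1
```

Note how (18) encodes the timing requirement of section 3.3: a 21-step motion that may be cut short by one step when the next 25 Hz command shows up early.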
The Proprioceptive Function should return the current end-effector pose to the Control Subsystem at the first step of the behaviour. In all subsequent steps it is inactive. Hence it is defined as a variant function:

^{c}_{y}e_1^{ι+1} := ^{e,c}f_{1,1}(^{E}_{x}e_1^{ι}) ≜
  ^{e,c}f_{1,1,1}(^{E}_{x}e_1^{ι})   for ι = ι_σ,
  —                                  for ι ≠ ι_σ.   (20)

The effector proprioceptive function ^{e,c}f_{1,1,1} is presented in fig. 13. It acquires through its input buffer ^{E}_{x}e_1 the vector of current motor positions m_c obtained from the motor encoders, transforms them using MJ into joint positions Θ_c and subsequently, using DK, transforms the result into the homogeneous matrix representation ^{0}_{E}T_c of the pose of the end-effector E with respect to the global reference frame 0 (operational space), to finally expedite this result to the control subsystem through ^{c}_{y}e_1. Hence the Proprioceptive Function ^{e,c}f_{1,1,1} is defined as:

^{c}_{y}e_1^{ι+1}[^{0}_{E}T_c] := ^{e,c}f_{1,1,1}(^{E}_{x}e_1^{ι}) ≜ DK(MJ(m_c)).   (21)

The role of the Virtual Effector Memory Function also depends on the step of the behaviour execution, i.e. in the first step it has to interpret the received command and remember the desired motor pose, whereas in the subsequent steps it has to memorize the current manipulator joint configuration. The former will be used as the goal throughout the whole execution of the behaviour, while the latter will be used as an input for the inverse kinematic solution when the next command from the Control Subsystem arrives. Hence ^{e,e}f_{1,1} is defined as a variant function:

^{e}e_1^{ι+1} := ^{e,e}f_{1,1}(^{e}e_1^{ι}, ^{c}_{x}e_1^{ι}, ^{E}_{x}e_1^{ι}) ≜
  ^{e,e}f_{1,1,1}(^{e}e_1^{ι}, ^{c}_{x}e_1^{ι}, ^{E}_{x}e_1^{ι})   for ι = ι_σ,
  ^{e,e}f_{1,1,2}(^{e}e_1^{ι})                                     for ι ≠ ι_σ.   (22)

The data flow of the computations of the Virtual Effector Memory Function ^{e,e}f_{1,1,1} is presented in fig. 14. Those computations are similar to the ones performed by the Real Effector Control Function ^{e,E}f_{1,1}. However, the major difference is that the function memorizes the computed desired motor positions m_d. The function ^{e,e}f_{1,1,1} is defined as:

^{e}e_1^{ι+1}[m_d] := ^{e,e}f_{1,1,1}(^{e}e_1^{ι}, ^{c}_{x}e_1^{ι}, ^{E}_{x}e_1^{ι}) ≜ JM(IK(DK(MJ(m_c)) ^{E}T_{d,c}, Θ_p)).   (23)

[Fig. 14: Virtual effector memory function ^{e,e}f_{1,1,1}]

The data flow of the second variant of the Memory Function is presented in fig. 15, whereas the definition of ^{e,e}f_{1,1,2} is as follows:

^{e}e_1^{ι+1}[Θ_p] := ^{e,e}f_{1,1,2}(^{e}e_1^{ι}) ≜ MJ(m_c).   (24)

[Fig. 15: Virtual effector memory function ^{e,e}f_{1,1,2}]

The goal of the Real Effector Control Function ^{e,E}f_{1,1} is to control the motion of the manipulator, i.e. produce adequate PWM duty cycles, through the whole time of the behaviour execution. It is also defined as a variant function, with the first variant responsible for the execution of the first motion step basing on the desired pose obtained from the Control Subsystem. The second variant produces adequate duty cycles using the memorized desired motor pose. Hence the function is defined as:

^{E}_{y}e_1^{ι+1} := ^{e,E}f_{1,1}(^{e}e_1^{ι}, ^{c}_{x}e_1^{ι}, ^{E}_{x}e_1^{ι}) ≜
  ^{e,E}f_{1,1,1}(^{e}e_1^{ι}, ^{c}_{x}e_1^{ι}, ^{E}_{x}e_1^{ι})   for ι = ι_σ,
  ^{e,E}f_{1,1,2}(^{e}e_1^{ι}, ^{E}_{x}e_1^{ι})                    for ι ≠ ι_σ.   (25)

[Fig. 16: Real effector control function ^{e,E}f_{1,1,1} of e_1]
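The variant functions (20), (22) and (25) all share one dispatch pattern: the first step of the behaviour (ι = ι_σ) runs one computation, every later step another. A minimal sketch, with hypothetical payloads standing in for the command interpretation and the configuration memorization:

```python
def variant(iota, iota_sigma, first_variant, later_variant):
    """Variant-function dispatch mirroring eqs. (20), (22) and (25):
    the first step of the behaviour runs one computation, later steps
    another."""
    return first_variant() if iota == iota_sigma else later_variant()

# Toy memory function in the spirit of eq. (22): memorize the commanded
# goal on the first step, the current configuration afterwards.
memory = {}

def memorize_command():
    memory["m_d"] = [10, 20, 30]      # desired motor positions (hypothetical)
    return memory

def memorize_configuration():
    memory["theta_p"] = [0.1, 0.2]    # previous joint configuration (hypothetical)
    return memory

variant(0, 0, memorize_command, memorize_configuration)   # first step
variant(1, 0, memorize_command, memorize_configuration)   # later step
```

After both calls the memory holds the goal recorded at ι_σ and the most recent configuration, which is what (22) needs for the next inverse kinematics solution.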
The Real Effector Control Function ^{e,E}f_{1,1,1} (fig. 16) acquires through ^{E}_{x}e_1 the current positions of the motors m_c and converts them into joint positions Θ_c to subsequently compute the homogeneous matrix ^{0}_{E}T_c by solving the direct kinematics problem. The control subsystem delivers, through the virtual effector buffer ^{c}_{x}e_1, the desired effector pose increment ^{E}T_{d,c}. The desired pose of the effector with respect to the global reference frame is computed as ^{0}_{E}T_d = ^{0}_{E}T_c ^{E}T_{d,c}. The so obtained matrix, together with the memorized previous joint positions Θ_p, is fed into the inverse kinematics problem to obtain the desired joint positions Θ_d. The desired joint positions are used to compute the desired motor positions m_d, which along with the current motor pose m_c are used to compute the error in the motor space m_{d,c} = m_d − m_c, which is subsequently reduced to the partial error that should be realised in a given step, m_{d',c} = m_{d,c}/21. The result is passed to the motor regulator RM in order to compute the desired value of every motor PWM duty cycle τ_{d'}, sent through ^{E}_{y}e_1 to the motor drivers for execution. The resulting definition of the first variant of the Real Effector Control Function is:

^{E}_{y}e_1^{ι+1}[τ_d] := ^{e,E}f_{1,1,1}(^{e}e_1^{ι}, ^{c}_{x}e_1^{ι}, ^{E}_{x}e_1^{ι}) ≜ RM((JM(IK(DK(MJ(m_c)) ^{E}T_{d,c}, Θ_p)) − m_c)/21).   (26)

The second variant of the function, i.e. ^{e,E}f_{1,1,2}, presented in fig. 17, is responsible for the computation of the subsequent intermediate motor poses and their transformation into adequate PWM duty cycles. For this purpose it acquires the desired and current motor positions from the memory ^{e}e_1^{ι} and the input buffer ^{E}_{x}e_1^{ι} respectively and utilizes them for the computation of the remaining error m_{d,c} = m_d − m_c. On its basis it computes the partial error that should be reduced to zero in the considered step, m_{d',c} = m_{d,c}/(ι_σ + 21 − ι), and passes it to the motor regulator RM. The resulting τ_{d'} is sent to the motor hardware for execution. Hence the second variant of the Real Effector Control Function is defined as:

^{E}_{y}e_1^{ι+1}[τ_d] := ^{e,E}f_{1,1,2}(^{e}e_1^{ι}, ^{E}_{x}e_1^{ι}) ≜ RM((m_d − m_c)/(ι_σ + 21 − ι)).   (27)

[Fig. 17: Real effector control function ^{e,E}f_{1,1,2} of e_1]

Having the Terminal Condition associated with this Behaviour defined by (18), the Behaviour ^{e}B_{1,1} is defined as:

^{e}B_{1,1} ≜ ^{e}B_{1,1}(^{e,e}f_{1,1}, ^{e,c}f_{1,1}, ^{e,E}f_{1,1}, ^{e}f^{τ}_{1,1}).   (28)

3.3.4 Idle Behaviour ^{e}B_{1,0}

Behaviour ^{e}B_{1,0} is the Idle Virtual Effector Behaviour, thus if no new command was issued by the CS, it is selected by default. Its goal is to servo-control the manipulator motors taking into account the desired motor positions stored in the internal memory. Those positions are the desired positions of the previous motion command, computed and memorized by ^{e}B_{1,1}. As a result the Virtual Effector stabilizes the effector position after reaching the desired pose. For example, if an external force causes it to move (e.g. something or someone pushes the manipulator arm), the controller will compensate for the disturbance.

[Fig. 18: Real effector control function ^{e,E}f_{1,0} of the virtual effector]

The data flow of the function ^{e,E}f_{1,0} is presented in fig. 18. First it acquires through ^{E}_{x}e_1 the currently measured, encoder-based positions of the motors m_c. Taking into account the desired positions m_d (stored in the Virtual Effector memory), it computes the error m_{d,c} = m_d − m_c. To reach the desired position m_d the value of m_{d,c} must be reduced to zero. The error is thus passed to the regulator RM, which computes the desired value of every motor PWM duty cycle, i.e. τ_d. The resulting values are finally sent via ^{E}_{y}e_1 to the real effector motor drivers. Hence the Real Effector Control Function ^{e,E}f_{1,0} can be formulated analytically as:

^{E}_{y}e_1^{ι+1}[τ_d] := ^{e,E}f_{1,0}(^{e}e_1^{ι}, ^{E}_{x}e_1^{ι}) ≜ RM(m_d − m_c).   (29)
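The error slicing used in (26), (27) and (29) can be sketched as below. The proportional term stands in for the motor regulator RM, whose internals are not specified in the text; the gain is an assumption.

```python
def partial_error(m_d, m_c, iota, iota_sigma, horizon=21):
    """Error slicing of eq. (27): divide the remaining motor-space error
    m_d - m_c by the number of steps left in the 21-step motion, so that
    the full error is consumed exactly at the final step."""
    steps_left = iota_sigma + horizon - iota
    return [(d - c) / steps_left for d, c in zip(m_d, m_c)]

def rm(err, kp=0.5):
    """Stand-in for the motor regulator RM: a plain proportional term
    producing PWM duty cycles (illustrative, not the real regulator)."""
    return [kp * e for e in err]

# First step (iota == iota_sigma == 0): the full error is split into 21 parts.
duty = rm(partial_error(m_d=[42.0], m_c=[0.0], iota=0, iota_sigma=0))
```

With `horizon=21` and a fixed `m_d`, repeating this every step reproduces the behaviour of (27): the commanded slice grows as `steps_left` shrinks, while the idle case (29) is simply the same computation with `steps_left` fixed to 1.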
The Terminal Condition associated with this Behaviour is defined by (19), causing the Behaviour to end after the execution of a single step. Hence the Virtual Effector Idle Behaviour ^{e}B_{1,0} is defined as:

^{e}B_{1,0} ≜ ^{e}B_{1,0}(—, —, ^{e,E}f_{1,0}, ^{e}f^{τ}_{1,0}).   (30)

3.4 Virtual Receptors

Despite the fact that the Virtual Receptors r_{1,1} and r_{1,2} are related to different cameras, they both have exactly the same structure and behaviour, hence only the r_{1,1} receptor will be described. Both Virtual Receptors work in a discrete manner, i.e. they should perform the object recognition only when an input image is available. As the cameras deliver a frame every 40 ms, we assume that the image processing time of both r_{1,1} and r_{1,2} will be shorter than that, so there will be periods of inactivity. Computations will be activated at the moment of frame arrival. Hence two different behaviours are required: one for the processing and analysis of the received image, and a second, Idle, for waiting for the arrival of the next image.

3.4.1 Object of interest

Considering the goal of the developed system (utilization of information incoming from both immobile and mobile cameras), the selection of the object of interest was not crucial. Hence a simple object was selected: a chessboard. Due to its clear inner structure (a grid of equally distant fields arranged in the same plane) it can be described by simple features (i.e. inner corners being crossings of two white and two black fields). The resulting model contains only several parameters: the width and height expressed as numbers of horizontal and vertical internal corners and the size of the square. Its simplicity facilitates the estimation of its Cartesian pose. For this reason the chessboard is commonly used for the calibration of both single- as well as multi-camera systems, including their internal parameters such as the focal length, lens distortions and so on.

3.4.2 Virtual receptor inner structure

The inner structure of the Virtual Receptor r_{1,1} is presented in fig. 19. Because the utilized camera cannot be configured, the output buffers to the Receptor are absent. Moreover, because the task does not have to be controlled by the Control Subsystem, the input buffer from the CS is also not required. The input buffer from the Receptor contains:

^{R}_{x}r_{1,1}[I_c] – retrieved camera image,

whereas the output to the Control Subsystem is defined as:

^{c}_{y}r_{1,1}[^{C}_{G}T_c] – current pose of the recognized object (goal, G) in relation to the camera reference frame (C).

Besides that, the subsystem contains in its memory the following:

^{r}r_{1,1}[^{b}H] – threshold (H) used for the selection of black pixels (b),
^{r}r_{1,1}[^{M}V] – object model, consisting of the inner chessboard corners (vertices, V) along with the width and height of a single field,
^{r}r_{1,1}[M] – camera intrinsic matrix (M),
^{r}r_{1,1}[L] – parameters describing the lens (L) radial and tangential distortion.

3.4.3 Behaviour selection automaton

As was explained earlier, the Virtual Receptor will possess two Behaviours: the Idle ^{r}B_{1,1,0} and ^{r}B_{1,1,1}, performing the image analysis. The Initial Condition of ^{r}B_{1,1,1} relies on the presence of new data in the buffer from the camera:

^{r}f^{σ}_{1,1,1}(^{R}_{x}r_{1,1}) ≜ P(^{R}_{x}r_{1,1}).   (31)

Thus its negation defines the condition for the Idle Behaviour:

^{r}f^{σ}_{1,1,0}(^{R}_{x}r_{1,1}) ≜ ¬P(^{R}_{x}r_{1,1}).   (32)

The Idle Behaviour deactivates when an image is received, hence its Terminal Condition is defined as:

^{r}f^{τ}_{1,1,0}(^{R}_{x}r_{1,1}) ≜ P(^{R}_{x}r_{1,1}).   (33)

^{r}B_{1,1,1} lasts until the currently possessed image is processed, which is symbolized by the elapse of a single step:

^{r}f^{τ}_{1,1,1}(^{r}r_{1,1}) ≜ (ι = ι_σ + 1).   (34)

[Fig. 19: Inner structure of the virtual receptor r_{1,1}]
It should be noted that although the same symbol ι is used to denote the discrete time in the Virtual Effector and the Virtual Receptor, the two are distinct (have different periods). Even the period of ι within the behaviours of the Virtual Receptor is not constant. The resulting state graph of the behaviour selection automaton is presented in fig. 20.

[Fig. 20: State graph of the behaviour selection automaton of the virtual receptor r_{1,1}]

3.4.4 Behaviour ^{r}B_{1,1,1}

The goal of the Behaviour ^{r}B_{1,1,1} is to determine the chessboard position and orientation with reference to the camera coordinate frame. For this purpose a set of characteristic board points (vertices) must be extracted from the image and subsequently compared with the projection of the adequate points of the model. The data flow diagram of the function ^{r,c}f_{1,1,1} is presented in fig. 21. Besides the variables stored in the subsystem buffers, the diagram contains several other temporary variables, used for storing the results of the subsequently performed computations.

First, the subsystem obtains the colour image I_c from the camera through the input buffer and subjects it to thresholding [12, pp. 595-612]:

^{B}I_c = TR(I_c, ^{b}H),   (35)

where ^{B}I_c symbolizes a binary image, with black pixels corresponding to the background and white pixels denoting the pixels that had been originally black (hence, as a result, the colours of the chessboard fields are inverted). The process of morphological dilation [12, p. 523] leads to a better separation of those fields:

^{B'}I_c = MD(^{B}I_c).   (36)

Afterwards the contours (i.e. edges forming closed loops) are extracted from the binary image, according to the method presented in [28]:

^{I}C_c = CE(^{B'}I_c),   (37)

where ^{I}C_c denotes the list of the detected contours, ^{I}C_c = ⟨^{I}_{1}C_c, ..., ^{I}_{n_c}C_c⟩. Subsequently every contour is approximated by a quadrangle:

^{I}Q_c = QA(^{I}C_c).   (38)

Next, the resulting list of quadrangles ^{I}Q_c is sorted, regrouped and filtered basing on the neighbourhoods of the quadrangle corners:

^{I'}Q_c = QS(^{I}Q_c).   (39)

Having all outliers rejected, the resulting list of quadrangles ^{I'}Q_c forms in fact a chessboard, from which the internal corners (vertices) must be extracted:

^{I}V_c = VE(^{I'}Q_c).   (40)

The resulting ^{I}V_c represents a list of internal corner points, which is subsequently compared to the projected points of the chessboard model. The goal of finding the chessboard pose in relation to the camera reference frame is achieved by using the model-based perspective projection algorithm [17]:

^{C}_{G}T_c = PE(^{I}V_c, ^{M}V, M, L).   (41)

The algorithm also uses the camera intrinsic matrix M and the lens distortions L as inputs. Finally, the determined goal pose ^{C}_{G}T_c is sent via ^{c}_{y}r_{1,1} to the Control Subsystem. Hence the Receptor Reading Aggregation Function ^{r,c}f_{1,1,1} can be formulated analytically as:

^{c}_{y}r_{1,1}^{ι+1}[^{C}_{G}T_c] := ^{r,c}f_{1,1,1}(^{r}r_{1,1}^{ι}, ^{R}_{x}r_{1,1}^{ι}) ≜ PE(VE(QS(QA(CE(MD(TR(I_c, ^{b}H)))))), ^{M}V, M, L).   (42)

[Fig. 21: Reading aggregation function ^{r,c}f_{1,1,1} of r_{1,1}]

As it was mentioned earlier, the utilized camera cannot be controlled by software, thus the Receptor Configuration Function is absent. Besides that, the Virtual Receptor does not need to update its memory, hence the Sensoric Memory Function is also not required. This results in the following definition of ^{r}B_{1,1,1}:

^{r}B_{1,1,1} ≜ ^{r}B_{1,1,1}(—, ^{r,c}f_{1,1,1}, —, ^{r}f^{τ}_{1,1,1}).   (43)
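The core of the pose estimation PE in (41)-(42) is the perspective projection of the model vertices through a candidate pose ^{C}_{G}T and the intrinsic matrix M; the algorithm adjusts the pose until these projections match the detected corners ^{I}V_c. Below is a sketch of the projection model only, with the lens distortion L omitted and made-up intrinsic values:

```python
import numpy as np

def project_model(T_cg, model, M):
    """Perspective projection used inside PE (eq. 41): map the chessboard
    model vertices, given in the board frame G, through the pose C_G T into
    the camera frame and onto the image plane with the intrinsic matrix M.
    Lens distortion is omitted in this sketch."""
    pts = np.asarray(model, dtype=float)                # N x 3 vertices, z = 0
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])    # homogeneous coordinates
    cam = (T_cg @ pts_h.T)[:3]                          # points in the camera frame
    uvw = M @ cam                                       # pinhole projection
    return (uvw[:2] / uvw[2]).T                         # pixel coordinates, N x 2

# Made-up intrinsics: focal length 500 px, principal point (320, 240).
M = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
T = np.eye(4); T[2, 3] = 1.0                            # board 1 m in front of the camera
px = project_model(T, [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)], M)
```

Minimizing the distance between such projections and the detected vertices over the pose parameters is what the model-based algorithm of [17] does; OpenCV's chessboard detector combined with a PnP solver provides an off-the-shelf equivalent of the whole TR-to-PE chain.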
16 Tomasz Kornuta, Cezary Zieliński

3.4.5 Idle Behaviour ${}^r B_{1,1,0}$

The only goal of the Idle Behaviour is to wait until an image incoming from the camera is received. Hence ${}^r B_{1,1,0}$ is defined as follows:

${}^r B_{1,1,0} \triangleq {}^r B_{1,1,0}(—, —, —, {}^r f^\tau_{1,1,0})$.   (44)

3.5 Control Subsystem

The Control Subsystem perceives the environment through its Virtual Receptors and acts upon it through its Virtual Effector. For that purpose it uses the four already mentioned behaviours. The details are as follows.

3.5.1 Inner control subsystem structure

The structure of the Control Subsystem $c_1$ is presented in fig. 22. The buffers connecting it to the Virtual Effector $e_1$ contain:

– ${}^e_y c_1[{}^E T_{d,c}]$ – the desired relative Cartesian pose, whereas
– ${}^e_x c_1[{}^0_E T_c]$ – the current absolute Cartesian pose of the end-effector.

The input buffers from the Virtual Receptors $r_{1,1}$ and $r_{1,2}$ are defined as:

– ${}^r_x c_{1,1}[{}^C_G T_c]$ – the current pose of the recognized object in relation to the SAC camera reference frame,
– ${}^r_x c_{1,2}[{}^C_G T_c]$ – the current pose of the recognized object with respect to the EIH camera frame.

Finally, the Control Subsystem stores in its memory:

– ${}^c c_1[{}^0_C T]$ – the pose of the SAC camera in the global reference frame,
– ${}^c c_1[{}^E_C T]$ – the pose of the EIH camera in relation to the manipulator end-effector,
– ${}^c c_1[{}^E_G T_d]$ – the desired pose of the object in relation to the manipulator end-effector at the moment of grasping (offset).

All three behaviours causing motion of the effector utilize the same pose regulator $RT$. More information on the operation, parameters and tuning of the regulator can be found in [4]. As in the case of the Virtual Effector and the Virtual Receptor, the Control Subsystem will also utilize other variables for storing intermediate results of computations; however, those variables are not important from the point of view of the structure of the subsystem.

[Fig. 22 Control subsystem $c_1$ inner structure]

3.5.2 Behaviour selection automaton

Taking into account the described scenario, behaviour switching will depend on the visual information received from both Virtual Receptors. Hence we will utilize the predicate $P$ denoting whether new information is present in a given buffer. The Initial Conditions for the ${}^c B_{1,1}$, ${}^c B_{1,2}$ and ${}^c B_{1,3}$ Behaviours can be defined as follows:

${}^c f^\sigma_{1,1}({}^r_x c_{1,1}, {}^r_x c_{1,2}) \triangleq P({}^r_x c_{1,1}) \land \lnot P({}^r_x c_{1,2})$,
${}^c f^\sigma_{1,2}({}^r_x c_{1,1}, {}^r_x c_{1,2}) \triangleq \lnot P({}^r_x c_{1,1}) \land P({}^r_x c_{1,2})$,   (45)
${}^c f^\sigma_{1,3}({}^r_x c_{1,1}, {}^r_x c_{1,2}) \triangleq P({}^r_x c_{1,1}) \land P({}^r_x c_{1,2})$,

whereas the Terminal Conditions of the behaviours are:

${}^c f^\tau_{1,1}({}^r_x c_{1,1}, {}^r_x c_{1,2}) \triangleq \lnot P({}^r_x c_{1,1}) \lor P({}^r_x c_{1,2})$,
${}^c f^\tau_{1,2}({}^r_x c_{1,1}, {}^r_x c_{1,2}) \triangleq P({}^r_x c_{1,1}) \lor \lnot P({}^r_x c_{1,2})$,   (46)
${}^c f^\tau_{1,3}({}^r_x c_{1,1}, {}^r_x c_{1,2}) \triangleq \lnot P({}^r_x c_{1,1}) \lor \lnot P({}^r_x c_{1,2})$.

The default Idle Behaviour ${}^c B_{1,0}$ becomes active when none of the other behaviours is active, thus its Initial Condition is defined as:

${}^c f^\sigma_{1,0}({}^r_x c_{1,1}, {}^r_x c_{1,2}) \triangleq \lnot({}^c f^\sigma_{1,1} \lor {}^c f^\sigma_{1,2} \lor {}^c f^\sigma_{1,3}) = \lnot P({}^r_x c_{1,1}) \land \lnot P({}^r_x c_{1,2})$.   (47)

Idle is a single-step behaviour, thus its Terminal Condition is defined as:

${}^c f^\tau_{1,0}({}^c c_1) \triangleq (i = i_\sigma + 1)$.   (48)

The resulting state graph of behaviour selection is presented in fig. 23.

[Fig. 23 State graph of the behaviour selection automaton of the control subsystem $c_1$: states Idle, SAC, EIH and SAC&EIH, each left when its terminal condition ${}^c f^\tau_{1,k}$ holds]
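The selection logic of eqs. (45)-(47) can be rendered in a few lines of Python. The string labels and the boolean flags standing in for the predicate $P$ are illustrative assumptions, not part of the original specification:

```python
def select_behaviour(p_sac: bool, p_eih: bool) -> str:
    """Initial conditions (45) and (47): choose a behaviour from buffer freshness flags."""
    if p_sac and not p_eih:
        return "SAC"       # cf^sigma_{1,1}: only the SAC camera sees the object
    if not p_sac and p_eih:
        return "EIH"       # cf^sigma_{1,2}: only the EIH camera sees the object
    if p_sac and p_eih:
        return "SAC&EIH"   # cf^sigma_{1,3}: both cameras see the object
    return "Idle"          # cf^sigma_{1,0}: neither camera sees the object

def terminated(behaviour: str, p_sac: bool, p_eih: bool) -> bool:
    """Terminal conditions (46): a motion behaviour ends as soon as its initial
    condition ceases to hold (Idle is additionally single-step, see eq. (48))."""
    return select_behaviour(p_sac, p_eih) != behaviour
```

Note that for the three motion behaviours the terminal conditions of eq. (46) are exactly the negations of the corresponding initial conditions, which is what the comparison above exploits.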
Robot control system design exemplified by multi-camera visual servoing 17

3.5.3 Behaviour ${}^c B_{1,1}$

When the presented PB-EOL-SAC servo is in action, it does not have to memorize any variable information, it does not have to configure its Virtual Receptor or contact other Agents, and the only purpose of its activities is the control of its Virtual Effector. Thus only the Effector Control Function ${}^{c,e}f_{1,1}$ (fig. 24) has to be defined, while the other Transition Functions (i.e. ${}^{c,c}f_{1,1}$, ${}^{c,T}f_{1,1}$ and ${}^{c,r}f_{1,1}$) are not required.

[Fig. 24 Data flow of the function ${}^{c,e}f_{1,1}$ involved in the execution of the PB-EOL-SAC visual servo: pose computation → error computation → effector motion computation]

The computations performed by the Effector Control Function ${}^{c,e}f_{1,1}$ are as follows. First, the Control Subsystem must compute the pose of the object of interest with respect to the end-effector reference frame for its subsequent comparison with the desired value of the pose. For this purpose it obtains the current object (goal) pose ${}^C_G T_c$ of the tracked object from the Virtual Receptor $r_{1,1}$ through ${}^r_x c_{1,1}$. The Control Subsystem $c_1$ holds in its memory ${}^c c_1$ the pose ${}^0_C T$ of the immobile camera (SAC) with respect to the global frame of reference. The current pose of the effector is obtained from the ${}^e_x c_1$ input buffer. Those three elements are required for the computation of the pose of the object with respect to the end-effector reference frame: ${}^E_G T_c = ({}^0_E T_c)^{-1}\,{}^0_C T\,{}^C_G T_c$. The desired displacement (offset) ${}^E_G T_d$ between the object and the end-effector is stored in the memory ${}^c c_1$. Taking into account the current and the desired displacement of the object and the end-effector, the error is computed: ${}^E T_{d,c} = {}^E_G T_d\,({}^E_G T_c)^{-1}$. This displacement may be too large to be executed in one control step, so next a realizable increment ${}^E T_{d',c} = RT({}^E T_{d,c})$ is computed. The regulator $RT$ is responsible for the high-level regulation in Cartesian space. For this purpose it transforms the homogeneous matrix pose representation ${}^E T_{d,c}$ into a vector $V = [X, Y, Z, x, y, z, \varphi]$, where $[X, Y, Z]$ represents the Cartesian position, whereas the angle $\varphi$ and the axis $[x, y, z]$ describe the orientation. The axis versor $[x, y, z]$ is fixed for the given step, hence only $X$, $Y$, $Z$ and $\varphi$ undergo regulation. The result is subsequently clipped, transformed back into a homogeneous matrix and sent to the Virtual Effector through ${}^e_y c_1$. The resulting formal definition of ${}^{c,e}f_{1,1}$ is:

${}^e_y c^{i+1}_1[{}^E T_{d,c}] := {}^{c,e}f_{1,1}({}^c c^i_1, {}^e_x c^i_1, {}^r_x c^i_{1,1}) \triangleq RT\big({}^E_G T_d \big(({}^0_E T_c)^{-1}\,{}^0_C T\,{}^C_G T_c\big)^{-1}\big)$,   (49)

while the Behaviour ${}^c B_{1,1}$ is defined as follows:

${}^c B_{1,1} \triangleq {}^c B_{1,1}(—, {}^{c,e}f_{1,1}, —, —, {}^c f^\tau_{1,1})$.   (50)

3.5.4 Behaviour ${}^c B_{1,2}$

${}^c B_{1,2}$ is responsible for the execution of the PB-EOL-EIH servo. Analogically to the previously presented Behaviour ${}^c B_{1,1}$, it utilizes only the Effector Control Function ${}^{c,e}f_{1,2}$, whereas the rest of the Transition Functions are irrelevant.

[Fig. 25 Data flow of the function ${}^{c,e}f_{1,2}$ involved in the execution of the PB-EOL-EIH visual servo]

The computations associated with the Effector Control Function ${}^{c,e}f_{1,2}$ are presented in fig. 25. The function first determines the difference between the current and the desired location of the chessboard with respect to the end-effector reference frame. For this purpose the current relation between the end-effector and the object must be computed. This is achieved by taking into account the current pose of the object with respect to the camera and the constant ${}^E_C T$ (stored in the internal memory ${}^c c_1$), being the transformation between the camera and the end-effector frames. Thus ${}^E_G T_c = {}^E_C T\,{}^C_G T_c$ is computed. In the next step the error is computed as ${}^E T_{d,c} = {}^E_G T_d\,({}^E_G T_c)^{-1}$. Further computations are equivalent to those performed by ${}^{c,e}f_{1,1}$, hence ${}^{c,e}f_{1,2}$ can be formulated analytically as:

${}^e_y c^{i+1}_1[{}^E T_{d,c}] := {}^{c,e}f_{1,2}({}^c c^i_1, {}^r_x c^i_{1,2}) \triangleq RT\big({}^E_G T_d \big({}^E_C T\,{}^C_G T_c\big)^{-1}\big)$.   (51)

Finally, the Behaviour ${}^c B_{1,2}$ is defined as:

${}^c B_{1,2} \triangleq {}^c B_{1,2}(—, {}^{c,e}f_{1,2}, —, —, {}^c f^\tau_{1,2})$.   (52)
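A sketch of the regulator $RT$ described above is given below: the error matrix is decomposed into the position $[X, Y, Z]$ and an axis-angle orientation, only $X$, $Y$, $Z$ and $\varphi$ are scaled and clipped, and the result is reassembled into a homogeneous matrix. The gain and the clipping limits are illustrative assumptions (the actual tuning of the regulator is discussed in [4]); the conversions are the standard Rodrigues formulas.

```python
import numpy as np

def rot_to_axis_angle(R):
    """Rotation matrix -> (unit axis, angle)."""
    phi = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(phi, 0.0):
        return np.array([0.0, 0.0, 1.0]), 0.0   # axis irrelevant for zero rotation
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return axis / (2.0 * np.sin(phi)), phi

def axis_angle_to_rot(axis, phi):
    """Rodrigues' formula: (unit axis, angle) -> rotation matrix."""
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(phi) * K + (1.0 - np.cos(phi)) * (K @ K)

def RT(T_err, gain=0.1, lin_max=0.005, ang_max=0.01):
    """One regulation step: scale the error [X, Y, Z, phi], clip it, keep the axis fixed."""
    axis, phi = rot_to_axis_angle(T_err[:3, :3])
    v = gain * np.array([T_err[0, 3], T_err[1, 3], T_err[2, 3], phi])
    v[:3] = np.clip(v[:3], -lin_max, lin_max)   # translation increment limit
    v[3] = np.clip(v[3], -ang_max, ang_max)     # rotation increment limit
    T = np.eye(4)
    T[:3, :3] = axis_angle_to_rot(axis, v[3])
    T[:3, 3] = v[:3]
    return T
```

Keeping the rotation axis fixed within a step, as the paper specifies, makes the regulated quantity a four-element vector, so the same scalar regulation law can be applied to each of its components.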
3.5.5 Behaviour ${}^c B_{1,3}$

Based on the presented PB-SAC and PB-EIH visual servo structures, an analysis was performed in order to find points in the data flow where information from both cameras can be aggregated. Three major cases were distinguished:

– aggregation of the object positions retrieved from both cameras (which requires their prior transformation to a common reference frame, e.g. to the global frame 0),
– aggregation of the errors computed independently in both control flows (analogically to the first case, it also requires their transformation to the same reference frame, e.g. the global reference frame 0 or the end-effector frame E),
– aggregation of the two end-effector pose adjustments computed on the base of information obtained from both cameras.

In the following it was decided to focus on the aggregation of the computed errors. Analogically to the PB-SAC and PB-EIH visual servos, only the Effector Control Function ${}^{c,e}f_{1,3}$ has to be defined, while the other Transition Functions are not relevant. The model of computations performed by the function ${}^{c,e}f_{1,3}$ is presented in fig. 26. The computations related to the SAC camera (starting from the block ${}^r_x c_{1,1}$) and the EIH camera (starting from the block ${}^r_x c_{1,2}$) are equivalent to those presented previously. Having the two errors computed, different aggregation methods can be considered, e.g. simple switching on the base of error comparison (taking into consideration only the smaller of them), weighted composition (with constant or varying weights), fuzzy logic, etc. Here a simple superposition with equal weights was selected, which means that the resulting error ${}^E T_{d,c}$ transferred to the input of the regulator is in fact the mean value of the errors: ${}^E T_{d,c} = EC({}^E T_{d,c1}, {}^E T_{d,c2})$. The error is subsequently passed to the regulator $RT$, working in the same manner as in the previously described PB-SAC and PB-EIH visual servo cases. The ${}^{c,e}f_{1,3}$ function can finally be described by the following formula:

${}^e_y c^{i+1}_1[{}^E T_{d,c}] := {}^{c,e}f_{1,3}({}^c c^i_1, {}^e_x c^i_1, {}^r_x c^i_{1,1}, {}^r_x c^i_{1,2}) \triangleq RT\big(EC\big({}^E_G T_d \big(({}^0_E T_c)^{-1}\,{}^0_C T\,{}^C_G T_{c1}\big)^{-1}, {}^E_G T_d \big({}^E_C T\,{}^C_G T_{c2}\big)^{-1}\big)\big)$.   (53)

[Fig. 26 Data flow of the function ${}^{c,e}f_{1,3}$ executing the composition of the PB-SAC and PB-EIH servos]

As a result, the Behaviour ${}^c B_{1,3}$ is defined simply as:

${}^c B_{1,3} \triangleq {}^c B_{1,3}(—, {}^{c,e}f_{1,3}, —, —, {}^c f^\tau_{1,3})$.   (54)

3.5.6 Idle Behaviour ${}^c B_{1,0}$

The Behaviour ${}^c B_{1,0}$ is active when neither of the cameras sees the object. If such a situation occurs, the only possibility is to hold the manipulator still and wait until the object appears. Knowing that without any new commands the Virtual Effector will execute the Idle Behaviour ${}^e B_{1,0}$, the Control Subsystem Behaviour does not have to dispatch any data and can limit itself to receiving data from the associated subsystems. Hence the Behaviour is defined by the formula:

${}^c B_{1,0} \triangleq {}^c B_{1,0}(—, —, —, —, {}^c f^\tau_{1,0})$.   (55)

4 Experiments

The specified system has been implemented on the base of MRROC++ [40] (Multi-Robot Research Oriented Control), a programming framework facilitating the creation of multi-robot controllers. The framework structure reflects the described agent structure: the ECP

[Fig. 27 The experimental setup consisting of the modified IRp-6 and a chessboard playing the role of the object of interest]
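For the position part of the error, the equal-weight composition $EC$ reduces to an arithmetic mean. The paper does not spell out how the rotational parts are averaged, so the sketch below makes the hypothetical choice of averaging the rotation vectors (axis times angle) of the two errors:

```python
import numpy as np

def log_rot(R):
    """Rotation matrix -> rotation vector (axis * angle)."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle * axis / (2.0 * np.sin(angle))

def exp_rot(w):
    """Rotation vector -> rotation matrix (Rodrigues' formula)."""
    angle = np.linalg.norm(w)
    if np.isclose(angle, 0.0):
        return np.eye(3)
    a = w / angle
    K = np.array([[0.0, -a[2], a[1]], [a[2], 0.0, -a[0]], [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def EC(T1, T2):
    """Equal-weight error composition: mean translation, mean rotation vector."""
    T = np.eye(4)
    T[:3, 3] = 0.5 * (T1[:3, 3] + T2[:3, 3])
    T[:3, :3] = exp_rot(0.5 * (log_rot(T1[:3, :3]) + log_rot(T2[:3, :3])))
    return T
```

Replacing the fixed 0.5 weights with varying ones, or with a switch based on error magnitudes, would yield the alternative aggregation methods mentioned above.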
[Fig. 28 Results of the experiment in which the manipulator tracked a static object: error [m] versus time [step] for the SAC, EIH and Composition stages]

(Effector Control Process) implements the control subsystem, the EDP (Effector Driver Process) performs the computations of the virtual effector, whereas the VSP (Virtual Sensor Process) is responsible for the aggregation of sensoric data. In the case of visual perception, instead of the standard VSPs, MRROC++ can work in conjunction with DisCODe [16] (Distributed Component Oriented Data Processing), which facilitates the implementation of complex sensory (including vision) subsystems. In the experiment the virtual receptors were implemented as separate DisCODe processes.

The experiments were performed on a modified IRp-6 manipulator with one camera integrated with its end-effector and the other one mounted above the scene. The goal was to reach for a chessboard. For clarity we present the results of one of the experiments, in which the object was still (fig. 28). The experiment started when the object was out of sight of the EIH camera, thus in the first stage the ${}^{c,e}f_{1,1}$ transition function was utilized. In the 29th control step the object became visible to both cameras, thus ${}^{c,e}f_{1,3}$, responsible for the visual servo composition, was activated. In the 33rd step the end-effector entered the SAC camera field of view, making the object invisible to that camera, thus the system switched to the EIH-driven visual servoing (${}^{c,e}f_{1,2}$).

5 Conclusions

The paper presents a systematic approach to the design of robot control systems. It exhibits the following advantages:

– Modularisation rooted in the application domain (robotics), thus providing a clear and intelligible system structure. Subsequently, it facilitates the implementation of the system and the future reuse of its components,
– Division of the system into hardware and task dependent parts, also facilitating the future reuse of components,
– It is ontology independent, i.e. both the task and the representation of the equipment can be expressed using any abstract concepts, dependent on the requirements of the designer,
– Implementation independent system specification, enabling a formal discussion of the system structure and the way it operates,
– Systematic formal notation univocally identifying the location of each entity (variable) in the system,
– All system components rely on the same operation principle and thus their activities can be defined in the same way,
– Component activities are expressed in terms of transition functions supplemented by initial and terminal condition predicates, in conjunction forming a finite state automaton. As purely mathematical definitions, although exact, tend to be hard to read, the notation was supplemented by data flow diagrams illustrating the computations performed by transition functions in a more intelligible way. Transition functions and data flow diagrams complement each other in the process of controller specification,
– A general robot control system model prone to domain specific decomposition has been proposed,
– The approach discloses general patterns and points out the differences implicating parametrization, thus facilitating implementation.

Despite the fact that this paper presents an example of a robot visual servo utilising two cameras, hence a single robot system, the design procedure is well suited to multi-robot (agent) systems. The most prominent applications of the Embodied Agent-based approach to the specification of multi-robot systems include a dual-arm system able to solve a Rubik's cube puzzle [38], a system consisting of two mobile robots realising stigmergic cooperative box pushing [39] and a multi-robot reconfigurable fixture supporting the work pieces during milling and drilling operations performed by a CNC machine [36].

Acknowledgements This publication was financially supported by the grant no. 2011/01/N/ST7/03383 from the National Science Centre, Poland. The authors would like to thank Mateusz Boryń and Anna Szymanek for help with the system implementation and its experimental verification.

References

1. Arbib, M.: Handbook of Physiology – The Nervous System II. Motor Control, chap. Perceptual structures and distributed motor control, pp. 1449–1480. Wiley Online Library (1981)
2. Arkin, R.C.: Behavior-Based Robotics. MIT Press (1998)
3. Bauml, B., Wimbock, T., Hirzinger, G.: Kinematically optimal catching a flying ball with a hand-arm-system. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2592–2599 (2010)
4. Boryń, M., Kornuta, T.: A controller tuning method for Visual Servoing (in Polish). In: K. Tchoń, C. Zieliński (eds.) Proceedings of the 12th National Conference on Robotics – Advances in Robotics, Scientific Papers – Electronics, vol. 2, pp. 617–626. Publishing House of Warsaw University of Technology (2012)
5. Brooks, R.: Elephants don't play chess. Robotics and Autonomous Systems 6(1-2), 3–15 (1990)
6. Brooks, R.A.: A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation 2(1), 14–23 (1986)
7. Brooks, R.A.: Intelligence without reason. Artificial intelligence: critical concepts 3, 107–63 (1991)
8. Brugali, D., Broten, G.S., Cisternino, A., Colombo, D., Fritsch, J., Gerkey, B., Kraetzschmar, G., Vaughan, R., Utz, H.: Trends in robotic software frameworks. In: D. Brugali (ed.) Software Engineering for Experimental Robotics, pp. 259–266. Springer-Verlag (2007)
9. Chaumette, F., Hutchinson, S.: The Handbook of Robotics, chap. Visual Servoing and Visual Tracking, pp. 563–583. Springer (2008)
10. Dittes, B., Goerick, C.: A language for formal design of embedded intelligence research systems. Robotics and Autonomous Systems 59(3–4), 181–193 (2011)
11. Gat, E.: On three-layer architectures. In: D. Kortenkamp, R.P. Bonnasso, R. Murphy (eds.) Artificial Intelligence and Mobile Robots, pp. 195–210. AAAI Press, Cambridge, MA (1998)
12. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 2nd edn. Prentice Hall (2002)
13. Hill, J., Park, W.: Real time control of a robot with a mobile camera. In: Proceedings of the 9th ISIR, pp. 233–246 (1979)
14. Hutchinson, S.A., Hager, G.D., Corke, P.I.: A tutorial on visual servo control. IEEE Transactions on Robotics and Automation 12(5), 651–670 (1996)
15. Kaisler, S.: Software Paradigms. Wiley Interscience (2005)
16. Kornuta, T., Stefańczyk, M.: DisCODe: a component framework for sensory data processing (in Polish). Pomiary Automatyka Robotyka 16(7-8), 76–85 (2012)
17. Lepetit, V., Moreno-Noguer, F., Fua, P.: EPnP: An accurate O(n) solution to the PnP problem. International Journal of Computer Vision 81(2), 155–166 (2009)
18. Lyons, D.M.: Prerational intelligence, Studies in cognitive systems, vol. 2: Adaptive behavior and intelligent systems without symbols and logic, chap. A Schema-Theory Approach to Specifying and Analysing the Behavior of Robotic Systems, pp. 51–70. Kluwer Academic (2001)
19. Lyons, D.M., Arbib, M.A.: A formal model of computation for sensory-based robotics. IEEE Transactions on Robotics and Automation 5(3), 280–293 (1989)
20. Markiewicz, M., de Lucena, C.: Object oriented framework development. ACM Crossroads 7(4), 3–9 (2001)
21. Matarić, M.J., Michaud, F.: The Handbook of Robotics, chap. Behavior-Based Systems, pp. 891–909. Springer (2008)
22. Padgham, L., Winikoff, M.: Developing Intelligent Agent Systems: A Practical Guide. John Wiley & Sons (2004)
23. Parnas, D.: On the criteria to be used in decomposing systems into modules. Communications of the ACM 15(12), 1053–1058 (1972)
24. Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach. Prentice Hall, Upper Saddle River, N.J. (1995)
25. Shoham, Y.: Agent-oriented programming. Artificial Intelligence 60(1), 51–92 (1993)
26. Slonneger, K., Kurtz, B.L.: Formal Syntax and Semantics of Programming Languages: A Laboratory Based Approach. Addison-Wesley Publishing Company, Reading (1995)
27. Staniak, M., Zieliński, C.: Structures of visual servos. Robotics and Autonomous Systems 58(8), 940–954 (2010). DOI 10.1016/j.robot.2010.04.004
28. Suzuki, S., Abe, K.: Topological structural analysis of digitized binary images by border following. Computer Vision, Graphics, and Image Processing 30(1), 32–46 (1985)
29. Tang, F., Parker, L.: A Complete Methodology for Generating Multi-Robot Task Solutions using ASyMTRe-D and Market-Based Task Allocation. In: IEEE International Conference on Robotics and Automation, pp. 3351–3358. IEEE (2007)
30. Tonko, M., Schurmann, J., Schafer, K., Nagel, H.: Visually servoed gripping of a used car battery. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 1, pp. 49–54 (1997)
31. Zieliński, C.: A Quasi-Formal Approach to Structuring Multi-Robot System Controllers. In: Second International Workshop on Robot Motion and Control, RoMoCo'01, pp. 121–128 (2001)
32. Zieliński, C.: By How Much Should a General Purpose Programming Language be Extended to Become a Multi-Robot System Programming Language? Advanced Robotics 15(1), 71–96 (2001)
33. Zieliński, C.: A unified formal description of behavioural and deliberative robotic multi-agent systems. In: 7th International IFAC Symposium on Robot Control (SYROCO), vol. 7, pp. 479–486 (2003)
34. Zieliński, C.: Specification of behavioural embodied agents. In: K. Kozłowski (ed.) Fourth International Workshop on Robot Motion and Control (RoMoCo'04), pp. 79–84 (2004)
35. Zieliński, C.: Transition-function based approach to structuring robot control software. In: K. Kozłowski (ed.) Robot Motion and Control, Lecture Notes in Control and Information Sciences, vol. 335, pp. 265–286. Springer-Verlag (2006)
36. Zieliński, C., Kasprzak, W., Kornuta, T., Szynkiewicz, W., Trojanek, P., Walęcki, M., Winiarski, T., Zielińska, T.: Control and programming of a multi-robot-based reconfigurable fixture. Industrial Robot: An International Journal 40(4), 329–336 (2013). DOI 10.1108/01439911311320831
37. Zieliński, C., Kornuta, T., Boryń, M.: Specification of robotic systems on an example of visual servoing. In: 10th International IFAC Symposium on Robot Control (SYROCO), vol. 10, pp. 45–50 (2012)
38. Zieliński, C., Szynkiewicz, W., Winiarski, T., Staniak, M., Czajewski, W., Kornuta, T.: Rubik's cube as a benchmark validating MRROC++ as an implementation tool for service robot control systems. Industrial Robot: An International Journal 34(5), 368–375 (2007). DOI 10.1108/01439910710774377
39. Zieliński, C., Trojanek, P.: Stigmergic cooperation of autonomous robots. Journal of Mechanism and Machine Theory 44, 656–670 (2009)
40. Zieliński, C., Winiarski, T.: Motion generation in the MRROC++ robot programming framework. International Journal of Robotics Research 29(4), 386–413 (2010). DOI 10.1177/0278364909348761
