Krzysztof Cetnarowicz
A Perspective on Agent Systems: Paradigm, Formalism, Examples
Studies in Computational Intelligence
Volume 582
Series editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: kacprzyk@ibspan.waw.pl
Krzysztof Cetnarowicz
Institute of Computer Science
AGH University of Science and Technology
Krakow
Poland
Despite the research on the concept of an agent and agent systems conducted in many research centres worldwide, some of the problems have not found satisfying solutions. This even applies to the terms connected with the definition or the basic properties of an agent.
This monograph presents the concept of agent and agent systems, from a formal approach to examples of practical applications. Starting with a certain formal definition of an algorithm (using such terms as a set and a partial function), the goal of introducing the agent was defined as a certain paradigm of designing and programming computer systems, specifying its basic properties at the same time (Chap. 2).
In order to form the principles of construction of autonomous agents, a model of the agent was introduced (Chap. 3). Subsequent parts of the monograph (Chap. 5) include several examples of applications of the term agent. Descriptions of different examples of applications of agent systems in such fields as evolution systems, mobile robot systems and artificial intelligence systems are given.
In the author's opinion, the whole material presented in the monograph may constitute an outline of a methodology for the design and realization of agent systems based on the M-agent architecture, oriented towards different areas of applications.
I am most grateful to my colleagues, thanks to whom the following work
could be completed. I would like to express my deep sense of gratitude to
Prof. E. Nawarecki whose precious comments were generally most helpful, as well
as to Prof. S. Ambroszkiewicz who provided me with constructive assessment.
Contents

3 M-agent
3.1 Introduction
3.2 The Notion of an Agent and the Concept of Its Architecture
6 Conclusion
References
Author Biography
Chapter 1
Introduction to the Subject of an Agent in Computer Science
The development of computer hardware is one of the basic factors influencing the advancement of software and technology, and particularly the development of the systems that allow for more advanced computer applications: the operating systems. They are becoming more extended, and therefore enable the realization of greater and more complex algorithms prepared in the form of software packages. Processing complex programs creates the need for the introduction of new concepts and solutions in the field of operating systems.
One of the most significant solutions in software development was the possibility of running a program in the form of an independent process in the operating system area. The development of the MULTICS and then UNIX operating system concepts contributed to the introduction of the term process: an independent entity that comes into being and goes through subsequent stages of its life until it becomes dispensable and is eliminated. In the meantime, the operating system provides a characteristic space in which the subsequent cycles of life of the process can be realized. There are certain similarities between the process, particularly its cycle of life, and the existence of living creatures in the natural environment. We can imagine that operating systems constitute a certain environment where processes operate and, figuratively speaking, live in the same way as in the natural environment.
The introduction of communication between operating systems residing on different computers connected through the network resulted in further development of these operating systems. It allowed regular communication, cooperation and unification of particular operating systems, integrating them into a single one that connected the computers of a certain company, city and country through the network in order to operate on a global scale. Connected operating systems did not lose their property of being the environment for the activity of processes, but they created an even greater space on a global scale. This virtual space was called cyberspace and has provided a basis for further development of information systems.
To illustrate our considerations, the application of an agent system was presented in order to balance the distribution of resources in a certain environment.
The intention of the author was to present and specify formal aspects connected with the term autonomous agent as well as the agent system, providing a starting point for the design and realization of agent systems. These considerations were made complete with a whole range of various applications that expose interesting possibilities resulting from the agent approach, and technical aspects connected with the formation of systems of this category.
Certain passages are repeated in the work with the aim of associating particular
problems from one part of the book with those outlined in different chapters on which
they depend. Therefore it is possible to read certain chapters independently of each
other, which mainly refers to Chaps. 2 and 3.
Chapter 2
Agent Versus Decomposition of an Algorithm
Abstract This chapter looks at the notions of the partial function and the Cartesian product. The presentation of the problem opens with a formal approach to the definition of the agent's properties. This part explores the reasons for the introduction of the concept of the agent and gives an interpretation of such definitions as the autonomy of the agent or its capability to observe the environment.
In this chapter we will try to illustrate the concept of an autonomous agent and, especially, the reasons why it was necessary to establish this notion: briefly speaking, why the concept of the autonomous agent was invented and what should be understood by autonomous. Although the concept of the autonomous agent has existed in computer science for some time, it has not been clearly and precisely defined [154, 161, 174]. In particular, there is no formal, or at least more precise, definition of the term agent and its basic characteristic features that could differentiate the agent from the object. The lack of this definition renders it difficult, and often impossible, to carry out research on agent systems, not only in the area of formal research but also in practical applications.
There have been some attempts to solve these problems [81, 82, 98, 184], which concentrated on making reference to numerous examples illustrating notions introduced through analogy, or with reference to the analysis of the meanings of the notions, e.g., the term autonomous, used in the philosophical basis for the theory of evolution [122, 123]. Such an approach may intuitively have brought these notions closer, but it did not contribute to a more precise definition. Finally, the effort did not lead to satisfying results.
It may be accepted that the considerations we present in this monograph, which are an attempt to analyse and find solutions to those problems, are a development of the previous suggestions or are inspired by them.
Below, we will present an attempt to define the agent based on an algorithm model,
well-known from the literature [149, 180].
This approach is based on the concept of autonomy of the agent, which is
considered in comparison with the concept of the object. Particularly, problems
© Springer International Publishing Switzerland 2015
K. Cetnarowicz, A Perspective on Agent Systems, Studies in Computational Intelligence 582, DOI 10.1007/978-3-319-13197-9_2
concerning the interaction between agents (similar to those between objects) are
considered here and the solution to these problems will be suggested with the use of
a communication process and the operation of observation [31, 34, 60, 133].
The basic initial concept for further considerations is the idea that a particular problem (task) may be solved not by one algorithm but by a group of cooperating algorithms. At the beginning, the problem of the cooperation of two (or more) algorithms appears. The model, and then an attempt to define an agent, will be illustrated in the following steps:
- Accepting as the starting point a general definition of an algorithm, used for solving a specified task, we will consider the possibility of applying (in the simplest case two) mutually cooperating algorithms in the realization of this task.
- Further, we will consider the problem of decomposition of an algorithm which is too complex (sophisticated) to be easily designed and realized; specifically, we will analyze how that kind of decomposition can be realized with the use of a few cooperating simpler algorithms. The above-mentioned considerations on algorithm decomposition will make it possible to define the relationships between the cooperating algorithms.
- The analysis of these relationships between cooperating algorithms leads to a more formal definition of the notion of autonomy of a particular algorithm towards the other algorithms it cooperates with, as well as to determining what consequences arise from the lack of this autonomy.
Summing up, the above considerations lead us to the following conclusions:
such that

u_{i+1} = F(u_i),  u_k being the final state.    (2.3)
The above sequence, being the realization of the algorithm Alg, is finite if the final state u_k exists, which is the case when the state u_k does not belong to the domain of the function F (but it belongs to the range of the function F). So the final states of the algorithm Alg are those elements of the set U which do not belong to the domain of the function F. Further, we will take into account only those algorithms whose realizations constitute the finite sequences (Formula 2.2) mentioned above.
Let us accept that the algorithm is used as a solution to a certain problem. With the use of the above denotations, a given problem is represented by u_0, whereas the solution to the problem is represented by u_k, and the sequence Exec(Alg, u_0) = (u_0, u_1, ..., u_i, u_{i+1}, ..., u_k) is the schema or the method of solving the given problem.
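The definitions above lend themselves to a direct sketch in code. The following fragment is an illustration added here, not code from the monograph (names such as `exec_alg` are assumptions): the partial function F is modelled as a Python dict, and the realization Exec(Alg, u0) is computed by iterating until the current state leaves the domain of F.

```python
# A minimal sketch of an algorithm Alg = (U, F): F is a partial function
# on the set of states U, represented as a dict whose keys form dom(F).

def exec_alg(F, u0):
    """Return the realization Exec(Alg, u0) = (u0, u1, ..., uk),
    iterating u_{i+1} = F(u_i) until the state leaves the domain of F."""
    trace = [u0]
    while trace[-1] in F:          # u_k is final when it is not in dom(F)
        trace.append(F[trace[-1]])
    return trace

# The four-state example used later in this chapter: U = {a, b, c, d},
# F(a) = b, F(b) = c, F(c) = d; the state d is final (not in dom(F)).
F = {"a": "b", "b": "c", "c": "d"}
print(exec_alg(F, "a"))            # ['a', 'b', 'c', 'd']
```

Here the problem is represented by the initial state "a" and its solution by the final state "d", exactly as in the schema Exec(Alg, u_0) above.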
In practical applications, when we form an algorithm which is too complicated, it is interesting to decompose it into a few simpler algorithms. The algorithm Alg will be referred to as a complex algorithm, and the algorithms into which it was decomposed (distributed) as component algorithms Alg1, ..., Algn.
For this reason, it is necessary to define the decomposition process of a certain algorithm. We may say that a certain algorithm Alg was decomposed into component algorithms Alg1, Alg2, ..., Algn when, by using the component algorithms, we receive the same result as with the use of the complex algorithm Alg. What is more, we expect the component algorithms Alg1, Alg2, ..., Algn to be independent to such an extent that they could be created (designed, programmed) separately and in parallel, at the same time (which should accelerate the process of forming complex algorithms). It is possible in certain cases.
In further considerations we will often limit ourselves to two component algorithms Alga, Algb (sometimes referred to as Alg1, Alg2), which does not limit the generality of the considerations. The problem of decomposition may be considered from different perspectives. Here, we will limit ourselves to two approaches: decomposition inspired by the division of the set of states into a few subsets (e.g. two), and decomposition inspired by the concept of the Cartesian product [180].
F = F1 ∪ F2,  F1 ∩ F2 = ∅,  and  F1 : U1 → U1,  F2 : U2 → U2,    (2.5)

F1 : U1 → U,  F2 : U2 → U,    (2.6)

where the problem (u_0) is denoted by u_1^0, and the result by u_1^{k_1}.
2.3 Decomposition Inspired by the Division of a Set
In order to define the elements u of a certain set U (u ∈ U), we may use characteristic features of a given element u. It is connected with the fact that in practice it is easier to describe an element of a set with the use of a given set of features which take values from defined sets (e.g., the set of natural numbers, real numbers, etc.). These features may now be considered as variables taking values from the defined sets (Fig. 2.2).
Then, we assume that the elements of the set U are associated with the elements of a given set X that represent the sets of features characteristic of these elements x (x ∈ X), so they have the form of n-tuples:

x^i = (x_1^i, x_2^i, ..., x_n^i),

where u^i ∈ U,
x^i ∈ X = X1 × X2 × ... × Xn.

Each element x_j^i defines the characteristic feature (or attribute) j of the element u^i (or of the corresponding element x^i). In consequence, instead of the set U we may use the set X.

Fig. 2.2 Schema of the Cartesian product application in the form of features of the elements of the set U ((x_1^i, x_2^i, ..., x_n^i): the set of features describing the element u^i)
f(x_1^k, x_2^k, ..., x_m^k) = (x_1^{k+1}, x_2^{k+1}, ..., x_m^{k+1}),  corresponding to  F(u^k) = u^{k+1}.    (2.11)
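The correspondence between F acting on states and f acting on feature tuples can be illustrated with a small runnable sketch. This is an illustration added here (the particular encoding is the one used in Example 1 later in this chapter):

```python
# Feature (Cartesian-product) representation: each state u in U corresponds
# to a tuple of feature values x in X1 x X2, and the transition function f
# on tuples mirrors F on U, i.e. f(encode(u)) = encode(F(u)).

F = {"a": "b", "b": "c", "c": "d"}               # F on U = {a, b, c, d}
encode = {"a": (0, 0), "b": (0, 1), "c": (1, 0), "d": (1, 1)}

# Build f on X = X1 x X2 directly from F and the encoding.
f = {encode[u]: encode[v] for u, v in F.items()}

# Check the correspondence of Formula 2.11 for every transition of F.
for u, v in F.items():
    assert f[encode[u]] == encode[v]
print(f[(0, 0)])   # (0, 1), i.e. the image of the state a = (0, 0)
```

The dict `f` is again a partial function: the tuple (1, 1), encoding the final state d, does not appear among its keys.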
In further considerations, without loss of generality, we may limit the decomposition of the set of states to only two sets X1 and X2. The constraint to only two sets does not limit further considerations, and all of the significant problems may still be successfully analyzed. The set X is the Cartesian product of two sets: X = X1 × X2. This limitation may be treated as the result of grouping the elements of the Cartesian product: X = X1 × X2, where X1 = X1 × X2 × ... × Xi, X2 = X(i+1) × X(i+2) × ... × Xm (Fig. 2.3).
The projection of the set X onto the sets X1 and X2 may be introduced by the mappings Proj1 : X → X1 and Proj2 : X → X2.
2.4 Decomposition Inspired by the Concept of the Cartesian Product

The decomposition may now come down to the fact that it is necessary to form two component algorithms Alg1 and Alg2, as in Fig. 2.4. For this reason, it is necessary to define the sets of states of these algorithms as well as their (partial) transition functions.
The following problems and questions appear:

- The component algorithms Alg1 and Alg2 resulting from the decomposition of the algorithm Alg should have defined sets of states X1 and X2, specified with the use of the set of states of the algorithm Alg. Decomposition of the set of states X based on the concept of the Cartesian product (Fig. 2.3) may be the starting point of realizing such a decomposition.
- Another problem is that the algorithms should have the partial transition functions f1 and f2, which should be formed on the basis of the transition function f, so the function f should be decomposed into two functions f1 and f2. That will allow us to create the algorithms Alg1 = (X1, f1) and Alg2 = (X2, f2) with the appropriate transition functions (Fig. 2.4).
- It should also be analyzed whether the algorithms Alg1 and Alg2 are related to each other through mutual interactions. In particular, the mutual relationship of the algorithms should be analyzed. We need to define what the notion autonomous means (or should mean) and how we should really understand it; in other words, how the notion autonomous should be defined to make it clear that we deal with an autonomous algorithm.
- Further, it should be considered whether it is possible (and to what extent) to make the algorithms Alg1 and Alg2 independent so that they can be autonomous algorithms.
- The question arises as to whether the decomposition of the algorithm Alg into the algorithms Alg1 and Alg2 ensures that they may solve the problems that are solved with the use of the algorithm Alg = (X, f). The notion of equivalence of algorithms, which is considered in Sect. 2.5.1, seems to be useful here.
In further considerations, we will try to answer these questions and solve the
problems, and particularly discuss the notion of autonomy, define it more precisely
and show that by using appropriate methods it is possible to realize decomposition
of a given algorithm into component algorithms that are considered as autonomous.
Fig. 2.5 Schema of the relationships between the algorithms; the case when the algorithm Alg1 is not autonomous towards the algorithm Alg2 (the interoperating relationship): (a) for calculating the transition function f1 both states are indispensable, the state of the algorithm Alg1 as well as the state of the algorithm Alg2; (b) the calculation of the transition function f1 has an influence on the change of the states of the algorithms Alg1 and Alg2
The problem of autonomy should be considered taking account of the cooperation of the algorithms, and the forms of the functions f1 and f2 are crucial here. The following cases of the autonomy of the algorithm Alg1 towards the algorithm Alg2 may be considered:
The algorithm Alg1 is autonomous towards the algorithm Alg2 if for the transition function f1 of the algorithm Alg1 the following relationships occur:

∀ (x1, x2) ∈ Df1, (x1, x2′) ∈ Df1 : f1(x1, x2) = f1(x1, x2′)    (2.13)

and

∀ (x1, x2) ∈ Df1 : (Proj2 ∘ f1)(x1, x2) = x2,    (2.14)

f1 : X1 → X1.    (2.15)
It means that only the state of the algorithm Alg1 is needed for the calculation of the transition function f1, whereas the state of the algorithm Alg2 is not necessary. Moreover, the calculation of the transition function f1 influences only the change of the state of the algorithm Alg1 and does not have any influence on the state of the algorithm Alg2. The schema of that relationship is shown in Fig. 2.6.
The algorithm Alg1 is not autonomous towards the algorithm Alg2, with the inter-information relationship, if for the transition function f1 of the algorithm Alg1 the following relationships occur:

∃ (x1, x2) ∈ Df1, (x1, x2′) ∈ Df1 : (x2 ≠ x2′ ∧ f1(x1, x2) ≠ f1(x1, x2′))    (2.16)
Fig. 2.6 Schema of the relationships between the algorithms; the case when the algorithm Alg1 is autonomous towards the algorithm Alg2: (a) for calculating the transition function f1 only the state of the algorithm Alg1 is necessary; (b) the calculation of the transition function f1 influences only the change of the state of the algorithm Alg1, and does not have any influence on the state of the algorithm Alg2
and

∀ (x1, x2) ∈ Df1 : (Proj2 ∘ f1)(x1, x2) = x2,    (2.17)

f1 : X1 × X2 → X1.    (2.18)
It means that for the calculation of the transition function f1 both states (the state of the algorithm Alg1 as well as the state of the algorithm Alg2) are necessary; however, the calculation of the transition function f1 influences only the change of the state of the algorithm Alg1 and does not have any influence on the state of the algorithm Alg2. The schema of this relationship is shown in Fig. 2.7.
The algorithm Alg1 is not autonomous towards the algorithm Alg2, with the interaction relationship, if for the transition function f1 of the algorithm Alg1 the following relationships occur:

∀ (x1, x2) ∈ Df1, (x1, x2′) ∈ Df1 : f1(x1, x2) = f1(x1, x2′)    (2.19)

and

∃ (x1, x2) ∈ Df1 : (Proj2 ∘ f1)(x1, x2) ≠ x2,    (2.20)

f1 : X1 → X1 × X2.    (2.21)
It means that for the calculation of the transition function f1 only the state of the
algorithm Alg1 is necessary, however, the calculation of the transition function
Fig. 2.7 Schema of the relationships between the algorithms; the case when the algorithm Alg1 is not autonomous towards the algorithm Alg2 with the inter-information relationship: (a) for calculating the transition function f1 both states (the state of the algorithm Alg1 as well as the state of the algorithm Alg2) are necessary; (b) the calculation of the transition function f1 influences only the change of the state of the algorithm Alg1, and does not have any influence on the state of the algorithm Alg2
Fig. 2.8 Schema of the relationships between the algorithms; the case when the algorithm Alg1 is not autonomous towards the algorithm Alg2 with the interaction relationship: (a) the sources of data indispensable to calculate the transition function f1; (b) the influence of the transition function f1 on the change of the state of the algorithms
f1 influences both the change of the state of the algorithm Alg1 and the change of the state of the algorithm Alg2. The schema of that relationship is shown in Fig. 2.8.
The algorithm Alg1 is not autonomous towards the algorithm Alg2, with the interoperating relationship (or completely non-autonomous), if for the transition function f1 of the algorithm Alg1 the following relationships occur:

∃ (x1, x2) ∈ Df1, (x1, x2′) ∈ Df1 : (x2 ≠ x2′ ∧ f1(x1, x2) ≠ f1(x1, x2′))    (2.22)

and

∃ (x1, x2) ∈ Df1 : (Proj2 ∘ f1)(x1, x2) ≠ x2,    (2.23)

f1 : X1 × X2 → X1 × X2.    (2.24)
It means that for the calculation of the transition function f1 both states are necessary (the state of the algorithm Alg1 as well as the state of the algorithm Alg2); however, the calculation of the transition function f1 influences both the change of the state of the algorithm Alg1 and the change of the state of the algorithm Alg2. The interoperating relationship exists if both the inter-information relationship and the interaction relationship take place at the same time.
The algorithm Alg1 is not autonomous towards the algorithm Alg2 if there is an inter-information, interaction or interoperating relationship.
The algorithms Alg1 and Alg2 are mutually autonomous if the algorithm Alg1
is autonomous towards the algorithm Alg2 and the algorithm Alg2 is autonomous
towards the algorithm Alg1 .
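When a transition function f1 on pairs (x1, x2) is given as a finite table, the four relationships defined above can be checked mechanically. The sketch below is an illustration added here (the helper names are assumptions, not the book's); following the definitions, dependence is tested on the x1-component of the result and influence on the x2-component.

```python
# Classify the relationship of Alg1 towards Alg2 from a finite transition
# function f1, given as a dict mapping pairs (x1, x2) to pairs (x1', x2').

def needs_x2(f1):
    """Inter-information test: does the new x1 depend on x2 for some x1?"""
    return any(x2 != y2 and f1[(x1, x2)][0] != f1[(x1, y2)][0]
               for (x1, x2) in f1 for (y1, y2) in f1 if y1 == x1)

def changes_x2(f1):
    """Interaction test: does f1 modify the second component somewhere?"""
    return any(f1[(x1, x2)][1] != x2 for (x1, x2) in f1)

def relationship(f1):
    n, c = needs_x2(f1), changes_x2(f1)
    if n and c:
        return "interoperating"       # needs x2 and changes x2
    if n:
        return "inter-information"    # needs x2, leaves it unchanged
    if c:
        return "interaction"          # ignores x2 but changes it
    return "autonomous"               # neither needs nor changes x2

# f1(x1, x2) = (1 - x1, x2): the new x1 ignores x2, and x2 is preserved.
print(relationship({(0, 0): (1, 0), (0, 1): (1, 1),
                    (1, 0): (0, 0), (1, 1): (0, 1)}))   # autonomous
```

Mutual autonomy of Alg1 and Alg2 would then amount to this classifier returning "autonomous" for both transition functions.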
Example 1
Let us consider the following example of an algorithm:
U = {a, b, c, d},  F(a) = b,  F(b) = c,  F(c) = d    (2.25)
Let us apply decomposition, using the concept of the Cartesian product applied
to the set U:
X1 = {0, 1},  X2 = {0, 1},
X = X1 × X2 = {(0, 0), (0, 1), (1, 0), (1, 1)}    (2.26)
We may define the following correspondence between the elements of the set U
and the set X:
a corresponds with (0, 0),  b corresponds with (0, 1),
c corresponds with (1, 0),  d corresponds with (1, 1)    (2.27)
f ((0, 0)) = (0, 1), f ((0, 1)) = (1, 0), f ((1, 0)) = (1, 1) (2.28)
Alg1 = (X, f1),  f1(0, 1) = (1, 0)    (2.29)

Alg2 = (X, f2),  f2(0, 0) = (0, 1),  f2(1, 0) = (1, 1)    (2.30)

(which allows us to state that the algorithm Alg2 is autonomous towards the algorithm Alg1).
However, Alg1 is non-autonomous (interaction relationship) because it changes
the state x2 of the algorithm Alg2 .
The function f1 or f2 is sometimes informally denoted in the form f2(∗, 0) = (∗, 1), where ∗ denotes an arbitrary value (in the example, 0 or 1).
However, if we change the functions so that the component algorithms are defined as follows:

Alg1 = (X, f1),  f1(0, 0) = (0, 1),  f1(1, 0) = (1, 1),
Alg2 = (X, f2),  f2(0, 1) = (1, 0),    (2.32)
then neither the algorithm Alg1 is autonomous towards Alg2, nor is the algorithm Alg2 autonomous towards Alg1. As we can see from the example, gaining autonomy of one component algorithm towards another component algorithm depends on the appropriate decomposition of the initial complex algorithm.
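The claims of Example 1 can be verified directly. The following standalone check is an illustration added here (not code from the book): with f1(0, 1) = (1, 0) and f2(0, 0) = (0, 1), f2(1, 0) = (1, 1), the algorithm Alg2 is autonomous towards Alg1, while Alg1 changes Alg2's state x2 (the interaction relationship).

```python
# The component functions of Example 1, as dicts on pairs (x1, x2).
f1 = {(0, 1): (1, 0)}
f2 = {(0, 0): (0, 1), (1, 0): (1, 1)}

# Alg2 autonomous towards Alg1: f2 never changes x1 ...
assert all(f2[(x1, x2)][0] == x1 for (x1, x2) in f2)
# ... and the new x2 it produces does not depend on x1:
assert f2[(0, 0)][1] == f2[(1, 0)][1]

# Alg1 not autonomous (interaction): it changes x2 for some input pair.
assert any(f1[(x1, x2)][1] != x2 for (x1, x2) in f1)
print("Example 1 claims verified")
```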
Let us consider the decomposition of the algorithm Alg (Alg = (U, F)), applying the Cartesian product to the set of states U. Such a decomposition method seems to be useful in practical applications; nevertheless, it results in component algorithms which may often be mutually dependent and are therefore more difficult to form in parallel (independently). As was mentioned above, the set X was associated with the set of states U, being the Cartesian product X = X1 × X2 (in the general case X = X1 × X2 × ... × Xm, see Sect. 2.4). The function f, according to the set-theoretic definition of the notion of a function, may be considered as a set of pairs (Sect. 2.4.1):
(x, x′) such that f(x) = x′,
where x = (x1, x2) ∈ X,
x′ = (x1′, x2′) ∈ X,
X = X1 × X2.
As a result, we may divide the function f into two disjoint subsets (generally, a few disjoint subsets), each of which represents a certain component function being the result of decomposition (division) of the initial function f (Fig. 2.4).
The functions f1 and f2 map the set X into X (f1 : X → X, f2 : X → X), and the function f is decomposed into two functions f1 and f2 such that f = f1 ∪ f2, f1 ∩ f2 = ∅. Because the function F in the algorithm Alg = (U, F) is a partial function, the functions f and f1 as well as f2 are also partial functions.
As a result, we receive two algorithms Alg1 = (X, f1) and Alg2 = (X, f2), for which x1 and x2 may be treated as variables taking values from the sets X1 and X2 that realize the evolution of the states of the algorithms Alg1 and Alg2. These two algorithms may be treated as the decomposition of the algorithm Alg = (U, F) (or of the corresponding algorithm Alg = (X, f)) realized with the use of the Cartesian product applied to the set U.
Of course, from the practical point of view, in order to let the algorithms Alg1 and Alg2 be created independently (designed, programmed), these algorithms should be mutually autonomous. Their transition functions would then have the following form: f1 : X1 → X1 and f2 : X2 → X2.
In practice, these functions have the form f1 : X → X and f2 : X → X for X = X1 × X2, which makes the component algorithms Alg1 = (X, f1) and Alg2 = (X, f2) non-autonomous.
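The set-theoretic view above can be sketched in a few lines. This is an illustration added here (the split criterion, by the first component, is only an assumption for the sake of the example): a partial function f on X is a dict, and its decomposition produces two disjoint dicts f1 and f2 whose union reproduces f.

```python
# f as a set of pairs (x, x') with f(x) = x', here a dict on X = X1 x X2.
f = {(0, 0): (0, 1), (0, 1): (1, 0), (1, 0): (1, 1)}

# Split f into two disjoint component functions (here: by the value of x1).
f1 = {x: y for x, y in f.items() if x[0] == 0}
f2 = {x: y for x, y in f.items() if x[0] == 1}

assert not set(f1) & set(f2)     # f1 and f2 have disjoint domains
assert {**f1, **f2} == f         # f = f1 union f2
print(sorted(f1), sorted(f2))
```

Both f1 and f2 are again partial functions on X, which matches the observation above that such component algorithms are, in general, non-autonomous.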
We may improve this relationship of the algorithms so that the algorithms Alg1 and Alg2 become mutually autonomous. It can be realized in two steps, which will be described further in later chapters.
The first step is to extract a certain part of the variables as global variables, which involves the introduction of an environment described by these global variables, representing the state of that environment.
The inspiration for the introduction of the environment concept is derived from the observation of the actions of mobile robots. Their environment plays a crucial role in the robots' actions and has an influence on the design of their algorithms of action. In a sense, the concept of the environment has accompanied the development of computer science since operating systems were introduced: it was operating systems that constituted, and still constitute, the environment for the realization of the programs (algorithms) of the user. The introduction of the environment concept is realized as follows:
- We choose certain variables and consider them as the parameters describing the state of the environment. These variables (the set of these variables is denoted by X0) are available to each partial algorithm (see Alg1 and Alg2 in Fig. 2.9).
- Other variables are grouped in such a way that each group (of variables) may be considered as the variables describing the state (also referred to as the internal state) of an individual partial algorithm (X1, X2 in Fig. 2.9).
We may now consider the component algorithms Alg1 = (X0 × X1 × X2, f1) and Alg2 = (X0 × X1 × X2, f2) cooperating (as described in the next subchapter) in the environment X0. The introduction of the environment, the state of which is available to both algorithms and can be changed by these algorithms, may create the impression that the algorithms to some extent become mutually dependent and lose autonomy towards each other. Nevertheless, new possibilities appear. Such dependence through the environment allows cooperation between the algorithms and the replacement of one algorithm by another, equivalent one.
The concept of the environment used for the realization of cooperation between
the algorithms may be presented with a scenario of cooperation of algorithms, which
is shown in Fig. 2.10.
Let us accept that a problem is a pair (x0^0, x0^k), where x0^0 is the initial state, being the task of the problem, and x0^k is the final state, representing a solution to the problem. It is necessary to emphasize that x0^0 and x0^k are states of the environment and thus are not states of either algorithm.
Fig. 2.10 The concept of the environment and the model of cooperation between the algorithms
Alg1 and Alg2 through the environment X0
The algorithm solves this problem if its application leads from the initial to the
final state. This problem may be solved by two algorithms cooperating through the
environment in the following way (Fig. 2.10):
- Let us accept that the problem to be solved is modelled with the use of the state of the environment X0, and that the solution to the problem has to be achieved by two cooperating algorithms Alg1 = (X0 × X1, f1) and Alg2 = (X0 × X2, f2).
- It is necessary to adjust the problem in such a way that it can be solved by the algorithms that cooperate with each other, by appropriately encoding the problem (task) in the form of the chosen state of the environment x0^0 ∈ X0.
- Further, it is necessary to prepare the cooperating algorithms for action by properly choosing the initial internal states of the algorithms, which may be denoted by x1^0 for the algorithm Alg1 and x2^0 for the algorithm Alg2. These states should be chosen in such a way that either the pair (x0^0, x1^0) belongs to the domain of the function f1, and then the algorithm Alg1 starts the action, or the pair (x0^0, x2^0) belongs to the domain of the function f2, which makes the algorithm Alg2 perform the first move.
Cooperation in problem solving involves the transformation of the state of the environment from the initial into the final state, in the appropriate order, made either by the algorithm Alg1 or by the algorithm Alg2. For example, in Fig. 2.10 we may observe a situation in which, during the cooperation of the algorithms, we have the state of the environment x0^2, and the pair (x0^2, x1^1) belongs to the domain of the function f1, which makes the algorithm Alg1 undertake an action and transform the environment into the state x0^3. Then the pair (x0^3, x2^1) belongs to the domain of the function f2, which makes the algorithm Alg2 transform the state of the environment.
Synchronization of the cooperation between the algorithms takes place as a result of the appropriate states assumed by the environment and the algorithms. In practice, cooperating algorithms may be used not only for describing different kinds of cooperation between algorithms, but also for decomposing more complex algorithms into autonomous component algorithms, which will be presented further in later chapters.
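The cooperation scenario described above can be simulated directly. The sketch below is an illustration added here (the names and the concrete toy task are assumptions, not the book's): two algorithms Alg1 = (X0 × X1, f1) and Alg2 = (X0 × X2, f2) transform the shared environment state x0 in turns, each acting whenever the pair (x0, xi) lies in the domain of its transition function.

```python
# Cooperation of two component algorithms through an environment X0:
# each fi maps a pair (x0, xi) to a new pair (x0', xi').

def cooperate(f1, f2, x0, x1, x2):
    """Let the component algorithms act in turn until neither can act;
    return the trace of environment states (x0^0, x0^1, ..., x0^k)."""
    trace = [x0]
    while True:
        if (x0, x1) in f1:            # Alg1 can act on the current state
            x0, x1 = f1[(x0, x1)]
        elif (x0, x2) in f2:          # otherwise Alg2 takes its turn
            x0, x2 = f2[(x0, x2)]
        else:
            return trace              # final environment state x0^k reached
        trace.append(x0)

# Toy task: count the environment up from 0 to 4. Alg1 handles even x0,
# Alg2 handles odd x0, so the solution requires both algorithms.
f1 = {(0, "s1"): (1, "s1"), (2, "s1"): (3, "s1")}
f2 = {(1, "s2"): (2, "s2"), (3, "s2"): (4, "s2")}
print(cooperate(f1, f2, 0, "s1", "s2"))   # [0, 1, 2, 3, 4]
```

The synchronization described in the text is visible here: which algorithm acts next is determined solely by whether the current (x0, xi) pair belongs to the domain of f1 or of f2.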
such internal states of algorithms as x10 and x20 that the realization of algorithms
Exec(Alg1 , (x00 , x10 )) and Exec(Alg2 , (x00 , x20 )) exists in the following form:
Exec(Alg1 , (x00 , x10 )) = ((x00 , x10 ), (x01 , x11 ), . . . , (x0k1 , x1k1 )))
Exec(Alg2 , (x00 , x20 )) = ((x00 , x20 ), (x 10 , x21 ), . . . , (x 0k2 , x2k2 )))
If additionally it occurs that x0k1 = x 0k2 , then these algorithms are equivalent in
a given problem solving (internal states of the algorithms x1k1 and x2k2 are of no
importance here, and only the final state of the environment is essential). If a solu-
tion to a given problem x00 through the realization of the first algorithm Alg1 is
equivalent to the solution to this problem through the realization of the algorithm
Alg2 , then we may replace the algorithm Alg1 with the equivalent algorithm Alg2
and choose the one which is, e.g., faster or makes better use of the resources.
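This equivalence test, which compares only the final environment states and ignores the final internal states, can be sketched as follows (the dict encoding of partial functions and the toy values are assumptions of this sketch):

```python
# An algorithm is a dict-based partial transition function over pairs
# (environment state, internal state); we run it to termination and
# keep only the final environment state.

def final_env(f, x0, xi, max_steps=100):
    """Return the final environment state; the final internal
    state is of no importance for equivalence."""
    for _ in range(max_steps):
        if (x0, xi) not in f:
            break
        x0, xi = f[(x0, xi)]
    return x0

# Two algorithms with different internal workings (toy values):
alg1 = {(0, "i"): (1, "j"), (1, "j"): (2, "k")}   # two steps
alg2 = {(0, "p"): (2, "q")}                        # one step

# Equivalent for the problem x0 = 0: both leave the environment in
# state 2, so the faster alg2 could replace alg1.
assert final_env(alg1, 0, "i") == final_env(alg2, 0, "p") == 2
```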
The algorithms Alg1 = (X, f1 ) and Alg2 = (X, f2 ) cooperate through the
environment, where X = X0 × X1 × X2 . It means that certain changes in the environ-
ment, those necessary for problem solving, are realized by the former algorithm,
and the other ones by the latter (Fig. 2.10), but simultaneously the algorithm Alg1
affects the changes of the state x2 of the algorithm Alg2 and vice versa. It may
be noted that the algorithm Alg1 (or Alg2 ) is then non-autonomous, so it is neces-
sary to broaden the definition of autonomy to algorithms cooperating through the
environment.
The concept of the environment and of the cooperation of algorithms through the envi-
ronment implies, under the definition of autonomy presented above, that the cooperating
algorithms are not autonomous but dependent through the environment. However,
apart from the states of the environment there are internal states of individual algo-
rithms, and therefore we may broaden the definitions of autonomy introduced earlier.
On the other hand, the concept of the environment allows consideration of the phe-
nomenon of communication through the environment. For example, the algorithm
Alg1 may make such changes in the environment that can be read by the algorithm
Alg2 as changes connected with certain established information, which is a kind of
sending a message by the algorithm Alg1 to the algorithm Alg2 . This way of commu-
nication allows for more complex forms of cooperation such as negotiations, planning
and forming groups of cooperating algorithms, or realization of cooperation as such.
The notion of autonomy has been defined and discussed in Sect. 2.4.2, but after the
introduction of the environment concept and the possibility of cooperation between
the algorithms based on these environments it needs certain modification.
24 2 Agent Versus Decomposition of an Algorithm
The algorithm Alg1 is autonomous towards the algorithm Alg2 if the function f1
has the following form:
f1 : X0 × X1 → X0 × X1 (2.34)
It means that for the calculation of the transition function f1 only the state of the
algorithm Alg1 and the state of the environment X0 are necessary and the state of
the algorithm Alg2 is not necessary at all. However, the calculation of the transition
function f1 influences only the change of state of the algorithm Alg1 and the state
of the environment X0 , and does not influence the state of the algorithm Alg2 .
The algorithm Alg1 is not autonomous with inter-information relationship towards
the algorithm Alg2 if the function f1 has the following form:
f1 : X0 × X1 × X2 → X0 × X1 (2.35)
It means that for the calculation of the transition function f1 the state of the algo-
rithm Alg1 , the state of the algorithm Alg2 and the state of the environment X0 are
necessary. However, the calculation of the transition function f1 influences only
the change of state of the algorithm Alg1 and the state of the environment X0 and
does not influence the state of the algorithm Alg2 .
The algorithm Alg1 is not autonomous with inter-action relationship towards the
algorithm Alg2 if the function f1 has the following form:
f1 : X0 × X1 → X0 × X1 × X2 (2.36)
It means that for the calculation of the transition function f1 only the state of the
algorithm Alg1 and the state of the environment X0 are essential and the state of the
algorithm Alg2 is not necessary at all. However, the calculation of the transition
function f1 influences the change of state of the algorithm Alg1 , the state of the
algorithm Alg2 and the state of the environment X0 .
The algorithm Alg1 is not autonomous with inter-operation relationship towards
the algorithm Alg2 (or completely non-autonomous) if the function f1 has the
following form:
f1 : X0 × X1 × X2 → X0 × X1 × X2 (2.37)
It means that for the calculation of the transition function f1 the state of the algo-
rithm Alg1 , the state of the algorithm Alg2 and the state of the environment X0
are essential. However, the calculation of the transition function f1 influences the
2.5 Decomposition with the Use of the Concept of the Cartesian . . . 25
change of state of the algorithm Alg1 , the state of the algorithm Alg2 and the state
of the environment X0 .
Summing up, it can be informally said that a given algorithm (e.g., Alg1 ) is
autonomous towards the other algorithm (e.g., Alg2 ) when, in order to determine its
next state, apart from the information about its own state, the only thing it needs is
the information about the state of the environment, and the information about the
state of the other algorithm is not necessary.
However, the algorithm Alg1 is not autonomous towards the algorithm Alg2 if
there is an inter-information relationship, inter-action relationship or inter-operation
relationship.
We may say that the algorithms Alg1 and Alg2 , cooperating through the environ-
ment X0 , are mutually autonomous if the algorithm Alg1 is autonomous towards the
algorithm Alg2 and the algorithm Alg2 is autonomous towards the algorithm Alg1 . If
we deal with a large number of cooperating algorithms, then the property of auton-
omy may be extended to the whole group. A given algorithm is autonomous towards
the whole group of algorithms if it is autonomous towards each algorithm in the
group.
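The four signatures (2.34)-(2.37) can be summarized in a small sketch (the function and label names are my own): the relationship of Alg1 towards Alg2 is fully determined by whether f1 reads and/or writes the state X2 of the other algorithm.

```python
# Classify the autonomy relationship of Alg1 towards Alg2 from the
# read/write dependence of f1 on the other algorithm's state X2.

def classify(reads_x2: bool, writes_x2: bool) -> str:
    if not reads_x2 and not writes_x2:
        return "autonomous"            # f1: X0 x X1 -> X0 x X1        (2.34)
    if reads_x2 and not writes_x2:
        return "inter-information"     # f1: X0 x X1 x X2 -> X0 x X1   (2.35)
    if not reads_x2 and writes_x2:
        return "inter-action"          # f1: X0 x X1 -> X0 x X1 x X2   (2.36)
    return "inter-operation"           # f1: X0 x X1 x X2 -> X0 x X1 x X2 (2.37)

assert classify(False, False) == "autonomous"
assert classify(True, True) == "inter-operation"
```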
A given problem denoted as the state of the environment x0 ∈ X, whose solution is the
state of the environment xk ∈ X, may also be solved by two cooperating algorithms
Alg1 and Alg2 (Fig. 2.10).
We may consider the algorithm Alg = (X, f ) which is decomposed into two com-
ponent, autonomous, cooperating algorithms Alg1 = (X1 , f1 ) and Alg2 = (X2 , f2 ),
and the set of parameters X0 , representing the state of the environment, i.e., the
global data. The sets X1 and X2 correspond to the internal data of the appropriate
algorithms Alg1 and Alg2 which define their states and also X0 the global data. The
partial functions f1 and f2 define the action of the algorithms, i.e., the evolution of
their states. We may consider the following cases of the autonomy influence on the
realization of decomposition:
The algorithms Alg1 and Alg2 are mutually autonomous, which means that the
functions accept the following forms: f1 : X0 × X1 → X0 × X1 and f2 :
X0 × X2 → X0 × X2 . In this case component algorithms may be formed (designed,
realized) independently, separately and at the same time (Fig. 2.10).
The component algorithms are not mutually autonomous, which means that there is
inter-information, inter-action or inter-operation relationship. The algorithm Alg1
needs for the assignment of its next state not only knowledge about the state of an
environment but also information on the state of the other algorithm (in this case
Alg2 ), and changing its state it modifies not only the state of the environment but
also the state of the algorithm Alg2 . This case can often be found in practice, and
Fig. 2.11 Schema of decomposition of an algorithm based on the notion of autonomy, a for the
calculation of the transition function f1 the state of the algorithm Alg1 , the state of the algorithm Alg2
and the state of the environment X0 are all indispensable, b calculation of the transition function f1
has an influence on the change of states of the algorithms Alg1 and Alg2 and the change of state of
the environment X0
then decomposition of a given algorithm into component algorithms does not give
the possibility to form the component algorithms independently (and especially in
parallel).
The above-presented definition of the autonomy of cooperating algorithms has
the principal meaning for further considerations.
In practice there are cases when the implementation of the concept of the
environment does not solve completely the problem of algorithm decomposition
into component algorithms. Although decomposition of a given algorithm Alg into
component, mutually autonomous algorithms is not always possible (Fig. 2.11),
decomposition into component algorithms which do not necessarily have to be mutually
autonomous is easier and more often possible.
The question arises whether it is possible to use that kind of non-autonomous
decomposition as a starting point for finding a method of reducing (bringing) coop-
erating, non-autonomous, component algorithms to mutually autonomous (at least
in some scope) algorithms.
These methods will be presented as another step of decomposition of algorithms
and will be dealt with in later chapters.
Let us accept that a given algorithm Alg = (X, f ) may be decomposed into two
component algorithms Alg1 = (X1 , f1 ) and Alg2 = (X2 , f2 ) cooperating through the
external environment X0 . In effect, a given problem encoded as x0 ∈ X0 and solved
with the use of the algorithm Alg may also be solved with two cooperating algorithms
Alg1 and Alg2 (Sect. 2.5, Fig. 2.10).
If the cooperating algorithms Alg1 and Alg2 are mutually autonomous, then solving
a problem x0 with these algorithms does not cause any difficulties, as shown earlier.
However, a problem arises when the cooperating, component algorithms are not
autonomous. It results from the fact that the algorithm Alg1 (respectively Alg2 ) needs
but has no access to the internal (local) data of the algorithm Alg2 (respectively Alg1 ),
so it is unable to read the values of variables which are essential for the calculation
of its function f1 (respectively f2 ) and for making the next step (Fig. 2.11).
However, there are possibilities of ensuring access of the algorithm Alg1 (respectively
Alg2 ) to the internal (local) data of the algorithm Alg2 (respectively Alg1 ). It comes
down to the replacement of algorithms which are mutually non-autonomous with
algorithms that regain autonomy to some extent, which will be presented further.
Two methods may be considered which enable access of one algorithm to the inter-
nal (local) data of the other algorithm, that is to say that non-autonomous algorithms
are replaced with autonomous algorithms:
with the use of communication process between the algorithms,
with the use of observation operation of action of one algorithm (behaviour) by
the other algorithm.
These methods are presented in later parts of the monograph.
Fig. 2.12 Schema of the concept of the object, formed as the communicating algorithms which
are to some extent autonomous, a the object Obj1 receives the information about the state of the
object Obj2 , necessary for the calculation of the transition function f1 through calling an appropriate
method, the method12 , b the result of calculation of the function f1 causes changes of the state of
object Obj2 realized through calling the method22
The solution using the process of communication is burdened with some imper-
fections, which results from the fact that the mechanism of communication does
not guarantee full independence (and at the same time autonomy) of cooperating
algorithms.
An algorithm which is asked by another algorithm to send (or, similarly, to receive)
data should agree to do that. It means that when the algorithm Alg1
calls a method of the algorithm Alg2 (e.g., the method2 ), the algorithm Alg2 must
provide access to an appropriately working method.
A similar situation takes place in the case of transferring results of the function f2
with the use of the method2 .
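The object-based solution of Fig. 2.12 can be sketched as follows (the method names follow the figure; the class layout and the toy body of f1 are assumptions of this sketch). Obj2 agrees to cooperate by exposing suitable methods: method12 lets Obj1 read the state x2, and method22 lets Obj1 modify it.

```python
# Communication between partial algorithms realized as method calls,
# as in the object-oriented approach of Fig. 2.12.

class Obj2:
    def __init__(self, x2):
        self._x2 = x2

    def method12(self):            # grants read access to the state x2
        return self._x2

    def method22(self, new_x2):    # grants write access to the state x2
        self._x2 = new_x2

class Obj1:
    def __init__(self, x1, partner):
        self.x1 = x1
        self.partner = partner

    def step(self, x0):
        # f1 needs x2, which Obj1 obtains only because Obj2 agreed
        # to expose it via method12
        x2 = self.partner.method12()
        x0, self.x1, new_x2 = x0 + 1, self.x1 + x2, 0   # toy f1
        # the effect of f1 on x2 must likewise go through method22
        self.partner.method22(new_x2)
        return x0

obj2 = Obj2(x2=5)
obj1 = Obj1(x1=1, partner=obj2)
x0 = obj1.step(0)
```

Note that Obj1 can make no move unless Obj2 provides both methods, which is exactly the imperfection discussed above: communication does not guarantee full independence.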
A different approach to the realization of cooperation in problem solving may
be proposed, in which the cooperating algorithms will
be more independent (autonomous) than in the object-oriented approach
presented above.
This approach is possible due to the observation operation which a given partial
algorithm may be equipped with. With the use of observation one algorithm may trace
the behaviour (action in the environment) of another cooperating algorithm. A given
component algorithm observes the environment and especially changes that occur
in that environment resulting from the action of another component algorithm. On
the basis of these observations, it may learn (indirectly and probably approximately)
about the internal state of another algorithm, and through the change of the state
of the environment it may influence (indirectly) the change of the internal state of
another algorithm (Fig. 2.13).
This method of solving a problem leads us to the agent notion (Ag) which will be
identified with the algorithm equipped with the capability of observing. Particularly,
this algorithm (agent) will be denoted by Ag = (X, f ) with the appropriate indices,
if necessary.
The approach to cooperation between the agents may be specified with the use of
the following reasoning:
Let us consider cooperation between the agents Ag1 = (X1 , f1 ) and Ag2 = (X2 , f2 )
through the environment X0 .
Fig. 2.13 Schema of decomposition of an algorithm with the use of the agent concept. The agent
Ag1 observes changes occurring in the environment caused by the agent Ag2 , which gives it the
capability to define the internal state x2 of this agent
For the calculation of values, the function f1 , which is responsible for the realization
of the agent Ag1 algorithm, needs internal parameters of that agent (x1 ), global
data (x0 ), and internal parameters (all or some) (x2 ) of the agent Ag2 . It should be
emphasized that the agent Ag1 does not have access (direct) to parameters (x2 ).
For the purpose of achieving the appropriate data, the agent Ag1 observes the
behaviour of the agent Ag2 . It means that the agent Ag1 observes changes in the
environment (global data x0 ) which result from the action (realization of the action)
of the agent Ag2 . On the basis of the observation result, the agent Ag1 may define
(estimate) the state of the agent Ag2 , in other words the state of (values) its internal
parameters. In order to do that, the agent Ag1 must possess some knowledge about
the agent Ag2 , and especially some knowledge about the function f2 and its effect
on the changes of the state of the environment X0 , as well as its influence on the
state of that agent X2 (Fig. 2.13).
The process of defining the values of parameters (x2 ) may be realized with greater
or lesser precision, depending on specific, practical possibilities. In effect, the observed
(estimated) data do not have to give precise, complete information about the state
of the agent Ag2 , but they should be sufficient for continuing actions by the agent
Ag1 (for the calculation of the values of the function f1 ).
The agent Ag1 possessing essential information (values of parameters x1 , x0 and
indirectly x2 ), using (calculating) the function f1 may change its state (parameters
x1 ), and the state of the environment (global data, parameters x0 ). The changes of
the environment state are realized through calling subsequent actions (actions of
the agents Ag1 and Ag2 ). However, the agent Ag1 does not have the access to the
internal data of the agent Ag2 and it is not capable of effecting directly the change
of its state, though it should be done as a result of the calculation of the function f1 .
Nevertheless, it is possible to achieve it indirectly with the use of changes of the
environment state which forces the change of the state of the agent Ag2 . The agent
Ag2 , similarly to the agent Ag1 , observes changes in the environment (parameters x0 )
resulting from the actions of the agent Ag1 , and on the basis of this information
makes changes of its state, in other words modifies the parameters x2 .
The procedure of gaining autonomy by an agent gives greater independence than
that obtained in the object-oriented approach, because the agent does not engage directly the
internal states of another agent. The solution based on the observation process is more
difficult in realization; however, the range of interaction and cooperation between the
agents gives greater possibilities in the field of forming agent (multi-agent) systems.
Numerous problems occur such as intentionality, suitability of actions, awareness,
cooperation between the agents, as well as problems of interaction between the
agents and others which remain open. Some of them will be discussed further in
later chapters. The two approaches to the realization of autonomy of cooperating
algorithms (with the use of communication and observation) provide a basis for
distinguishing between the object notion and the agent notion.
Summing up, it may be noted that the source of information for the agent (Ag1 )
is the state of its local data (x1 ), the state of the environment (global data x0 ) and
the information received as a result of observation of behaviour of other agents (for
Fig. 2.14 Schema of the concept of the application of observation process in defining the parameters
of the agent
instance, the agent Ag2 ). This concept of the agent action has been schematically
presented in Fig. 2.14. Changes in time of one parameter (x0 ) make it possible to
estimate data values of the other parameter (x2 ).
Example 2
Let us consider the following example:
Between the elements of the sets U and X there is the following relation:
However, the algorithms Alg1 and Alg2 have the following forms:
Alg1 = (X1 , f1 )
f1 : X0 × X1 × X2 → X0 × X1 × X2 (2.42)
f1 (0, 0, 1) = (0, 1, 0), f1 (1, 1, 0) = (1, 0, 0)
Alg2 = (X2 , f2 )
f2 : X0 × X1 × X2 → X0 × X1 × X2 (2.43)
f2 (0, 0, 0) = (0, 0, 1), f2 (0, 1, 0) = (1, 1, 0)
Obs12 : X0 × X0 → X2
Obs12 (x0i−1 , x0i ) = x2 (2.44)
Obs12 (0, 0) = 1, Obs12 (0, 1) = 0
In effect, the algorithm Alg1 may be considered as autonomous towards the algo-
rithm Alg2 due to the observation function Obs12 , which informally can be denoted
as follows:
Alg1 = (X1 , f1 , Obs12 )
f1 (x0 , x1 , x2 ) = f1 (x0 , x1 , Obs12 (x0i−1 , x0i )) = (x0 , x1 , x2 ) (2.45)
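Example 2 can be executed directly. The tables below are the values given in Eqs. (2.42)-(2.44); the dict encoding and the control loop are my own. Ag1 never reads x2: before acting it estimates x2 with Obs12 from the last two environment states, i.e., purely from the observed behaviour of Ag2.

```python
# Partial transition functions and the observation function of Example 2:
f1 = {(0, 0, 1): (0, 1, 0), (1, 1, 0): (1, 0, 0)}
f2 = {(0, 0, 0): (0, 0, 1), (0, 1, 0): (1, 1, 0)}
obs12 = {(0, 0): 1, (0, 1): 0}

x0, x1, x2 = 0, 0, 0          # initial problem state
env_history = [x0]            # environment states visible to Ag1

for _ in range(10):
    # Ag1's estimate of x2 from the last observed environment change:
    est = obs12.get((env_history[-2], env_history[-1])) \
        if len(env_history) > 1 else None
    if (x0, x1, x2) in f2:                       # Ag2 acts
        x0, x1, x2 = f2[(x0, x1, x2)]
    elif est is not None and (x0, x1, est) in f1:  # Ag1 acts on the estimate
        x0, x1, x2 = f1[(x0, x1, est)]
    else:
        break
    env_history.append(x0)

# Final state (1, 0, 0); environment trace [0, 0, 0, 1, 1]
```

At every step the estimate delivered by Obs12 coincides with the true value of x2, so the algorithm Alg1 = (X1, f1, Obs12) behaves exactly like its non-autonomous version while never accessing x2 directly.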
Fig. 2.15 Schema of two approaches to the problem of decomposition of an algorithm: a the schema
of a column i corresponding to the parameter Xi , b linear division leading to decomposition into
subprograms, c column decomposition being the basis of the concept of the object and the agent
The set X may be presented as lines of the parameters (x1i , x2i , . . . , xmi ) corre-
sponding to the individual states ui , which may be presented in a table form, shown
in Fig. 2.15a. Both approaches (decomposition based on the division of the set and
decomposition based on the concept of the Cartesian product) may be presented in the
form of two kinds of division of the table (Fig. 2.15):
The method of decomposition based on the linear division of the table of parameters
(Fig. 2.15b). This method, based on the concept of division of
the sets U and F, leads to the concept of a subprogram.
The column decomposition of the table of parameters inspired by the notion of
the Cartesian product (Fig. 2.15c) provides a basis for the decomposition of an
algorithm with the use of the concept of an object as well as the concept of an agent.
Summing up, we consider two methods of decomposition: linear decomposition
leading to the notion of a subprogram, and column decomposition that makes it
possible to define the notion of an object and the notion of an agent.
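The two divisions of the parameter table can be sketched in a few lines (the tuple encoding of the states and the toy values are assumptions of this sketch; each state u in U is taken to be a tuple of parameters (x1, ..., xm)):

```python
# A small table of states: four states, three parameter columns.
U = [(0, "a", 1.0), (1, "a", 2.0), (2, "b", 3.0), (3, "b", 4.0)]

# Linear (row) division: U is split into subsets of whole states,
# each subset handled by one subprogram (Fig. 2.15b).
rows_part1, rows_part2 = U[:2], U[2:]

# Column division (Cartesian product): each parameter column becomes
# the state set of one component algorithm, i.e., an object or an
# agent (Fig. 2.15c).
col1, col2, col3 = (list(c) for c in zip(*U))

assert rows_part1[0] == (0, "a", 1.0)
assert col2 == ["a", "a", "b", "b"]
```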
Using both methods we may define the manner of decomposition of an algorithm
that leads to receiving the multi-agent system.
This process may be realized as follows:
Let us consider the algorithm Alg = (U, F). The set of states of this algorithm, U,
may be presented as the Cartesian product of the sets of parameters, in other words,
the set X, where X = X1 × X2 × · · · × Xn , given in the table form (Fig. 2.16a). When
analyzing the table, we will present the way of constructing the multi-agent system.
Fig. 2.16 Schema of the process of decomposition of an algorithm, which in effect defines the agent
system (multi-agent), b the division of the system into subprograms, c the division into agents within
each subprogram, d the integration of the agent systems into one multi-agent system
The first step is the decomposition of the set U, which corresponds to the linear
division of the table. In effect, we receive the decomposition of the algorithm Alg
into subprograms (Fig. 2.16b).
In the next step, each component algorithm (or subprogram) may be considered
as an algorithm and subjected to another process of decomposition. In that case,
we use the process of division based on the concept of the Cartesian product,
and the component algorithms we receive may take the form of agents, creating
decomposition into agent systems within each subprogram (Fig. 2.16c).
The multi-agent systems resulting from decomposition (Fig. 2.16c) may be
connected to create a multi-agent system with different kinds of agents, a
so-called heterogeneous multi-agent system (Fig. 2.16d). The agents resulting from
decomposition of one subprogram may observe, through the environment,
agents from a different world, i.e., another subprogram. This makes cooperation
of these agents possible.
The connection of environments from separate agent systems provides a basis for
connecting these systems together. Two agent systems have defined environments X01
and X02 (Fig. 2.17a) and connecting them into one environment (X01 and X02 ) provides
a basis for the realization of cooperation between agents from different agent systems
(Fig. 2.17b).
The agent system (multi-agent) realized in this way is a kind of algorithm (the
set of algorithms) which can be considered as a result of decomposition of a certain
complex algorithm.
2.7 Decomposition with the Use of the Cartesian Product 35
Fig. 2.17 Practical interpretation of the process of decomposition of an algorithm leading to the
agent system (multi-agent) using the concept of connecting the environments X01 and X02
A = (A, ϕ) ∈ obj(Alg), where A is a set, A ≠ ∅, ϕ : A → A (2.47)
Summing up, the category Alg consists of the set of objects obj(Alg) and a set of
morphisms (Alg) as defined above.
The Cartesian product of objects that belongs to the category Alg above the set of
indices I = {1, 2, . . . , i, . . . , n} may be defined as follows (Fig. 2.18):
B = ∏i∈I Ai (2.48)
where:
B ∈ obj(Alg), B = (B, ϕ), ϕ : B → B,
∀i∈I Ai ∈ obj(Alg), Ai = (Ai , ϕi ), ϕi : Ai → Ai
∀i∈I πi : B → Ai , i.e., πi is a morphism from B to Ai
and for each b ∈ B: ∀i∈I πi (b) = ai ∈ Ai and πi (ϕ(b)) = ϕi (πi (b)) ∈ Ai
The application of this Cartesian product to a given algorithm (in the category of
algorithms) enables decomposition of the algorithm Alg into component algorithms
Algi = (Ai , ϕi ). Each of the component algorithms (Algi ) obtained in this way
is autonomous towards each of the remaining component algorithms (Algj for
i ≠ j), which results from the definition of the Cartesian product in the category of
algorithms.
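The defining condition of this product, namely that the projections commute with the transitions, can be checked mechanically for two components (the encoding, the toy sets and the toy transitions are my own; the condition itself is the one stated in (2.48)):

```python
# Verify the product condition in the category of algorithms: with
# B = A1 x A2, phi acting componentwise, and projections pi_i, we have
# pi_i(phi(b)) = phi_i(pi_i(b)) for every b in B, which is exactly
# what makes the component algorithms mutually autonomous.

A1 = [0, 1, 2]
phi1 = lambda a: (a + 1) % 3                 # transition of Alg1 (toy)
A2 = ["p", "q"]
phi2 = lambda a: "q" if a == "p" else "p"    # transition of Alg2 (toy)

B = [(a1, a2) for a1 in A1 for a2 in A2]     # product set
phi = lambda b: (phi1(b[0]), phi2(b[1]))     # componentwise transition
pi1 = lambda b: b[0]                          # projection onto A1
pi2 = lambda b: b[1]                          # projection onto A2

# Projections commute with the transitions for every b in B:
assert all(pi1(phi(b)) == phi1(pi1(b)) for b in B)
assert all(pi2(phi(b)) == phi2(pi2(b)) for b in B)
```

Because phi is built componentwise, each component's next state depends only on its own previous state, which is the autonomy the product construction guarantees.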
On the basis of the above considerations we may conclude that there are two
approaches to the application of the Cartesian product to decompose an algorithm:
Decomposition based on the application of the Cartesian product to the set U of the
algorithm Alg = (U, F). It enables decomposition of an algorithm into component
algorithms, but the component algorithms we obtain are not usually mutually
autonomous. The decomposition itself is in that case easier than the decomposition
presented in the category of algorithms and more often possible in practice. Lack
of autonomy of the component algorithms creates a certain problem; however,
with the use of the methods presented previously, the autonomy of the
component algorithms may be regained, which guarantees the possibility of creating component
algorithms, which is of great importance in practice.
Decomposition realized by the application of the concept of the Cartesian product
concerning the whole algorithm Alg considered as an object in the category of
algorithms. As a result, we receive component algorithms that are independent of
one another, which would enable the creation (designing, programming) of the
algorithms independently (in parallel). However, for the problems occurring in
practice the realization of such decomposition of an algorithm into component
algorithms may turn out to be difficult, because it requires the application of the
concept of the Cartesian product to the whole structure which is an algorithm. It is
necessary to meet certain demands (the form of the Cartesian product in category
theory: [1, 93, 152]) which are difficult to satisfy in practical applications.
Particularly, these difficulties result from the fact that our considerations presented
in this chapter only confirm the possibility of decomposition of an algorithm within
the category of algorithms, but they do not provide practical suggestions on how
for a given algorithm (object of a category) B we can find algorithms (as objects
of a category) Ai into which we decompose the initial algorithm, that is, how the
sets Ai or the functions ϕi should look (see works on category theory,
for instance [1, 93, 152]).
It cannot be excluded that the development of research on the properties of the
Cartesian product in the theory of the category (and especially its role in the decom-
position of an algorithm in the category of algorithms) may lead to the possibility of
application of algorithm decomposition in practice, on the basis of the application
of the Cartesian product concept to the category of algorithms.
This chapter presented the concept of decomposition of a given algorithm into compo-
nent algorithms. The analysis of the process of decomposition and different methods
of its realization resulted in the concept of an object and an agent, as well as the
agent system. A basic role was played by the analysis of such property as autonomy
and the notion of an autonomous agent and its capability to observe the behaviour of
another agent in the environment, which constitutes a method of gaining autonomy.
Fig. 2.19 Schema of actions of the algorithm of the agent on the basis of the model of the
environment. The agent Ag1 on the basis of observation of the environment builds the model m1
with which it may define the internal state of the agent Ag2
Fig. 2.20 Schema of perceiving the environment by the algorithm Alg1 : a perceiving the non-
extended environment, b inclusion of Alg2 in the perceived environment (extended environment)
Thus, for the agent Ag1 information on its state x1 , the state of the environment
x0 , and the state x2 of the agent Ag2 provides a basis for creating the model m1 .
On the basis of this model (m1 ) the agent Ag1 plans and realizes its actions. It is
concerned with the extension of the environment perceived by the algorithm Alg1 .
So far, the algorithm Alg1 has been able to perceive the environment as the variables
X0 (Fig. 2.20a). Becoming an agent, it incorporates other agents (algorithms), e.g.
Alg2 (Fig. 2.20b) into the range of the environment it observes. The above-mentioned
concept ensures considerable independence of agents' actions, with the possibility
of their mutual interaction.
A given agent receives information on the state of the environment and on
behaviour of other agents by means of the observation operation of its surrounding
environment. It is the main difference between the notion of an agent and the notion
of an object which does not possess that kind of capability.
Summing up the features of the agent, it may be concluded that:
The property of autonomy is not the feature distinguishing the notion of an agent
from other notions in the field of computer science, and especially from the notion
of an object, as both the agent and the object may be considered as autonomous.
The distinguishing feature of the agent is its capability to observe behaviour of
other agents operating in the environment.
The agent may possess the capability to communicate with other agents through the
environment, but also through direct communication (similar to the communication
between the objects), but this capability is not the distinguishing feature for the
agent with reference to the object.
The agent Ag1 observes the environment X0 (Fig. 2.20) and if we accept the above
concept of an algorithm, it can observe neither the agent Ag2 , nor its influence on
the environment or changes that occur in that environment (Fig. 2.20a). Under the
concept of the agent, the agent Ag1 incorporates the agent Ag2 into the area of the
environment it observes, and in particular:
The agent Ag1 is able to observe not only the environment X0 (Fig. 2.20), but also
the fact that the agent Ag2 exists in the environment.
The agent Ag1 is able to observe changes in the environment caused by the agent
Ag2 , which means that the agent Ag1 is able to associate an event occurring in
the environment that changes the state of the environment with a specific agent,
the doer of this event (e.g., with the agent Ag2 , Fig. 2.20b). It can be said that the
agent Ag1 extends its own model of the environment it observes by incorporating
the agent Ag2 into it, as presented in Fig. 2.20.
Summing up the process of decomposition of a component algorithm that leads to
the concept of an object, and especially to the concept of an agent, we may use a
schema from Fig. 2.21, which shows the following possibilities of transformation of
an algorithm:
An algorithm we are to realize (design, program, activate) is too complicated and
has to be decomposed into more or less independent component algorithms. There
are two possible ways of decomposition: one based on the concept of the set
division, and the other based on the concept of the Cartesian product.
The decomposition based on the division of sets leads to the concept of a sub-
program. This approach is often successfully applied in practice but not always
effectively.
2.8 SummaryDecomposition, Agent, Autonomy 41
The application of decomposition based on the Cartesian product gives two pos-
sibilities: it can be used in the category of algorithms and in the category of sets
and mapping.
The use of the concept of the Cartesian product in the category of algorithms makes
it possible to receive autonomous component algorithms, which are independent,
but this process of decomposition is difficult and not always possible in practice.
The application of the concept of the Cartesian product in the category of a set
and mapping is easier in practical applications, but it usually does not result in
autonomous partial algorithms. However, it is possible to regain (at least to some
extent) this autonomy.
One method of gaining autonomy is the application of the communication process
between the partial algorithms. It leads us to the concept of an object.
The other method of gaining autonomy is the use of the operation of observation of
the environment (including other agents), which makes it possible to formulate the
concept of an agent. It is noteworthy that this observation may be used for a kind
of communication between agents: one agent makes a certain change to a chosen
attribute of the environment (or a characteristic modification of the state of the
environment), and the other agent observes this change and interprets it.
The aim of these considerations was to define such notions as decomposition and
autonomy, which resulted in defining an agent as an autonomous (to a certain extent)
algorithm (program), equipped with the capability to communicate and cooperate,
whose distinguishing feature is the capability to observe the environment in order to
recognize actions undertaken by other agents existing in this environment.
The formulations we obtained may be used for defining agents operating in
computer systems, as well as for constructing agent systems oriented towards
defined areas of applications, which will be presented in further chapters
(Chaps. 3 and 4).
Chapter 3
M-agent
Abstract This chapter offers a more intuitive approach based on the concept of the
M-agent architecture. The inner structure of the agent is given, with an attempt to
keep the balance between a universal approach, with its broad application of the
agent, and a more detailed approach which could help to understand the basic
elements of the agent's structure and its action.
3.1 Introduction
The development of computer systems and especially their applications had and still
has influence on the introduction of new notions and concepts in the field of software
development.
In the process of this development, the concept of a characteristic space was
established, in which the subsequent life cycles of systems realized in the form of
computer processes (programs, procedures, objects or agents) can be carried out,
from their creation to their elimination.
We can imagine that these programs constitute certain entities, existing (and being
executed, or run) in the computer, or more precisely in the environment of the
computer operating system, which show some similarities between their cycle of
life and that of living creatures in the natural environment.
Furthermore, we can imagine that the operating system is a certain environment in
which different entities, e.g., processes, operate, are run, or, figuratively speaking,
live, similarly to the natural environment.
The environment created in this way enabled the cooperation of remote systems
(and agents), as well as a certain unification of individual environments into one
system comprising, through the network, the computers of a given group, with a
range from the local to the global. This environment creates a characteristic space
and is referred to as cyberspace.
The development of the cyberspace concept created the need for working out new
elements of software, which can exist and operate in the abovementioned space, as
well as new models of cooperation between processes (computer systems) operating
in cyberspace (e.g., a client-server model of cooperation).
In the present considerations, the notion of an agent was introduced on the basis of
analysis of decomposition possibilities of an algorithm into component algorithms
which should be autonomous, if possible.
Under these considerations we may define the features an agent should be equipped
with; however, it is not clear enough how we should approach the design and the
structure of an agent or the agent system.
In this part, based on the analysis of the development of different approaches to
software creation (e.g., the classical approach [182]), we will attempt to look at an
agent from the practical point of view and formulate certain directions that are
applicable to the design of agents.
The M-agent architecture will be used for connecting the theoretical approach with
the practical one, i.e., a formal approach to the analysis of methods used in the field
of software with an informal (intuitive) point of view on an algorithm as an active
element existing (and operating) in a definite environment.
The notion of a procedure is, inter alia, characterized by the fact that a new
procedure (referred to as the called one) invoked in a given procedure (referred to
as the calling one) usually has the environment of the calling procedure as part of
its own environment. As a result, through the common environment, the calling
procedure and the called procedure are related to each other and cannot proceed
independently. In particular, the calling procedure cannot be completed before the
termination of the called procedure.
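This coupling can be sketched in a few lines of Python (an illustration of the idea, not code from the monograph; the names and the dictionary-based "environment" are purely assumed):

```python
# The called procedure works directly in part of the calling procedure's
# environment, so the two cannot proceed independently.

def calling_procedure():
    environment = {"counter": 0}      # the calling procedure's environment

    def called_procedure():
        # The called procedure sees and modifies the caller's environment
        # rather than owning a fully independent one.
        environment["counter"] += 1

    called_procedure()                # the caller is blocked here
    # Only after the called procedure terminates can the caller complete.
    return environment["counter"]

print(calling_procedure())            # -> 1
```

The shared dictionary stands for the common environment through which the two procedures remain related.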
The introduction of the process notion was aimed, inter alia, at increasing the
independence of action. That was mainly caused by the problems and demands
arising in the process of creation of such systems as parallel processing systems
with time division.
The concept of a process is distinguished by the fact that the environments of
processes (two or more) are mutually independent (to a sufficient extent), and the
existence of one process does not have to influence the proceeding of another
process. Therefore, a given process may exist and proceed independently of the
process which contributed to its creation.
The process itself does not possess complete information about its state, and
its formation requires that the environment should be complemented with certain
elements memorized by the system (as a global environment of the system). In
effect, the process may be realized only in strictly defined conditions and in a given
environment, and therefore it does not possess full independence of action (it is not
an autonomous element).
The notion of an object had to be introduced in the course of development
of distributed processing systems. The object was established as a result of the
development of the process notion. A significant part of the information indispens-
able for creating the object is recorded in the given object, and because of so-called
encapsulation the object takes full control of it. The object is, therefore, to a large
extent independent of the environment and may operate in different conditions,
i.e., in different environments. The initiative to operate (in other words, to create an
algorithm, or at least its part) comes from the outside of the object, most often from
another algorithm; therefore it depends on the communication process between the
objects.
The object, as an independent entity, may relocate in computer networks
and may be used in different environments, constituting a handy element for the
construction of distributed systems in practice. However, the object needs an exter-
nal signal of activation to undertake an action.
When introducing the concept of an object, despite encapsulation, the possibility of
access of one object to the features (data) of another object was secured with the use
of a certain kind of communication mechanism. The realization of data transfer
between objects uses the concept of a method for communication with the internal
data structures of the object. It involves one object activating an appropriate,
aim-oriented and accessible method of another object, which guarantees the transfer
of data. Access to the data of one object may be realized by another object in two ways.
One is based on the fact that the object collecting data possesses the initiative to start
the operation of collecting. In the other case, the initiative is on the side of the object
forcing (by calling the method) the action of another object, providing it with the data
indispensable for this action.
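A hypothetical pair of objects (the class names and data are invented for illustration) may show the two directions of initiative described above:

```python
class Sensor:
    def __init__(self):
        self._reading = 42.0          # encapsulated internal data

    def get_reading(self):
        # An accessible, aim-oriented method guaranteeing data transfer.
        return self._reading

class Logger:
    def __init__(self):
        self.records = []

    def record(self, value):
        # A method called by another object, which provides the data itself.
        self.records.append(value)

sensor, logger = Sensor(), Logger()

# Way 1: the collecting object takes the initiative to start collecting.
value = sensor.get_reading()

# Way 2: the initiative is on the side of the object forcing the action
# of another object and providing the indispensable data.
logger.record(value)

print(logger.records)                 # -> [42.0]
```

In both cases communication succeeds only if the other object exposes a suitable method, which is exactly the limitation discussed next.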
However, the above schemas of cooperation between objects limit to a large
extent the independence of a given object from other objects. The action of an object
taking the initiative to start communication may be completely blocked when the
object which is to respond positively to the proposal of communication does not
agree to the cooperation and makes access to its data impossible, which means
blocking the realization of the communication process.
Despite these inconveniences, by using the concept of an object and in particular
the ways of access of one object to the data (resources) of the other, it is possible to
realize (in some sense) decomposition of a system into subsystems of cooperating
objects and ensure the operation of the whole system (see [67]) realized with the use
of so-called object-oriented approach.
Further enhancement of the independence of partial algorithms is possible due to the
application of the mechanism of observation of the action (behaviour) of one object
by another object. It leads to the introduction of the agent notion, with local data
considered as the state of an agent and global data as the environment of interaction
of agents. The agent observing its surrounding environment must notice other agents
in it. We may, therefore, define the notion of an environment, specifying it as the
global data, the other agents, the relationships taking place between the agents under
observation, and the changes appearing in the environment. This image of the envi-
ronment will be referred to as the surrounding environment of a given agent.
Consequently, the action of a given agent involves the agent's observation of
its surrounding environment and of changes taking place in this environment, caused
by other agents' actions, and building a certain model m which represents the sur-
roundings. Using the concept of the agent's action concerning the model m of the
surrounding environment, we may consider the following scenario of the agent's be-
haviour in a given environment in which other agents and resources exist (Fig. 3.1):
The agent, with the use of the observation operation, receives information from
the environment and creates the model (m) of this environment. To create this
model, a certain level of abstraction has to be adopted, i.e., only certain
features of the surrounding environment are taken into account, in particular those
which are useful for realising the concrete action of a given agent.
On the basis of the analysis of this model, the agent plans the actions it is to realize in
its surrounding environment.
Through its influence on the surrounding environment, the agent realizes these
events, and at the same time changes the state of the environment.
The notion of an autonomous agent (later also referred to simply as an agent) is a
development of the notion of an object. In this case, the agent keeps the property of
encapsulation, characteristic of objects; however, the mechanism of the activation of
an agent has been extended and the manner of contact with the surrounding environ-
ment has been ordered. Like an object, the autonomous agent contains information
about its state which is indispensable for action; however, it undertakes the initia-
tive to act independently, on the basis of observation of the (external) environment.
3.2 The Notion of an Agent and the Concept of Its Architecture
Fig. 3.1 Schema of the notion of an agent used as an element of the construction of a computing
system. The agent calls an event in the environment, which is a message for another agent
Therefore, the agent becomes an active element of the system, capable of making the
system change in an advantageous way and, consequently, in the way accepted during
the design of the system (see Fig. 3.1).
The notions presented above have provided a basis for realizing many computer
systems. It should be emphasized that the subsequent introduction of the notions of
an object and an agent does not mean that the other existing elements used
for the creation of systems lose their validity, and are not or should not be used. On
the contrary, the newly introduced notions enrich (but do not replace) the existing
set of tools for the construction of systems and may be applied in parallel with them,
giving greater possibilities for creating more complex computer systems.
Summing up, it may be concluded that the role of an agent in the environment
involves maintaining the process of changes, i.e., developing dynamically changing
populations of autonomous agents which, through their operation in the distributed
environment, carry out the integration and processing of resources indispensable for
realizing a given task in a decentralized manner.
Research on defining the agent approach, and a description of the agent itself and of
multi-agent systems, needs a specification of the basic properties the autonomous
agent should be equipped with. It should operate according to certain intuitive,
common-sense rules, and its behaviour should be such as can be predicted as well as
observed while analyzing real decentralized systems, which can be treated as a
certain source of a description (a model) of an agent. Some features have already
been described; this time, however, let us try to look from the practical point of
view and define the properties of an agent intuitively. It seems possible to accept
the following set of features of an agent, which can be a starting point for further
considerations:
The agent comes into existence as a result of being created (e.g., through the
operation create) by another autonomous agent acting in a given system. However,
at the moment of running the system a certain number of appropriate agents may
be introduced into the system and run.
The agent may be eliminated (finish its activity) on its own initiative, or on the
initiative of another active agent (e.g., as a result of the operation kill). There is
a possible option according to which the influence of another agent's action is
limited to a given definite range.
The agent may gain information from the outside or, more precisely, from its
surrounding environment, performing the act of appropriate observation due to
its capability to observe, i.e., the operation of observation. The existence and
behaviour of other agents have an influence on the image (model) of the environ-
ment observed by a given agent.
The state of the environment has an influence on the behaviour of the agent or its
action in the environment (inter alia, on the decision about relocation of a given
agent, self-elimination, generation of another agent etc.).
The new agent may be similar to the one that created it (i.e., it may be of the same
type, and even its identical copy) or it may differ from it within certain limits (which
is defined with the use of the type of an agent).
The agent possesses memory and in decision making it may take into account
memorized events as well as observed states of the environment (capability of
learning).
The agent may not only collect information from the environment but also change
its state.
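The feature list above can be summarized in a minimal sketch (an assumed structure, not the author's code; the operations create, kill, observe and act, and the dictionary environment, are illustrative):

```python
class Agent:
    def __init__(self, environment, kind="basic"):
        self.environment = environment   # shared, observable environment
        self.kind = kind                 # the "type" used to group agents
        self.memory = []                 # memorized states of the environment
        self.alive = True

    def create(self, kind=None):
        # The agent creates another agent, possibly of the same type.
        return Agent(self.environment, kind or self.kind)

    def kill(self, other):
        # Elimination on the initiative of another active agent.
        other.alive = False

    def observe(self):
        # Gain information from the surrounding environment and memorize it.
        state = dict(self.environment)
        self.memory.append(state)
        return state

    def act(self, key, value):
        # The agent may not only collect information but also change
        # the state of the environment.
        self.environment[key] = value

env = {"resource": 3}
a1 = Agent(env)
a2 = a1.create()                         # a new, similar agent
a1.act("resource", 2)
assert a2.observe()["resource"] == 2     # a2 notices a1's change
a1.kill(a2)
assert not a2.alive
```

The memorized states in `memory` stand for the capability of learning mentioned above.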
The flow of information in the multi-agent system may be divided into two groups:
The flow of information agent-agent. The flow of information between agents
takes place via their mutual communication. During communication, the agent
may address its messages to another chosen agent, using addressed communi-
cation, in which a given agent identifies (and remembers, at least for the duration
of communication) its interlocutor, or communicate with a more loosely identified
group of agents. Moreover, the agent may use communication for strictly
defined purposes (purposeful communication), in which the aim of commu-
nication within a certain aspect is memorized.
The flow of information agent-environment. Here, we may distinguish two kinds
of the information flow:
The flow of information from the environment to the agent realized through the
observation of the environment. This observation involves the recognition of the
state of the environment (including other agents and their behaviour).
The flow of information from the agent to the environment, i.e., the influence of the
agent on the environment, which results in changes taking place in the environment
caused by the agent's calling of events (see [139]).
Communication via events called by the agent (the sender) in the environment
should be recognized as the basic method of communication. The agent sending a
message calls an event (or a few events). This event (or these events) is observed by
another agent (the receiver of the message), and then properly interpreted. Single
events (called simple events) may be grouped and create so-called complex events.
The agent receiver analyzes complex events and interprets them as one complex
message (see [90, 139, 140]). Summing up, we may distinguish two approaches:
direct and indirect communication. Indirect communication means making changes
in the environment by one agent which can be observed by another agent. In direct
communication, a message is sent directly from one agent to another. The possibility
of that kind of communication is often considered in practice (however, the question
is what direct sending means). If we accept that sending a message is realized
through appropriate changes in the environment, we may distinguish certain parts
of the environment (depending on the character of a given agent system) which can
be considered as channels for sending information between the chosen agents.
These channels may be further classified as channels for direct and indirect
communication.
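Indirect communication through called events can be sketched as follows (a rough illustration with invented event names; the grouping of two simple events into one complex message follows the description above):

```python
class Environment:
    def __init__(self):
        self.events = []                 # observable changes of state

    def call_event(self, event):
        self.events.append(event)

class SendingAgent:
    def send(self, env):
        # Two simple events grouped into one complex event (message).
        env.call_event("raise-flag")
        env.call_event("drop-marker")

class ReceivingAgent:
    def receive(self, env):
        # The receiver observes the events and interprets the group
        # as one complex message.
        if env.events[-2:] == ["raise-flag", "drop-marker"]:
            return "help requested"
        return None

env = Environment()
SendingAgent().send(env)
print(ReceivingAgent().receive(env))     # -> help requested
```

Restricting `events` to a part of the environment visible only to chosen agents would turn it into a channel in the sense discussed above.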
Using the elements of the description of the agent and the agent system, we
may move on to the description of the structure of the agent, taking into account its
action in the environment. One of the starting points for these considerations is
an approach presented in the literature which formulates the problem synthetically:
program = algorithm + data (see [182]). The fundamental idea standing behind
this formulation helped to place the notion of an autonomous agent in the struc-
tures of information technology algorithms. Generalizing this formulation, we may
conclude that an algorithm is realized (or realizes itself) in a given environment.
Therefore, the functioning (operating) of an algorithm (i.e., an agent) involves the
observation of its surrounding environment (collecting data) as well as affecting the
environment, which results in its changes (the creation of results). Different parts of
the environment are available to a given agent (algorithm) in different ways; therefore,
we may distinguish its parts, or areas (see Fig. 3.2).
Remote environment (or environment) is a set of resources (elements, data) which
are within the direct or indirect view of a given agent. Within the indirect view,
the agent may be forced to undertake certain actions which do not belong to
the procedures of observation in order to notice certain parts or elements of the
environment (i.e., the agent must relocate in order to notice certain elements of
the environment).
Close environment is an environment which is in the scope of observation of a
given agent with the use of the procedures of observation alone.
Surroundings is the part of the environment (close environment) which is within the
agent's capabilities, i.e., which the agent may affect (process and change).
Neighbourhood is the part of the surroundings which a given agent may affect using
greater powers (and capabilities) than other agents.
Proprietary environment is the part of the neighbourhood which can be changed (mod-
ified) only by a given algorithm, called the owner. Other algorithms (other agents)
may change that part of the environment only with the consent of the owner.
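The nested character of these areas, as seen by one agent, can be sketched as sets (the element names are invented; only the containment structure of Fig. 3.2 is intended):

```python
# Accessibility zones of one agent, from the broadest to the narrowest.
remote        = {"e1", "e2", "e3", "e4", "e5"}   # direct or indirect view
close         = {"e2", "e3", "e4", "e5"}         # observable directly
surroundings  = {"e3", "e4", "e5"}               # the agent may affect these
neighbourhood = {"e4", "e5"}                     # affected with greater powers
proprietary   = {"e5"}                           # changed only by the owner

# Each zone is contained in the previous, broader one.
assert proprietary <= neighbourhood <= surroundings <= close <= remote
```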
An algorithm and its proprietary environment create the basic unit of the structure of
decentralized systems in the form of an autonomous agent. The above division is
based on the capabilities of observation and action of a given agent, and it takes
into account the other agents existing in the environment. The environment may also be
divided in terms of the observation of its content, and in this case we may distinguish
two basic components.
Resources constitute the first component of the environment, which is processed
by the agent. Resources do not possess the capability to take the initiative, which is
typical for an agent, but they can change according to their own set algorithm (e.g.,
processes of renewal, outdating, ageing, etc.).
Agents, which constitute the basic active part of the environment, are the other
component. A given agent perceives other agents as part of the environment, which
may be noticed and influenced by this agent according to set rules. It is also
possible for a given agent to take account of itself in the model of the environment
it perceives. This manner of perceiving itself is the way in which the concept of the
awareness of the agent's existence is analysed.
One of the basic tasks which should be solved while creating multi-agent systems
is the selection of the appropriate environment in which agents will operate.
It seems that the problem of creating agent systems is so important and extensive that
it should be treated not as an approach to a single case, a certain set of computing
tools or a specific algorithm, but as a general point of view or way of reasoning.
Therefore, we should aim to develop the agent approach to the problem, the agent
analysis, the agent methodology of designing systems and of programming them with
the use of agent methods, algorithms, languages, i.e., the appropriate tools. Accepting
an informal, intuitive point of view, in order to create the multi-agent model of a
given problem we should first define:
A certain space constituting the area of agents' activity which, enriched with
appropriate resources, may create their environment. It is necessary to define this
environment taking into account the features of the task to be solved with the use
of the system.
A set of agents (of different kinds, as the need arises) which will exist and operate
in the environment. This set should be defined by grouping agents according to
already established types, aiming at distinguishing the common features of all
agents and the characteristic properties of particular groups.
Mutual relationships between the defined agents, as well as between the agents and
the environment. It should be taken into consideration that the possibilities of a
given agent (observation and action) are limited, and a given agent may cooperate,
in given conditions, only with a certain group of agents and a certain part of the
environment.
The rule that proves right in practice, suggested as a starting point for object analysis
and design (see [67]) and contained in the laconic saying perform the role of the
object (which means: assume that you are an object; act and behave like the given
object), may be applied to agent programming: you are an agent, and you must
manage to exist in the environment just like an agent.
According to this rule, we accept that the observation of the world which is useful
for the analysis and agent design will be performed from the point of view of the
agent.
Therefore, the boundary line should be drawn between what the environment is
and what it is not (and possibly what it partly is) by accepting the point of view of
the agent (this division should be universal to such an extent that it could possibly be
applied to all agents). It can easily be noticed that the division between the elements
of the agent-environment system does not have to be simple, and the sets
of those notions are not as completely separable as they would seem to be. It may
be the case that a given agent treats certain of its own elements (parts) in the same
way as other elements of the environment. Therefore, it seems more correct to ask
whether there are any elements of the system which cannot be treated by a given
agent in the same way as the elements of the environment.
Fig. 3.2 The division of the environment around the agent with regard to its accessibility
In consideration of the above, we may conclude that, having analysed the envi-
ronment from the point of view of a given agent, it is difficult to establish which part
of the environment belongs to it completely.
Accepting the classification resulting from the division of the environment pre-
sented in Fig. 3.2, it would refer to the whole or part of the proprietary environment
of the agent. Drawing a dividing line would distinguish the part of the environment
that does not belong completely to the agent from the part that belongs to the agent,
which is informally referred to as its most private property.
The idea of such a division of the environment seems to provide a basis for the
introduction and development of the concept of an agent. Distinguishing certain parts
of the environment and granting them the active status of having an impact on other
areas of the environment is the basis of agent systems (or multi-agent systems).
In some considerations, the notion of an agent contains parts of the environment
which may not necessarily be the private property of the agent; therefore, we suggest
referring to that part of the environment which is the private property of the agent
as the agent's mind. Later, we will use the term agent instead of the agent's mind
only when it does not lead to ambiguity.
It should be emphasized that the above division is connected with that part of
the environment which was called the proprietary environment on the basis of the
previous classification. The area called the agent's mind may constitute part (or the
whole) of the environment classified as the proprietary environment from the point
of view of accessibility.
3.3 M-agent in the Agent System
Fig. 3.3 Schema of the division of the environment into the part which belongs and the part which
does not belong to the agent
Considering the agent system we may precisely specify the above classification by
using the following division of the environment from the point of view of a given
agent (Fig. 3.3):
The environment, which includes resources, the space which enables the agent to
move, and other agents operating in the environment, as well as, generally speaking,
some parts of a given agent.
The mind of a given agent, in which it carries out reasoning, i.e., analyzes the state
of the environment and makes decisions. The elements of this mind are treated by
the agent on different principles from those of the environment, constituting the
private property of a given agent.
The contact of these two domains, the environment and the mind of an agent,
provides a basis for the existence and activity of the agent, as well as a source of
inspiration for undertaking its own activity.
The agent is in its surrounding environment v. The events ev (or rather
ev1, ev2, ..., evk) take place in this environment and change it (its properties). The
agent's mind is the place in which the operation of making decisions by the agent
is realized, concerning its behavior in its surrounding environment. The communi-
cators between the environment and the mind are two operations: the operation of
observation (I) and the operation of the realization of a strategy (X). In the area of the
agent's mind we may distinguish: the model of the environment m, created within
the activity of the agent; the model m′, anticipating the results of the agent's activity;
the strategies s, which the agent may apply within its activity in the environment;
and the function of the goal q, used for the assessment of the planned activity.
Fig. 3.4 General functional schema of an agent based on the concept of the M-agent architecture
An informal scenario, being the first approximation of the agent's activity (based on
the concept of the M-agent architecture), is as follows (Fig. 3.4):
The agent observes its surrounding environment v and builds its (abstract) model
m in its mind, taking into consideration certain (significant from its point of
view) elements and features (accepting a certain established level of abstraction).
For this purpose the operation of observation I is used (or, one might say, the
operation of observation and imagination). The model m, in accordance with the
needs and the type of the agent, may be simple or very complicated; it may also
take into account the agent itself, or its elements, within a certain range.
The agent anticipates what kind of changes in the environment will be caused by the
realization of a particular strategy s (one of the available strategies of the agent).
For this purpose, the agent performs an analysis of carrying out the strategy s on
the model m and creates the model m′, which corresponds to the anticipated changes
in the environment. These changes arise as a result of the realization of the strategy s.
The agent evaluates whether the anticipated changes in the environment are compatible
with its intended goals, comparing the models m and m′ with the use of the function
(of evaluation of the realization) of the aim q. Analyzing the results of the application
of different strategies s, an agent may choose the one which, when realised,
brings the most advantageous changes in the environment (from the point of view
of the goals of a given agent).
The best (in the sense described above) strategy s, chosen on the basis of the
evaluation of the models m and m′ with the use of the function q, is realized by the
agent in the environment with the use of the operation X. The realization of the
strategy s results in the new, changed environment: v′ = X(s, v). The realization
of a strategy may take place through calling (performing) an appropriate event or
a series of events in the environment (e.g., the event ev3, Fig. 3.4).
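The scenario above can be condensed into a minimal sketch (an assumed reading of the M-agent loop, not the author's implementation; the environment, the two strategies and the energy-based goal function are invented for illustration):

```python
def I(v):
    # Observation: build the abstract model m of the environment v,
    # keeping only the features significant for the agent.
    return {"energy": v["energy"]}

def s_rest(m):
    return {"energy": m["energy"] + 1}   # anticipated effect of resting

def s_work(m):
    return {"energy": m["energy"] - 2}   # anticipated effect of working

def q(m, m_next):
    # Goal function: evaluate the anticipated change, q(m, m').
    return m_next["energy"] - m["energy"]

def X(s, v):
    # Realization of the chosen strategy changes the environment: v' = X(s, v).
    v_next = dict(v)
    v_next["energy"] = s(I(v))["energy"]
    return v_next

def step(v, strategies):
    m = I(v)                                          # observe
    best = max(strategies, key=lambda s: q(m, s(m)))  # anticipate and evaluate
    return X(best, v)                                 # realize the best strategy

v = {"energy": 5, "noise": 1}
v_next = step(v, [s_rest, s_work])
print(v_next["energy"])                  # -> 6 (resting scores higher)
```

Note that the strategies are applied to the model m only; the environment itself changes solely through the operation X, exactly as in the scenario above.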
In order to emphasize the fact that the agent in its activity uses and processes the model
(m) of its surrounding environment, as well as to distinguish the present approach to
the structure of an agent from other approaches, we will refer to it as the architecture
of the M-agent. This informal approach and the description of the architecture of the
M-agent, which will be presented later, are based on the concepts formed in the years
1990-1999 and presented in the works [30, 36, 38, 45, 61].
Other approaches to a more formal description of an agent, or those associated with
practice, have been developed alongside the concept of the M-agent. They include,
inter alia, such concepts as Agent-0 [161] or the BDI architecture [154]. Based on the
results of the research on the development of the agent's architecture, numerous
studies have been carried out on the application of the concept of an agent in different
fields [73, 74].
For the purpose of a more precise and accurate description of agent systems, we propose
to introduce the following terms and notions for this area:
a is an agent (any definite active element of the system which has been considered
informally as an agent, meeting the postulates from Sect. 3.3).
A is the set of agents existing at a given moment in the system, later also referred to as
the configuration of agents or the society of agents (a ∈ A).
Various kinds of agents often exist in the system. While defining and analyzing such
societies of agents, it is convenient to introduce a division, or rather a grouping,
of agents into kinds, later referred to as types of agents. It gives the possibility of
defining the agents more easily, using a description of the common features of the
agents of a given type. What is more, sometimes it is necessary to consider a given
agent (in the society of agents) in relation to its identity in a group. Therefore, the
notation a_i^g denotes the agent i of the type g.
Let us introduce the following definition of the notion of an environment connected
with the activity of agents:
v - the environment of agents. We will refer to the environment v as a triad:
v = (E, A, C), (3.1)
where
E is the space,
A is a valid set (configuration) of agents operating (existing) in the environment,
C is the relationship (connection) between the space E and the agents which belong
to the configuration A. For example, the relationship C may define the present
location of agents in the space for certain multi-agent systems.
V is a set of environments considered in the system (v ∈ V).
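The triad above may be sketched as a simple data structure. This is only an illustrative sketch; the names Environment, space, agents and connection are assumptions of the sketch, not notation from the text:

```python
from dataclasses import dataclass

# Hypothetical sketch of the environment triad v = (E, A, C).
@dataclass
class Environment:
    space: set        # E - the space (here: a set of locations)
    agents: set       # A - the configuration of agents existing in v
    connection: dict  # C - relation between E and A, e.g. agent -> location

    def location_of(self, agent):
        # the relationship C may define the present location of an agent
        return self.connection[agent]

# usage: two agents placed in a two-node space
v = Environment(space={"n1", "n2"},
                agents={"a1", "a2"},
                connection={"a1": "n1", "a2": "n2"})
```

The relation C is represented here as a plain mapping, which is sufficient for systems where C only records agent locations.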
56 3 M-agent
Considering the above notions, we may introduce the following definitions, where
the agent a is defined as follows:
a = (M, Q, S, I, X, m, q, s), (3.2)
where
m is a model of a given environment (v) in which the agent a exists. This envi-
ronment is referred to as the surrounding environment of a given agent a and
includes, according to the classification presented in Fig. 3.2, the proprietary
environment and the neighbourhood; if it is required because of the character
of a specific application, it may also include the surroundings as well as the
environment.
M is a set of models (also referred to as a configuration of models) of environments
which may be surrounding environments of a given agent a. The models included
in the set M are within the scope of knowledge of the agent a and may be used
by it.
M is information stored in the memory of the agent a and used for the construction
of models, which constitutes a kind of knowledge of the agent.
s is a strategy defining the activities of the agent a:
s : M → M, m′ = s(m). (3.3)
q is a goal of the agent a, defined as a function:
q : M × M → ℜ, q(m, m′) ∈ ℜ, (3.4)
where
ℜ is the set of real numbers,
m is a model of the environment in which a given agent exists at the moment,
m′ is a model of the environment which the agent intends to achieve (and as a
result to exist and operate in it) realizing the goal q. The model m′ corresponds
to an environment which can be created out of the environment described with
the use of the model m, as a result of anticipated changes caused by the
agent's activity (the application of the strategy s),
Q is a set (also referred to as the configuration) of possible goals of the agent a,
which usually has the form of a set of goals q (q ∈ Q) with a certain defined
order. In further considerations, for simplification, the agent will possess only one
goal q; therefore, the set Q will be a singleton. However, it is possible to consider
agents which possess more goals (with an appropriate hierarchy).
3.4 The Model Based on the M-agent Architecture 57
X(s, m, v) = v′, (3.5)
where
v′ is an environment created as a result of the activity of the agent a realizing
the chosen strategy s in the environment v on the basis of the analysis of the
model m.
The above dependencies are schematically illustrated in Fig. 3.5.
Based on the notions introduced, the algorithm of the autonomous agent may be
presented as follows:
1. Start is the moment of creation (generation) of the agent a = (M, Q, S, I, X,
m, q, s), and then placing this agent in the specific environment v = (E, A, C).
At this stage, the initial state of the relationship between the agent and the
space E is also set (e.g., the initial placement of the agent is set). Continue to step 2.
2. The observation of the surroundings and the construction of the model m of the
surrounding environment (v) with the use of the operation I: m = I(v, M). Continue
to step 3.
3. The evaluation of the possibility of realizing the goal with the use of strategies
available for a given agent and choosing the best strategy realizing the goal q,
i.e., the optimal strategy s*. A choice of the optimal strategy s* is made with
the use of the evaluation to what extent the application of the strategy s* in a
specific situation (defined with the use of the model m) leads to the realization of
the goal q. The search for the optimal strategy s* may come down to the search
for the (global) extremum.
Continue to step 4.
4. The realization of the chosen strategy s* in the specific environment v: X(s*, m, v)
= v′. Continue to step 2.
The above algorithm is illustrated in a schematic form in Fig. 3.5.
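The four-step loop above may be sketched in a few lines of code. This is a minimal toy instantiation: the environment is a single number, and the functions I, X, q and the strategies are illustrative assumptions standing in for the operations of the formal model:

```python
# Minimal sketch of the autonomous agent's loop:
# observe (I), evaluate strategies against the goal q, act (X), repeat.

def run_agent(v, strategies, q, I, X, steps):
    for _ in range(steps):
        m = I(v)                                            # step 2: build model m
        s_opt = max(strategies, key=lambda s: q(m, s(m)))   # step 3: optimal s*
        v = X(s_opt, v)                                     # step 4: realize s* in v
    return v

# toy instantiation: the agent tries to drive the environment value up
v_final = run_agent(
    v=0,
    strategies=[lambda m: m - 1, lambda m: m + 1],
    q=lambda m, m2: m2 - m,     # goal: prefer larger values
    I=lambda v: v,              # observation: the model equals the environment
    X=lambda s, v: s(v),        # execution: apply the strategy to the environment
    steps=3)
```

Choosing s* as an argmax over q(m, s(m)) corresponds to the search for the extremum mentioned in step 3.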
Fig. 3.6 Schema of the multi-profile agent's activity. A particular profile (e.g. the profile n) represents the role which a given agent plays at a given moment in a particular multi-agent system
The formulation of the resulting strategy may involve connecting chosen elements
of the strategies from individual profiles, or it may be a selection of one strategy from
the agent's profiles (a strategy from the currently dominating profile).
If we consider the agent's activity as different roles played by an agent in a particular
environment according to specific circumstances, we may notice that a given
profile is responsible for one of the roles mentioned above.
This approach enables great flexibility in the agent's activity for the purpose
of adjusting its activity to specific tasks realized in the environment, i.e., playing
different roles in particular situations. It makes it possible to use multi-agent systems,
inter alia, in multi-criteria optimization.
The concept of the multi-profile agent may also be used for modelling more
complex agent behaviours, such as the agent's activity on the basis of its emotional
states, for which individual profiles are responsible.
The above-presented architecture of an agent may be used for the description of the
agent's behaviour in different situations taking place in a given environment.
Fig. 3.7 Schema illustrating an example of planning the agent's activity (the agent builds successive models m0, m1, m2, m3 of the environment with the operation I, evaluates the strategies s1, s2, s3 against the goal q, and realizes the chosen strategy in the environment with the operation X)
Fig. 3.8 Schema illustrating the simple process of negotiation between the agent A and the agent B (the models m0A and m1A of the agent A, their counterparts (m0A)B and (m1A)B built by the agent B, and the answer: yes / no)
In the above example, the process of planning is realized by one particular agent.
When a plan of activity is created by a group of agents, it is necessary to consider
the process of exchanging information between them, i.e. the process of negotiation.
An example of the process of negotiation may be described with the use of the
M-agent architecture. The simple process of negotiation between two agents (the
agent A and the agent B) is shown in Fig. 3.8.
Let us accept that the agent A considers the application of the strategy sA, which
transforms the model m0A into the model m1A, as advantageous to itself. The agent,
possessing the knowledge of the existence of another agent (e.g. an agent cooperating
with the agent A) in the environment, referred to as the agent B, informs that agent
about its intentions.
It is possible that B is a different kind of agent from the agent A and is not familiar
with the strategy sA (information about the strategy sA does not tell it anything);
however, it can receive and understand (in its own way) changes in the
environment which are to be realized by the agent A (i.e. the models m0A and m1A),
because the agent A sends information about the models m0A and m1A to
the agent B. The agent B receives the information and builds two models in its mind
(with certain modifications resulting from the way it understands them): the initial
model (m0A)B and the destination (target) model (m1A)B.
Afterwards, it compares these two models with the use of its objective function
qB. If the comparison result is positive for it (which means that the changes in the
environment proposed by the agent A are advantageous to it), it agrees that the agent
A will realize the chosen strategy sA. However, if these changes are disadvantageous,
it sends information that it does not agree with the activity proposed by the agent A
(Fig. 3.8).
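The accept/reject decision of the agent B described above may be sketched as follows. The translation step and the toy goal function are illustrative assumptions; models are plain numbers:

```python
# Sketch of the simple negotiation: the agent A announces the change m0A -> m1A
# by sending both models; the agent B rebuilds them in its own terms and accepts
# only if its own objective function qB rates the change positively.

def b_decides(m0A, m1A, translate, qB):
    m0B = translate(m0A)      # (m0A)B - B's understanding of the initial model
    m1B = translate(m1A)      # (m1A)B - B's understanding of the target model
    return qB(m0B, m1B) > 0   # the answer: yes / no

# toy example: B measures environments by a single value and prefers growth
answer = b_decides(m0A=2, m1A=5,
                   translate=lambda m: m,        # B understands A directly here
                   qB=lambda m0, m1: m1 - m0)
```

Note that B never needs to know the strategy sA itself; it judges only the announced change of the environment, exactly as in the description above.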
It is possible to conduct negotiation on fixing a common plan. Such a case is
shown in Fig. 3.9. Here, similarly, the agent A determines whether the strategy sA it
chooses is advantageous to it and informs the agent B about it.
Fig. 3.9 Schema illustrating the process of negotiation of the common plan
The agent B compares both models it received from the agent A - the initial
(issue) model (m0A)B and the target (destination) model (m1A)B - and after evaluating
the models (changes) with the use of its objective function qB, states that these
changes are not advantageous from its point of view. However, it goes further and
finds that if it applies its strategy sB to the model (m1A)B, then the cumulative changes
in the environment are positive from its point of view (due to the evaluation, with the
use of the objective function qB, of the comparison between the models (m0A)B and
m2B), and it sends information about the application of the strategy sB to the agent A
(more precisely, the information about the models (m1A)B and m2B).
The agent A receives information about the intentions of the agent B, and in
particular it creates a model of the suggested changes in the environment - the model
(m2B)A - which is evaluated with the use of the objective function qA. If the changes
in the environment are advantageous to both agents, i.e. the evaluations with the use
of the objective function qA(m0A, (m2B)A) by the agent A and qB((m0A)B, m2B) by
the agent B give a satisfying result, then the agents realize their strategies. In effect,
the strategies sA and sB - a negotiated common plan (and goals) - are realized by
the agent A and the agent B (the strategy sA by the agent A and the strategy sB by
the agent B).
As a result of the negotiations, a plan of activity is created which will be realized
by a pair of agents and further by a group of agents within the multi-agent system.
It allows for cooperation of agents for the purpose of group problem solving.
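The common-plan negotiation described above may be sketched as follows. All names are illustrative assumptions; models are plain numbers, and B's repertoire of strategies is reduced to a list of candidate functions:

```python
# Sketch of negotiating a common plan: if B finds A's proposal m0A -> m1A
# disadvantageous, it searches for its own strategy sB such that the cumulative
# change m0A -> sB(m1A) is positive for B; A then re-checks the combined result
# with its own objective function qA before both strategies are realized.

def negotiate_plan(m0, m1, strategies_B, qA, qB):
    if qB(m0, m1) > 0:
        return ("accept", m1)            # B agrees to A's plan as proposed
    for sB in strategies_B:              # B searches for a helpful follow-up
        m2 = sB(m1)
        if qB(m0, m2) > 0 and qA(m0, m2) > 0:
            return ("common plan", m2)   # both agents realize sA and sB
    return ("reject", m0)

result = negotiate_plan(
    m0=5, m1=3,                          # A's proposal lowers the value ...
    strategies_B=[lambda m: m + 4],      # ... but B can raise it afterwards
    qA=lambda a, b: b - a,
    qB=lambda a, b: b - a)
```

The combined check with both qA and qB mirrors the requirement that the resulting plan be advantageous to both agents.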
The model of the M-agent may also be used for the description of the agent's process
of learning. In particular, we may define two ways of realizing the process of learning,
which seem very useful for improving the functionality of an agent in the
environment.
3.6 Extensions and Applications of the M-agent Concept 63
These include: the method of the agent's learning from its own mistakes and the
method of the agent's learning through imitation. Obviously, these ways of learning do
not exhaust all possibilities of the agent's learning, and other methods of learning, which
are popular and developed in the field of artificial intelligence, may also be used
successfully.
The agent's process of learning based on its mistakes may be described with the
use of a scenario realized in the following stages:
The agent chooses the optimal strategy s* based on the comparison between the
models m and m′ and realizes this strategy in the surrounding environment with
the use of the operation X (Sect. 3.4).
After realizing this strategy, the agent builds the new model m″ based on the
observation of the surrounding environment.
The agent compares the models of the environment: the one it wanted to obtain
(m′) and the one which was created in reality (m″).
If the difference between the models m′ and m″ is too great (above a certain
estimated level), then the process of learning from mistakes L is activated, which
modifies the sets: the knowledge of the agent M and the set of strategies S,
respectively.
The process of learning from its own mistakes is illustrated in a schema in Fig. 3.10.
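The stages above may be sketched as a single check-and-learn step. The distance measure, the threshold value, and the representation of the agent's knowledge as a list are illustrative assumptions:

```python
# Sketch of learning from mistakes: after acting, the agent compares the model
# it wanted (m') with the model it actually observes (m''), and when the
# difference exceeds a threshold it runs a learning step L that revises its
# knowledge.

def learn_from_mistakes(m_wanted, m_observed, knowledge, threshold=1.0):
    error = abs(m_wanted - m_observed)
    if error > threshold:
        # the learning process L: here it simply records the surprising outcome
        knowledge.append(m_observed)
    return knowledge

# the agent expected 10 but observed 4 - a large mistake triggers learning
knowledge = learn_from_mistakes(m_wanted=10, m_observed=4, knowledge=[])
```

In a fuller implementation, L would revise the set of models M and the strategies S rather than merely record the outcome.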
The process of learning through imitation is based on the fact that the agent may
observe the behaviour of other agents and particularly the changes which result
from their activity (the realization of their strategies) in the environment (Fig. 3.11).
The process of learning through imitation takes place when a given agent observes
the surrounding environment and the events that happen in this environment. This
process may be realized in the following stages:
Fig. 3.10 Schema illustrating the process of the agent's learning based on mistakes
Fig. 3.11 Schema illustrating the process of the agent's learning through the imitation of another agent's activity
A given agent observes an event (e.g., an event caused by realizing a certain strategy
by a different agent), i.e. changes in the environment that cause a transition from the
environment v to the environment v′.
On the basis of the environments v and v′, the agent builds the models mA
and (mA)′, which correspond to them.
Afterwards, with the use of the appropriate procedure of learning through imitation,
it tries to choose strategies (one or a few) that can be used to transform the model
mA into (mA)′.
The strategy s?A that was found (or constructed) is memorized, which completes a
given process of learning through imitation. Afterwards, this strategy may be used
by a given agent in further actions.
The above scenario of a learning process through the imitation of other agents'
behaviour gives possibilities of disseminating skills and experience between the
agents in the multi-agent system.
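The search for the strategy s?A may be sketched as follows. The candidate repertoire and the exact-equality test are illustrative assumptions; a real agent would compare models approximately:

```python
# Sketch of learning through imitation: the agent observes a transition
# v -> v', builds the models mA and (mA)', and searches its repertoire of
# candidate strategies for one that reproduces the observed change.

def imitate(m_before, m_after, candidates):
    for name, s in candidates.items():
        if s(m_before) == m_after:
            return name          # the found strategy s?A is memorized
    return None                  # no known strategy explains the change

# the agent observed the environment value change from 3 to 6
found = imitate(m_before=3, m_after=6,
                candidates={"add_one": lambda m: m + 1,
                            "double":  lambda m: m * 2})
```

If no candidate explains the change, the agent may instead try to construct a new strategy, as the scenario allows.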
Fig. 3.12 Schema illustrating the process of remembering conclusions by an agent from the process
of predicting its activity
The operation R constitutes a kind of the agent's memory, which makes it possible to
use the history of the agent's activity to map out its strategy of activity.
Consequently, during the agent's activity, a change (evolution) of the stored
information represented by M takes place, and consequently M may be considered
as the state of the agent at a particular moment.
The architecture of the M-agent we suggested may be used for the classification of
agent systems from different points of view.
In particular, when we use the properties of the environment v for the classification,
we may distinguish the agent systems in the following way:
If the environment v is cyberspace, then the agent system consists of software
agents (also referred to as mobile software agents).
If the environment v is real space, then we may deal with mobile robots with
a built-in (embedded) agent operating, e.g., in the processor of a robot (a so-called
embodied agent).
From the point of view of the complexity of the model m, we may consider the
following kinds of agents constituting the elements of the system:
If m is a simple model realized on the basis of a finite-state machine, then we deal
with a so-called reactive agent.
If the model m is more complex, and in particular it takes into consideration the
surrounding environment and other agents (if they exist in this environment), then
we deal with a so-called cognitive agent.
If we deal with the agent a whose model m contains the surrounding environment and the
existence of the agent a itself, then we deal with a cognitive agent
that is aware of its existence. This kind of agent is referred to as a deliberative
agent.
Similarly, other divisions may be realized on the basis of different elements of the
M-agent architecture.
Chapter 4
The Agent System for Balancing
the Distribution of Resources
Abstract This chapter deals with the agent's application in practice. The system
of balancing the resources in a multi-processor environment is presented. It is a very
good illustrative example of the application of multi-agent systems, and allows
for the discussion of the main properties of the agent and agent systems.
4.1 Introduction
In this chapter we present the application of the concept of the agent based on the
architecture of the M-agent. This multi-agent system is responsible for the division
of resources in a scattered environment so as to make their distribution as uniform as
possible.
The underlying assumption is that the distribution of resources has to be done
under conditions in which the intensity of production and consumption of the resources
in different places of the scattered environment changes in time and is impossible
to predict.
The problem of dynamic division of resources is the subject of numerous theoretical
as well as practical studies [60, 110, 111, 142]. It is also connected with the problem
of division of resources known as the transportation domain or the supply chain [97,
112, 113, 143]. The general form of this task may come down to different types of
practical applications, inter alia, to the balancing of the server load in cyberspace or to
task processing in the cloud. Different variants of the problem are described in the
following papers: [33, 60, 133, 134, 188].
As was mentioned in the previous chapters, in order to define the agent system it is
necessary to specify:
The environment of the agents' activity
Agents (of different kinds)
Springer International Publishing Switzerland 2015 67
K. Cetnarowicz, A Perspective on Agent Systems, Studies in Computational
Intelligence 582, DOI 10.1007/978-3-319-13197-9_4
68 4 The Agent System for Balancing the Distribution of Resources
Fig. 4.1 Schema illustrating the structure of a graph constituting the environment of the agents'
activity
The relationships between the agents and the environment and, when necessary,
mutual relationships between the agents.
Briefly speaking, in order to create a project of the agent system it is necessary
to specify the structure of the system, i.e. the environment and the agents and the
relationship between them.
The environment is defined as follows (Fig. 4.1):
The environment has the form of a graph consisting of nodes (T - a set of nodes)
and edges (B - a set of edges). The edges connect the neighbouring nodes - direct
connections. What is more, connections can be made with
all nodes (the bus), which enables two optionally chosen nodes to send resources as
well as pieces of information directly between themselves; however, it is accepted
that the efficiency of such a transfer is lower than sending through the connections
between the neighbouring nodes.
Through these connections the agents may also relocate in the environment.
The resource we consider exists in the nodes of the environment. The quantity of
this resource in the node t (t ∈ T) is denoted by the real number rt.
4.2 The Agent Environment of Balancing the Distribution of Resources 69
Resources may be transferred between the nodes with the use of the direct connections between the neighbouring nodes or with the use of the bus.
The quantity of the resource in individual nodes varies. The resource in a given
node is produced with varied intensity and similarly consumed; however, both
processes change independently of each other. Neither the intensity of production
(nor its changes in time) nor the intensity of consumption is known.
For each node, the minimum (rtmin for the node t ∈ T) and the maximum (rtmax for
the node t ∈ T) quantity of the resource which can be held in the node is specified.
In the environment there is no centre which would store information about
the quantity of the resource in individual nodes. Information about the quantity
of the resource in a given node may be accessed locally and sent, if necessary,
between the neighbouring nodes.
As was mentioned in the beginning, the system is responsible for keeping (as
much as it can) the quantity of the resource in each node within certain limits
(rtmin ≤ rt ≤ rtmax).
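The node model just described may be sketched as a small class. The class name, the field names, and the concrete rates are illustrative assumptions:

```python
# Sketch of a node of the environment: each node holds a resource quantity r
# with limits r_min <= r <= r_max, changed locally by production and
# consumption whose intensities are not known to the system in advance.

class Node:
    def __init__(self, r, r_min, r_max):
        self.r, self.r_min, self.r_max = r, r_min, r_max

    def step(self, produced, consumed):
        # production and consumption vary independently of each other
        self.r += produced - consumed

    def underflow(self):
        return self.r < self.r_min

    def overflow(self):
        return self.r > self.r_max

n = Node(r=5.0, r_min=2.0, r_max=8.0)
n.step(produced=0.5, consumed=4.0)   # heavy consumption this step
```

The `underflow` and `overflow` tests correspond to the conditions under which the agents described below become active.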
The task presented in this general form may be a model for real problems with
the division of resources such as:
distribution of products on the market under economic instability and changes in
demand and supply;
control of a scattered operating system under changing conditions of operation
and demand;
division of tasks in multiprocessor systems with irregularly appearing different types
of calculations;
control of the realization of operations in multi-server SOA systems;
division of different kinds of resources in scattered structures (the cloud).
The method for solving the problem of balancing the resources is a transfer of an
appropriate quantity of a given resource between a pair of nodes chosen in such a
way that one has an underflow of the resource and the other has an overflow.
The task of the agent system comes down to the search for the appropriate nodes:
the sender and the receiver.
A dedicated agent system is used for realizing this task. The system
includes agents placed in the nodes of the graph which are capable of relocating between
these nodes.
In order to define the agent system, apart from the environment described above, it
is necessary to specify:
Fig. 4.2 Schema presenting the node structure of the graph which constitutes the environment of
agents
Fig. 4.3 Schema illustrating the activity of the agents Ag1 and Ag2
Agent of type zero (Ag0). This agent resides permanently in a node of the graph
(its home node). It monitors the quantity of the resource in this node and, when
necessary, creates agents of the types Ag1 and Ag2.
Agent of type one (Ag1). This agent is created by the agent Ag0 and is able to
travel between the nodes. It is responsible for searching for a node which has
an overflow of the resource and which could send it to the home node of the agent
Ag0 which created a given agent of type Ag1.
Agent of type two (Ag2). It is created by the agent Ag0 and can travel between the
nodes. The agent is responsible for searching for a node which has an underflow
of the resource and could take it from the node with an overflow.
The operation of the whole system results from the definition of the functions of the different
types of agents. A scenario of the activity of the agent Ag1 may be presented in the
following stages:
1. The agent Ag0 states that there is an underflow of the resource (which is below
the lower limit) in the home node in which it resides.
2. The agent Ag0 checks the amount of the resource in the neighbouring nodes. If
it finds the one that has an overflow of the resource and is able to send it, then
the agent negotiates the amount of the resource to be given and then a process
of sending takes place. However, if there is no potential giver of the resource
among the neighbouring nodes, the agent starts searching for the potential giver
among the nodes placed beyond the neighbouring nodes.
3. The agent Ag0 generates (creates) the agent Ag1 and sends it to the environment,
to other nodes, in order to search for a node which has an overflow of the resource
and is able to send it.
4. The agent Ag1 travels through the environment relocating from one node to
another in order to find the potential giver of the resource.
5. If it finds the appropriate node which is the potential giver, it starts the process of
sending the negotiated amount of the resource to its home node.
Similarly, the scenario of the activity of the agent Ag2 includes the following
stages:
1. The agent Ag0 ascertains that there is an overflow of the resource (the level of the
resource is above the upper limit) in the home node.
2. The agent Ag0 checks the amount of the resource in the neighbouring nodes. If
it finds there the one that has an underflow of the resource and is able to take
a certain amount of it, then the agent negotiates the amount of the resource to
be given, and afterwards a transfer takes place. However, if there is no potential
receiver of the resource in the neighbouring nodes, the agent starts searching
for the potential receiver in the nodes placed beyond the neighbouring nodes.
3. The agent Ag0 generates (creates) the agent (of type) Ag2 and sends it to the
environment to other nodes in order to search for the nodes with an underflow of
the resource.
4. The agent Ag2 travels through the environment moving from one node to another
in order to find the potential receiver of the resource.
5. If the agent Ag2 finds the appropriate node which is the potential receiver of the
resource, it starts the process of sending the negotiated amount of the resource
from the home node of the agent Ag0.
As a result of the agents' activity described above, the resource is relatively
uniformly distributed in the nodes of the graph.
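The first stages of the Ag0 scenario may be sketched as follows. Nodes are plain dictionaries here, and the rule for the negotiated amount (the smaller of the giver's excess and the home node's deficit) is an illustrative assumption:

```python
# Sketch of the Ag0 scenario: on detecting an underflow in its home node,
# Ag0 first looks for a giver with an overflow among the neighbouring nodes
# and transfers a negotiated amount; only if none is found does it create a
# travelling agent Ag1 to search beyond the neighbourhood.

def ag0_handle_underflow(home, neighbours):
    if home["r"] >= home["r_min"]:
        return "ok"                        # no underflow, nothing to do
    deficit = home["r_min"] - home["r"]
    for node in neighbours:
        excess = node["r"] - node["r_max"]
        if excess > 0:                     # a potential giver next door
            amount = min(excess, deficit)  # the negotiated amount
            node["r"] -= amount
            home["r"] += amount
            return "transferred"
    return "spawn Ag1"                     # search beyond the neighbourhood

home = {"r": 1.0, "r_min": 3.0, "r_max": 8.0}
neighbours = [{"r": 10.0, "r_min": 3.0, "r_max": 8.0}]
outcome = ag0_handle_underflow(home, neighbours)
```

The symmetric overflow case (creating an Ag2) follows the same shape with the comparisons reversed.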
On the basis of these scenarios, the agents (of the types) Ag1 and Ag2 are generated
by the agent Ag0 in a given node, which is the beginning of their existence (life).
However, the end of existence (life) of an agent (of type Ag1 or Ag2) takes place in
two cases:
1. The agent Ag1 finds the node of the giver (or the agent Ag2 the node of the receiver)
of the resource and the negotiated amount of the resource is sent between the appropriate
nodes; then a given agent (of type Ag1 or Ag2) realizes the operation of self-destruction.
2. A given agent (of type) Ag1 or Ag2 is not able to realize its task (there is an
underflow or an overflow of the resource in the whole graph environment) and
must stop its activity, i.e. perform self-destruction. For this purpose, each agent
of type Ag1 and Ag2 is equipped with a reserve of life energy at the moment of
creation. During each displacement between the nodes, an agent uses up a certain
amount of life energy. If the amount of energy drops to 0 (or below an established
threshold), then the agent is eliminated.
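The life-energy mechanism may be sketched as follows. The per-hop cost and the threshold value are illustrative assumptions:

```python
# Sketch of the life-energy mechanism: a travelling agent (Ag1 or Ag2) pays
# a fixed cost for each displacement between nodes and self-destructs when
# its energy drops to the threshold.

def travel(energy, hops, cost_per_hop=1.0, threshold=0.0):
    for hop in range(hops):
        energy -= cost_per_hop
        if energy <= threshold:
            # hops survived before elimination, then self-destruction
            return hop + 1, "self-destruction"
    return hops, "alive"

# an agent with 3 units of energy cannot complete a 5-hop search
result = travel(energy=3.0, hops=5)
```

This bounds the lifetime of every scout and so keeps the population of travelling agents from growing without limit.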
At the moment of elimination, a given agent (of type Ag1 or Ag2) may inform the
agent of type Ag0 in its home node about the termination of its activity (that it is eliminated).
4.3 Agent System 73
The information may be used for the optimization of the activity of the
system (the assessment of the global state of resources).
Making decisions by a given agent existing in a node depends on what kind of
information it may receive as a result of observation and in what way it actually uses
this information. The extension of the observation process makes it possible to take more
rational decisions about the activities of a given agent. The observation may involve
various areas and allows the agent to gain different information.
As to the area subject to observation, we may consider the following situations:
a given agent existing in the node denoted as wn,m may observe only the node in
which it exists,
the agent may observe the neighbouring nodes: wn−1,m−1, wn−1,m, wn−1,m+1,
wn,m−1, wn,m+1, wn+1,m−1, wn+1,m, wn+1,m+1, as presented in Fig. 4.4.
In the agent system we described, there are many possibilities of getting pieces
of information about the environment that may make the model m more precise,
and consequently allow the improvement of effectiveness of the decision-making
process. Below, we present an overview of some types of observation and ways of
decision-making that seem to be most characteristic. In particular, the agent (of type
Ag0 , Ag1 or Ag2 ) existing in a given node may get the following information:
It has access to information about the amount of the resource in this node, as well
as to the amount of the resource in the neighbouring nodes.
Fig. 4.4 Schema illustrating the structure of the neighbouring nodes and the use of the models m and
m′ by the agents
It may observe the number of agents (the agents of type Ag1 or Ag2) existing in
this node at a given moment, or observe the number of agents existing at a given
moment in the neighbouring nodes.
It has the possibility of observing some features of agents (the agents of type Ag1
or Ag2) existing in this node at a given moment (e.g., the amount of life energy of
particular agents).
Similarly, the agent existing in a given node has the possibility of observing chosen
features of agents existing at a given moment in the neighbouring nodes.
The above ways of gaining information from the surrounding environment enable
the agent to make the following decisions:
The agent's decision about the direction of the displacement. The agent tries
to relocate in the direction of the part of the environment that, according to its assessment,
has more (for the agent of type Ag1) or less (for the agent of type Ag2) of a given
resource. For instance, the agent Ag1 ascertains that there is more resource in the
node wn−1,m+1 than in the node wn,m (Fig. 4.4). Even if this difference is slight
(it does not guarantee the possibility of getting the resource), the agent accepts
the fact that in the farther nodes in that direction (wn−2,m+2, wn−2,m+1, wn−1,m+2)
the amount of the resource will be even larger. Therefore, having compared the
models m and m′ (Fig. 4.4), a decision on the displacement from the node wn,m to
the node wn−1,m+1 is made.
The operation of the agents' meeting. The realization of the operation of
meeting between the agent of type Ag1 and the agent of type Ag2 may take place
according to the following scenario. The agent Ag1 searches for a node which
could give a certain amount of the resource, and the agent Ag2 searches for a
node which could take an overflow of the resource. If these two agents meet in a
given node, they may exchange information and realize the negotiated transfer of
the amount of the resource between their home nodes, thus realizing their tasks
(Fig. 4.5). As a result, the node identifier of the potential giver (sender) of the
resource is connected with the node identifier of the receiver, which allows the
transfer to start. On the basis of these pieces of information, the agents
of types Ag1 and Ag2 may make a decision about the realization of the operation
of transfer of the resource that is satisfactory for both of them.
The observation of agents in a given node. The agent may observe the number
of agents of a particular type which are waiting in this node to be allowed
to move to other (neighbouring) nodes (Fig. 4.4). If a given agent notices in a
given node (in which it exists at that particular moment) a relatively large number
of agents of type Ag1 searching for the resource for their home nodes, it may
assume that a relatively large number of agents of that type circulate in the whole
environment, and as a result an intensive search for the resource takes place
in the whole environment, which indicates an underflow of the resource in the
environment. Similarly, the existence of a large number of agents of type Ag2
may indicate that there is an overflow of the resource in the whole environment.
On the basis of this information, the agent of type Ag0 may decide whether it
should create new agents searching for a node-giver (or receiver) of the resource.
4.4 Information in the Agent System 75
Fig. 4.5 Schema illustrating the operations of meeting between the agents of type Ag1 and Ag2 .
The agents of type Ag0 are not marked in the figure
The observation of agents in the neighbouring nodes. The agent may observe
the number of agents of a particular type which are waiting in the neighbouring
nodes (of the node in which the agent exists) for further displacement in the
environment (Figs. 4.2 and 4.4). If in the neighbouring nodes in a particular
direction (e.g., for the node wn,m in the neighbouring nodes wn−1,m, wn−1,m+1,
wn,m+1) there is a relatively large number of agents of type Ag1, it may be assumed
that there is an underflow of the resource in the nodes placed in that direction, and
it is pointless to search for the resource there. On the basis of this information, the
agents of types Ag1 or Ag2 may make decisions in which direction they should
continue their search.
The observation of agents' parameters in a given node. The agent at the
moment of its creation is equipped with a certain amount of life energy. During
every transition between the nodes it loses a certain portion of this life energy. As
a result, the amount of energy the agent possesses at a given moment is connected
with the distance it has covered (the number of displacements between the nodes).
In particular, a relatively small amount of energy indicates the long
way the agent has travelled in the environment. A given agent may observe not only the
number of agents of a particular type in a given node but also certain features of
these agents, in particular the amount of life energy they possess. Therefore,
if a given agent observes in a given node a large number of agents of type Ag1
which additionally possess a small amount of life energy, it may conclude that
they have travelled a long way in search of the resource and have not found it,
because there is an underflow of the resource in the whole environment. Similarly,
if there is a relatively large number of agents of type Ag2 with a small amount
of life energy, then probably there is an overflow of the resource in the whole
environment. On the basis of this information, the agent of type Ag0 may decide
whether it should create new agents searching for the node-giver (or receiver)
of the resource.
The observation of agents parameters in the neighbouring nodes. The agent
may also observe agents and their amount of life energy in the neighbouring
nodes. Therefore, if there is a large number of agents of type Ag1 with a small
amount of life energy in certain neighbouring nodes, then it may be assumed that
there is an underflow of the resource in farther nodes in the direction of this kind
of neighbouring nodes. Conversely, a similar observation of agents of type Ag2
may indicate an overflow of the resource in these nodes. This information allows
agents of type Ag1 and Ag2 to make a decision about the direction of their search.
The above possibilities of acquiring different kinds of information contained in
the environment may be used by the agent (particularly Ag1 or Ag2) for making
the right decisions, as well as for the creation of more complex mechanisms for
managing groups of agents.
The cases illustrating the agent's capability to observe the environment and use
the information for decision-making may be generalized, e.g., taking advantage
of the fact that the agent observes not only resources in the environment but also
other agents.
Generally speaking, we may accept that agents circulating in the system constitute,
from a given agent's point of view, a kind of resource in the environment (for which
they can be a source of information). What is more, relocating agents may bring
information from different remote corners of the environment, and as a result may
make it easier for the agent to acquire information not only about the local but also
the global state of the environment, and thus of the whole agent system.
4.5 Stabilization and Scaling of the Multi-agent System

A class of open problems waiting to be resolved includes the stabilization and scaling
of the multi-agent system. These are more general issues, concerning most of the agent
systems in which the number of currently operating agents changes. In the
case we consider, the problem may be resolved with the use of interesting concepts
of special mechanisms of the agent systems. In the context of these considerations,
the problems of stabilization and scaling of the agent systems may be formulated as
follows:
The problem of stabilization is connected with the fact that the number of agents
operating in the multi-agent system changes in time, which is a normal
phenomenon (and even desirable) in this system. However, at the same time
the stabilization of the system requires a limited number of agents (in total, and
of certain types). Similarly, the agents (of a certain type and in total) should
not disappear from the system (unless such a scenario for some types of agents is
foreseen in a given situation).
The problem of scaling in the case of the multi-agent system involves choosing
the adequate number of agents according to the complexity of a task to be
performed. The agent system allows for more precise scaling that consists in
choosing the adequate number of agents of particular types and establishing the
size of cooperating groups of agents. The specification of this size should be done
on the basis of the complexity and character of the tasks which the agent systems
are entrusted with. It gives new possibilities of scaling: a kind of functional
scaling. It should be emphasized that the agent acting in the environment uses the
resources of the system for the realization of its tasks (computing power and the
memory of the processor). If there are more agents in the system than needed for
the realization of entrusted tasks then the excessive agents use the resources of the
system mainly (or even solely) for the tasks connected with their own existence
(they do not contribute to performing tasks the system is entrusted with).
The solution to the problem of keeping the minimal number of agents of a particular
type is to keep an adequate number of agents which can generate them. For instance,
in the system we analyse, exactly one instance of the agent of type Ag0 exists in each
node of the environment, and their number remains constant while the system is
operating. The agent of type Ag0 generates agents of other types (Ag1, Ag2) whenever
possible, which guarantees that they will not totally disappear from the system.
There are three types of agents in the system: Ag0, Ag1, Ag2. The agents of type Ag0
exist in a number established at the moment the system was created (equal to the
number of nodes of the environment) and therefore constant, so there is no problem
with an excessive increase in their number. However, the number of agents Ag1 and
Ag2 changes during the operation of the system and may increase excessively, thus
endangering its (at least efficient) operation.
In the example of balancing the resource in the graph environment, the method of
self-liquidation and liquidation of agents by other agents was used. Let us consider a
case of the application of the self-liquidation mechanism when there is an underflow
of the resource in the whole system. Consequently, in individual nodes there is a
local underflow of the resource and in these nodes agents of type Ag1 are generated
to search for the resource in other nodes. On being generated, each agent Ag1 receives
a certain amount of life energy that is used up while conducting the search. This
allows for the self-liquidation of a given agent when the realization of its mission
proves futile, its time is overrun, and its life energy is used up.
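The self-liquidation mechanism can be sketched as follows (the class and its fields are an assumed model, not the book's code): every displacement between nodes costs a portion of life energy, and the agent removes itself once the energy is used up.

```python
# Sketch of energy-driven self-liquidation: each hop between nodes costs
# life energy; when it runs out the agent marks itself for removal.
# Class layout and numbers are illustrative assumptions.

class SearcherAgent:
    def __init__(self, energy=5, hop_cost=1):
        self.energy = energy        # life energy granted at creation
        self.hop_cost = hop_cost    # energy lost per displacement
        self.alive = True

    def move(self):
        """One displacement between nodes; self-liquidate if exhausted."""
        self.energy -= self.hop_cost
        if self.energy <= 0:
            self.alive = False      # mission overran its energy budget

ag = SearcherAgent(energy=3)
for _ in range(3):
    ag.move()                       # after 3 hops the agent is exhausted
```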
We may also use the mechanism of removing the agent by other agents. If the
number of searching agents Ag1 in the system is too large, then their increased
number may be observed in individual nodes. This phenomenon may be observed by
the agent of type Ag0 residing in a given node (and managing this node). This agent
may establish that the number of the agents Ag1 existing in a given node is over a
certain limit, and that this tendency lasts for some time, or is growing. It may signify
that there are too many agents of type Ag1 in the system. The agent Ag0 may make
a decision about the reduction of the number of agents of type Ag1 in a given node
through their liquidation. As was mentioned, the liquidated agents no longer realize
the search for the resource, i.e. the task they were entrusted with.
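The culling decision of Ag0 might look like this (the limit and the persistence window are assumed parameters): liquidation is ordered only when the count of Ag1 agents in the node exceeds a limit over several consecutive observations and is not falling.

```python
# Sketch of Ag0's reduction decision: liquidate surplus Ag1 agents only
# when their count in the node stays above a limit for several
# consecutive observations and is not falling. The limit and window
# length are illustrative assumptions.

def should_reduce(counts_history, limit=4, persistence=3):
    """counts_history: numbers of Ag1 agents observed in the node at
    consecutive moments (oldest first)."""
    recent = counts_history[-persistence:]
    if len(recent) < persistence:
        return False
    above = all(c > limit for c in recent)
    not_falling = all(a <= b for a, b in zip(recent, recent[1:]))
    return above and not_falling
```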
The limitation of the number of the generated agents. Agents are generated in
the system by other agents (apart from the case when a certain number of agents
are generated on running the system).
Agents may be equipped with a mechanism supporting the decision about generating
a new agent of a chosen type; the generating agent should consider:
the need for generating the agent, resulting from the necessity to ensure the
specific functionality of the system required by its tasks,
the assessment of the possibility of generating it, resulting from an assessment
of the current number of agents of a given type and the accessibility of resources
necessary for their operation.
The generating agent may observe and use the features of the environment (including
the state of other agents) for decision-making about the generation of new agents.
This assessment should mainly include the number of agents of a given type, which
may turn out to be difficult to determine exactly, but an approximate estimate of
that number is easier to obtain and sufficient.
Coming back to the example of balancing the resource in the graph environment,
we may consider the operation of the mechanism of limiting the generation of new
agents in a situation when there is an underflow of the resource in the whole system
(and therefore, in individual nodes).
The agent of type Ag0 , which manages a given node, observes an underflow of
the resource and considers the possibility of generating the agent of type Ag1 and
sending it to other nodes to carry out a search for the resource. But, first it analyses
the number and state of agents of type Ag1 sent by other nodes and existing at a given
moment in that node.
If in the node, at a given moment, there is a relatively large number of agents
searching for the resource, it may mean that the number of nodes searching for the
resource is large and, what is more, that there is an underflow of the resource in
the whole system.
If additionally we may observe that the amount of life energy possessed by the
agents is small, it means that they arrive from far away and they have not found the
nodes with the sufficient amount of the resource to load on their way. It confirms
that the amount of the resource in the system is small and therefore generating and
sending another agent of type Ag1 in search of the resource is pointless. It opens up
the possibility of limiting the number of generated agents in the system.
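This spawn-limiting rule can be sketched as follows (the thresholds and the 0..1 energy scale are assumptions): Ag0 refrains from generating another searcher when the node already hosts many Ag1 visitors whose average life energy is low.

```python
# Sketch: Ag0 skips generating a new Ag1 searcher when many exhausted
# searchers already crowd the node, which signals a global underflow of
# the resource. Thresholds are illustrative assumptions.

def should_spawn_searcher(visitor_energies, max_visitors=5, low_energy=0.3):
    """visitor_energies: life-energy levels (0..1) of the Ag1 agents
    currently present in the node."""
    if len(visitor_energies) >= max_visitors:
        avg = sum(visitor_energies) / len(visitor_energies)
        if avg <= low_energy:   # many tired searchers: spawning is pointless
            return False
    return True
```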
In the case of balancing the resources in the system, a new type of agents is
introduced: unemployed agents, denoted as Ag3. Their activity proceeds as follows:
When an agent of type Ag1 or Ag2 has realized the task it was entrusted with
and should be liquidated, it is not liquidated but transformed into an unemployed
agent (Ag3).
The agent Ag3 travels between the nodes and offers its activity in a particular node
as the agent Ag1 or Ag2 .
If the agent of type Ag1 (Ag2 ) is necessary in a particular node, a given agent of
type Ag3 generates the agent Ag1 (Ag2 ) and then undergoes self-liquidation.
The unemployed agent of type Ag3 searches for a task to realize in the system. The
concept of the unemployed agent allows for the transformation of excessive agents
of a given type into agents of another type which are missing in the system at that
moment; in other words, it allows for the re-skilling of agents of one type into agents
of another type (e.g. Ag1 to Ag2) whenever necessary. It makes the management of
agents more efficient without having to generate them frequently; more specifically,
it becomes a tool for scaling the agent system.
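The re-skilling cycle can be sketched like this (the classes are assumptions, and the in-place change of type is a simplification; in the monograph the Ag3 generates the needed agent and then self-liquidates, which amounts to the same effect here):

```python
# Sketch of the unemployed-agent concept: a finished worker becomes an
# unemployed Ag3 instead of being liquidated, and an Ag3 later turns
# into whichever worker type a node is missing. The in-place change of
# type simplifies "generate the new agent, then self-liquidate".

class Agent:
    def __init__(self, kind):
        self.kind = kind            # "Ag1", "Ag2" or "Ag3"

def retire(agent):
    """A worker that finished its task becomes unemployed (Ag3)."""
    agent.kind = "Ag3"
    return agent

def employ(agent, needed_kind):
    """An unemployed agent re-skills into the type the node needs."""
    if agent.kind == "Ag3":
        agent.kind = needed_kind
    return agent

worker = retire(Agent("Ag1"))       # Ag1 finished: becomes Ag3
worker = employ(worker, "Ag2")      # a node lacks Ag2: Ag3 re-skills
```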
The above proposals of ways of stabilizing the number of agents are an example
of the possibilities of constructing appropriate mechanisms for managing agent
systems. Summing up the discussion about the scaling of the agent system, it
may be concluded that it is necessary to create appropriate mechanisms for adjusting
(preferably dynamically) the size and character of the system to the range of the task.
It may be executed in two ways by:
Adjusting the number of agents of a given type to the realization of specific tasks;
increasing (or decreasing) the number of agents of a given type, so that their
number would be preferably adjusted to the size of the task they are to perform. It
is connected with the appropriate generation and removal of agents.
Adjusting the quality of agents acting in the system to the task realized by the
system. The following variants can be considered:
During the operation of the system the agents may receive help from the agents
of a new type. Therefore, it may be considered that a certain number of a new
type of agents can be inserted into the system during the realization of a task.
The proper selection of different types of agents which cooperate to realize a
task is necessary here.
Replacing agents of a given type with the new, improved versions of agents.
This kind of operation may take place without disturbing the task of the system
(successive replacement of agents).
The above-mentioned modifications of the agent system may be realized in the system
with the use of the methods described, and particularly:
The mechanism for generating agents in the system is related to the necessity
of the proper selection of the number and types of generated agents, as well as the
selection of the appropriate way of inserting them into the system.
The mechanism for removing (liquidating) agents from the system is related to
the necessity of considering the problems of the termination of missions performed
by these agents.
The operation of the re-skilling of agents may be realized by the mechanism for
generating and liquidating the appropriate agents, as well as using the concept of
unemployed agents.
The above examples illustrate the possibilities of the agent systems in the field of
control of the type and the number of agents, and matching the system to the needs
of a task realized in real time, hence the ways of their scaling. It is noteworthy that
the concept of the unemployed agent may be used together with the mechanisms
presented above. If there is a need to reduce the number of agents, the unemployed
agents may be removed in the first place. Such an agent does not have any task to
realize which would be connected with a particular functionality of the system, hence
its liquidation will not disturb any functionality of the system. Similarly, if there
is a need to increase the number of agents, then generating unemployed agents is
the simplest solution. These agents should find a task to realize and automatically
contribute to the support of the realization of that functionality of the system which
is necessary at that moment.
It seems that the concept of the unemployed agent, i.e. software searching for
tasks to realize, may become a basic element of large-scale systems operating
in cyberspace.
4.6 Illustrative Results of Research into Balancing the Resources …

A universal system for the simulation of agent systems (the Universal System for the
Simulation of the Systems of the Autonomous Agents, developed at the Department
of Computer Science of AGH University of Science and Technology) was used for
the research on the system.
With the use of this system the simulation of the process of balancing the resources
in the multi-processor structure was carried out. The structure, which was realized
in the form presented in Fig. 4.1, consists of 400 processors constituting the nodes.
The processors (nodes) are identical and may perform the same calculations. Each
processor is connected to 8 processors referred to as its neighbouring processors.
Tasks may be sent and agents may relocate through these connections between the
neighbouring processors. What is more, all processors are connected via the interface
through which tasks may be sent between any two chosen processors, defined as
the sender and the receiver (Fig. 4.1).
The operation of the system defined in this way is the calculation of the task
consisting of Nt partial tasks. The resource, the distribution of which was balanced
in the environment, was made up of partial tasks intended for calculation in respective
processors. During the calculations the number of partial tasks varied since those
tasks were successively generated, which resulted in an increase in the resource. On
the other hand, when the calculations of partial tasks were realized, their number
dropped.
These tasks were generated according to the following scenario:
At the moment of initiating the calculation of the task, a certain number of partial
tasks (referred to as initial, constituting 10 % of the number Nt in the studies) was
generated in chosen processors of the structure (constituting about 10 % of all the
processors).
After completing the calculation of each partial task, with a certain established
probability, a decision was made as to whether to generate a specified number of
new partial tasks in a given processor (node).
The number of generated partial tasks was controlled in the system and after
achieving the value Nt the process of generating new tasks was blocked.
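The scenario above can be sketched as a small simulation (the spawn probability, batch size, and use of Python's random module are assumptions; only the 10 % seeding and the Nt cap come from the text):

```python
# Sketch of the task-generation scenario: ~10 % of the processors receive
# initial tasks (10 % of Nt in total), completed tasks may spawn new ones
# with some probability, and generation stops once Nt tasks were created.
# p_spawn and spawn_count are illustrative assumptions.

import random

def run_generation(n_nodes=400, nt=1000, p_spawn=0.5, spawn_count=2, seed=0):
    rng = random.Random(seed)
    tasks_per_node = [0] * n_nodes
    seeds = rng.sample(range(n_nodes), n_nodes // 10)   # ~10 % of the nodes
    per_seed = (nt // 10) // len(seeds)                 # share of initial tasks
    for node in seeds:
        tasks_per_node[node] += per_seed
    generated = per_seed * len(seeds)
    while generated < nt:                               # cap generation at Nt
        if rng.random() < p_spawn:                      # a completion spawns tasks
            new = min(spawn_count, nt - generated)
            tasks_per_node[rng.randrange(n_nodes)] += new
            generated += new
    return generated, tasks_per_node
```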
In the multiprocessor environment, balancing the tasks is realized through the
agent system, which consists of the following agents:
The agent of type Ag0 which exists only in one instance in each node and is
responsible for managing this node (processor).
The agents of type Ag1 search for the node (processor) which may send a certain
number of tasks.
The agents of type Ag2 search for the node (processor) that is able to receive a
certain number of tasks.
The agents of type Ag3, also referred to as unemployed, which, when they exist
in the system, search for tasks to realize.
The description of the structure of the system is the same as presented in Sect. 4.2.
As a measure of effectiveness (quality) of calculations, the indicator Ef (efficiency)
was accepted in the form:

Ef = Tc / (n · Tr)    (4.1)

where:
Tc is the time of calculating all partial tasks on one computer; Tr is the time
of performing all tasks on the multiprocessor structure; n is the number of nodes
(processors) in the structure.
The indicator Ef is used for defining the efficiency of the calculation of tasks on
the structure and refers to the calculation of a given task consisting of the specified
number of partial tasks (Nt).
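Formula (4.1) translates directly into code; Ef = 1 would mean a perfectly linear speed-up on n processors, while values below 1 reflect the overhead of distribution:

```python
# The efficiency indicator (4.1): Ef = Tc / (n * Tr).

def efficiency(tc, tr, n):
    """tc: time to compute all partial tasks on one computer;
    tr: time on the n-processor structure; n: number of processors."""
    return tc / (n * tr)
```

For example, if one computer needs 400 s and a 400-processor structure finishes in 2 s, then Ef = 0.5.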
A coefficient allowing for real-time observation, during the calculations, of the
evenness of the distribution of the resource (i.e., partial tasks) may also be
introduced. It is constituted by a measure of the irregularity of the distribution of
tasks, defined as the ratio of the maximum number of tasks in the nodes to the
average number of tasks in the structure (Wq):

Wq = Ntmax / Nts   for Nts > 0
Wq = 0             for Nts = 0    (4.2)
where
Ntmax is the maximum (at a given moment) number of tasks in the structure of
nodes, defined by the dependency:

Ntmax = max{ Nti : i = 1, ..., n }    (4.3)

and Nts is the average number of tasks in the structure of nodes, defined by the
formula:

Nts = ( Σ_{i=1}^{n} Nti ) / n    (4.4)
where:
Nti is the number of tasks at a given moment in the node i, and n is the number of
nodes (processors) in the structure.
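Indicators (4.2)–(4.4) combine into a short function; a value of 1 means a perfectly even distribution, and larger values mean growing irregularity:

```python
# The irregularity indicator (4.2): the ratio of the maximum to the
# average number of tasks per node, with Wq = 0 when there are no tasks.

def irregularity(tasks_per_node):
    n = len(tasks_per_node)
    nts = sum(tasks_per_node) / n        # average, formula (4.4)
    if nts == 0:
        return 0.0
    ntmax = max(tasks_per_node)          # maximum, formula (4.3)
    return ntmax / nts
```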
In Fig. 4.6, the results of the calculation of two tasks are presented, each one in
three different conditions of operation of the agent system. Block graphs A1, A2
and A3 show the efficiency of operations of the agent system for the calculation of a
Fig. 4.6 The value of the indicator Ef for the calculation of the task consisting of 100,000 partial
tasks (A1, A2, A3) and 1,000,000 partial tasks (B1, B2, B3) in variants without agents (A1, B1),
with the agents Ag1 and Ag2 (A2, B2), and with the agents Ag1, Ag2, Ag3 (A3, B3)
task consisting of 100,000 partial tasks, and block graphs B1, B2 and B3 show a task
consisting of 1,000,000 partial tasks. The research was carried out for the following
variants of the operation of the agent system (Fig. 4.6):
A group of block graphs A1, B1 presents the efficiency of the calculation of tasks
when the agents Ag1 , Ag2 , Ag3 did not operate in the system, and sending tasks
only between the neighbouring processors was used for balancing the distribution
of tasks.
A group of block graphs A2, B2 presents the efficiency of the calculation of tasks
when only the agents Ag1 , Ag2 acted in the system.
A group of block graphs A3, B3 presents the efficiency of the calculation of tasks
when the agents of types Ag1 and Ag2 as well as Ag3 (so-called unemployed
agents) acted in the system.
We may observe that if the number of calculated tasks is larger, the saturation of the
structure with tasks takes place and irregularities of their distribution are smaller,
which makes the balancing of the task distribution easier.
Figure 4.7 presents the momentary values of the indicator Wq for the calculation of
the task consisting of 100,000 partial tasks in three different variants of the operation
of the agent system:
Figure 4.7a presents the indicator Wq for the calculation of tasks when the agents
Ag1 , Ag2 , Ag3 did not operate in the system and sending tasks only between the
neighbouring processors was used.
Figure 4.7b presents the indicator Wq for the calculation of tasks when only
the agents Ag1 , Ag2 acted in the system.
Figure 4.7c presents the indicator Wq for the calculation of tasks when the
agents Ag1, Ag2 as well as Ag3 acted in the system.
The greater the irregularity of distribution in the system, the higher the value of
the indicator Wq. We may observe that at the beginning of calculations there is a
certain irregularity, but it later disappears. This results from two factors: the
increasing number of generated partial tasks (saturation) and the start of the
operation of the task-sending mechanism, also with the use of the agent system
(variants b and c).
However, at the end of the calculations the number of tasks decreases (generation
is blocked) and saturation begins to fall, and then an underflow of tasks appears in
the system. At that time, the role of the agent system becomes significant, thanks
to which the period of the appearing irregularity becomes shorter, and hence the
calculation of the whole task is faster.
Figure 4.8 presents the changes of the number of agents in the system at the time
of calculations of the task consisting of 1,000,000 partial tasks. In this graph, NAg1
denotes the number of the agents Ag1 , NAg2 the number of the agents Ag2 , and
NAg3 the number of the agents Ag3 . The graph NAg presents the cumulative
number of the agents Ag1 , Ag2 and Ag3 acting in the system at a given moment.
Analyzing these graphs, we may conclude that at the beginning of calculations
a momentary increase in the agents Ag1 searching for tasks appears. However, it
Fig. 4.7 The value of the indicator Wq for the calculation of a task in variants: (a) the system
without agents, (b) the system with the agents Ag1 and Ag2, and (c) the system with the agents
Ag1, Ag2 and Ag3
disappears because tasks are generated and the system becomes saturated with the
tasks. At that time, the agents Ag2 are activated, which are responsible for searching
for free nodes that may receive an overflow of tasks. In the course of calculations and
after blocking the generation of new tasks there is a decrease in the task saturation in
the system, and then their underflow. This results in the need for the agents Ag1
searching for tasks, whose number begins to increase, while the role of the agents
Ag2 becomes less important, hence a decrease in their number. The number of
unemployed agents increases when the agents Ag2 are re-skilled to the agents Ag1;
this operation takes place through the agents Ag3. The graph NAg presents the
cumulative number of agents, which is held at a constant level. Therefore, the
stabilization of the number of agents in the system takes place, and at the same time,
due to unemployed agents, it is possible to re-skill agents and ensure functional
scaling of the system.
Fig. 4.8 Momentary number of agents in the system during the calculation of the task: NAg1 is
the number of the agents Ag1, NAg2 the number of the agents Ag2, NAg3 the number of the
agents Ag3, and NAg the cumulative number of agents
4.7 Summary
In this chapter we presented an example of the agent system which was created on
the basis of the models of the agent system described above.
Agents have the possibility of acting in the environment and influencing its
changes, which particularly include a change in the amount of the resource in
individual nodes. It is connected with the activity of the agent Ag0 that manages a
given node.
The agents of types Ag1, Ag2, Ag3, which are able to relocate between the nodes,
become, on getting to a given node, part of this environment, and while relocating
in the environment, they make changes to it.
Agents acting in the system, as a result of observing the changes taking place in
the environment and the behaviour of other agents, may remain autonomous from
these agents and make independent decisions concerning their behaviour in the
system.
Summing up, the operation of the agent system we presented is based on the
functionality of the agent described previously: the capability to observe changes
caused in the environment by other agents.
Chapter 5
The Examples of Applications of the Agent Systems
Abstract This chapter is concerned with further solutions, indicating the domains
that are particularly predisposed to the application of agent systems. It mainly
illustrates the fact that the agent system is not a universal solution, and operates
well only in certain characteristic situations. The role of the designer is to make a
decision on whether the agent approach should be applied to a given solution and
in what way.
In this chapter, we discuss the application of agents in the control of mobile robots
which cooperate in the realization of tasks.
5.1 Agents in Cooperative Mobile Robots Management

The development of cyberspace, which emerged from the connection of the
operating systems of individual computers by means of networks of significant,
even global, range, created the ideal environment for the existence and activity of
agents. The fact that realspace and cyberspace exist next to each other has opened
up the possibility of gaining a new perspective on an agent and a robot, and has
further given new possibilities of robot management. In effect, there has been a
need for new tools and methods of realization of management systems for the
cooperation of robots. The application of agents in mobile robot management
seems to be of particular interest.
In the previous approach, a mobile robot was equipped with a computer placed on
the robot (most frequently referred to as the onboard computer or embedded
computer). With the use of a radio connection, the onboard computer could
communicate with the computers of other robots as well as with a desktop
computer. The role of the desktop was data sharing, in some cases computing
power sharing, and generally speaking resource sharing with the onboard
computers of individual robots. However, the decision-making role concerning the
activity of a given robot remained with the systems operating on the onboard
computers.
Using the concept of an agent and cyberspace, we may suggest a new approach
to the problem of mobile robot management. In this case, the decision-making
role is transferred to agents acting in cyberspace. This results in the need for
creating a model of the real environment in cyberspace [168].
© Springer International Publishing Switzerland 2015
K. Cetnarowicz, A Perspective on Agent Systems, Studies in Computational
Intelligence 582, DOI 10.1007/978-3-319-13197-9_5
Fig. 5.1 Schema illustrating different concepts of relationships between the agent and the robot
Considering an agent and a robot as well as the existence of real-space and cyberspace,
we may distinguish the following relationships between the classic notion of the robot
and the agent:
The agent is related to a given robot and resides in its onboard computer. The agent
constitutes the robot management software, taking into account the conditions of
the surrounding environment, including other robots. This agent is capable of
communicating with other agents residing in onboard computers of other robots
existing at that moment in the environment. There is also a possibility that the agent
may contact servers placed in cyberspace to use the resources gathered there,
referring mainly to information and computing power (classic configuration).
The agent related to a given robot acts in cyberspace, in a certain virtual environ-
ment created there. In this case, the real environment is mapped (with the use of
appropriate tools) in cyberspace where it creates the virtual environment, which
constitutes a model of the environment from real-space. Agents are associated
with robots acting in the real environment. Therefore, we may consider the agent
from cyberspace as a robot, while the robot from real-space is considered as a tool
of the cyberspace agent associated with it. This allows for the transition of the
management of robots (partial or as a whole) to cyberspace, as well as for the use
of tools and methods used in the agent systems.
The concept of farms (depots) of robots builds on the concept of mobile robot
management by an agent existing in cyberspace. The transfer of the decision-making
process to the agent, and treating the classic robot in real-space as a tool, gives,
in this case, new possibilities of robot management. Classic robots (referred to as
robot-tools or simply robots) are grouped in farms (or depots) of robots (Fig. 5.2).
There they are serviced and, in particular, their resources of energy are replenished;
they can also be replaced with new models and, generally speaking, prepared for
action: the future realization of the tasks they are entrusted with. The agent which
has a particular task to perform may hire a certain robot (as a tool), i.e., a classic
robot in the farm (depot) of robots. This kind of action is shown in Fig. 5.2. The
agent Ag has to realize the task Z. To this purpose, it communicates with the agents
managing the depot of robots, particularly with the agent AgF1, and starts
negotiation on hiring a robot-tool (a classic robot) (Fig. 5.2a). As a result of the
negotiations, the agent hires the robot-tool Rb1 (Fig. 5.2b). The agent takes over the
management of that robot and uses it to realize the task Z (Fig. 5.2c). After the task
is realized, the agent Ag returns the robot-tool Rb1 to the depot of classic robots and
hands over control of the tool to the agent AgF1 from the depot of robots (Fig. 5.2d).
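The hire-and-return cycle of Fig. 5.2 can be sketched as follows (the class names are assumptions, and the negotiation of Fig. 5.2a is reduced here to simply taking a free robot):

```python
# Sketch of the depot protocol from Fig. 5.2: an agent in cyberspace
# hires a robot-tool from the depot agent, realizes its task with it,
# and returns the robot afterwards. Names and the trivial "negotiation"
# are illustrative assumptions.

class DepotAgent:
    def __init__(self, robots):
        self.free_robots = list(robots)

    def hire(self):
        """Negotiation reduced to handing out a free robot, if any."""
        return self.free_robots.pop() if self.free_robots else None

    def take_back(self, robot):
        self.free_robots.append(robot)

class TaskAgent:
    def realize_task(self, depot):
        tool = depot.hire()                 # Fig. 5.2a-b: hire a robot-tool
        if tool is None:
            return "no robot available"
        result = f"task done with {tool}"   # Fig. 5.2c: use the tool
        depot.take_back(tool)               # Fig. 5.2d: return the tool
        return result

depot = DepotAgent(["Rb1"])
outcome = TaskAgent().realize_task(depot)
```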
The exemplary expedition of robots through the narrow gate is realized according to
the following scenario:
To realize further tasks, the agent Ag needs to lead an expedition of the related
robot R (the one it controls) in real-space (Fig. 5.3a).
Fig. 5.2 Schema illustrating robot management with the use of the depot of classic robots:
(a) negotiation on hiring a robot, (b) the agent hires a robot, (c) the agent takes over management
of the hired robot and uses it to realize the task, (d) after the task is realized, the agent returns
the robot
(a) (b)
cyberspace cyberspace
AgP2 AgP2 AgP2 AgP2
Ag Ag
R
R
real-space real-space
(c) (d)
cyberspace cyberspace
AgP2 AgP2 AgP2 AgP2
Ag Ag
R
R
real-space real-space
Fig. 5.3 Schema illustrating the realization of the robots' expedition through the narrow gate:
(a) an agent needs to lead an expedition of the robot through the narrow gate, (b) the agent
entrusts the robot to one of the agents specialized in expedition, (c) the specialized agent realizes
the expedition of the robot through the narrow gate, (d) after completing the task of expedition
the specialized agent returns the robot
Fig. 5.4 The realization of the robots' expedition model through the narrow gate
94 5 The Examples of Applications of the Agent Systems
Fig. 5.5 The method of realization of the agent algorithm of the robots' expedition
routes of robots so that (if possible) leading the corresponding robots in real-space
would be parallel (Fig. 5.5).
After planning and agreeing on the routes, the agents realize the relocation of
robots through the narrow gate in real-space.
The system intended for the management of a group of mobile robots meant for
emptying bins in the urban environment is an example of the application of agents
acting in cyberspace.
Let us consider an urban environment which consists of the elements listed
below and is managed according to the following rules (Fig. 5.6):
a network of streets with crossroads,
litter bins placed in certain places (streets or crossroads),
bins are filled with waste with unknown intensity (impossible to predict),
each bin has a sensor which may transmit the level to which the bin is filled to the
computer system (operating in cyberspace),
a certain number of mobile robots, which are responsible for emptying the bins,
circulates in the streets,
bins should be emptied if they are filled to some level (excluding empty or nearly
empty bins),
however, bins should be emptied with such frequency so as not to let them be
overfilled,
Fig. 5.6 Schema of the urban environment with litter bins and its model in cyberspace
the capacity of a robot is larger than the capacity of a litter bin, i.e., the robot may
empty more than one bin (a few bins),
from time to time a given robot is excluded from the emptying action to take waste
to a rubbish dump.
The task of management of robots involves leading the robots in such a way so
that they would get to full bins without excessive wandering around the streets. That
task is realized by the appropriate agent system operating in cyberspace. The system
is designed as follows:
A network of streets in the city is mapped in cyberspace in the form of a graph
(streets represent edges, nodes represent crossroads).
In certain places of the graph litter bins with information about the level to which
they are filled are mapped (e.g., expressed as a percentage).
Each robot corresponds to the agent which may move around on the edges of the
graph.
The agent related to a particular robot may realize the following operations:
define the location of a corresponding robot in the city,
map this location in the graph in cyberspace and take this place itself,
update its location in the whole graph,
have an influence on the control of a robot, leading it around the streets according
to its intentions.
The agent that corresponds to a given robot is responsible for leading it to the bin
that needs emptying.
The concept of pheromones applied to the ant systems provided an inspiration
for the realization of the solution described above.

Fig. 5.7 Schema illustrating the influence of the bin filled with litter in the urban environment

If we assume that the ant system
and the agent system are similar to each other, we may apply the rules from the
concept of pheromones to the agent system. However, the direct use of the concept
of pheromones does not give sufficient effects. Therefore, we may suggest a different
approach based on the smell of litter in bins. This kind of system may operate as
follows (Fig. 5.7):
Information from the bin that it is filled up is sent to the bin model (e.g., P1) placed
in the graph in cyberspace.
The bin model (e.g., P1) generates smell in the graph.
The smell goes through the edges of the graph. The edges which are closer to the
bin have higher concentration of smell, and those which are further have lower
concentration.
The disappearance of smell takes time; once the bin is emptied, the smell gradually
disappears.
The agent (e.g., A1) moving around in the graph senses the smell and uses it for
navigation in the graph, which is realized as follows:
The agent is able to distinguish the concentration of smell on every edge.
The agent, on reaching a vertex (which corresponds to a crossroads), chooses a
further route along the edge with the highest concentration of smell at this vertex.
If a few edges at the vertex have the same highest concentration of smell, then
one of these edges is chosen randomly.
In effect, the agent heads for the node with the filled bin.
The agent directs the robot it manages to follow the route in the city which
corresponds to the relocation of the agent in the graph.
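The navigation rule described above can be sketched in a few lines. This is a minimal illustration rather than the book's implementation: the street graph, the diffusion rule (the smell halves with each hop from the full bin) and storing the smell per node instead of per edge are all simplifying assumptions.

```python
import random

def spread_smell(graph, bin_node, strength=100.0, decay=0.5):
    """Breadth-first diffusion: nodes closer to the full bin get more smell."""
    smell = {node: 0.0 for node in graph}
    frontier, level = {bin_node}, strength
    visited = set()
    while frontier and level > 1e-3:
        for node in frontier:
            smell[node] = max(smell[node], level)
            visited.add(node)
        frontier = {nb for n in frontier for nb in graph[n]} - visited
        level *= decay  # concentration drops with distance from the bin
    return smell

def next_step(graph, smell, position):
    """Greedy move towards the highest smell; ties are broken randomly."""
    neighbours = graph[position]
    best = max(smell[nb] for nb in neighbours)
    return random.choice([nb for nb in neighbours if smell[nb] == best])

# Tiny street graph: nodes are crossroads, edges are streets (assumed layout).
graph = {"P1": ["P2"], "P2": ["P1", "P3", "P4"],
         "P3": ["P2", "P4"], "P4": ["P2", "P3"]}
smell = spread_smell(graph, "P4")          # the bin at P4 is full
pos = "P1"
while pos != "P4":
    pos = next_step(graph, smell, pos)     # the agent climbs the gradient
print(pos)
```

The agent simply climbs the smell gradient and, as in the description above, heads for the node with the filled bin, choosing randomly among equally smelling edges.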
The above example of the robot management uses the concept of attracting smell as
a method for the navigation of the agent (and the robot) in a given environment, in a
way similar to the use of pheromones in the ant systems. However, this solution has a
drawback when a few robots (and agents) for emptying bins are used in the system.
In this case the smell of a full bin may attract a few agents and consequently a few
robots, which gather around the bin without any purpose, whereas only one robot
(one agent in cyberspace) is needed.
This drawback may be overcome by taking advantage of the capability to produce
the smell not only by full bins but also by the agents (or robots) circulating in the
environment. The difference in smell is that the smell of one agent is repulsive to
another agent. Spreading such smell in the environment prevents agents (and robots)
from gathering around one full bin.
Therefore, we may consider that each agent produces its own characteristic
smell. Consequently, in the environment different types of smells with the following
properties are spread:
Smell produced by a given agent is repulsive to others,
Smell does not affect the agent which produces it,
Full bins placed in the environment also produce smell which attracts all
agents.
The agent, after getting to a node of the graph, has the capability to check the
concentration of the individual smells related to the edges of that node, and on the
basis of these observations decides along which edge it should continue its route.
It takes into account that:
the smell of full bins attracts,
the smells of other agents are repulsive.
If there is a choice between two or more edges with identical concentration of
smell, the right edge is chosen randomly.
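The decision rule combining the attracting smell of full bins with the repulsive smell of other agents can be sketched as follows. The edge representation, the additive scoring and all names are illustrative assumptions, not the book's code.

```python
import random

def choose_edge(edges, agent_id):
    """Score each edge as bin attraction minus the repulsion of OTHER agents;
    the agent's own smell has no effect on itself. Ties are broken randomly."""
    def score(edge):
        repulsion = sum(s for owner, s in edge["agent_smell"].items()
                        if owner != agent_id)   # ignore the agent's own smell
        return edge["bin_smell"] - repulsion
    best = max(score(e) for e in edges)
    return random.choice([e for e in edges if score(e) == best])

# Two candidate edges at a node (values are assumed for illustration).
edges = [
    {"name": "e1", "bin_smell": 80.0, "agent_smell": {"A2": 30.0}},
    {"name": "e2", "bin_smell": 80.0, "agent_smell": {"A1": 50.0}},
]
# Agent A1 ignores its own smell on e2, so e2 scores 80 against e1's 50.
print(choose_edge(edges, "A1")["name"])
```

Because each agent discounts only the smells of the others, two agents facing the same node disperse instead of crowding around one full bin.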
By developing the example presented above, we may consider different ways of
producing smells as well as different ways of decision-making on the basis of smells
appearing in the environment.
This system may be an example of the application of the robot management in
real-space with the use of simulation of the real environment in cyberspace, with the
application of the concept of the agent.
Obviously, having some information about full bins and the location of robots
which empty these bins, we may consider other methods (algorithms) of the robot
management. The considerations we present are an example of the agent approach
to the solution to a problem rather than the optimal solution to the task.
The agent system may be used for the management of task distribution among
mobile robots. In the exemplary system, the agent system is responsible for such
task distribution among robots so that the realization of tasks would be performed in
the shortest possible time. The mobile robots system realizes its tasks as follows:
The environment in which mobile robots operate is considered. Apart from the
robots there are also resources in the environment [166, 191].
There is a specified number (N) of mobile robots operating in the system. All robots
are identical and may relocate in the environment. The environment in which the
robots exist allows for the relocation of robots and the specification of the distance
between two robots at a given moment. A certain limiting value of the distance
between the robots is established. For a given robot, the robots which are less than
the limiting value of distance away are referred to as the neighbouring robots.
Among these robots certain relationships may take place which are unavailable
to non-neighbouring robots. It may be accepted that the neighbouring robots may
communicate directly. Due to the limiting value of distance we may specify the
range of the wireless communication between the robots.
There are M tasks in the system. The robots have to realize these tasks. Each task
may be realized by one robot (which is given a task) and each robot is capable of
performing a task, but may realize only one task at a time.
At the initial moment, only a certain initial number of tasks Mi is revealed in the
system. On completion of a task, new tasks are revealed (generated), whose number
is established randomly (within a certain range). The moment the number of
generated tasks reaches M, no more tasks appear in the system.
Tasks that appear after finishing the realization of a given task by a robot are
initially assigned to this robot. However, later they may be given to other robots.
One of the basic properties of this system is that when a given robot relocates
in space, the group of neighbouring robots with which it has a direct connection
changes. However, this robot always has a connection with its agent Ag0. The agents
of type Ag0 , inter alia, pay attention to the communicative integrity of the whole
group of robots.
The task of the agent system, which will be used for the management of a group
of mobile robots, is to distribute tasks in such a way so as to allow a group of mobile
robots to perform them in the shortest time. This task is in some sense similar to the
task of balancing the resources described in Chap. 4.
Resources are the tasks intended for the robots to realize; their uniform distribution
corresponds to the equal assignment of tasks to the robots. The agent system
consists of the following agents:
The agents of type Ag0 are generated at the moment of activation of a given robot in
the system. There is only one instance of this agent for each robot, and every agent
is related to one particular robot it represents in the system. The agent Ag0 has
information about the agents of type Ag0 which correspond to the neighbouring
robots for a given robot, and may communicate with them. This agent has informa-
tion about the tasks assigned to a given robot and these tasks may be sent between
the neighbouring agents of type Ag0 (i.e., between the neighbouring robots).
5.1 Agents in Cooperative Mobile Robots Management 99
The agents of type Ag1 are generated by the agents of type Ag0 and are
responsible for searching for tasks for the robot represented in cyberspace by
this agent. The search is aimed at a robot that has been entrusted with many (too
many) tasks to realize.
The agents of type Ag2 are generated by the agents of type Ag0 and are responsible
for searching for a robot which could take over some of the tasks from the robot
represented by that agent.
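The balancing idea behind the agents Ag0, Ag1 and Ag2 can be illustrated with a simplified sketch in which each robot's Ag0 agent hands a surplus task directly to its least-loaded neighbour (a robot within the limiting distance). The searching agents Ag1 and Ag2 are collapsed here into this single exchange step; the names, the range value and the transfer rule are assumptions.

```python
RANGE = 10.0  # assumed limiting value of the distance (communication range)

def neighbours(robots, rid):
    """Robots closer than RANGE are the neighbouring robots."""
    x, y = robots[rid]["pos"]
    return [r for r in robots
            if r != rid and
            ((robots[r]["pos"][0] - x) ** 2 +
             (robots[r]["pos"][1] - y) ** 2) ** 0.5 < RANGE]

def balance_step(robots):
    """One round: every overloaded robot sends a task to its least-loaded neighbour."""
    for rid in robots:
        nbs = neighbours(robots, rid)
        if not nbs:
            continue
        target = min(nbs, key=lambda r: len(robots[r]["tasks"]))
        if len(robots[rid]["tasks"]) > len(robots[target]["tasks"]) + 1:
            robots[target]["tasks"].append(robots[rid]["tasks"].pop())

robots = {
    "R1": {"pos": (0.0, 0.0), "tasks": ["t1", "t2", "t3", "t4"]},
    "R2": {"pos": (3.0, 0.0), "tasks": []},
    "R3": {"pos": (6.0, 0.0), "tasks": ["t5"]},
}
for _ in range(5):
    balance_step(robots)
print({r: len(robots[r]["tasks"]) for r in robots})
```

After a few rounds the task counts differ by at most one, which corresponds to the uniform distribution of resources discussed above.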
Fig. 5.9 The efficiency Ef of the realization of 400 tasks by a group of 20 robots
the best result of the efficiency of the system operation (from all variants previously
discussed).
5.1.5 Summary
The above approach, based on the concept of agents operating in cyberspace, allows
for decentralized realization of complex systems intended for the mobile robot man-
agement in real-space. Systems created in this way are flexible in operation due to
the fact that the agent is treated as a robot in cyberspace and a classic mobile robot
as a tool in real-space.
As was presented (Sect. 5.1.2), the agent (robot) in cyberspace may replace a tool
it manages (a classic robot from real-space) with one more appropriate for the
realization of the task. Here, we deal with the adjustment of causative possibilities
of the agent (robot) in cyberspace to the needs of the task realized in real-space.
On the other hand, the agent (robot) acting in cyberspace may hand over the
management of a given classic robot from real-space to another agent that is more
specialised for the realization of a specific task (Sect. 5.1.2). It allows for the flexible
adjustment of algorithms to the type of tasks to realize.
Fig. 5.10 Schema illustrating stages of performing services by a group of servers, a: a client asks
a server to realize a certain service, b, c: more servers are used to realize the service, d, e: results of
the partial services are returned to the client
Fig. 5.11 Schema illustrating the application of agents and SOA (Service-Oriented Architecture)
Finally, the realization of a service is completed and the client initiating the process
of realization receives the appropriate results (Fig. 5.10e).
The client requests servers to perform a certain complex service (Fig. 5.10f).
The servers request further servers to perform certain component (partial) services
(Fig. 5.10g, h).
At that moment, the client that initiated performing the service is not interested in
continuing the service and excludes itself from cooperation (Fig. 5.10i).
Nevertheless, some servers continue the realization of the service (Fig. 5.10i, j),
which means that they continue the action (realization of the component partial ser-
vices) which is compatible with the algorithm of realization of the complex service.
And so certain servers continue to perform partial services of the complex service,
although the client (which initiated its realization) is not engaged in its realization. It
means that the concept of service oriented architecture (SOA) is not fully useful for
the realization of the system, which is to realize the service according to the scenario.
It seems that one of the solutions to cope with these difficulties is to use the concept
of agents with the elements of the service-oriented architecture (SOA).
The application of the agent system requires defining the agents and the
environment of their action. In the case we consider, a set of servers may constitute
the environment of the agents' action; however, to this purpose the servers must be adjusted
5.2 Agents in Service-Oriented Systems (SOA) 103
to provide agents with the environment in which they may relocate and realize their
tasks. In particular, it is necessary to introduce a service whereby, in response to
the client's request, the server may create and activate an agent (corresponding to
the request) in a special area referred to as the action zone of the agent. An example
of such an operation of the system may be illustrated by the following scenario
(Fig. 5.11).
Client Kl prepares the model of the agent and sends a request to the server Srw
to create and activate the appropriate agent. The model of the agent is attached to
the request and sent through the network.
The server is equipped with the action zone of the agent Sdaa, a specific
environment in which the agent may be activated as a result of the client's request.
The server replies to the request of the client Kl thus confirming the activation.
The agent continues to realize requests in the zone of its action.
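The scenario above can be sketched as a small server with an action zone. The classes and method names (AgentModel, Server, activate) are illustrative assumptions, not an API from the book.

```python
class AgentModel:
    """The model of the agent prepared by the client and sent to the server."""
    def __init__(self, name, behaviour):
        self.name, self.behaviour = name, behaviour

class Server:
    def __init__(self):
        self.action_zone = {}          # Sdaa: the action zone of the agents

    def activate(self, model):
        """Create and activate an agent from the client's model."""
        agent = {"name": model.name, "handle": model.behaviour}
        self.action_zone[model.name] = agent
        return f"agent {model.name} activated"   # confirmation for the client

    def request(self, agent_name, payload):
        """Further requests are realized by the agent inside its action zone."""
        return self.action_zone[agent_name]["handle"](payload)

# Client Kl attaches the model of the agent to its request to server Srw.
srw = Server()
model = AgentModel("rescue-op", lambda task: f"handled: {task}")
print(srw.activate(model))        # the client receives the confirmation
print(srw.request("rescue-op", "call fire brigade"))
```

The confirmation returned by `activate` corresponds to the server's reply to the client, after which the agent keeps realizing requests in its zone of action.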
A rescue operation managed by the computer system may be an example of the
application of a system constructed according to the service-oriented architecture
(SOA) with the use of agents. This operation, managed by the SOA system with the
agents, may proceed as follows:
Fig. 5.12 Schema illustrating the stages of performing services by a group of servers
1. An accident takes place (e.g., a traffic accident), there are casualties. An
eyewitness to the accident calls the Medical Emergency Center, telephone
number 112 (Fig. 5.12).
2. The Crisis Center creates a rescue operation agent on the Server of Rescue
Operations (Fig. 5.12).
3. The agent of a rescue operation calls the police and the fire brigade and informs
them about the accident (Fig. 5.12).
4. The police and the fire brigade go to the scene of the accident and start a rescue
operation, and particularly medical rescuers treat casualties (Fig. 5.13).
5. The rescuer creates the agent of the casualty on their computer and sends it onto
the server of the casualty care agents (Fig. 5.13).
6. The agent of the casualty initiates negotiations with accessible hospitals in order
to transport the casualties to a chosen hospital (Fig. 5.14).
7. As a result of negotiations the hospital is chosen (Fig. 5.14).
8. The agent of the casualty searches for the transport in order to take the casualty
to hospital (Fig. 5.14).
9. The casualty is transported to hospital and the agent of the casualty is sent onto
the server of the hospital (Fig. 5.15).
10. The casualty is treated in the hospital. The agent of the casualty acting on the
server of the hospital is transformed into the agent of the patient that controls
(to some extent) the process of treatment of the patient (Fig. 5.15).
11. The agent of the rescue operations ends its action and gathers documentation,
and finally undergoes liquidation (Fig. 5.15).
Fig. 5.13 Schema illustrating the stages of performing services by a group of servers
Fig. 5.14 Schema illustrating the stages of performing the services by the group of servers
Fig. 5.15 Schema illustrating the stages of performing the services by the group of servers
The above scenario of the rescue operation, in which servers and agents participate,
is realized in real-space as well as in cyberspace. The agents acting in this operation
have to be capable of adjusting their action to the situation in both spaces. It requires
allowing the agents to observe the events that take place in cyberspace as well as in
real-space.
5.2.2 Summary
In the example we presented, the source of information for agents is the information
provided by the appropriate servers as well as natural persons (such as rescuers) acting
in real-space. They become the producers of the observation operation of the agents.
Therefore, the process of observation refers not only to changes in cyberspace but
also events taking place in real-space. The result of the observation is used for the
realization of the algorithm of the agents in cyberspace.
The use of two spaces, cyberspace and real-space, gives new interesting possibilities
of developing large-scale computer systems, and especially of creating structures of
computer systems which enable the management of complex processes taking place
in the real world (real-space).
In this chapter, we present the application of agents in the recognition of atypical
situations which appear during the operation of different systems. It is noteworthy
that precise knowledge about atypical situations is not always necessary.
Some elements of considerations in the area of artificial immune systems, as well as
systems simulating socio-ethical behaviours, were used for the realization of these
problems. The application, and in particular the combination, of the above-mentioned
elements allows for the creation of an agent system in which agents may realize an
operation of the immune system, as well as participate in social interactions.
Let us consider a given system (e.g., a computer operating system) as a multi-agent
system. The operation of this system may be perceived as cooperation of particular
agents during the realization of tasks entrusted to the system (in this case to the
appropriate agents) to perform.
Atypical behaviour of the system may be considered as the appearance of some
agents whose behaviour is different than before, or is not as expected.
These agents may appear in the system in two ways:
5.3 Agent System for the Recognition of Atypical Behaviours 107
They may be new agents (agents of a new type) which have arrived at the system
(have been inserted into it). Such events occur when we deal with so-called open
systems.
Agents may also appear as a result of the transformation of certain agents existing
in the system.
If agents behaving in an unusual way are responsible for atypical behaviour, then
the task is to identify these agents.
One of the approaches is an attempt to analyse the agent's identifying features
and establish its distinctness. It may be done on the basis of the analysis of:
features of a given agent's structure,
features of a given agent's behaviours.
The analysis of the features of the agent's structure has been used for a long
time in different applications, e.g., for the identification of viruses in computer
systems. However, this approach suffers from some serious drawbacks. Namely, an
agent (a piece of code) with certain distinct features may not always be dangerous.
The computer system is often complemented with new elements (in our approach,
agents) while being modernised: errors are removed, or new functionalities installed
(a so-called upgrade).
It seems that the analysis and assessment of the behaviour of a given sub-system
(agent), its influence on the system and especially on the resources of the system
would be a better approach. This approach is easier to realize if we use the concept
of the agent and the approach to the system as the agent system.
The general schema of the operation of the agent system recognizing atypical
behaviours may be presented as follows:
The system is considered as the agent system with agents of one particular type, or
agents of different types. The agents act in the environment which consists of (inter
alia) resources, and by acting the agents change the properties of these resources. In
other words, the resources and their properties may be observed and changed by the
agents acting in the system.
Apart from the agents' action connected with the functioning of the whole system,
the agents also act with the intention of recognising atypical behaviours of the system
caused by the agents acting in the environment. It means that in order to recognize
atypical functioning of the system, it is necessary to recognize and identify atypical
action (or rather behaviour) of agents (Fig. 5.16).
Let us consider the following example (Fig. 5.17):
The system consists of a certain number of agents and two resources (resource a
and resource b). The resources are kept in containers, and their maximum as
well as minimal amounts are specified.
The action of agents is to load a certain amount of a chosen resource from a
chosen container. The agent loads the same amount of the resource every time;
however, it may do it with varied frequency. The frequency of loading a given
resource is established randomly for a given agent. The difference between the
Fig. 5.16 Schema illustrating the process of identification of the author of atypical behaviours
Fig. 5.17 Schema illustrating the process of creation and usage of the model of behaviours
On the other hand, if there are agents that only use one kind of resource, the
process of its replenishing becomes impossible. When only one kind of resource is
used, it is exhausted after some time.
As a result of this situation, the agents that need that resource are not able to use
it and consequently the operation of the whole system is blocked.
Unbalanced loading of the resources may be treated as an example of atypical
behaviour (action) of the system, which should be recognized.
Therefore, it is necessary to equip the system with a mechanism for the recognition
of agents that load the resources in an atypical way (i.e., unbalanced).
To realize this mechanism, the agents remember in their model the order and the
type of the resources they used. The memory covers a given number n of the previous
cases of loading the resource. Memorized cases are accessible to a given agent and
all other agents in the system. The method for recognizing the agents' atypical
behaviour consists of three stages:
The first stage is the formation of a model of atypical behaviour by each agent.
The approach based on the elements of operation of artificial immune systems
has been used here. Each agent randomly generates a model of a certain length
m, being a code of using the resources. Then it compares the model with the
information stored in its memory, specifying the order in which it used the
resources. If the result of the comparison is negative, the model is used in further
recognition of atypical behaviours. If the result of the comparison is positive (i.e., a
given agent loaded resources in the same order), then the model is rejected.
At the second stage a given (assessing) agent assesses other agents. For each
assessed agent, it compares the model it has built with that agent's behaviour.
The result of the comparison is remembered by the assessing agent. This process is
repeated by the assessing agent for every single agent of the system. As a result,
a given agent has the assessment of behaviour of all other agents in the system.
This procedure is realized for every agent of the system; therefore, each agent has
the information assessing the behaviour of all other agents.
At the third stage agents send their feedback about other agents to the common
chart of the system. It is a kind of voting where every agent comments on (as a
citizen casts a vote) each of the agents. On the basis of the feedback, the resultant
assessment of each agent may be specified, and the agents with the worst
resultant assessment may be chosen, i.e., those whose behaviour causes atypical or
undesirable operation of the system.
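The three stages can be sketched using the negative-selection idea borrowed from artificial immune systems. The matching rule (a detector of length m matches if it appears as a substring of an agent's remembered resource-usage sequence), the two-letter alphabet and the vote counting are assumptions made for illustration.

```python
import random

N_HISTORY, M_DETECTOR = 10, 3   # assumed memory length n and model length m

def make_detector(own_history):
    """Stage 1: keep only detectors that do NOT match the agent's own behaviour."""
    while True:
        d = "".join(random.choice("ab") for _ in range(M_DETECTOR))
        if d not in own_history:          # negative selection
            return d

def assess(detector, history):
    """Stage 2: a positive match marks the assessed behaviour as atypical."""
    return detector in history

def vote(histories):
    """Stage 3: every agent assesses every other agent; matches count as votes."""
    detectors = {a: make_detector(h) for a, h in histories.items()}
    votes = {a: 0 for a in histories}
    for judge, d in detectors.items():
        for other, h in histories.items():
            if other != judge and assess(d, h):
                votes[other] += 1
    return votes

random.seed(1)
histories = {
    "agent1": "aababbbabb",     # balanced use of resources a and b
    "agent2": "ababbbabaa",
    "intruder": "aaaaaaaaaa",   # loads only resource a: atypical
}
print(vote(histories))
```

Note that the intruder also votes, and that a randomly generated detector may occasionally match a normal agent's history, so a small number of false identifications is possible.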
The scenario presented above is cyclical, except for the first stage, i.e., the generation
of models, which is performed only once. It allows for the control of the system
operation and receiving warnings about the appearance of atypical behaviours and,
what is more, their probable sources.
The above approach to the recognition of atypical situations may be realized as
the identification of a certain behaviour of the agents. It allows the behaviour of
the system to be analysed from different points of view.
In order to illustrate the above considerations, we will present the research results
carried out with the use of the simulation method on the exemplary system for
Fig. 5.18 The results of the simulation research of the system for recognition of intruders in the
system, a: the number of normal agents, b: the number of intruder agents loading only the resource A
Fig. 5.19 The results of the simulation research: intruder agents loading the resource A with the
probability 0.75, and the resource B with the probability 0.25
B with the probability piB = 0.25. In this case, we may also observe a decrease in
the number of the agents-intruders, but the process of identification and elimination
proceeds more slowly than in the first case.
A comparison between the development of eliminating the agents-intruders in the
former and the latter case is presented in Fig. 5.20.
In both systems there are cases of false identification of intruders. It results from
the fact that a normal agent is perceived as an intruder and eliminated from the
system. However, in neither of the cases is the number of eliminated normal agents
significant (a few percent).
In the examples of the operation of the system some delay may be observed at the
start of eliminating the intruders. The time is used by the normal agents for preparing
Fig. 5.20 The results of the simulation research of the system for recognition of intruders in the
system, a: the number of normal agents; b: the number of intruder agents loading the resources
with the probabilities piA = 1.0, piB = 0; c: the number of intruder agents loading the resources
with the probabilities piA = 0.75, piB = 0.25
the models of appropriate behaviour on the basis of their own behaviour. Only then
is the assessment of the agents' behaviour possible. The results of broader studies of
the systems of this class can be found in the works: [42, 43, 156, 157].
The application of the concept of the agent is particularly interesting in the evolution
systems. The establishment of the simulation algorithms of the evolution processes,
which were later developed into such techniques as evolution algorithms, marked
the beginning of the ideas based on biological evolution. The aim of the studies
carried out in this field was to find methods for the effective solution of
optimization tasks.
The evolution algorithms are most frequently used for problems which are difficult
to solve with the use of other methods. This refers mainly to searching for the global
and local extrema of the objective function, which has such a form that finding these
extrema is time-consuming.
Different variants of evolution algorithms appeared in the course of their
development. Evolutionary programming, evolution strategies, and genetic algorithms
were developed. The introduction of parallel evolution algorithms with different
kinds of interactions between the parallel running algorithms was also suggested,
which enhanced the probability of keeping the variety of evolving populations.
The use of the agent paradigm involved equipping the agents with mechanisms
allowing for their participation in the evolution process, similar to evolution processes
occurring in the natural environment.
The introduction of new operations of an evolutionary nature in the agent systems
resulted in the establishment of evolutionary multi-agent systems (EMAS).
This allowed the development of the concept of the evolution algorithms and
improvement of their efficiency and also the extension of the application area of the
evolution systems in new fields [159].
In effect, in the course of development of the agent-oriented model of simulated
evolution EMAS, new techniques have appeared, including co-evolution, niching,
speciation and sexual evolution, which contributed to the creation of new kinds
of the evolution agent systems.
used due to the fact that the agents' behaviour is influenced by the dependence on
other agents. It allows for the introduction of new evolution operators.
New properties of the evolution systems, gained due to the agent approach may
include the following:
The agents acting independently have an influence on the dynamics of the evolution
process which is characteristic of a given group (subgroup) of agents. In effect, the
evolution process proceeding in different ways (and especially at varied speed) in
certain areas of the environment may be the source of feature diversity of subjects
participating in the evolution process.
Due to the observation operation a given agent may pick up information about the
state of its surrounding environment and use it for its actions, and especially for the
realization of such operations which have an influence on the evolution process.
A given agent may observe other agents, which results in the fact that the relation-
ships between them such as rivalry or competition may have the direct or indirect
influence on the behaviour of the agents. It offers new possibilities of exerting an
influence by the agents on the proceeding of the evolution process (in some sense,
the possibility of controlling that process).
The evolution process takes place in the multi-agent environment, which may include agents playing different roles in the system, not necessarily governed by the evolution processes. This creates a dynamically changing environment for the agents' actions.
Equipping the agent with the capability to reproduce opens up the possibilities of
population development with the use of many factors, and thereby expands the
flexibility of adjusting the population to the conditions (often local conditions) of
the environment.
The elimination of population members may proceed in the way known from genetic algorithms, i.e., be realized by reasons external to the agent (e.g., the agent being killed by another agent). Elimination caused by reasons internal to the agent may also be considered. In this case, the concept of the agent's life energy may be applied: the dissipation of life energy results in the removal of the agent from the system. This allows for the use of resources existing in the environment, which may include life energy whose absorption allows the agent to stay alive in the population.
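The life-energy mechanism described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the author's implementation; all class names, method names and numeric parameters are hypothetical.

```python
# Sketch of the EMAS life-energy mechanism: each agent dissipates energy
# every step and must absorb energy from the environment to stay alive.
# All names and constants are illustrative assumptions.

class Environment:
    def __init__(self, energy_pool=100.0):
        self.energy_pool = energy_pool  # life energy available as a resource

    def take_energy(self, amount):
        taken = min(amount, self.energy_pool)
        self.energy_pool -= taken
        return taken

class Agent:
    def __init__(self, energy=10.0):
        self.energy = energy

    def step(self, environment):
        self.energy -= 1.0                       # dissipation of life energy
        self.energy += environment.take_energy(2.0)  # absorb a resource

    @property
    def alive(self):
        return self.energy > 0.0

def run(population, environment, steps):
    for _ in range(steps):
        for agent in population:
            agent.step(environment)
        # agents whose life energy has dissipated are removed from the system
        population[:] = [a for a in population if a.alive]
    return population
```

Once the environmental energy pool is exhausted, agents only dissipate energy and are gradually removed, which is the internal-reason elimination described above.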
Due to the properties of the agent system we presented, it was possible to avoid
certain problems or limitations encountered in evolution algorithms. These problems
are mainly about the disappearance of population diversity that occurs during the
evolution processes.
At the same time, due to the fact that the agent systems are decentralized, they
have become very useful for the realization of new concepts of the development of the
evolution algorithms applied to problems for which the property of decentralization
is connected with the nature of the problem.
114 5 The Examples of Applications of the Agent Systems
The operations known from the classic approach to evolution algorithms have been enriched with new operations influencing the evolution process, which include the aggregation operation and the migration (or escape) operation.
Aggregation is one of the operations most characteristic of the agent systems. The mechanism of its operation may be presented with the use of the following example (Fig. 5.21):
There are four agents in the environment: Aga, Agb, Agc, Agd. Each of them is able to change the environment in its immediate neighbourhood in such a way that it has features desired by that agent. This is a certain local environment created and then kept by the agent (e.g. the agent Aga creates the local environment Ea, Fig. 5.21a).
The agents may relocate in the environment, and therefore approach each other, and consequently make their local environments overlap. In effect, a new, resultant environment is established which inherits features from both (or more) overlapping local environments.
If an environment with common features, composed of chosen features of the individual agents, is established, these features may prove advantageous for the agents (more advantageous than the environments of the particular agents). Then a group of agents is built, which undergoes consolidation. Further specialization within the group enhances the advantageous features of the common environment (Fig. 5.21b).
Fig. 5.21 Schema illustrating the process of aggregation in the evolution: a agents in the environment, b a group of agents is built, which undergoes consolidation, c a new entity (agent) is created as a result of the evolution process
5.4 Agents in the Evolution Systems 115
The next step is the consolidation of this structure in the evolution process. The operations of the evolution algorithms (such as crossing, mutation) are now applied to the whole group. In effect, this group may appear as a new entity in the evolution process, created as a result of the aggregation operation (Fig. 5.21c).
The operation of aggregation allows for the creation of more complex compositions, i.e., aggregators. The operation uses the agent's skills, including the observation of the surrounding environment and particularly of other agents. It allows the agent to choose candidates for the collaborative creation of new environments, referred to as niches in the evolution processes.
Migration makes the evolution algorithms closer to the processes taking place in
the biological evolution. The elaborate methods of interactions between the subjects
allow for the creation of new species, which in turn may have an influence on the
evolution process itself.
The introduction of the agent into the evolution processes allowed a given subject (agent) to have access to information on the extent to which it is adjusted to the requirements of the environment.
If, while analysing this information, a given agent concludes that it is not well adjusted to the present environment, it may use the operation of migration (or escape). This is possible especially when the whole environment is highly diverse, i.e., there are areas (sub-environments) with distinguishing properties.
In case the agent's features do not ensure optimal adjustment in a given part of the environment, it may migrate to search for a different part and escape from the unfriendly one.
In effect, the possibility of migration in the area of diverse environments arises,
and further the concept of evolution in the environment consisting of islands can be
developed.
The operation of migration takes advantage of the agent's possibilities, especially its capability to observe the environment (including other agents) and its mobility. Due to this observation, the agent can establish a direction of migration satisfying its needs within the evolution process.
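The decision rule behind migration can be sketched as follows: a poorly adjusted agent observes candidate areas of the environment and moves toward the one promising the best adjustment. This is an illustrative sketch, not the cited systems' code; all names are assumptions.

```python
# Sketch of the migration (escape) operation: the agent compares the
# adjustment offered by observed areas with its current one and escapes
# toward a friendlier sub-environment if one exists.

def migrate(current_area, observed_areas, adjustment):
    """Return the observed area with the best adjustment value,
    or the current area if no observed area beats it."""
    best = max(observed_areas, key=adjustment)
    if adjustment(best) > adjustment(current_area):
        return best          # escape toward a better sub-environment
    return current_area      # stay: no observed area is more friendly
```

The `adjustment` function stands in for the agent's evaluation of how well a given part of the environment suits its features.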
One of the new solutions in the evolution processes is the co-evolution algorithm. In these algorithms, the quality of a subject's adaptation (the value of the adaptation function) depends not only on the quality of the solution represented by a given subject, but also on the properties of other subjects that exist in the population. The concept of co-evolution has become one of the ways of preventing the diversity of the population from disappearing.
This concept involves the interactions between species, which can be created in a
given process of evolution, and particularly such interactions as competition between
given sex to the number of subjects of another sex [89, 155, 158] gives new, interesting possibilities for the development and application of evolution algorithms.
local maxima of the function. The agents store in their genotype the information about their location in the function's domain, whose value is used for determining the level of adaptation of the agent. The genotype of the agent, which is processed within the evolution operations, contains the information about the characteristic features of the agent and about its location in the domain of the function fR. In these examples, the agents acting in the system are governed by typical evolution
operations such as cloning, crossing over and mutation. Moreover, operations typical
of the co-evolution systems such as operations of migration (relocation between the
vertices of a graph), operations related to inter-sexual interactions as well as the
operations of aggregation (creation of aggregates in the form of niches) were used.
The systems used in experiments have the following properties:
ACoEMAS System is the co-evolution agent system with the speciation mechanism based on the geographic isolation of subpopulations (referred to as allopatric speciation), which constitutes the mechanism leading to the origin of species. For this purpose, barriers are created which make it difficult for agents to migrate between the vertices of the graph (i.e. the environment). To get from one vertex to another, the agent uses a large amount of (life) energy, which constitutes the said barrier. This makes mutual isolation of groups of agents possible and facilitates the origin of species.
SCoEMAS System is the co-evolution agent system using the concept of sex, with both sexes within each species. Interactions between subjects of both sexes are made possible in the system; in particular, for the realization of the crossing operation each agent needs to find (and accept) a subject of the opposite sex. Generally speaking, interactions in the form of the conflict and co-evolution of sexes, sexual selection, as well as matching agents in pairs for a longer time are used here, which is possible due to the operation of aggregation. Aggregation is a situation when two agents of the opposite sex which are ready for crossing make a pair (aggregate) which lasts for some time. The pair may relocate in the environment and realize the operation of crossing several times. Apart from these interactions, in the SCoEMAS system there are other interactions between species, such as rivalry for resources (existing in the environment in limited amounts).
NCoEMAS System is the co-evolution system using the concept of the niche for the creation of species (suggested in the paper [77]). In that solution, the agents creating the populations of subjects may use the aggregation operation to create niches that constitute a certain local environment for the agents. There are agents and niches in the environment. The environment of agents and niches is the domain of the function fR, and the location of an agent is specified by its position in this domain. Niches are represented by a special agent of the niche that identifies a given niche in the agent system. The location of the niche is defined as the location of the agent of the niche, and this is specified as the location of the centre of gravity of the agents which belong to a given niche (where the weight of an agent is specified by the value of its adaptation function, i.e., the value of the function fR). The introduction of the agent of the niche allows for the realization of certain evolution operations at the niche level, which gives the possibility of realizing evolution processes at two levels: at the level of subjects, and at the level of niches. The location of one agent in
relation to the other is defined as close when the distance in the domain of the function is shorter than a certain established threshold and there is no mutual isolation of the agents. The mutual isolation of two agents takes place when, on the segment joining the points that are the locations of these agents, there are points at which the value of the adaptation function fR is lower than at the agents' locations (when the search for the local extrema concerns local maxima; higher when it concerns local minima).
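The closeness and mutual-isolation tests described above can be sketched as follows, for the local-maxima case. The sampling of the segment and all function names are illustrative assumptions, not the cited system's code.

```python
import math

# Sketch of the mutual-isolation test: two agents are isolated when some
# point on the segment joining their locations has a fitness value lower
# than at both agents' locations (a "valley" separates them).

def mutually_isolated(f, x_a, x_b, samples=50):
    """True if a fitness valley separates locations x_a and x_b."""
    floor = min(f(x_a), f(x_b))
    for i in range(1, samples):
        t = i / samples
        x = [a + t * (b - a) for a, b in zip(x_a, x_b)]
        if f(x) < floor:     # a point lower than both agents' locations
            return True
    return False

def close(f, x_a, x_b, threshold):
    """Two locations are close when their distance is below the threshold
    and there is no mutual isolation between them."""
    return math.dist(x_a, x_b) < threshold and not mutually_isolated(f, x_a, x_b)
```

For the local-minima case, the comparison would simply be inverted, as noted in the text.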
The following interactions take place between the agents and the niches:
At the level of agents, there are subjects which may be included in the niches. At the moment of the creation of the system, each agent has its individual niche of which it is the only inhabitant. Agents may make decisions to join niches and be included in them; hence, two agents may join their niches and then exist in one common niche. Inside the niche, the agents are governed by evolution operations such as crossing and mutation, as well as by rivalry for access to the resource in the niche.
At the level of the agent-niche relationship, the agents may join the existing niches and leave them. The agent joins a niche when its location is close to the niche and it is not isolated from the agent of the niche. If the location of the agent relative to the niche is such that the agent is isolated from the niche, then the agent leaves the niche.
At the level of niches (aggregators), the niches are capable of relocating in the environment. This is realized by the change of the location of the niche, computed as the centre of gravity of the agents belonging to the niche. If the locations of two niches are close enough and there is no mutual isolation, then the unification of the niches into one niche may take place. Niches take part in the rivalry for access to the resource existing in the environment. The resource obtained by a niche is made accessible to the agents of the corresponding species (belonging to that niche), which take part in the rivalry for this resource within the niche.
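The niche-location rule above (the centre of gravity of the member agents, with each agent weighted by its adaptation value) can be sketched as follows; the function name and data layout are illustrative assumptions.

```python
# Sketch: the location of a niche as the fitness-weighted centre of
# gravity of its member agents' positions, as described in the text.

def niche_location(members, f):
    """members: list of agent positions (tuples of coordinates);
    f: adaptation function giving the weight of each agent."""
    weights = [f(x) for x in members]
    total = sum(weights)
    dims = len(members[0])
    return tuple(
        sum(w * x[d] for w, x in zip(weights, members)) / total
        for d in range(dims)
    )
```

Recomputing this location after agents join or leave realizes the relocation of the niche described above; the same closeness test used for agents could then decide whether two niches unify.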
The agents belonging to a given niche create a species corresponding to the niche. In effect, a dynamic system of niches is created in which changing reproductive isolation takes place (i.e. making it difficult or even impossible for some agents to take part in reproduction), allowing for the development of species. On the other hand, the interactions between the niches result in the fact that they constitute a new entity at a more advanced (higher) level. The above-mentioned interactions are schematically presented in Fig. 5.22.
EMAS System constitutes a system in which the evolution of agents is realized with the use of the basic evolution operations of crossing and mutation, as well as the operation characteristic of the agent, i.e. the operation of migration. In this system, there are no mechanisms allowing for the origin of species.
The simulation research of the systems described above was carried out [77] on the basis of agent systems constructed according to the above-mentioned rules with the use of the multi-profile M-agent architecture. Individual profiles realize appropriate groups of behaviours of the agent (we may consider here a resource profile, a reproductive profile, an interaction profile, as well as a migration profile).
Fig. 5.22 Schema presenting the interactions between the subjects and niches in the NCoEMAS
system
Fig. 5.23 The number of extrema localized by the evolution systems EMAS, SCoEMAS, ACoEMAS and NCoEMAS at the time of formation of species. Ne denotes the number of localized extrema, ts the time of simulation
The task is to indicate the local extrema (minima) of the Rastrigin function fR. In the example we study, the function has 25 local extrema. The aim of the agents in the experiments was to localize as many (possibly all) of the extrema as possible. The indication of an extremum involves creating societies of agents in its direct neighbourhood (surroundings).
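The Rastrigin function is commonly defined as below. The book does not give the exact variant used, so this standard n-dimensional form, and the remark about the domain, are assumptions: in two dimensions, a domain of roughly [-2.5, 2.5] in each coordinate contains a 5 x 5 lattice of local minima, i.e., 25 extrema, matching the number in the experiment.

```python
import math

# Standard n-dimensional Rastrigin benchmark function (assumed form):
# fR(x) = A*n + sum_i (x_i^2 - A*cos(2*pi*x_i)), with A = 10.
# Its local minima lie near the integer lattice points.

def rastrigin(x, A=10.0):
    return A * len(x) + sum(xi * xi - A * math.cos(2.0 * math.pi * xi)
                            for xi in x)
```

The global minimum is fR(0, ..., 0) = 0, and the many regularly spaced local minima are what makes the function a demanding benchmark for extrema-searching systems.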
Fig. 5.24 The number of extrema localized at the early stage of the operation of the evolution systems SCoEMAS, ACoEMAS and NCoEMAS at the time of formation of species. Ne denotes the number of localized extrema, ts the time of simulation
The results, in the form of the mean of 20 replicated experiments for the EMAS, SCoEMAS, ACoEMAS and NCoEMAS systems, are presented in Figs. 5.23 and 5.24. In Fig. 5.23 the results relate to a longer simulation time, which leads to the formation of a number of species.
EMAS System, which does not possess mechanisms for the creation of different
species, is able to localize only one local extremum.
SCoEMAS System localizes on average four local extrema and ACoEMAS localizes
a dozen or so local extrema.
On the other hand, the NCoEMAS System is able to localize on average 22 local extrema, that is to say almost 90 percent of all existing extrema. This confirms the considerable capability of the NCoEMAS system to develop species useful for searching for extrema.
Figure 5.24 presents the initial period of the realization of the evolution process
of SCoEMAS, ACoEMAS and NCoEMAS systems, in which the formation of species
is initiated.
It may be observed that the SCoEMAS and ACoEMAS systems are the fastest in establishing the number of originating species, i.e., detected extrema. The NCoEMAS system, in turn, needs more time to stabilise the number of species and find all the local extrema it is able to identify.
Dynamic processes taking place in continuous media are most often described with the use of partial differential equations. The solution is a function specifying the distribution, in two- or three-dimensional space, of the parameter which is the subject of the simulation. The method of finite elements or finite differences, based on the creation of appropriately configured grids, is most often applied to the numerical solution of such problems. Space is covered with a grid made up of nodes joined with edges. The simulation of the processes proceeding in time involves the calculation of the value of the function which is the solution to the initial differential equation in the nodes of the grid. To specify the behaviour of the process in time, the system simulating this process calculates the value of the function at particular points of space-time. The grid consisting of connected nodes makes a characteristic environment which can be used for the activity of agents. An agent system consisting of agents acting in this kind of environment sets out a new approach to the simulation of dynamic processes of a given class.
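As a baseline for comparison, the classic grid update described above (before agents are introduced) can be sketched as an explicit finite-difference step for heat diffusion; the coefficient and the fixed-boundary handling are illustrative assumptions.

```python
# Sketch of one explicit finite-difference step of 2-D heat diffusion on
# a grid of nodes: each interior node's temperature is updated from its
# four neighbours. 'alpha' folds together the thermal diffusivity, the
# time step and the grid spacing (an illustrative stability-friendly value).

def diffusion_step(T, alpha=0.1):
    """T is a list of rows of temperatures; returns the next time step.
    Boundary nodes are kept fixed for simplicity."""
    rows, cols = len(T), len(T[0])
    new = [row[:] for row in T]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            laplacian = (T[i - 1][j] + T[i + 1][j]
                         + T[i][j - 1] + T[i][j + 1] - 4 * T[i][j])
            new[i][j] = T[i][j] + alpha * laplacian
    return new
```

Iterating this update simulates heat spreading through the grid; the agent-based approach discussed next extends the same grid with the displacement of bodies.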
The models of phenomena described with the use of partial differential equations may be of complex character if a couple of physical processes proceeding at the same time (e.g. the flow of heat and the movement of the medium, Fig. 5.25) are taken into consideration. The formal description of these phenomena becomes complex, and it is often impossible to search for solutions with the use of classic methods.
Such complex phenomena are often found in practice, and the possibility of simulating them would make it possible to solve many problems encountered in technology and science. However, their simulation may be difficult to realize, and it is necessary to apply a new approach.
One of the examples of these phenomena, which is the subject of further considerations, is the process of heat flow in a continuous medium that is divided into two parts: two bodies.
The flow of heat may take place between the bodies as well as within each body.
Additional difficulty is posed by the fact that the bodies relocate relative to each other
at a given speed.
We assume in this approach that the grid representing the environment constitutes a two- or three-dimensional space (depending on the character of the simulation). The bodies existing in space are represented by certain information, characteristic of the physical state of a given body, which can be specified by a set of parameters assigned to a given node (Fig. 5.26). In the variant of the simulation we present, the parameter of interest is the temperature at a specified point of the body, mapped by a node of the grid.
Let us assume that the object we consider is made of metal which is divided into two layers: the upper and the bottom layer, which come in contact. Therefore, we have two contacting bodies with certain physical properties. The contact of the two bodies allows for the flow of heat between them (between the layers) (Fig. 5.25). If at a certain chosen point of the upper body the temperature is raised (we provide a certain amount of heat), then the process of flow, or rather, in this case, the process of heat spreading along the two bodies, will be initiated. Therefore, a field of temperatures which changes in time will be established within the area of both bodies (both layers), until the steady state is achieved. If the contact of both bodies is not a barrier to the flow of heat (as we assume), no disruptions will be observed in the course of the flow of heat within the area of both bodies (Fig. 5.25a). The course of this phenomenon may be simulated with the use of many existing systems.
Fig. 5.26 Schema illustrating the way of representation of bodies in the environment of the network
and agents
However, if the upper layer starts to move relative to the bottom layer, then this movement will have an influence on the process of heat spreading in both bodies, particularly at the point of their contact (Fig. 5.25b). Two phenomena should be taken into account in the simulation of this process: the phenomenon of heat spreading and the phenomenon of the two bodies relocating relative to each other. To realize the simulation of the phenomenon of relocation, we may use agents acting in the environment created by the nodes of the grid.
A pilot version of the system has been developed at the Department of Computer
Science at AGH University of Science and Technology in Krakow [17]. This system
consists of agents acting in the environment of the network (Fig. 5.26). The main
idea of this solution is a change of the role played by the nodes of the grid.
According to the classic approach to the simulation, the nodes of the grid represent the points of the body in which the simulated thermal conductance takes place. Due to this fact, the temperature of a given point of the body is memorized in a given node and changed in time (the time of simulation) according to the process of heat spreading described with the use of appropriate equations.
In this approach, the nodes of the grid represent the points of space. The body
whose specific point corresponds to a given node of the grid at a given moment is
represented by corresponding information stored in this node.
It means that it is not only temperature that is stored in the node but also the
parameters specifying physical properties of the point of the body which corresponds
to this node.
Fig. 5.27 Schema illustrating the result of the activity of agents responsible for the simulation of
the movement in the environment of the grid
5.5 Agent in the Simulation of Dynamic Processes 125
Further, we may suggest not only the displacement of heat energy responsible for
changes of temperature between the nodes (points of space) but also displacement
of the other parameters specifying the physical state of the body, which at a given
moment exists at this point of space. It allows for extension of the model based
on the grid and the simulation of the phenomenon of displacement of bodies in
space.
This approach may be realized by using the grid (specifically, the nodes) as the
environment of the activity of agents realizing respective processes.
Let us consider two contacting bodies, presented in Fig. 5.25. The model of this
arrangement of the bodies in the form of a grid is presented in Figs. 5.26 and 5.27.
A scenario of the activity of agents realizing the simulation of the phenomenon of
displacement of the bodies has the following form:
The agent A1 takes the parameters from the node Wa and travels to the node Wb, transporting these parameters.
In the node Wb there is the agent A2, which observes the arrival of the agent A1 at that node. Due to this observation, it makes the decision to perform the transition to another node (the node Wc).
The agent A2 takes the parameters from the node Wb and travels to the node Wc, thereby transporting the parameters of the node Wb.
The agent A2 leaves the node Wb, and the agent A1 places the parameters from the node Wa in the node Wb.
Fig. 5.28 The example of the simulation of the phenomenon of heat spreading and displacement of two bodies (snapshots at t = 0, 50, 100, 150) [17]
The above procedure is repeated for all the nodes which correspond to those parts
of space that are related to displacing bodies.
In the system, there may exist initial agents as well as final agents whose activity
is related to the edge nodes of the grid modeling a given body.
The initiation of the process involves the activation of certain initial agents by the
system.
In the example we present, the agent A1 is the initial agent which removes the parameters from the node Wa and copies them into the node Wb.
The results of the simulation realized in this way are presented in Fig. 5.28. The bodies have the form of two contacting layers. In the upper layer, an area in the shape of a square (in three-dimensional space, a cube) is heated uniformly to a certain temperature. Afterwards, we have heat spreading and the movement of the
upper layer towards the bottom layer. The results of the simulation at the subsequent
moments of time are presented in Fig. 5.28.
Metal casting is one of the frequently used technological processes which are difficult to realize well. Making a metal cast of high quality is a process which requires meeting a number of technological prerequisites, such as the cooling rate of the metal (the so-called cast) in a mould. The rate of heat dissipation, and thereby the decrease in temperature, is the basic factor affecting the physical parameters (crystallization) of a cast. Therefore, the process of cooling the cast is the subject of numerous studies, inter alia with the use of methods of computer simulation [17].
Fig. 5.29 Schema illustrating the way of simulation of cooling the cast: a schema of the conformation, b representation in the grid environment
In order to control the cooling process of the cast, mould cooling systems are used, and special cooling pipes are placed in the mould. The medium circulating in the pipes cools down the metal around the pipe. The temperature of the cooling medium at the input of the pipe and the rate of the flow constitute the parameters that
control the cooling rate of the cast. The cooling pipes may be placed in areas in which the outflow of heat is slower than on the surface (these are usually points further from the edges of the mould) (Fig. 5.29, points p1 and p2).
The exemplary conformation of the mould with the cooling pipe (R) is shown in
Fig. 5.29a. The cooling pipe affects the cooling rate of the cast at the point p1 and
makes it possible to ensure the same cooling rate as at the point p2 .
Figure 5.29b illustrates the representation of the mould with the use of the environ-
ment of the network. For the simulation of the process of the cooling medium flow,
which absorbs the heat from the surrounding metal, the agents were used (according
to the concept described in Sect. 5.5.2).
In the schema presented in Fig. 5.29b, the agents move along the established
trajectory which represents the location of the cooling pipe. The agents responsible
for the transport of parameters of the cooling medium between the nodes are placed
in the nodes of the grid which lie on the trajectory.
The agent A is inserted into the node corresponding to the inlet of the cooling pipe.
This agent has the information about the parameters (in particular about temperature)
of the cooling medium at the input of the conformation.
The agent that is already in the node observes the arrival of a new agent. As a result
of these observations, the agent starts its activity of fetching information describing
the state of the medium at the point corresponding to this node and transports it to
Fig. 5.30 The example of the simulation of the cooling process of the cast in the mould with the cooling pipe (snapshots at t = 2 and t = 8) [17]
Fig. 5.31 The example of cooling the cast in the mould with the cooling pipe: a without the flow
of the cooling medium, b with the enabled flow of the medium at the moment t = 4 s [17]
another node. On the other hand, the agent that has arrived at a given node installs
there the information it has brought.
This procedure is repeated for the nodes lying on the trajectory along the cooling
pipe R.
The example of the simulation of the temperature field during the cooling of the
cast with the use of the cooling medium (at two different moments of the process)
is presented in Fig. 5.30. The course of temperature in time, for the cooling without
the flow of the medium (v = 0) is presented in Fig. 5.31a, and with the enabled
flow of the medium (v > 0) at the moment t = 4 s is presented in Fig. 5.31b.
Comparing the temperature courses, it is clear that the application of cooling results
in the stabilization of the differences in temperatures between the point p1 and the
point p2 .
The above solutions concerning the simulation of the solidification process of the cast contribute to the improvement of the methods used for the design of cooling systems, and for the control of cooling through changes of the flow of the cooling medium. They are an example of the application of the agent approach in the technologies of metalwork production.
Chapter 6
Conclusion
The author, together with the team from the Intelligent Information Systems Group (IISG) at the Department of Computer Science, AGH University of Science and Technology, has been actively participating in the development of agent technologies since the 1980s. The considerations presented in this monograph are an attempt to explain and establish basic concepts that play a crucial role in agent technologies, and to outline the concept of the agent and the agent system at a few levels of generality.
Chapter 1 provides a general introduction to the history of the development of agent systems. It gives an overview of some reasons why the agent concept was invented. Then the process of agent system development at universities and research centers is presented. This chapter also contains a layout of the problem presentation in the monograph.
Chapter 2 looks at the notions of the partial function and the Cartesian product. The presentation of the problem opens with a formal approach to the definition of the agent's properties. This part explores the reasons for the introduction of the concept of the agent and gives an interpretation of such notions as the autonomy of the agent or its capability to observe the environment.
Chapter 3 offers a more intuitive approach based on the concept of the M-agent architecture. In this chapter, the inner structure of the agent is given, with an attempt to keep the balance between a universal approach with broad applicability, and a more detailed approach which could help to understand the basic elements of the agent's structure and its action.
Chapter 4 deals with the application of agents in practice. A system balancing resources in a multi-processor environment is presented. It is a very good illustrative example of the application of multi-agent systems, and allows for the discussion of the main properties of the agent and agent systems.
Chapter 5 is concerned with further solutions, responsible for indicating the domains that are particularly predisposed to the application of agent systems. It mainly illustrates the fact that the agent system is not a universal solution, and operates well in certain characteristic situations. The role of the designer is to decide whether, and in what way, the agent approach should be applied to a given solution.
© Springer International Publishing Switzerland 2015
K. Cetnarowicz, A Perspective on Agent Systems, Studies in Computational
Intelligence 582, DOI 10.1007/978-3-319-13197-9_6
6 Conclusion
It is worth pointing out that the agent system solutions described here are
limited to the creation of the concept. Some of them have been applied in practical
use, while others are still at the prototype stage; this nevertheless made it possible
to carry out a number of experiments and tests, examples of which are given in the
chapters of this monograph.
The literature in the field of agent systems is extensive, and this work does not
discuss all the problems related to agents and agent systems. Different approaches
to agent systems, from the more formal ones to general descriptions of practical
applications, may be found in numerous works. There are also books on the
application of tools and methods from different fields (artificial intelligence,
game theory, decision theory, negotiations and others). It therefore seems
that the field of agent systems is still developing and
may provide much inspiration for future research.
Finally, a concluding methodological remark. The notion of the agent is not an
extension of the notion of the object. The two exist in parallel, and
the application of the notion of the agent does not exclude (or replace) the use of the
concept of the object; in certain justified cases they may be successfully applied at
the same time. The essence of the agent concept, however, is the capability to observe the
environment, and particularly the changes that take place in it, and to perceive
other agents and their influence on the environment that results in those
changes.
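This distinction can be made concrete with a small contrast, using hypothetical classes that are not part of the book's formalism: an object acts only when one of its methods is invoked from outside, whereas an agent runs its own cycle over a shared environment and reacts to changes it perceives there:

```python
# Illustrative contrast only (hypothetical classes, not the book's formalism).

class Counter:
    """Object: passive, changes state only when invoked from outside."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1

class WatcherAgent:
    """Agent: observes a shared environment and reacts on its own initiative."""
    def __init__(self, env: dict):
        self.env = env
        self.alerts = 0
    def step(self):
        # one cycle of the agent's autonomous loop
        if self.env.get("temperature", 0) > 30:   # perceives a change...
            self.alerts += 1                      # ...and reacts to it

env = {"temperature": 25}
agent = WatcherAgent(env)
agent.step()              # nothing worth reacting to yet
env["temperature"] = 35   # the environment changes (e.g. another agent acted)
agent.step()              # the agent notices the change and reacts
print(agent.alerts)       # 1
```

The `Counter` object never learns that anything happened unless it is told; the agent's defining trait here is that the trigger for its action is an observed change in the environment, not an external call.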