Innovations in Multi-Agent Systems
Abstract
This paper outlines an abridged history of agents as a guide for the reader to understand the trends
and directions of future agent design. This description includes how agent technologies have
developed using increasingly sophisticated techniques. It also indicates the transition of formal
programming languages into object-oriented programming and how this transition facilitated a
corresponding shift from scripted agents (bots) to agent-oriented designs. The trend shows that
applications with agents are increasingly being used to assist humans, either at work or play.
Examples include the ubiquitous paper clip, through to wizards, entire applications and even games.
The trend also demonstrates that agents vary in the complexity of the problem being solved and their
environment. Following the discussion of trends, we briefly look at the origins of agent technology
and its principles, which reflects heavily on ‘Intelligence with Interaction’. We further pinpoint how
the interaction with humans is one of the critical components of modern Distributed Artificial
Intelligence (DAI) and how current applications fail to address this fact. The next generation of
agents should focus on human-centric interaction to achieve intelligence. Utilising these
advancements, we introduce a new paradigm that uses Intelligent Agents based on a Belief, Desire,
and Intention (BDI) architecture to achieve situation awareness in a hostile environment. BDI agents
are implemented using the JACK framework, and spawn agents with individual reasoning processes
specifically relating to the goals being instigated in their environment. They depend on the environment
or superior agents to generate goals for them to act upon. In order to improve the performance of the
agents we need to remove this dependency. To this end, it is suggested that JACK can be extended to
realise the Observe, Orient, Decide and Act (OODA) loop using feedback from a learning component
within a team environment.
Crown Copyright © 2006 Published by Elsevier Ltd. All rights reserved.
Corresponding author.
E-mail address: jeffrey.tweedale@dsto.defence.gov.au (J. Tweedale).
doi:10.1016/j.jnca.2006.04.005
J. Tweedale et al. / Journal of Network and Computer Applications 30 (2007) 1089–1115
Keywords: Artificial Intelligence (AI); Agent; Multi-Agent Systems (MAS); Distributed Artificial Intelligence
(DAI); Belief, Desire, and Intention (BDI); Human–Computer Interface (HCI); Observe, Orient, Decide and Act
(OODA); Procedural Reasoning System (PRS); Decision Support System (DSS); Intelligent Decision Support
System (IDSS)
1. Introduction—what is an agent?
There are many definitions of what is termed an agent. The major reason for this
variance is the exponential growth in the diversity and functionality of agents. Most Artificial
Intelligence (AI) researchers support Wooldridge's definitions of weak and strong
agency (Wooldridge, 1995). The weaker notion defines the term 'agent' as having the ability to
provide simple autonomy, sociability, reactivity or pro-activeness (Castelfranchi, 1995;
Genesereth and Ketchpel, 1994). The stronger notion is more descriptive: an agent is
generally referred to as a computer system that, in addition to having the properties
identified above, is either conceptualised or implemented using concepts that are more
usually applied to humans. It is quite common in AI to characterise an agent using
cognitive notions, such as knowledge, belief, intention, obligation (Kinny et al., 1996; Shen
et al., 1995), and possibly emotion (Bates, 1994). In a nutshell, an agent can be seen as a
software and/or hardware component of a system capable of acting exactingly in order to
accomplish tasks on behalf of its user (Nwana, 1996).
1.1. Agents: where are they coming from?
Before venturing directly into current agent technology, we briefly discuss the history
related to its origin. Leading researchers believe that agent technology is a result of
convergence of many technologies within computer science such as object-oriented
programming, distributed computing and artificial life. Another important aspect of
agents is their ability to offer intelligence with interaction, which suggests joining two
different research streams such as AI and Human–Computer Interaction (HCI).
The focus of traditional AI has been on pure intelligence, with little external interaction with
its peers and humans. This approach produced bottlenecks, especially when troubleshooting
faults in systems developed with AI techniques. The shortcomings of AI systems (in the late
1990s) and the need for interaction gave birth to a field called Distributed Artificial
Intelligence (DAI). This new domain mainly includes agent technology in order to form
intelligence with interaction and vice versa. Thus, agents provide a means to bridge the gap
between humans and machines by means of interaction and intelligence.
Although this aspect of DAI technology heavily relies on intelligence with interaction,
most of the current systems or applications lack the vision of utilising human interaction.
Recently, there has been a shift towards human–agent interaction, with many researchers
contributing to the field. The next two subsections discuss the current status of agent
technology and the authors’ point(s) of view of future agent technology with a desirable
drive towards a human-centric agent system. Towards the end of this paper an example of
human-centric agents is provided.
Given the diversity of agent uses, it naturally follows that they should be classified. How
to classify agents has been debated since the first scripts originated. Some researchers
preferred functionality, while others used utility or topology. Agreement on categories has
not been achieved; however, three dimensions dominate. This characterisation includes (1)
mobility, i.e. the ability to move around; (2) the reasoning model; and (3) the attributes that
agents should exhibit. Nwana (1996) chose to classify agent topology using five categories.
Although the Belief, Desire, and Intention (BDI) paradigm is widely used in current
developments to achieve human-like intelligence, it still cannot satisfy the definition of truly
'smart agents', since the agents lack ideal characteristics such as 'Coordination and
Learning'. Such characteristics lead towards the next generation of agents that allow
coordination (Teaming). In a later section, we explore why BDI became one of the building
blocks of agent technology.
Another major step towards the next generation of agents is adding a 'human-centric'
nature. Currently, agent development is very much concentrated on agent-only
interaction. The concept of the smart agent is not quite fulfilled, especially when it comes to its
'social ability'. Wooldridge describes social ability as '… the ability to interact with
other agents and possibly humans via some communication language' (Wooldridge and
Jennings, 1995a, b). Regarding this statement, we would like to suggest that 'interaction' with
humans need not only be via some communication language but can also be by other means,
such as observation and adaptation. We would also like to suggest that truly smart agents
could thus be complementary to humans by adopting similar skills (which may include
communication, learning and coordination) rather than being pure replacements for humans.
This leads us to focus on developing the agent's human-centric nature by combining one or
more ideal attributes such as coordination, learning and autonomy.
This section attempts to cover the most important building blocks on which the future
trends of agent technology heavily rely. It is evident that the authors take the point of view
that BDI is and will be the paradigm of choice for complex reactive systems, although it
needs some refinement or extension in order to suit some systems where human
involvement is the most crucial factor. An example of such a system is the cockpit of an
airplane, a complex reactive system involving critical human decision making in real time,
in other (cognitive) words, human situation awareness.
One of the major issues in early human–machine automation was a lack of focus on the
human and their cognitive process. This was due to the aggressive introduction of
automation as a result of urgent need. Recently, intelligent agents have become a popular
choice to address these pitfalls. Early agent models or theories (such as BDI) are attractive
solutions due to their human-like intelligence and decision-making behaviour. Existing agent
models can act as stand-alone substitutes for humans and their decision-making behaviour.
This is where we come back to one of the pitfalls of early human–machine automation: the
human-like substitute could fail at a critical point without leaving the human any choice for
regaining control of the situation, thereby impairing their situation awareness. The answer to
this pitfall was provided by modern AI research through the development of machine-assistants
in advisory and decision-support roles to human operators in critical or high-workload
situations. Intelligent agent technology has become mature and attractive enough to implement
such machine-assistants in the role of more independent, cooperative and associate assistants
(Urlings, 2004).
However, Urlings (2004) claims that in order to compose effective human–agent teams,
and to include intelligent agents as effective members of such teams, a paradigm shift in
intelligent agent development is required (see Fig. 1), similar to the change from the
technology-driven approach to the human-centred approach.
In the next section, we will cover the popular agent model of reasoning (BDI) along with
some cognitive systems engineering background theories, such as Rasmussen's Decision-making
Ladder (Rasmussen, 1994) and Boyd's Observe, Orient, Decide and Act (OODA)
loop (Boyd, 2004; Hammond, 2004). We believe that human-centric agents could benefit
from these human cognition theories as an extension of their inherent reasoning. The
example of such extension is briefly illustrated towards the end of this paper as an
application, namely, intelligent agents for situation awareness.
2.2. Agent notions, BDI: paradigm of choice for complex reactive systems
BDI agent architectures employ the concepts of beliefs, desires (or goals), intentions and
plans to build agents that balance the time spent deliberating (deciding what to do) and
planning (deciding how to do it). Perhaps uniquely among agent architectures, BDI
combines several desirable attributes.
BDI is thus of interest in at least four areas of research: in modelling human behaviour
(particularly human practical reasoning); in developing and improving BDI theory and
therefore its implementations; in building complex and robust agent applications; and as a
candidate for logical specification, leading to the possibility of automatic verification and
compilation. It is of interest for all these reasons in multi-agent systems. For example,
Wooldridge’s Logic of Rational Agents (LORA) extends Rao and Georgeff’s formalisa-
tion of BDI into a framework that enables the logic-based description and specification of
communication between agents, ultimately allowing its incorporation into a theory of
Cooperative Problem Solving (Wooldridge, 2000; Wooldridge and Jennings, 1999).
2.2.1. Philosophy
Bratman’s theory was an attempt to make sense of the seeming contradictions involved
in future-directed intention (Bratman, 1987). He showed that desire and belief are by
themselves insufficient, and that future-directed intention can be explained only by adding
a further notion: intention, which is desire with commitment. This
commitment has certain characteristics. Firstly, it is conduct-controlling: having
committed to do something, a person should not seriously consider actions which are
incompatible with so doing. Secondly, it implies some temporal persistence of the
intention. Lastly, it generally leads to further plans being made on the basis of the
intention.
2.2.2. Implementations
The Rational Agency project was instigated at the Stanford Research Institute (SRI) in
the mid-1980s to research how to build agents that balance time spent deliberating and
planning. Through Bratman’s involvement it was recognised that intentions can save time:
an agent need not deliberate over plans that are inconsistent with its current intentions.
The principal outcomes of the project were two BDI architectures: the Intelligent
Resource-bounded Machine Architecture or IRMA (Bratman et al., 1988) and the
Procedural Reasoning System or PRS (Georgeff and Lansky, 1987).
A version of the PRS interpreter is discussed in Wooldridge (2000). In its simplest form,
it consists of a loop of several functions, which are performed (or omitted) in sequence.
Adding more complexity to the behaviour of the agent is achieved by adding control
structures to the loop. The basic functions are: observing the world and updating beliefs
accordingly; deliberating over the agent's desires to determine which to pursue; and means-end
reasoning, which decides on a suitable plan to be executed. In the PRS model, this means
selecting a plan from a pre-existing plan library. Finally, execution typically means executing
the first action of the plan and updating the plan by removing that component.
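In outline, this loop can be sketched as follows (a minimal illustration in Python; the class and method names are our own, not the PRS-CL, dMARS or JACK APIs):

```python
# Minimal sketch of a PRS-style interpreter cycle: observe, adopt goals,
# select plans from a library, then execute one action per pass.

class Plan:
    def __init__(self, goal, actions):
        self.goal = goal                   # the goal this plan achieves
        self.actions = list(actions)       # ordered actions; copied per adoption

class BDIAgent:
    def __init__(self, plan_library):
        self.beliefs = set()
        self.intentions = []               # adopted plan instances
        self.plan_library = plan_library

    def step(self, percepts, goals):
        self.beliefs |= set(percepts)      # 1. observe: update beliefs
        for goal in goals:                 # 2. deliberate over new goals
            plan = next((p for p in self.plan_library
                         if p.goal == goal), None)   # 3. means-end: pick plan
            if plan is not None:
                self.intentions.append(Plan(plan.goal, plan.actions))
        executed = []
        if self.intentions:                # 4. execute the first action
            plan = self.intentions[0]
            executed.append(plan.actions.pop(0))
            if not plan.actions:           # plan exhausted: drop the intention
                self.intentions.pop(0)
        return executed
```

For example, an agent given a two-step plan executes one action per cycle, dropping the intention when the plan is exhausted.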
IRMA can be seen as a refinement of PRS: an extra filter is put in place so that an
intention is not necessarily reconsidered, even though it may be inconsistent with the
agent’s current beliefs and desires. This reflects Bratman’s contention that it can be rational
for an agent not to reconsider an intention even though from an objective viewpoint that
non-reconsideration is irrational (Bratman, 1987). This may be the case, for example, if the
agent lacks the time for the means-end analysis that reconsideration would entail.
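IRMA's filter can be sketched as a simple override check (our own illustration, not the IRMA implementation): an intention is reconsidered only when a registered trigger condition appears among the current beliefs, so the agent avoids means-end analysis on every cycle.

```python
# Sketch of an IRMA-style filter override mechanism: an intention survives
# belief changes unless an explicit override condition fires.

def should_reconsider(intention, beliefs, overrides):
    """Reconsider only if a registered override trigger matches current beliefs."""
    return any(trigger in beliefs for trigger in overrides.get(intention, ()))

beliefs = {"low_fuel"}
overrides = {"fly_to_target": ("low_fuel", "engine_fire")}

# The filter fires for the fuel-sensitive intention but leaves others alone.
reconsider_fly = should_reconsider("fly_to_target", beliefs, overrides)
reconsider_nav = should_reconsider("update_nav_log", beliefs, overrides)
```

The design choice mirrors Bratman's point: most intentions pass through unexamined, and only beliefs flagged in advance as relevant can force reconsideration.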
Implementations of BDI have either followed the PRS model or extended it. SRI
continues to maintain its version of PRS (written in LISP and referred to as PRS-CL). At
the Australian Artificial Intelligence Institute, Georgeff and Rao oversaw an implementa-
tion of PRS in C++, called the distributed Multi-Agent Reasoning System, or dMARS.
JACK began as an implementation of dMARS in Java, but is designed with a focus on
extensibility and with the objective of supporting non-BDI as well as BDI architectures
(Busetta et al., 2000). As a commercial product, extensions to JACK have been customer-
driven. These extensions include the JACK Development Environment, JACK Teams and
JACK Sim.
PRS was re-implemented in C at the University of Michigan, as UM-PRS. JAM is a
Java implementation that draws from PRS, UM-PRS, Structured Circuit Semantics and
Act Plan Interlingua (Huber, 1999).
First-order logic: The first component is classical first-order logic with the standard
quantifiers ∀ and ∃. LORA is therefore an extension of classical logic.
Temporal logic: The temporal component is CTL, which combines a linear-time past
with a branching-time future to represent a model of the agent’s world. It should be
noted that in keeping with the fact that this is a model of a world, rather than a
description of the real world, the nodes in the CTL tree should properly be regarded as
agent states, rather than as time points.
BDI logic: The BDI component introduces three modal operators, representing beliefs,
desires and intentions, which connect possible worlds. (Each possible world is
represented by a CTL branching tree). There are many possible relationships between
these operators, and some are at least potentially useful. For two examples, consider
Bratman’s Asymmetry Thesis: it is (generally) irrational for an agent to intend an action
it believes will not succeed; it is acceptable, however, for it to intend an action that it
does not believe will succeed (Bratman, 1987). Wooldridge systematises the basic
relationships, including those that involve the temporal operators A ‘on all future time
lines’ and E ‘on at least one future time line’, and considers each in turn to determine
their usefulness.
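The two halves of the Asymmetry Thesis can be transcribed roughly as follows (our own LaTeX rendering, not Wooldridge's exact notation; Bel, Int and the path quantifiers A and E follow the conventions sketched above):

```latex
% Intention-belief inconsistency (irrational): agent i intends that
% \varphi eventually holds on all future time lines, while believing
% there is no future time line on which \varphi ever holds.
(\mathsf{Int}\,i\,\mathsf{A}\Diamond\varphi) \wedge (\mathsf{Bel}\,i\,\neg\mathsf{E}\Diamond\varphi)

% Intention-belief incompleteness (acceptable): agent i intends that
% \varphi eventually holds, without believing that it inevitably will.
(\mathsf{Int}\,i\,\mathsf{A}\Diamond\varphi) \wedge \neg(\mathsf{Bel}\,i\,\mathsf{A}\Diamond\varphi)
```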
Dynamic logic: The action component consists basically in labelling the state transitions
in the CTL tree with actions (Wooldridge, 2000). The labelled tree is essentially a
specification of a program to be followed until the requirements for the next perception/
belief revision cycle are fulfilled. At that point the BDI relationships and the CTL tree
will (possibly) be renewed. Wooldridge goes on to outline how LORA can be used to
describe and specify communication between BDI agents.
Agents (not just BDI agents) do not exert control over one another: an agent may request
that another perform an action, but the second agent may decline. In the same way, an
agent may inform another of its belief that something is the case, but the second agent may
choose not to so believe. This has implications with regard to communication between
agents. There are many types of speech act, but for the purposes of definition Wooldridge
identifies just three: Inform, Request-that and Request-to. Inform is defined as an attempt
by an agent to bring about a group’s mutual belief of an item of information. In order to
achieve this, the agent intends the lesser goal that the group should mutually believe that it
intends the group should mutually believe the said information. Similarly, Request-that is
an attempt to bring about a group’s mutual intention to produce a state of affairs, by
intending the lesser goal that the group should mutually believe that such is its intention.
Request-to is a special case of Request-that, where the state of affairs intended is that a
particular action has been performed. From these can be defined ‘composite’ speech acts,
such as Query, Agree and Refuse. The next step is to define conversation policies. For
example, a simple policy is that a Request-to should be responded to with either Agree or
Refuse. With conversation policies established, effective communication can begin.
Wooldridge (2000) discusses cooperation between agents specifically from the point of
view of cooperative problem solving. He also rules out cases where the agents working
together are not autonomous. From this standpoint, and in the context of LORA, he
develops a model that attempts to account for the mental state of agents as they engage in
cooperative problem solving. Four stages are identified: recognising the potential for
cooperation, team formation, plan formation and team action.
This model of cooperative reasoning very much reflects human reasoning. The next subsection
attempts to cover two very important theories of human reasoning: the Decision Ladder
(Rasmussen, 1994) and Boyd's OODA loop (Boyd, 2004; Hammond, 2004). These models were
chosen to make agents more human-centric and to enhance their existing BDI reasoning
model by introducing a learning component.
[Figure: Rasmussen's Decision Ladder. The ladder runs from Activation (alert) and Observation (information) through the assessed system state, the evaluation of options against goals, the chosen goal and target state, and down through task and procedure definition to Execution, with heuristic shortcuts allowing intermediate stages to be bypassed.]
While a decision process is being executed, the environment may change. The changes will
subsequently be observed when activating the next decision process. The ladder also
illustrates that it is possible to use heuristic short-cuts within the decision process in
order to bypass some of the decision operations.
[Figure: Boyd's OODA loop. Observation of outside information and unfolding circumstances feeds Orientation, where analysis and synthesis are shaped by genetic heritage, cultural traditions, previous experience and new information; Orientation provides implicit guidance and control to the other stages; Decision and Action feed back to Observation through the unfolding interaction with the environment.]
Increasing the OODA loop speed involves proceeding through the stages more rapidly. The
speeds of the first two stages are heavily dependent on the speed of information gathering
and processing, whereas the speed of the last two stages depends on the confidence in the
output of the previous two stages.
The OODA loop identifies a number of important feedback paths in the decision process. It
explicitly shows the feedback caused by the environment after acting in it. It also shows
direct feedback going back to the observation stage from both the decision and action
stages. Finally, it shows an implicit guidance feedback extending from orientation to all
other stages of the process. The direct and implicit guidance feedback paths introduce
short cuts in the decision process that cause it to loop faster.
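These feedback paths can be sketched as a toy loop (illustrative only; the stage logic is deliberately trivial and all names are our own). The point is the flow of information: decisions and actions are fed back, and orientation then implicitly guides what the next pass treats as worth acting on.

```python
# Minimal sketch of an OODA cycle with decision/action feedback stored in
# the orientation stage.

def observe(environment):
    return list(environment)                  # gather raw observations

def orient(observations, handled):
    # Orientation: filter observations through what has been dealt with.
    return [o for o in observations if o not in handled]

def decide(picture):
    return picture[0] if picture else None    # choose the first open item

def act(decision):
    return decision                           # stand-in for acting on it

def ooda(environment, cycles):
    handled = set()      # feedback store: implicit guidance for orientation
    actions = []
    for _ in range(cycles):
        picture = orient(observe(environment), handled)
        outcome = act(decide(picture))
        if outcome is not None:
            handled.add(outcome)              # decision/action feedback
        actions.append(outcome)
    return actions
```

Because already-handled items are filtered out during orientation, each pass through the loop spends no time re-deciding them, which is one way of modelling the short cuts described above.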
The question arises as to why a technology that promises so much has not taken the world
by storm. What does the future hold for the technology?
From a software engineering perspective, one would expect to gain major benefits from
intelligent agent technology through its deployment in complex distributed applications
such as virtual enterprise management and the management of sensor networks. However,
while the agent paradigm offers the promise of providing a better framework for
conceptualising and implementing these types of system, it needs to be recognised that the
underlying programming paradigm needs to be supported with standards, design
methodologies and reference architectures if these applications are to be developed
effectively. As noted above, these are beginning to appear, but more experience needs to be
gained with them and the software community needs to be educated in their use. Given the
nature of these applications, a killer application seems unlikely at this level. Rather we
would expect to see a gradual shift from the object-oriented to the agent-oriented
paradigm as the supporting framework matures. It is our belief that the underlying theories
of cognition will continue to prove adequate for large-scale software developments. The
key theories (BDI and production systems) date from the 1980s and have a long pedigree in
terms of their use in commercial-strength applications. This longevity indicates that their
basic foundation is both sound and extensible, which is clearly illustrated in the
progression of BDI implementations from PRS to dMARS to JACK and now JACK
Teams. New cognitive concepts may gain favour (e.g. norms, obligations, or perhaps
commitment), but we believe that these concepts will not require the development of
fundamentally new theories.
While we believe that the existing theories are sufficiently flexible to accommodate new
cognitive concepts, we perceive a need to develop alternative reasoning models. In the case
of the JACK implementation of BDI, a team-reasoning model is already commercially
available in addition to the original agent-reasoning model. At the other end of the
spectrum, a low-level cognitive reasoning model (COJACK) has been recently developed.
This model enables the memory accesses that are made by a JACK agent to be influ-
enced in a cognitively realistic manner by external behaviour moderators such as
caffeine or fatigue. Interestingly, COJACK utilises an ACT-R like theory of cognition,
which in turn is implemented using JACK’s agent reasoning model. From a software
engineering viewpoint, it should be the reasoning model that one employs that shapes
an application, not the underlying cognitive theory. Thus, there is the opportunity
through the provision of ‘higher level’ reasoning models like OODA and their
incorporation into design methodologies, to significantly impact productivity and hence
market penetration.
The development of intelligent agent applications using current generation agents is still
not routine. This may improve by providing more intuitive reasoning models and better
supporting frameworks, but behaviour acquisition remains the major impediment to
widespread application development using intelligent agent paradigms. The distinguishing
feature of the paradigm is that an agent can have autonomy over its execution—an
intelligent agent has the ability to determine how it should respond to requests for its
services. This is to be contrasted with the object-oriented paradigm, where there is no
notion of autonomy and objects directly invoke the services that they require from other
objects. Depending on the application, acquiring the behaviours necessary to achieve the
required degree of autonomous operation could be a major undertaking and one for which
there is little in the way of support. The problem could be likened to the knowledge
acquisition bottleneck that beset the expert systems of the 1980s. There is a need for
principled approaches to behaviour acquisition, particularly when agents are to be
deployed in behaviour rich applications such as enterprise management. Cognitive Work
Analysis has shown promise in this regard, but further studies are required.
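The contrast drawn above between object invocation and agent autonomy can be sketched as follows (a hypothetical example; neither class reflects a JACK or dMARS API):

```python
# An object executes any method call made on it; an agent weighs a request
# against its own commitments and may decline.

class PrinterObject:
    def print_document(self, doc):
        return f"printed {doc}"          # no choice: invocation is execution

class PrinterAgent:
    def __init__(self):
        self.current_goal = "print_payroll"   # its own committed intention

    def request_print(self, doc, priority):
        # Autonomy: the agent decides how to respond to the request.
        if priority == "urgent" or self.current_goal is None:
            return f"printed {doc}"
        return "refused: busy with " + self.current_goal

obj = PrinterObject()
agent = PrinterAgent()
```

A normal-priority request is refused while the agent pursues its current goal, whereas the object has no notion of refusing; this is the behavioural gap that makes behaviour acquisition so much harder in the agent paradigm.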
Alternatively, the requirement for autonomous operation can be weakened and a
requirement for human interaction introduced. Rather than having purely agent-based
applications, we then have cooperative applications involving teams of agents and humans.
Agent-based advisory systems can be seen as a special case of cooperative applications, but
we see the interaction operating in both directions—the agent advises the human, but the
human also directs and influences the reasoning processes of the agent. Existing
architectures provide little in the way of support for this two-way interaction. What is
required is that the goals and intentions of both the human and the agent are explicitly
represented and accessible, as well as the beliefs that they have relating to the situa-
tion. This approach provides a convenient way to address the difficulties associated
with the behaviour acquisition associated with autonomous operation. By making
visible the agent’s longer-term goals and intentions, as well as the rationale behind its
immediate recommendation, this approach also provides a mechanism for building trust
between humans and agents. It should also be noted that in many applications, such
as cockpit automation and military decision-making, full autonomy is not desirable: an
agent can provide advice, but a human must actually make the decision. For these
reasons, we expect to see an increasing number of applications designed specifically
for human/agent teams. Learning has an important role to play in both cooperative
and autonomous systems. However, the reality is that it is extremely difficult to achieve
in a general and efficient way, particularly when dealing with behaviours. The alter-
native is to provide the agent with predefined behaviours based on a priori knowledge of
the system and modified manually from experience gained with the system. This has
worked well in practice and we expect that it will remain the status quo for the immediate
future.
In summary, we expect that intelligent agents will retain their architectural foundations
but that the availability of more appropriate reasoning models and better design
methodologies will see them being increasingly used in mainstream software development.
Furthermore, better support for human/agent teams will see the development of a new
class of intelligent decision support applications.
As stated earlier in this paper, the BDI agent model has potential as a method of choice
for complex reactive systems. Future trends in agent technology can be categorised on the
basis of ‘Teaming’, as illustrated in Fig. 4 below. ‘Teaming’ can be divided into agent only
(Multi-agent) teaming and human-centric agent (human–agent) teaming. These two
streams have two commonalities, namely, collaboration and cooperation. Along with these
two commonalities the human-centric agent possesses ideal attributes such as learning as
discussed earlier in the definition of the truly smart agent. Recent work on BDI agents,
such as SharedPlans/Joint Intentions and JACK Teams (AOS, 2002, 2004a, b), facilitates
agent-only teaming. Furthermore, the addition of ideal attributes such as learning enables
the agents to come closer to the goal of a human-centric smart agent, as illustrated by the
‘extensions’ in Fig. 4.
Agent teaming has gained popularity in recent years and has been categorised into
prominent domains such as Multi-Agent Systems (MAS). It is believed that three aspects,
namely communication, coordination and cooperation, play an important role in agent
teaming. Multi-agent teaming takes inspiration from human organisational models of team
operation, where role-playing such as leadership, and communicative, cooperative and
collaborative skills, empowers the success of the team. The next subsections cover these
aspects of multi-agent teaming in brief.
Coordination ensures that agents act in a coherent manner. Coherency is an essential
property of a MAS, and coordination is essential because it ensures that an agent or
system of agents behaves as a unit (Nwana, 1996).
Without coordination, agents are capable of conflict, wasting effort and squandering
resources thus failing to accomplish their required objectives (Durfee, 2004). Therefore,
coordination allows agents to meet global constraints, distribute their expertise, resources
and information and promote efficiency between other agents (Nwana, 1996). Thus, the
ultimate survival of an agent and a multi-agent system is how well agents within the system
can be coordinated (Borgoff et al., 1996).
Coordination can be likened to a 2-player game. In this case, let the players be agents A
and B. The agents give each other clues about their world. For example, every time a clue is
given to agent B, agent B produces a new hypothesis about agent A, and agent B will give
agent A new clues about its world (Agostini, 1999). The coordination of the agents is
considered to be satisfied once the clues given about each of their backgrounds
stabilise to a consistent set of hypotheses that are satisfied within each of their worlds
(Agostini, 1999). In order to have such stability, one assumption is that the agents are
rational, and that there are joint commitments and conventions.
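This clue-exchange game can be sketched as follows (a toy stand-in; the convergence rule and function names are our own assumptions, not Agostini's formulation). Each agent reveals one clue per round that the other's hypothesis still lacks, and coordination is reached when both hypotheses stabilise.

```python
# Sketch of two-agent coordination by clue exchange: each round, an agent
# reveals one item of its world that the other's hypothesis is missing;
# the game ends when both hypotheses match both worlds.

def coordinate(world_a, world_b, max_rounds=10):
    hyp_a, hyp_b = set(), set()   # each agent's hypothesis about the other
    for rounds in range(1, max_rounds + 1):
        clue_a = sorted(world_a - hyp_b)   # A reveals something B lacks
        clue_b = sorted(world_b - hyp_a)   # B reveals something A lacks
        if clue_a:
            hyp_b.add(clue_a[0])           # B revises its hypothesis of A
        if clue_b:
            hyp_a.add(clue_b[0])           # A revises its hypothesis of B
        if hyp_b == world_a and hyp_a == world_b:
            return rounds                  # stable, consistent hypotheses
    return None                            # no stable coordination reached
```

Convergence here relies on the rationality assumption above: both agents always give truthful clues and always incorporate them.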
These notions of joint commitments and conventions are considered to be the cornerstones
of coordination, since they provide the pledges to undertake a specified course of action,
assist in the monitoring of such pledges, and offer the social laws of the system (Jennings,
1993a, b). Commitments not only provide MAS with a degree of flexibility, but in
coordination, they allow agents to make assumptions about the actions of others, hence
removing any uncertainties (Jennings, 1993a, b). Conventions further assist in agent
coordination by ensuring that the members of the community are acting in a coherent
manner. Jennings (1993a, b) states that the overall coherency of the MAS will improve if
there is a minimum reporting action included, and as such, each agent should have one
convention for each commitment they have pledged to.
Another basic ingredient of a multi-agent system is the ability for agents to communicate
with one another. Communication is essential in coordination, as it not only allows agents
to achieve their goals for themselves or for their community, but also allows the coordination
of an agent's actions and behaviours (Huhns and Stephens, 1999). When agents communicate
information about themselves, they are simplifying their models of each other, thus
reducing uncertainties about themselves, their world or their goal (Durfee, 1999).
However, agent communication can also reduce the coherency of a MAS; Durfee (1999)
therefore states that, where necessary, an agent should maintain ignorance about its world
and the agents that populate the system, while preserving enough information to
coordinate well with other agents. This is further emphasised by Jennings (1993a, b):
communication should promote satisfactory coordination, but at a level that lets agents
retain the flexibility to achieve their own goals and objectives in unknown environments.
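Durfee's balance between shared information and deliberate ignorance can be sketched as an agent broadcasting only a compact self-model while holding its detail back. The classes, fields, and message shape below are invented for illustration:

```python
# Hedged sketch: broadcast a compact self-model, keep the rest private.

class Agent:
    def __init__(self, name, goal, plan_steps):
        self.name = name
        self.goal = goal
        self._plan_steps = plan_steps  # private detail, never broadcast

    def summary(self):
        # Just enough for peers to coordinate: who the agent is, what it wants.
        return {"name": self.name, "goal": self.goal}

class Community:
    def __init__(self):
        self.models = {}  # simplified models of each member

    def receive(self, message):
        self.models[message["name"]] = message

team = Community()
scout = Agent("scout", "map-sector-7", plan_steps=["move", "scan", "report"])
team.receive(scout.summary())
print(team.models["scout"])  # {'name': 'scout', 'goal': 'map-sector-7'}
```

Peers learn enough to avoid duplicating the scout's goal, yet the plan itself stays local, so the community's coherency is not overwhelmed by detail it does not need.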
3.2.1.2. Coordination models and languages. Many models and languages have been
developed to solve the problem of coordination within MAS. Coordination models are
necessary because they not only allow the separate activities of agents to be bound into an
assembly, but also provide the necessary framework for interaction and communication
between agents within a system. At a higher level, coordination models can create and
destroy agents, and synchronise and distribute actions to agents over time. In order
to apply coordination models in applications, coordination languages are developed to
ARTICLE IN PRESS
1104 J. Tweedale et al. / Journal of Network and Computer Applications 30 (2007) 1089–1115
appropriate. In order for the service to work properly, new coordination laws must be
computed and added to the infrastructure (Denti et al., 2003).
In the next subsection, a novel hybrid architecture for agent learning by Sioutis et al. (2003)
is described. This extract demonstrates one effort at fusing human reasoning theories
and reinforcement learning techniques into the currently popular BDI architecture.
[Fig. 5 (graphic): a BDI learning agent crossing from the knowledge-based domain (learning behaviour, knowledge-based analysis and planning), through the rule-based domain, to the skill-based domain of learned responses.]
Furthermore, Urlings suggests that learning involves crossing these expertise levels.
Specifically, a learning agent placed in an unfamiliar environment at first operates at the
knowledge level. As the agent gains experience it builds a rule-base for the situations
presented in that particular environment. As the rule-base grows, the agent uses its new
rules more frequently and consequently drops to the rule level. Similarly, as the agent
improves the performance of its rules, it gains confidence in using them, and subsequently
selects them automatically with little or no pre-processing. The agent has then crossed to
the skill level of operation. Fig. 5 illustrates this process graphically.
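The level-crossing process above can be sketched as follows. The class, the usage-count threshold, and the situations are illustrative assumptions, not taken from Urlings' implementation:

```python
# Hedged sketch of level crossing: deliberate at the knowledge level, cache a
# rule, then treat a well-used rule as a skill. Threshold is arbitrary.

class LearningAgent:
    def __init__(self, skill_threshold=3):
        self.rules = {}  # situation -> (action, times_used)
        self.skill_threshold = skill_threshold

    def level(self, situation):
        if situation not in self.rules:
            return "knowledge"  # no rule yet: deliberate from first principles
        _, uses = self.rules[situation]
        return "skill" if uses >= self.skill_threshold else "rule"

    def act(self, situation, deliberate):
        if situation in self.rules:
            action, uses = self.rules[situation]
            self.rules[situation] = (action, uses + 1)
            return action  # rule/skill level: no re-planning needed
        action = deliberate(situation)  # knowledge level: the slow path
        self.rules[situation] = (action, 1)
        return action

agent = LearningAgent()
print(agent.level("ambush"))  # knowledge
agent.act("ambush", deliberate=lambda s: "take-cover")
print(agent.level("ambush"))  # rule
for _ in range(3):
    agent.act("ambush", deliberate=None)
print(agent.level("ambush"))  # skill
```

The point of the sketch is the direction of travel: each successful reuse makes the response cheaper to select, mirroring the drop from knowledge to rule to skill.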
4. Agent applications
Two factors have changed the nature of decision support and are likely to continue to
dominate the future of IDSSs: (1) global enterprises need distributed information and fast
decision-making; and (2) the Internet provides rapid access to distributed information. As
the applications described below illustrate, agents have enabled IDSSs that respond to
these requirements.
In today’s environment, information needed for decision-making tends to be distributed
(Du and Jia, 2003). Networks exist within and outside of enterprises, homes, the military,
government, and nations. Information is segmented for logistic and security reasons onto
different machines, different databases, and different systems. Informed decisions may
[Figure (graphic): an agent-enabled IDSS in which intelligent agents operationalise models from a model base, generate forecasts, evaluate alternatives and recommend actions; the decision maker and computer technology appear as explicit components.]
scenarios that the user may desire. During processing, agents may acquire the needed
models or consult with the user. The feedback loop indicates interaction between the
processing and input components such as real-time updating or user requirements for
additional information. The output component provides the result of the analysis to the
user and possibly recommendations about the decision. In this component agents may, for
example, personalise the output to a particular user so that it is presented in a desired way or
so that drill-down information is available. The decision maker and the computer
technology are components of the overall system and are recognised explicitly in the
diagram. The decision maker is usually a human user, although robotic applications
increasingly utilise automated decisions that may be implemented with agents (Phillips-
Wren and Forgionne, 2002).
4.1.1. Healthcare
Healthcare is an area that benefits from IDSSs by bringing a combination of expert
opinion, availability and preliminary diagnosis to complement the physician. One
illustrative example will be given. Hudson and Cohen (2002) described an IDSS to assist
in the identification of cardiac diseases. The system uses five agents plus the medical
professional to evaluate factors in congestive heart failure. The authors claim good results
with evaluation factors of sensitivity, specificity and accuracy all over 80%. Agents can
monitor patient conditions, supply information to the physician over the Internet, bring
outside experts into the consultation, or advise the physician, thereby becoming an
indispensable second set of hands.
[Fig. 7 (graphic): the quality of information, knowledge and awareness improving over time, from today's systems and messages with a common operational picture, through net-centric components, federations and a common operational model, to info-centric GIG services with SW agents using M&S services.]
Fig. 7. C2 improvements in military applications using agent-based IDSSs (Tolk, 2005).
the flow of information as needed to the right person(s), and recommends a course of
action. These agents work in the background to monitor the evolving situation and to
recommend actions.
4.1.3. e-Commerce
Decision support systems can be used in electronic commerce to aid in the choice of
product on the basis of criteria specified by the user (Gwebu et al., 2005; Singh and Reif,
1999). Gwebu et al. (2005) described an e-commerce IDSS that included agents to assist
with decisions about what to buy and sell, and at what price. Buyer and seller agents have access to databases
with products, buyers, sellers and trading history. Using this information, supply and
demand can be predicted, enabling decisions on what, when, and at what price to purchase
or sell. Agents implement multi-criteria purchase decisions, trading off criteria such
as price, quality, functionality, condition, brand name, warranty and delivery terms. It is
possible to manage huge amounts of distributed data needed for decision-making using
agents. In 2003, Wang, Fang, Wang and Liu proposed an IDSS for the electronic
commerce market to maximise the sale price of electricity. The bidding strategy is
influenced by qualitative and quantitative factors including the strategies of other bidders.
Agents interact with the user and with other agents. Negotiation can also be implemented
with agents (Liu et al., 2003).
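One common way to realise the multi-criteria purchase trade-offs mentioned above is a weighted-sum ranking over offers. The weights, criteria, and offers below are invented for illustration and are not drawn from Gwebu et al.'s system:

```python
# Hedged sketch: a buyer agent ranks offers by a weighted sum of criteria.
# A negative weight on price means cheaper offers score higher.

def score(offer, weights):
    return sum(weights[c] * offer[c] for c in weights)

weights = {"price": -0.5, "quality": 0.3, "warranty": 0.2}
offers = [
    {"seller": "A", "price": 100, "quality": 8, "warranty": 12},
    {"seller": "B", "price": 80,  "quality": 6, "warranty": 6},
]
best = max(offers, key=lambda o: score(o, weights))
print(best["seller"])  # B
```

In a fuller system the weights would come from the user's stated preferences, and criteria such as brand name or condition would first be mapped onto a numeric scale.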
support decision-making. Two basic types of agents are used: a Knowledge Agent
and a User Interface Agent. A decision tree component was used to represent domain
knowledge for both agents in a rule-based, modular method that can be learned, used
and shared by other agents. The architecture provides a facility to explain decisions to
the user. Houari and Far (2004) described an enterprise-wide system for knowledge
management using agents. As an illustration of agents in knowledge management,
Lee et al. (2002) described agents used for decision support in supply chain
management.
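A decision-tree component of the kind described above can be held as a small, shareable data structure that several agents evaluate against their facts. The tree, conditions, and recommendations below are invented for illustration:

```python
# Hedged sketch: domain knowledge as a tiny decision tree that a Knowledge
# Agent and a User Interface Agent could both evaluate as modular rules.

TREE = ("stock_low?",                               # internal nodes test a fact
        ("demand_rising?", "reorder-now", "reorder-soon"),
        "no-action")                                # leaves are recommendations

def decide(tree, facts):
    if isinstance(tree, str):
        return tree                                 # leaf: a recommendation
    test, yes_branch, no_branch = tree
    return decide(yes_branch if facts[test] else no_branch, facts)

print(decide(TREE, {"stock_low?": True, "demand_rising?": False}))  # reorder-soon
print(decide(TREE, {"stock_low?": False}))                          # no-action
```

Because the tree is plain data, the path taken through it doubles as an explanation of the decision that can be shown to the user.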
This section describes initial research into intelligent agents using the BDI
architecture in a human–machine teaming environment. The potential for teaming
applications of intelligent agent technologies based on cognitive principles is examined.
Intelligent agents using the BDI reasoning model can be used to provide a situation
awareness capability to a human–machine team dealing with a hostile military
environment. The implementation described here uses JACK agents and Unreal
Tournament (UT). JACK is an intelligent agent platform, while UT is a fast-paced
interactive game set within a 3D graphical environment. Each game is scenario based and
displays the actions of a number of opponents engaged in adversarial roles. Opponents can
be humans or agents interconnected via the UT game server. To support the research goals,
JACK extends the Bot class of UT, and its agents apply the BDI architecture to build
situational awareness.
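The percept-to-belief-to-plan coupling described above can be sketched framework-free. The message format below loosely echoes Gamebots-style text lines, but every class name, field, and threshold is illustrative, not taken from the JACK/UT implementation:

```python
# Hedged sketch: percepts from the game server revise beliefs, and beliefs
# select a plan (a BDI-flavoured loop, without any real JACK or Gamebots API).

def parse_percept(line):
    """Parse e.g. 'SLF {Health 80} {Location 3,4}' into (type, attributes)."""
    msg_type, _, rest = line.partition(" ")
    attrs = dict(part.split(" ", 1) for part in rest.strip("{}").split("} {"))
    return msg_type, attrs

class SituationAwareBot:
    def __init__(self):
        self.beliefs = {}

    def handle(self, line):
        msg_type, attrs = parse_percept(line)
        self.beliefs.update(attrs)              # belief revision from the percept
        if msg_type == "SLF" and int(self.beliefs.get("Health", 100)) < 30:
            return "retreat"                    # plan chosen from current beliefs
        return "patrol"

bot = SituationAwareBot()
print(bot.handle("SLF {Health 80} {Location 3,4}"))  # patrol
print(bot.handle("SLF {Health 20} {Location 3,5}"))  # retreat
```

In the real system this loop sits behind a TCP/IP connection to the UT server, and the plan selection is performed by JACK's BDI machinery rather than a hand-written conditional.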
The next subsection is taken from Sioutis et al. (2003) and provides the background
for the use of intelligent agents and their cognitive potential. The research is described in
terms of the operational environment and the corresponding implementation that is
suitable for intelligent agents to exhibit BDI behaviour. The JACK application is
described, and specific requirements are addressed to implement learning in intelligent
agents. Other cognitive agent behaviours, such as communication and teaming, are aims
of this research.
[Figure (graphic): the UtJackInterface architecture. A JACK agent (goal events, beliefs, plans and a JACK view) contains a UtController class that extends the Javabot Bot class, which communicates with Unreal Tournament over TCP/IP via the UT-BotAPI and Gamebots.]
5. Concluding remarks
Intelligent agents have been the subject of many developments in the past decade,
especially in operations analysis, where the modelling of 'human-like' intelligence and
decision-making components is required. We expect that intelligent agents will retain the
architectural foundations discussed. We have witnessed the transition from object-oriented
to agent-oriented software development and MAS are maturing to a point where standards
are being discussed. We have attempted to present a guide for the future design of agent-
based systems, although MAS are expected to continue to evolve through a series of reasoning
models and methodologies before a single standard is agreed within the MAS community.
Furthermore, better support for human/agent teams will continue to mature and form the
basis of a new class of intelligent decision support applications. The future design of agent-
based systems will rely on the collaboration of academia, industry and Defence with a focus
on solving a significant real-world problem. Our initial efforts have focused on
demonstrating that a multi-agent system is capable of learning, teaming and coordination.
Acknowledgements
The financial assistance provided by the Australian Defence Science and Technology
Organisation is acknowledged. The reviewers' comments enhanced the quality of the presentation. We
wish to express our appreciation to Dr. Dennis Jarvis, Ms. Jacquie Jarvis, Professor Pierre
Urlings and Professor Lakhmi Jain for their valuable contribution in this work.
References
Agostini A. Notes on formalising coordination. In: Proceedings of AIIA 99: sixth congress of the Italian
Association for Artificial Intelligence: advances in artificial intelligence, Bologna, Emilia-Romagna, Italy.
Berlin: Springer; 1999. p. 285 (September 1999).
Alwast T, Miliszewska I. An agent-based architecture for intelligent decision support systems for natural resource
management. New Rev Appl Expert Systems 1995:193–205.
AOS. Jack intelligent agents. Melbourne: Agent Oriented Software Pvt. Ltd; 2002.
AOS. JACK intelligent agents: JACK manual, Release 4.1. Melbourne: Agent Oriented Software, Pvt. Ltd; 2004a.
AOS. JACK intelligent agents: JACK teams manual, Release 4.1. Melbourne: Agent Oriented Software Pvt. Ltd;
2004b.
Bates J. The role of emotion in believable agents. Commun ACM 1994;37(7):122–5.
Billings CE. Aviation automation: the search for a human-centered approach. Mahwah, NJ: Lawrence Erlbaum
Associates; 1997.
Borgoff UM, Bottini P, Mussio P, Pareschi R. A systematic metaphor of multi-agent coordination in living
systems. In: Proceedings of ESM’96: 10th European simulation multiconference, Budapest, Hungary, 2–6 June
1996, p. 245–53.
Boyd J. 2004. Accessed from http://www.valuebasedmanagement.net/methods_boyd_ooda_loop.html
Bratman ME. Intention, plans, and practical reason. Cambridge, MA: Harvard University Press; 1987.
Bratman ME, Israel DJ, Pollack ME. Plans and resource-bounded practical reasoning. Comput Intell
1988;4:349–55.
Busetta P, Howden N, Rönnquist R, Hodgson A. Structuring BDI agents in functional clusters. Intelligent agents
VI: proceedings of the sixth international workshop, ATAL 1999. Lecture notes in artificial intelligence, vol.
1757. Heidelberg, Berlin: Springer; 2000.
Castelfranchi C. Guarantees for autonomy in cognitive agent architecture. In: Wooldridge M, Jennings NR,
editors. Intelligent agents: theories, architectures, and languages. Lecture notes in artificial intelligence,
vol. 890. Heidelberg, Berlin: Springer; 1995. p. 56–70.
Cohen P, Levesque H. Teamwork. Nous Special Issue Cognitive Sci Artificial Intell 1991;25(4):487–512.
Denti E, Ricci A, Rubino R. Integrating and orchestrating services upon an agent coordination infrastructure. In:
Omicini A, Petta P, Pitt J, editors. Proceedings of ESAW 2003—engineering societies in the agents world:
fourth international workshops. Lecture notes in computer science, vol. 3071/2004. Heidelberg: Springer;
2003. p. 228–45.
Du H, Jia X. A framework for Web-based intelligent decision support enterprise. In: Proceedings of the 23rd
IEEE international conference on distributed computing systems workshops, 2003. p. 958–61.
Durfee EH. Practically coordinating. AI Mag 1999;20(1):99–116.
Durfee EH. Challenges to scaling-up agent coordination strategies. In: Wagner T, editor. Multi-agent systems: an
application science. Dordrecht: Kluwer Academic Publishers; 2004. p. 1–20.
Genesereth MR, Ketchpel SP. Software agents. Commun ACM 1994;37(7):48–59.
Georgeff MP, Lansky AL. Reactive reasoning and planning. In: Proceedings of the sixth national conference on
artificial intelligence. Menlo Park, CA: AAAI Press; 1987. p. 677–82.
Gwebu K, Wang J, Troutt M. Constructing a multi-agent system: an architecture for a virtual marketplace. In:
Phillips-Wren G, Jain L, editors. Intelligent decision support systems in agent-mediated environments.
Netherlands: IOS Press; 2005.
Hammond GT. The mind of war: John Boyd and American security. Washington, USA: Smithsonian Institution
Press; 2004.
Hicks J, Flanagan R, Petrov P, Stoyen A. Taking agents to the battlefield. In: Proceedings of the second
international workshop on formal approaches to agent-based systems, 2002. p. 220–32.
Houari N, Far B. Application of intelligent agent technology for knowledge management integration.
In: Proceedings of the third IEEE international conference on cognitive informatics, August 16–17, 2004.
p. 240–9.
Huber MJ. JAM: a BDI-theoretic mobile agent architecture. In: Proceedings of the third international conference
on autonomous agents, 1999. p. 236–43.
Hudson DL, Cohen ME. Use of intelligent agents in the diagnosis of cardiac disorders. Comput Cardiol
2002:633–6.
Huhns MN, Stephens LM. Multiagent systems and societies of agents. In: Weiss G, editor. Multiagent systems: a
modern approach to distributed artificial intelligence. Cambridge, MA; London: The MIT Press;
1999. p. 79–121.
InfoGames, Epic Games, and Digital Extremes. Unreal Tournament manual, New York, 2000.
Jennings NR. Commitments and conventions: the foundation of coordination in multi-agent systems. Knowledge
Eng Rev 1993a;8(3):223–50.
Jennings NR. Specification and implementation of a belief–desire-joint-intention architecture for collaborative
problem solving. J Intell Cooperative Inf Systems 1993b;2(3):289–318.
Kaminka GA, Veloso MM, Schaffer S, Sollitto C, Adobbati R, Marshall AN, et al. GameBots: a flexible test bed
for multi-agent team research. Commun ACM 2002;45:43–5.
Kinny D, Georgeff M, Rao A. A methodology and modeling techniques for systems of BDI agents. In: Van de
Velde W, Perram JW, editors. MAAMAW ‘96–agents breaking away: proceedings of seventh European
workshop on modelling autonomous agents in a multi-agent world. Lecture notes in artificial intelligence, vol.
1038. Heidelberg: Springer; 1996. p. 56–71.
Lee CC, Lee KC, Han JH. An intelligent agent based decision support system for supply chain management in e-
business. In: Proceedings of the international resources management association, Seattle, WA, May 19–22,
2002. p. 418–22.
Liu Q, Liu Y, Zheng J. Multi-agent based IDSS architecture and negotiation mechanism. In: Proceedings of
the 2003 IEEE international conference on natural language processing and knowledge engineering, 2003.
p. 198–202.
Marshall AN, Vaglia J, Sims JM, Rozich R. JavaBot for Unreal Tournament, 2003.
Nwana HS. Software agents: an overview. Knowledge Eng Rev 1996;11(1):1–40.
Nwana HS, Jennings NR. Coordination in multi-agents systems. In: Proceedings of the software agents and soft
computing: towards enhancing machine intelligence, concepts and applications, 1997. p. 42–58.
Omicini A, Ossowski S. Objective versus subjective coordination in the engineering of agent systems. In: Ossowski
S, editor. Intelligent information agents: the AgentLink perspective, vol. 2586/2003. Berlin: Springer; 2003.
p. 179–202.
Ossowski S, Hernandez J, Iglesias C, Fernandez A. Engineering agent systems for decision support. In:
Proceedings of the third international workshop on engineering societies the agents works, ESAW2002,
Madrid, Spain, September 16–17, 2002. p. 184–98.
Payne T, Lenox T, Hahn S, Lewis M, Sycara K. Agent-based team aiding in a time critical task. In: Proceedings of
the 33rd Hawaiian international conference on system sciences, 2002.
Phillips-Wren G, Forgionne G. Advanced decision-making support using intelligent agent technology. J Decision
Systems 2002;11(2):165–84.
Rao AS, Georgeff MP. Decision procedures for BDI logics. J Logic Comput 1998;8(3):293–344.
Rasmussen J. Skills, signs, and symbols, and other distinctions in human performance models. IEEE Trans
Systems Man Cybernet—SMC 1983;13(3).
Rasmussen J, Pejtersen AM, Goodstein LP. Cognitive systems engineering. In: Sage AP, editor. Wiley series in
systems engineering. New York: Wiley; 1994.
Ricci A, Omicini A, Denti E. Enlightened agents in TuCSoN. In: Proceedings of AIIA/TABOO—Oggetti agli
agenti: tendenze evolutive dei sistemi software (WOA2001), September 2001, Modena, Italy.
Schumacher M. Objective coordination in multi-agent system engineering: design and implementation.
Heidelberg: Springer; 2001.
Sengupta R, Bennett D. Agent-based modelling environment for spatial decision support. Int J Geographic Inf Sci
2003;17(2):157–81.
Shen W, Barthes JP. DIDE: a multi-agent environment for engineering design. In: Proceedings of ICMAS ’95,
1995.
Singh R, Reif H. Intelligent decision aids for electronic commerce. In: Proceedings of the fifth Americas
conference on information systems, Atlanta, GA, 1999. p. 85–7.
Singh R, Salam A, Iyer L. Using agents and XML for knowledge representation and exchange: an intelligent
distributed decision support architecture (IDDSA). In: Proceedings of the ninth Americas conference on
information systems, 2003. p. 1854–63.
Sioutis C, Ichalkaranje N, Jain LC. A framework for integrating BDI agents. In: Proceedings of the third hybrid
intelligent systems conference, Melbourne, Australia. Netherlands: IOS Press; 2003. p. 743–8.
Sioutis C, Ichalkaranje N, Jain LC, Urlings P, Tweedale J. A conceptual reasoning and learning model for
intelligent agents. In: Proceedings of the second international conference on artificial intelligence in science
and technology, Hobart, Australia, November 2004. p. 301–7.
Tolk A. An agent-based decision support system architecture for the military domain. In: Phillips-Wren G, Jain L,
editors. Intelligent decision support systems in agent-mediated environments. Netherlands: IOS Press; 2005.
Urlings P. Human–machine teaming. PhD thesis, University of South Australia, Adelaide, 2004.
Urlings P, Tweedale J, Sioutis C, Ichalkaranje N, Jain LC. Intelligent agents as cognitive team members. In:
Proceedings of the conference on human–computer interaction, 2003.
Wang P, Fang D, Wang X, Liu K. Intelligent bid decision support system for generations companies. In:
Proceedings of the second international conference on machine learning and cybernetics, 2–5 November,
2003a. p. 396–401.
Wang Z, Tianfield H, Jiang P. A framework for coordination in multi-robot systems. In: Proceedings of INDIN
2003: IEEE international conference on industrial informatics, 2003b. p. 483–9.
Wooldridge M. In: O’Hare GMP, Jennings NR, editors. Temporal belief logics for modelling distributed artificial
intelligence systems, foundations of distributed artificial intelligence. New York: Wiley Interscience; 1995.
Wooldridge M. Reasoning about rational agents. Cambridge, MA: The MIT Press; 2000.
Wooldridge M, Jennings NR. Intelligent agents: theory and practice. Knowledge Eng Rev 1995a;10(2).
Wooldridge M, Jennings NR. In: Wooldridge M, Jennings NR, editors. Theories, architectures, and languages: a
survey, intelligent agents: ECAI-94 proceedings of the workshop on agent theories, architectures, and
languages. Lecture notes in artificial intelligence, vol. 890. Heidelberg: Springer; 1995b. p. 1–39.
Wooldridge M, Jennings NR. The cooperative problem-solving process. J Logic Comput 1999;9(4):563–92.