Introduction to Intelligent Agents

14.1 Introduction

Computers are incapable of performing any function unless the actions are explicitly coded by a programmer. Owing to this, in cases where a program designer fails to foresee certain situations, the program may produce results that are unknown to the system or may even crash. However, in a large number of applications, we require systems that possess the capability of making decisions regarding their course of action in order to accomplish their design objectives. Such computer systems are known as agents. A key feature of these agents is their autonomous nature. An agent may be one of the following kinds: biological, robotic, or computational. In this chapter, we will restrict our discussions to computational agents, also known as software agents.

A software agent may be defined as a computer program which is capable of acting on behalf of the user in order to accomplish a given computational task. Examples of such agents include a simple agent that is set up to buy a particular stock when its price falls below a stipulated value, an Internet search agent that is designed to transmit queries to various search engines and then collate the results obtained, and so on. Software daemons in UNIX operating systems can be viewed as agents that monitor a software environment and perform actions to modify it. Another example of an agent is the xbiff utility of X Windows, which monitors a user's incoming emails and then indicates via a GUI icon whether or not the messages have been read.

In distributed applications, multiple agents may be used; in such systems, each agent is responsible for achieving its goals, interacting with the environment, and communicating with other agents in the system. Such systems are known as multi-agent systems. These are multi-threaded, which implies that each agent is assumed to have at least one thread of control.

From our discussion above, we can state that agents may be thought of as computer systems or programs that are capable of acting autonomously in a given environment to meet their objectives. An agent can be understood to sense or perceive the environment with the help of devices called sensors, and then to act upon its environment through effectors. Therefore, we can think of an agent as taking stock of its environment through sensors and then modifying this environment with the help of effectors (Fig. 14.1). With the analogy of a human agent, sensors are eyes, ears, nose, skin, etc., while effectors include hands, legs, mouth, and other body parts.

[Figure 14.1 Interactions of an Agent with the Environment: percepts flow from the environment through sensors into the agent, and actions flow from the agent through effectors back to the environment]

Therefore, an agent is capable of interacting with its working environment, and with other agents present in this environment, with some degree of control over its internal state and actions. Thus, an agent can be defined in terms of its environment, the percepts it receives through its sensors, the actions it performs through its effectors, and its internal state.

An important class of agents consists of intelligent agents, which are capable of robust operation even in dynamic and unpredictable environments, where there is a great probability of actions failing. Because of this, an important characteristic that an agent must possess is intelligence. In this chapter, we will focus our attention on different types of agents, architectures of intelligent agents, the communication language between agents, and multi-agent systems, along with an application requiring multi-agent architecture.
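The sense-act cycle of Figure 14.1 can be rendered in code. The following is a minimal, illustrative sketch only; the thermostat-style environment and the class and method names are assumptions invented for the example, not something defined in the text.

```python
# A toy rendering of the percept -> agent -> action loop of Figure 14.1.
# All names here (Environment, ReflexAgent, etc.) are illustrative.

class Environment:
    """A room whose temperature drifts upward over time."""
    def __init__(self):
        self.temperature = 20.0

    def percept(self):
        # what the agent's "sensors" report
        return self.temperature

    def apply(self, action):
        # the agent's "effectors" modify the environment
        if action == "cool":
            self.temperature -= 2.0
        self.temperature += 0.5   # the environment also changes on its own

class ReflexAgent:
    def act(self, percept):
        # map the current percept directly to an action
        return "cool" if percept > 22.0 else "wait"

env, agent = Environment(), ReflexAgent()
for step in range(10):
    p = env.percept()   # sense
    a = agent.act(p)    # decide
    env.apply(a)        # act
    print(step, round(env.temperature, 1), a)
```

Even this toy agent shows the defining structure: the agent owns the decision of which action to take, while the environment changes independently of it.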
14.2 Agents versus Software Programs

Traditional software programs lack the ability to assess and react to the environment and modify their behaviour accordingly. They do not follow a goal-oriented and autonomous approach to problem-solving. These characteristics distinguish traditional programs from agents, whose key feature is the presence of autonomy (Franklin and Graesser, 1996). For instance, a payroll program could probably be said to sense the world through its input and act on it via its output, but it is not an agent because its output would not normally be affected if it senses or finds unforeseen situations later. In such a program, there is no concept of capturing an environment which is dynamic in nature and can affect the output of the system at different times of its invocation.

14.2.1 Agents and Objects

Wooldridge has depicted the underlying difference between agents and objects in terms of autonomy and behaviour, which depends on characteristics such as reaction, proactiveness, and social ability (Wooldridge and Jennings, 1995). Standard object models do not support the kind of behaviour normally displayed by agents. The most basic difference lies in the degree to which agents and objects are autonomous; the classical definition of objects defines them as computational entities that are capable of encapsulating a certain state and methods on this state, and then performing actions. The manner in which different objects communicate with each other is called message passing. A typical object as defined in Java/C++ consists of instance variables and methods which can have public or private access. While instance variables identified as private can only be accessed from within an object, public methods can be accessed from anywhere. Although in this way an object may be thought of as exhibiting autonomy over its state (as it displays control over it), it does not exhibit any control over its behaviour. On the other hand, an agent may or may not choose to perform a certain action which is of no interest to itself, even if it is directed by other agents in favour of that particular action. That is, if agent i requests agent j to perform an action, then agent j may choose to perform or not perform this action. Therefore, the decision to perform a given action rests with agent j. In the case of object systems, the decision is taken by the object that invokes the method.

From our discussion so far, we have confirmed that agents possess the capability of autonomous behaviour based on the environment, while objects do not possess any such capability. Thus, we can state that agents display a stronger sense of autonomy than objects, and can take the important decision of whether or not to perform an action on the request of another agent (see the sketch at the end of this section).

14.2.2 Agents and Expert Systems

Expert systems were considered to be the most important AI technology of the 1980s. An expert system is a system that is considered to be an expert when it comes to solving problems or giving advice in some knowledge-rich domain. A classic example of an expert system is MYCIN; this expert system was developed for assisting physicians in the treatment of blood infections in humans. Expert systems are defined as rule-based systems in which a knowledge engineer uses the knowledge of a certain domain and codes this knowledge as rules and facts in a special type of database known as a knowledge base. The rules are usually rules of thumb, that is, they are based on heuristic knowledge of the domain expert. In fact, expert systems are quite similar to regular learning systems in that they also draw conclusions about new cases depending on the knowledge that has been provided as input to the program. Therefore, expert systems are based on the fact that previous knowledge of a certain application exists and that we can acquire this knowledge from samples or interviews with domain experts and then code this gathered knowledge into the knowledge base. However, expert systems are not capable of interacting with their environment and do not display reactive, proactive behaviour or social abilities such as cooperation, coordination, and negotiation.
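To make the object/agent contrast of Section 14.2.1 concrete, here is a small illustrative sketch; the classes and the authorization rule are invented for this example, not taken from the text. An object's public method executes whenever it is invoked, while an agent applies its own decision procedure to each request.

```python
# Hypothetical example: object vs agent control over behaviour.

class AccountObject:
    def __init__(self):
        self.balance = 100            # encapsulated state

    def withdraw(self, amount):
        # a public method runs whenever a caller invokes it;
        # the decision rests with the invoking object
        self.balance -= amount

class AccountAgent:
    def __init__(self):
        self.balance = 100

    def request_withdraw(self, amount, requester):
        # the agent itself decides whether honouring the request serves its goals
        if requester == "owner" and amount <= self.balance:
            self.balance -= amount
            return True
        return False                  # request declined: autonomy over behaviour

obj = AccountObject()
obj.withdraw(1000)                    # executes unconditionally; balance goes negative

agent = AccountAgent()
print(agent.request_withdraw(1000, "owner"))    # False: beyond balance, refused
print(agent.request_withdraw(50, "stranger"))   # False: unauthorized, refused
print(agent.request_withdraw(50, "owner"))      # True: agent chooses to comply
```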
14.3 Classification of Agents

Agents can be classified into different classes. Nwana has defined a typology and classified agents on the basis of two parameters: mobility and interaction with the environment (Nwana, 1996). They may also be further classified depending on primary attributes such as autonomy, cooperation, and learning ability.

The term mobility refers to the ability of agents to move around in a given network. Agents that possess mobility are called mobile agents; they are able to roam around in wide-area networks, interact with foreign hosts, and also perform tasks on behalf of their owners before returning back to the originator.

On the basis of their interaction with the environment, agents may be either deliberative or reactive. Deliberative agents possess a reasoning model, and plan and negotiate for coordination with other agents in order to achieve their goals. On the other hand, reactive agents do not possess any prior model of their environment and act by responding to the present state of the environment in which they are embedded, using a stimulus/response type of behaviour. This implies that the same action performed twice in identical circumstances might appear to have entirely different effects. In particular, the action may fail to have the desired effect; owing to this, agents must be prepared for the possibility of failure in all environments, except the most trivial ones.

The primary characteristics of agents are as follows:

* Autonomy
* Reactivity
* Proactiveness

Let us briefly describe each of these as follows:

Autonomy: Autonomy is that characteristic of an agent which enables it to function without the direct intervention of humans or other intelligent systems, and thus retain control over its actions and internal state. Autonomy is considered to be the central concept in designing an agent. This characteristic of autonomy, along with the ability to gather input from the environment and communicate with other agents, provides system builders with a very powerful and effective form of encapsulation.

Reactivity: Reactivity is that characteristic of agents owing to which they judge their environment and respond in accordance with the changes occurring in it. As mentioned earlier, reactive agents do not possess any internal model of their environment and act using a stimulus/response type of behaviour by responding to the present state of the environment in which they are embedded.

Proactiveness: Agents that possess the characteristic of proactiveness are capable of taking initiative to make decisions in order to achieve their goals.

We can design a goal-directed system by simply writing procedures in a programming language and specifying the pre-conditions and post-conditions of the system.
The effects of the procedures are seen as the goals of the system. If the pre-conditions hold when a procedure is applied, it is assumed that the procedure would execute correctly, resulting in the post-conditions of the given system. This exemplifies the design of a simple goal-oriented system. However, this holds true only in those cases where the environment does not change during the application of the procedure. Executing a procedure designed for a static environment in a dynamic environment is a poor strategy and may lead to erroneous results. As we have understood from our discussions so far, an agent must possess the qualities of being reactive and adaptive to the events that occur in a dynamic environment. The nature of events could be such that they lead to changes in the goals or the assumptions of the agent under which the procedure is executing.

From the preceding discussion, we understand that building purely goal-directed systems is not difficult. What we should also understand is that building purely reactive systems that continually respond to their environment is not that difficult either, since they base their decision making entirely on the present condition and are not concerned with the past at all; they simply respond directly to the current situation of their environment. However, developing an agent that achieves an effective balance between goal-directed and reactive behaviour is complicated. That is, although on one hand we want agents that are capable of achieving their goals systematically using complicated procedure-like methods of action, we do not want these agents to continue executing these procedures when the goal no longer remains valid or the procedure appears to be failing its purpose. In such cases, we desire agents that can change their mode of operation in accordance with the change in situation. However, we should also keep in mind that the designed agent should not be such that it continuously keeps reacting to the environment without focussing on the goal long enough, and hence failing to achieve it. A sketch of this balance appears at the end of this section.

Agents classified on the basis of the primary attributes listed earlier are observed to exhibit the behaviour of one primary attribute or a conjunction of two or more primary attributes. Consider the example of a collaborative learning agent, which displays the behaviour of cooperation as well as learning. Similarly, collaborative agents are those which cooperate as well as display autonomy. Proceeding in the same manner, we can define agents that are capable of displaying both learning and autonomous behaviour; these are known as interface agents. The agents which depict characteristics of all three primary attributes are the most difficult to implement and are known as smart agents. Smart agents are placed in an environment and act on it for achieving their stipulated goals over some duration of time.

A multi-agent system is formed when several agents communicate and collaborate with each other to achieve a single goal. No single agent in such a system possesses the complete data or knowledge of all methods required to achieve this goal and therefore needs to continuously communicate with other agents. Agents in this system may continuously negotiate with each other for exchange of information, or information may simply be transmitted from one agent to another. Agents may be homogeneous (performing similar tasks) or heterogeneous (performing different types of tasks).
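The balance between goal-directed and reactive behaviour described above can be sketched as a simple control loop. This is an illustrative sketch only; the plan steps, the goal_still_valid test, and the stub environment are invented for the example, not drawn from the text.

```python
# Hypothetical sketch: interleaving plan execution (goal-directed behaviour)
# with checks of the environment (reactive behaviour).

def goal_still_valid(percept):
    # placeholder test; a real agent would compare percepts with its goal
    return not percept.get("goal_achieved_by_other_means", False)

def run_agent(plan, environment):
    """Execute a plan step by step, but react to changes between steps."""
    i = 0
    while i < len(plan):
        percept = environment.sense()

        # reactive part: urgent events pre-empt the plan for one cycle
        if percept.get("emergency"):
            environment.act("handle_emergency")
            continue                    # retry the same plan step next cycle

        # goal-directed part: abandon the plan if its goal is gone
        if not goal_still_valid(percept):
            return "replan"

        environment.act(plan[i])        # otherwise carry on with the procedure
        i += 1
    return "done"

class StubEnv:
    """Minimal stand-in environment for demonstration."""
    def __init__(self):
        self.t = 0
    def sense(self):
        self.t += 1
        return {"emergency": self.t == 2}   # one urgent event mid-plan
    def act(self, action):
        print("acting:", action)

print(run_agent(["step1", "step2", "step3"], StubEnv()))
```

Checking the environment on every step keeps the agent reactive; ignoring non-urgent noise keeps it focussed on the goal long enough to achieve it.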
Various types of agents (generated using different combinations of characteristics) are discussed in the following subsections.

14.3.1 Collaborative Agents

Collaborative agents emphasize autonomy and cooperation with other agents, and typically operate in open and time-constrained multi-agent environments. They negotiate with other agents present in the environment to reach mutually acceptable agreements during cooperative problem solving. These agents are used to solve problems that are too large for a single centralized agent; they enable interconnection and inter-operation of existing legacy systems and also provide solutions to those problems that are inherently distributed in nature. One of the shortcomings of these agents is that they have limited learning capabilities, including parametric or rote learning. These agents are primarily created to enable the creation of a system that interconnects separately developed collaborative agents to solve problems that are too large, and to provide solutions to inherently distributed problems. Presently, the number of such agents deployed in real industrial settings is limited, and deployment involves engineering the construction of collaborative agent systems, inter-agent coordination and negotiation, and so on. Important challenges include the issues of stability, scalability, performance, learning in collaborative agent set-ups, and evaluation of collaborative agent systems.

14.3.2 Interface Agents

Interface agents can be imagined as personal assistants which emphasize autonomy and learning in order to perform tasks for their owners. They support and provide proactive assistance to the user of a particular application but display limited cooperation with other agents. These agents are capable of adapting to their user's preferences and habits over a duration of time by observing and imitating the user, by receiving explicit instructions from the user, by asking other agents for advice, and by receiving positive and negative feedback from the user. Therefore, while dealing with an interface agent, there is less work for the end user and the application developer. These agents are suitable for applications requiring substantial repetitive behaviour, which may vary from user to user. The key challenges of interface agents include performing experiments to determine the learning techniques suitable for certain domains, providing the reasoning for the same, ensuring user privacy, and extending the range of applications into other related areas.

14.3.3 Reactive Agents

Reactive agents do not possess an internal, symbolic model of their environment and respond to the present state of the environment in a stimulus-response manner. These agents are viewed as a collection of modules operating autonomously, which are responsible for specific tasks such as sensing, motor control, computations, and so on. The communication between the various modules is limited and of low level. These agents are relatively simple and interact with other agents in basic ways. Reactive agents are robust and fault tolerant, and do not have any specification or plan for behaviour. Such agents should possess certain characteristics such as flexibility, adaptability, and rapid response times. However, it is not easy to design such systems to obtain a desired behaviour.

14.3.4 Internet Agents

Internet agents are also known as information agents.
They help users to find filtered, managed, and classified information from a vast source of information, and retrieve or collate information from many distributed sources on WANs such as the Internet. These agents may prove to be very useful for managing the information explosion of the web. They are typically embedded within an Internet browser and use a host of Internet management tools such as spiders and search engines to gather information. For static agents, the main challenge is to keep the indexes up-to-date. Future internet agents are likely to be mobile.

14.3.5 Mobile Agents

Mobile agents are free to roam through wide area networks, interact with foreign hosts, perform tasks on behalf of their owners, and return to their origin. While static agents exist as a single process or thread on the host computer, mobile agents pick up and move their code and data to a new host in the web, where they then resume execution. The key challenges of mobile agents include transportation, authentication, secrecy and privacy, destination system security, performance issues, and interoperability/communication/brokering services.

14.3.6 Hybrid Agents

These agents combine two or more agent philosophies within a single agent, thus maximizing the strengths and minimizing the deficiencies of the various techniques to achieve a hybrid that would be more suitable for a particular application. These are also called heterogeneous agents.

14.3.7 Intelligent Agents

Intelligent agents are those that exhibit properties of intelligence and can perform functions that require higher-level cognitive abilities. The term intelligence is used in several fields such as artificial intelligence, cognitive science, and robotics, along with many interdisciplinary programs.

14.7.1 Logic-based Architecture

In a logic-based architecture, the internal state of an agent is maintained as a database D of formulae, and decision making is realized through symbolic reasoning over this database. The next state of the database is generated by using a function named generate_next, having the following form:

    generate_next(D, P) → D

It thus maps a database and a percept P to a new database. The action function of the agent maps a database D to an action A as follows:

    action(D) → A

However, this approach has many disadvantages as well. In particular, the inherent computational complexity of logical reasoning means that it may not be effective for agents operating in time-constrained environments. An assumption in decision making in logic-based agents is that the world does not change while the agent is deciding. However, this is not the case, as real-world environments are dynamic. Therefore, the problems associated with the representation of, and reasoning about, complex, dynamic environments make a purely logic-based approach unsuitable for such environments.
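The two functions above can be made concrete. Below is a minimal illustrative implementation in which the "database" is a set of facts and the deduction step is a pair of simple rules; the domain, the predicates, and the rule format are all invented for the example.

```python
# Hypothetical sketch of a logic-based agent:
# generate_next(D, P) -> D updates the database from a percept,
# action(D) -> A derives an action from the database.

def generate_next(database, percept):
    """Return a new database updated with facts derived from the percept."""
    updated = set(database)
    updated.add(("temperature", percept["temp"]))
    # a deduction step: derive a higher-level fact from raw percept facts
    if percept["temp"] > 30:
        updated.add(("overheating",))
    return updated

def action(database):
    """Map the database to an action via simple rules."""
    if ("overheating",) in database:
        return "switch_on_fan"
    return "do_nothing"

D = set()
for temp in (25, 29, 33):
    D = generate_next(D, {"temp": temp})
    print(temp, action(D))
```

The cost of the deduction step is exactly the weak point identified above: with a full theorem prover in place of these toy rules, deciding on an action can take longer than the environment stays unchanged.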
14.7.2 Reactive Architecture

Researchers started investigating alternatives to logic-based approaches in the mid-to-late 1980s. The subsumption architecture was proposed by Brooks, who was one of the strong critics of the symbolic approach (Brooks R., 1991). Subsumption has been widely influential in autonomous robotics and in real-time AI systems. The subsumption architecture is a way of decomposing complicated intelligent behaviour into many simple, self-contained behaviour modules, which are organized into layers. Each layer implements a particular goal of the agent, and higher layers are increasingly more abstract. The goal of each layer subsumes that of the underlying layers.

The subsumption architecture has two main characteristics. The first is that the decision making of an agent is realized through a set of behaviours accomplishing tasks. Each behaviour may be thought of as an individual action function, which takes perceptual input and maps it to an action to perform. Each of these behaviour modules is intended to achieve some particular task. Brooks suggested and implemented each behaviour module as a finite state machine. These modules had no complex symbolic representations and did no reasoning at all. In many subsequent implementations, these behaviours were implemented as rules of the form 'situation → action', which map perceptual input/situation directly to actions. The second characteristic of the subsumption architecture is that many behaviours can be fired simultaneously in parallel. Brooks proposed a subsumption hierarchy, with behaviours arranged into layers, for organizing these modules (Brooks R., 1991). The lower layers in the hierarchy are able to inhibit higher layers and have higher priority; higher layers represent more abstract behaviours.

Purely reactive agents inherently take a short-term view and make decisions based on local information about the current state. It is difficult to visualize decision-making that takes into account global information. For a purely reactive agent placed in its environment, overall behaviour emerges from the interaction of the component behaviours. Since the dynamics of the interactions between different behaviours become too complex to understand, it is hard to design such agents to fulfil specific tasks. There is no specific methodology for building purely reactive agents; one has to carry out experiments with a trial-and-error method to build such agents. Various solutions to these problems have been proposed. One of the most popular of these is the idea of evolving agents to perform certain tasks using genetic algorithms.

The subsumption architecture, and reactive approaches generally, have many advantages such as simplicity, robustness against failure or fault tolerance, computational tractability, modularity, and emphasis on iterative development and testing. However, there are certain disadvantages of this model as well. Since the goals might begin interfering with each other, there is a difficulty in designing action selection through a highly distributed system of inhibition and suppression. Further, in this architecture, there is low flexibility at runtime.
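A minimal sketch of the 'situation → action' layering just described follows; the three behaviours and the priority scheme are invented for illustration. Each behaviour is a (condition, action) pair and, following the text, lower layers have higher priority and inhibit the layers above them.

```python
# Illustrative subsumption-style action selection.
# Behaviours are ordered lowest (highest priority) first.

behaviours = [
    # (situation predicate, action) -- maps percepts directly to an action
    (lambda p: p["obstacle"], "avoid_obstacle"),   # layer 0: survival
    (lambda p: p["carrying"], "return_to_base"),   # layer 1: deliver
    (lambda p: True,          "wander"),           # layer 2: default
]

def select_action(percept):
    """Fire the first (lowest, highest-priority) behaviour whose situation holds."""
    for situation, act in behaviours:
        if situation(percept):
            return act

print(select_action({"obstacle": True,  "carrying": True}))   # avoid_obstacle
print(select_action({"obstacle": False, "carrying": True}))   # return_to_base
print(select_action({"obstacle": False, "carrying": False}))  # wander
```

In a faithful implementation each behaviour would be a finite state machine running in parallel, with lower layers inhibiting the outputs of higher ones; the sequential scan above is a simplification of that arbitration.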
14.7.3 Belief-Desire-Intention Architecture

In belief-desire-intention (BDI) theory, the behaviour of an agent is described in terms of a processing cycle. The processing cycle is a control mechanism that may be achieved by software without direct external intervention, in the manner of a feedback mechanism: such a mechanism can continuously monitor the output of the system, compare it with preset values, and feed the difference back to adjust the behaviour of the system in the next processing cycle.

BDI architectures have a reasoning process that helps in deciding an appropriate action to be performed for achieving goals, based on beliefs and intentions. These are practical reasoning architectures, in which the process of reasoning resembles the decision process used by humans in understanding the problem and the environment in that context. The process typically consists of understanding and generating the various options available to the agent on the basis of its beliefs and desires, choosing between them, and committing to some. These chosen options become intentions, which the agent uses to determine its actions. Intentions are fed back into the agent's future practical reasoning.

An agent should review and reconsider its intentions from time to time, as it might have to drop certain intentions for various reasons. For example, the beliefs of the agent may have changed such that a particular intention is no longer relevant, or an intention can never be achieved, or it has already been achieved. But reconsideration increases cost in terms of both time and computational resources. There is clearly a trade-off between the degree of commitment to, and reconsideration of, intentions. David Kinny and Michael Georgeff examined the nature of this trade-off through a number of experiments carried out with a BDI agent framework called dMARS (d'Inverno M. et al., 2004). In their investigation they used two types of agents in a variety of different environments: bold agents, which never stop to reconsider, and cautious agents, which stop constantly to reconsider. They concluded that if the environment does not change quickly, then bold agents do well compared to cautious ones, as cautious agents waste time reconsidering their commitments while bold agents are busy working towards and achieving their goals. Further, if the environment changes frequently, then cautious agents tend to outperform bold agents, as they are able to recognize changes in intentions and take advantage of new situations and opportunities. So, different types of environments require different types of decision strategies. In static environments, purely pro-active, goal-directed behaviour of an agent is adequate. On the other hand, in more dynamic environments, the ability of an agent to react to changes by modifying its intentions is more important.

The basic components of a BDI architecture are data structures representing the beliefs, desires, and intentions of the agent, and functions that represent its deliberation for deciding what to do. The main components of a BDI agent are as follows:

* A set of current beliefs (denoted by B) of the agent, representing information about the current environment.
* A set of current desires (options or goals, denoted by D) of the agent.
* A set of current intentions (denoted by I), representing the current focus of the agent.

Generally, beliefs, desires, and intentions are represented as logical formulae. The sets B, D, and I should be mutually consistent; for example, an intention to achieve X should be consistent with the agent's beliefs and desires. The state of a BDI agent at any given moment is represented as a triple (B, D, I). In addition, the following functions are used:

* An action function that determines the action to be performed on the basis of the current intentions: action(I) → A.
* A belief revision function (brf) that takes as inputs a percept and the current beliefs of the agent and produces the new set of beliefs: brf(B, P) → B.
* A filter function that takes the agent's current beliefs, desires, and intentions and determines the new intentions of the agent: filter(B, D, I) → I.
* An option-generation function (ogf) that determines the options available to the agent on the basis of its beliefs about the environment and its current intentions: ogf(B, I) → D.

The deliberation process of a BDI agent is represented in the filter function. It updates the agent's intentions on the basis of its previously-held intentions and current beliefs and desires. This function must do the following things:
* drop any intentions that are no longer achievable, or that have become too costly to achieve;
* retain intentions that are not yet achieved and that are still expected to have a positive overall benefit; and
* adopt new intentions, either to achieve existing intentions or to exploit new opportunities.

The main purpose of the option-generation function is means-ends reasoning. It should also satisfy several other constraints: it must be consistent with the agent's current beliefs and intentions, and it must be opportunistic, i.e., it should recognize when environmental circumstances change to the agent's advantage and offer the agent new ways of achieving intentions, or the possibility of achieving intentions that were otherwise unachievable. Intentions can also be prioritized.

A major challenge in BDI architectures is the problem of striking a balance between being committed and being overcommitted to one's intentions. The BDI model is attractive for several reasons. First, it is intuitive: we all recognize these processes and have an informal understanding of the notions of belief, desire, and intention. Second, it gives us a clear functional decomposition, which indicates what sorts of subsystems might be required to build an agent. But the main difficulty, as ever, is determining how to efficiently implement these functions; a sketch of the overall control loop assembled from them is given below.
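The following is a minimal, illustrative rendering of the BDI processing cycle assembled from the functions defined above (brf, ogf, filter, action). The concrete bodies of the functions are placeholders invented for the example; only the shape of the loop follows the text.

```python
# Hypothetical skeleton of the BDI control loop.
# B, D, I are sets of formulae; here they are plain Python sets of strings.

def brf(B, percept):
    """Belief revision: fold the new percept into the beliefs."""
    return B | {percept}

def ogf(B, I):
    """Option generation (means-ends reasoning): derive candidate desires."""
    return {"explore"} | ({"recharge"} if "battery_low" in B else set())

def filter_(B, D, I):
    """Deliberation: drop unachievable intentions, keep useful ones, adopt new."""
    kept = {i for i in I if i != "explore" or "battery_low" not in B}
    return kept | ({"recharge"} if "recharge" in D else {"explore"})

def action(I):
    """Pick an action from the current intentions."""
    return sorted(I)[0] if I else "noop"

B, D, I = set(), set(), set()
for percept in ("all_clear", "battery_low", "battery_low"):
    B = brf(B, percept)      # revise beliefs from the percept
    D = ogf(B, I)            # generate options
    I = filter_(B, D, I)     # deliberate and commit
    print(percept, "->", action(I))
```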
14.7.4 Layered Architecture

In layered architectures, the various sub-systems of an agent are arranged into a hierarchy of interacting layers. There will be at least two layers, to deal with the reactive and pro-active behaviours of the agent, respectively. A useful typology for such architectures is given by the information and control flows within them. Broadly, there are two types of control flow within layered architectures (Wooldridge and Jennings, 1995), namely horizontal layering and vertical layering.

Horizontal layering: In a horizontally layered architecture, the software layers are each directly connected to the sensory input and action output. In effect, each layer itself acts like an agent, producing suggestions as to what action to perform. If an agent is to exhibit n different types of behaviour, then n different layers can be implemented. Figure 14.5 shows horizontal layering.

[Figure 14.5 Horizontal Layering: the percept (input) feeds Layer-1, Layer-2, ..., Layer-n in parallel, and each layer can contribute to the action output]

The main advantage of horizontally layered architectures is their conceptual simplicity. However, since the layers compete with each other to suggest actions, there is a danger that the overall behaviour of the agent may not be coherent. Moreover, if each layer is capable of suggesting m possible actions, then there are at most m^n such interactions to be considered. In order to ensure that a horizontally layered architecture is consistent, a mediator is generally included, which makes decisions about which layer has control of the agent at any given time. The introduction of a central control or mediator system introduces a bottleneck into the agent's decision making, and it is also problematic because the designer must foresee all possible interactions between layers.

Vertical layering: In a vertically layered architecture, sensory input and action output are each dealt with by at most one layer. In this form of architecture, the problems seen in horizontal architectures are partly solved. Vertically layered architectures can be divided into one-pass architectures and two-pass architectures. These are shown in Figure 14.6.

* In one-pass architecture, control flows sequentially through each layer, until the final layer generates the action output.
* In two-pass architecture, information flows from the percept up through the layers in a first pass, and control then flows back down from the last layer to the first layer, which generates the action output, in a second pass.

[Figure 14.6 (a) One-pass Vertical Layer (b) Two-pass Vertical Layer: in both, the percept enters at the first layer; in (a) the action leaves from the final layer, while in (b) control returns down through the layers and the action leaves from the first layer]

The complexity of interactions between layers in both one-pass and two-pass vertically layered architectures is reduced, since there are n interactions to be considered between the layers. This is clearly much simpler than the horizontally layered case. However, this simplicity comes at the cost of some flexibility: in a vertically layered architecture, for making a decision, control must pass between each different layer, and failures in any one layer are likely to have serious consequences for agent performance.
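As an illustration of the one-pass control flow, here is a small sketch in which each layer either handles the percept or defers to the next layer; the layers and their triggers are invented for the example.

```python
# Hypothetical one-pass vertically layered agent: control flows through
# the layers in order until some layer produces an action.

def reactive_layer(percept):
    # deals with urgent situations immediately
    return "brake" if percept.get("obstacle_close") else None

def planning_layer(percept):
    # pro-active behaviour: follow the current route
    return "follow_route" if percept.get("route_known") else None

def exploration_layer(percept):
    # final layer always produces some action
    return "explore"

LAYERS = [reactive_layer, planning_layer, exploration_layer]

def decide(percept):
    for layer in LAYERS:          # single pass; first decision wins
        act = layer(percept)
        if act is not None:
            return act

print(decide({"obstacle_close": True}))   # brake
print(decide({"route_known": True}))      # follow_route
print(decide({}))                         # explore
```

The failure mode noted above is visible here: if one layer in the chain misbehaves (for example, always returns an action), the layers after it never run.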
In the design of agent systems, a communication mechanism must be put in place using which agents can pass messages amongst each other.

14.8 Agent Communication Language

Communication between agents is carried out in the form of messages. These messages follow specified formats, and communication occurs through predefined protocols. Agents use a common Agent Communication Language (ACL) having an agent-independent semantics. An ACL provides agents with a mechanism for exchanging information and knowledge, and handles propositions, rules, and actions. An ACL message describes a desired state in a declarative language, rather than a procedure or method. Agents can transmit ACL messages over the network using a lower-level protocol such as TCP/IP, HTTP, etc. The syntax of an ACL helps agents to define the types of messages, and their meanings, that are to be exchanged. However, agents do not only perform peer-to-peer message exchanges; they are rather engaged in conversations, as they have to interact or coordinate with other agents as well. They follow shared sequences of messages in conversations, as required in negotiation or an auction. We will briefly discuss two agent communication languages to give the reader an idea of how agent messages can be expressed. One of these is the Knowledge Query and Manipulation Language (KQML); the other was developed by the Foundation for Intelligent Physical Agents (FIPA). Both of these are discussed in the following subsections.

14.8.1 Knowledge Query and Manipulation Language

The KQML is a language and protocol for communication among software agents and knowledge-based systems. It was developed in the early 1990s as a part of the DARPA knowledge sharing project that aimed at developing techniques for building large-scale knowledge bases which are shareable and reusable. It was originally conceived as an interface to knowledge-based systems, but was soon realized to be an agent communication language as well. The KQML is a high-level, message-oriented communication language and protocol for information exchange with an intelligent system, whether by an application program or by another agent. It is independent of the transport mechanism, the content language, and the ontology. An ontology is a vocabulary of the symbols used for content and their meaning. The content is the actual information included in the message, expressed in the syntax of a content language. Both the sender and the receiver must be able to encode/parse expressions and ascribe the same meaning to the symbols for the communication to be effective. There are some fields used to control several concurrent conversations and to specify timeouts for receiving a reply, such as :conversation-id, :reply-with, :in-reply-to, :reply-by, etc.

There are three layers in a KQML message: content, communication, and message.

* The content layer contains the actual message content expressed in the program's own representation language, such as ASCII strings.
* The communication layer encodes a set of features of the message, such as the identity of the sender and recipient, and a unique identifier associated with the communication.
* The message layer encodes a message that one application would like to transmit to another. This layer determines the kinds of interactions one can have with agents using KQML. The primary function of the message layer is to identify the network protocol and to supply the communication primitive (called a performative) that the sender attaches to the content.

A KQML message comprises a number of fields, such as the sender of the message, the list of receivers, and the communication primitive. The communication primitive can be request, if the sender wants the receiver to perform an action; inform, if the sender wants the receiver to be aware of a fact; query_if, if the sender wants to know whether or not a given condition holds; propose, accept_proposal, or reject_proposal, if the sender and receiver are engaged in a negotiation; and so on.

Let us consider an example of hypothetical messages in KQML where there is a query from agent john about the price of RELIANCE stock and a reply from stock_server. The syntax of KQML is similar to the s-expressions used in Lisp. The initial element of a list is a communication primitive (such as ask-one or tell); the remaining elements are the communication primitive's arguments, given as keyword/value pairs. A KQML message from agent john representing a query about the price of a share of RELIANCE stock might be encoded as follows:

    (ask-one
      :sender      john
      :content     (PRICE RELIANCE ?price)
      :receiver    stock_server
      :reply-with  reliance_stock
      :language    language_name
      :ontology    ontology_name)

In this message, the KQML communication primitive is ask-one, the content is (PRICE RELIANCE ?price), the ontology assumed by the query is identified by the :ontology field, the receiver of the message is a server identified as stock_server, and the query is written in the specified language. The value of the :content keyword forms the content layer; the values of the keywords {:reply-with, :sender, :receiver} form the communication layer; and the message layer contains the communication primitive together with the {:language, :ontology} keywords. In this process of communication, the stock_server might reply to john using the communication primitive tell:

    (tell
      :sender       stock_server
      :content      (PRICE RELIANCE numeric_price)
      :receiver     john
      :in-reply-to  reliance_stock
      :language     language_name
      :ontology     ontology_name)

Thus, KQML introduces a small number of communication primitives using which agents describe the metadata specifying their information requirements and capabilities.
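As a programming-level illustration of the message format above, the sketch below assembles a KQML message string from keyword/value pairs. It is not from any KQML library; all names are invented for the example.

```python
# Hypothetical helper that renders a KQML performative and its
# keyword/value arguments in the s-expression style shown above.

def kqml(performative, **fields):
    parts = [performative]
    for key, value in fields.items():
        parts.append(f":{key.replace('_', '-')} {value}")
    return "(" + "\n  ".join(parts) + ")"

query = kqml(
    "ask-one",
    sender="john",
    content="(PRICE RELIANCE ?price)",
    receiver="stock_server",
    reply_with="reliance_stock",
    language="language_name",
    ontology="ontology_name",
)
print(query)
```

A real agent platform would also enforce the layer semantics (pre- and post-conditions, discussed next) before dispatching the message over a transport such as TCP/IP.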
KQML also introduces a special class of agents called communication facilitators. A communication facilitator is an agent that performs various useful communication services, such as maintaining a registry of service names, forwarding messages to named services, routing messages based on content, matchmaking between information providers and clients, and providing mediation and translation services.

The semantics of KQML is defined in terms of pre-conditions, post-conditions, and completion conditions for each communication primitive. Assume that there is a sender P and a receiver Q. The pre-conditions indicate the necessary states for an agent to send a communication primitive, Pre(P), and for the receiver to accept it and process it successfully. If the pre-conditions do not hold, the most likely response is an error or sorry message. The post-conditions describe the state of the sender after the successful utterance of a communication primitive, and the state of the receiver after the receipt and processing of a message but before making any utterance; the post-conditions Post(P) and Post(Q) hold unless a sorry or an error is sent as a response to report unsuccessful processing of the message. A completion condition for a communication primitive indicates the final state after a conversation has taken place and the intention associated with the communication primitive that started the conversation has been fulfilled.

14.8.2 FIPA ACL

FIPA is an acronym for 'Foundation for Intelligent Physical Agents'. It is a non-profit IEEE computer society which sets standards:

* to promote agent-based technology, services, and equipment, and
* to make available specifications that maximize inter-operability across agent-based systems.

The FIPA operates through companies and universities active in the field of intelligent agents, who are members of an open international collaboration. The FIPA has also defined an agent communication language for agent inter-operability, and has specified the format for the messages exchanged by agents. Its syntax is quite similar to that of KQML, except for different names for some reserved primitives. The specification of FIPA ACL consists of a set of message types and a description of their effects on the mental attitudes of the sender and receiver agents. It describes every communicative act in a narrative form and with a formal semantics based on modal logic.

In order to process FIPA ACL primitives, the receiving agent must have some understanding of the Semantic Language (SL). This language is a quantified, multimodal logic with modal operators for beliefs (Bel), desires (Des), uncertain beliefs (UBel), and intentions (sometimes called persistent goals, PG). It is very easy to represent propositions, objects, and actions in SL. The semantics of each communicative act (CA) in FIPA ACL is specified as a set of SL formulae that describe the feasibility preconditions (FP) and the rational effect (RE) of the act. The feasibility preconditions FP(X) for a given communicative act X describe the necessary conditions for the sender of the CA; that is, for an agent to properly perform the communicative act X by sending a particular message, the feasibility preconditions must hold for the sender. The agent is not obliged to perform X if FP(X) holds, but it can if it chooses to. The rational effect of a CA represents the effect that an agent can expect to occur as a result of performing the action; it also typically specifies conditions that should hold true of the recipient.
The receiving agent is not required to ensure that the expected effect comes about. Conformance with the FIPA ACL means that when agent A sends X, the FP(X) for A must hold. Just to get a feel, let us see the FIPA ACL semantics for the communicative act inform, where agent A_i informs agent A_j of content f:

    FP: Bel(A_i(f)) and UBel(A_i(Bel(A_j(f))))
    RE: Bel(A_j(f))

The content of inform is a proposition, which implies that the sender informs the receiver of a given proposition being true. The interpretation is that the sending agent A_i believes that the proposition f is true, by Bel(A_i(f)); does not believe that the receiver already believes in the truth of the proposition f, by UBel(A_i(Bel(A_j(f)))); and intends that the receiving agent should also come to believe that the proposition f is true, i.e., the rational effect is Bel(A_j(f)). For the proper syntax and more details, refer to the FIPA home page (www.fipa.org).

Comparing the ACLs

KQML and FIPA ACL are almost the same with respect to their basic concepts and principles, but they differ primarily in the details of their semantic frameworks, because it is not possible to come up with exact mappings or transformations between KQML performatives and completely equivalent FIPA primitives, or vice versa. If an agent is not a BDI agent, then the other differences might be of little importance.
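To make the FP/RE reading of inform concrete, here is a small illustrative check of the feasibility precondition; the belief-store representation is invented for the example, and real FIPA platforms implement this in far richer modal-logic terms.

```python
# Hypothetical sketch: an agent sends inform(f) only when the FIPA-style
# feasibility precondition holds: it believes f, and it does not already
# believe that the receiver believes f.

def may_inform(sender_beliefs, beliefs_about_receiver, f):
    believes_f = f in sender_beliefs
    thinks_receiver_knows = f in beliefs_about_receiver
    return believes_f and not thinks_receiver_knows

beliefs = {"price(RELIANCE, 2870)"}   # made-up content for the example
about_john = set()                    # we do not yet think john knows the price

f = "price(RELIANCE, 2870)"
if may_inform(beliefs, about_john, f):
    print("send: (inform :receiver john :content", f + ")")
    about_john.add(f)                 # expected rational effect: Bel(A_j(f))
```

As the text says, the rational effect is only what the sender can expect; nothing obliges the receiver to actually come to believe f.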
14.9 Applications

There are various application areas where agents can prove to be useful and beneficial. Some of these are listed as follows:

* Manufacturing: In manufacturing, agents can flexibly handle unexpected events such as machine failure, custom manufacturing of consumer-specified special orders, opportunistic rescheduling, monitoring of machines, automatic correction of machine faults, or automatic assignment of a machine failure to the appropriate agency with the details of the problem.
* Unmanned Aerial Vehicles (UAVs): Here, agents have the ability to autonomously follow a flight plan, can re-plan the flight path in response to unexpected events, and can support air traffic management, where agents manage a large number of aircraft.
* Military Simulation: Here, agents can be simulated as humans.
* Fault Diagnosis: In fault diagnosis, distributed agents monitor and diagnose individual components in a larger system.
* Prognostic Health Management (PHM): PHM utilizes agents for identification of incipient faults and for integration of sensor information and maintenance histories.
* Business Processes: Such processes use automated, goal-directed agents for the execution of business processes. Automated processes respond to events and invoke the appropriate course of action.
* Dynamic Trading: Here, agents can monitor different aspects of the e-market. Agents can be used in auctions and negotiations on the web.
* Intelligent Decision Support: Here, agents with domain knowledge can provide expert and informed advice to users.

There are various agents built and available on the Internet. They are briefly described as follows:

Buyer Agents (Shopping Bots)

These bots help Internet surfers to find the products and services they are searching for. For example, when a person searches for an item on eBay, at the bottom of the page there is a list of similar products that other customers who did the same search looked at. This is because it is assumed that user tastes are relatively similar and that they will be interested in the same products. This technology is known as collaborative filtering.

User Agents (Personal Agents)

These agents are meant to carry out tasks automatically for the user. For example, some bots sort emails according to the user's order of preference, assemble customized news reports, or fill out webpage forms with the user's stored information (e.g., a Form Filler bot).

Monitoring-and-Surveillance Agents

Monitoring and surveillance agents are used to observe and report on equipment, usually computer systems. For example, such agents keep track of company inventory levels, observe competitors' prices and relay them back to the company, watch for insider trading and rumours, etc.

Data Mining Agents

These agents use information technology to find trends and patterns in an abundance of information from many different sources. The user can sort through this information in order to find whatever information they are seeking. An example of this class of bot would be a data mining agent that detects market conditions and changes and relays them back to a user/company so that the user/company can make decisions accordingly.

14.10 Multi-Agent Application

Let us consider an e-application where multi-agent systems will be useful. An online auction is an example where agents can be used to participate in the auctions held by different auction houses. Since similar items may get auctioned by multiple auction houses at the same or overlapping time intervals, human bidders seeking items will require a lot of time, energy, and coordination. However, due to the varying nature of auction houses, the agents participating in different auctions need to be reconfigured for each auction, which might not be feasible for an individual bidder. A bidding house framework using the concept of multi-agent architecture was proposed by one of the author's PhD students (Tagra H, 2008), using which the services of agents can be availed in a coordinated manner. The bidding house framework is flexible and is based on a service-oriented architecture. All well-defined web services are modelled as agents and are loosely coupled.

14.10.1 Bidding House Framework

The agents attached to the bidding house have autonomous decision-making capability, are flexible enough to adapt to the changing environment, and have inherent learning capabilities. The bidding house has some agents performing core services, while other agents can be built by third parties, which can register with the bidding house and participate in providing services based on an authentication mechanism. However, an agent might be de-registered at a later stage on the basis of its performance. Figure 14.7 shows the various types of agents required to build the bidding framework.
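Although the detailed agent roles appear in Figure 14.7, the coordination idea can be illustrated with a toy sketch: one bidder agent per auction house, with a coordinator enforcing a shared budget. All classes, limits, and price data here are invented for this example and are not part of the proposed framework.

```python
# Hypothetical sketch of coordinated bidder agents across auction houses.

class BidderAgent:
    def __init__(self, house, limit):
        self.house, self.limit = house, limit

    def decide(self, current_price, budget_left):
        # bid only if within both the per-item limit and the shared budget
        if current_price < min(self.limit, budget_left):
            return current_price + 1      # minimal increment
        return None                       # stay out of this round

class Coordinator:
    """Shares one budget among agents bidding in parallel auctions."""
    def __init__(self, agents, budget):
        self.agents, self.budget = agents, budget

    def round(self, prices):
        for agent in self.agents:
            bid = agent.decide(prices[agent.house], self.budget)
            if bid is not None:
                self.budget -= bid        # reserve funds for the placed bid
                print(f"{agent.house}: bid {bid}, budget left {self.budget}")

agents = [BidderAgent("houseA", 60), BidderAgent("houseB", 80)]
Coordinator(agents, budget=100).round({"houseA": 50, "houseB": 70})
```

The real framework is service-oriented, with registration, authentication, and de-registration of third-party agents; the sketch captures only the continuous communication between agents that the text emphasizes.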
