
International Journal of Emerging Trends & Technology in Computer Science (IJETTCS)
Web Site: www.ijettcs.org Email: editor@ijettcs.org, editorijettcs@gmail.com
Volume 2, Issue 5, September-October 2013 ISSN 2278-6856

Planning architecture for control of an intelligent agent


Rabia Moussaoui¹ and Hicham Medroumi²

¹,² Hassan II University Casablanca, National Higher School of Electricity and Mechanics (ENSEM), System Architecture Team LISER, El Jadida Road, BP 8118 Oasis, Morocco

Abstract: In this paper we propose an agent-based planning architecture for control and supervision, intended to improve the global autonomy of a mobile system built on multi-agent systems. The architecture combines two parts: a structural part and a control part. A study of existing kinematic solutions for wheeled mobile robots showed how a maximum of autonomy can be brought to a moving mobile robot by focusing on its architecture; the result obtained is an omnidirectional robot. This structural aspect, however, is only one side of the improvement: the overall concept of autonomy requires complementing the work on the structural architecture by improving the control architecture. To let an intelligent agent perform its tasks while optimizing energy and inter-agent communication, our architecture consists of three control layers and three associated knowledge bases that represent the agent and the environment at different levels of abstraction.

Keywords: Multi-Agent Systems, Sensors, Actuators, Control Architecture, Internet, Agent UML.

1. INTRODUCTION
The most basic tasks in the daily life of a human can become extremely complex when analyzed closely. For example, attending a seminar or conference can be a real headache: the attendee must first consult his calendar to ensure availability, then travel to the event by combining various modes of transportation such as cars, buses, and perhaps even a plane. While these tasks may seem simple to us humans, they are far from obvious to a robot. To be autonomous, a mobile robot must have many skills. First, it must be able to perceive its environment and locate itself within it. To do this, a robot has sensors, such as sonar and laser scanning devices, for measuring distances between itself and nearby obstacles. Once located in the environment, the robot must be able to move from one point to another by finding safe and efficient paths that avoid collisions with obstacles. In addition, a robot is often called upon to communicate with people or other nearby agents. This can be done in various ways, such as by voice or through a gateway. Beyond perceiving its environment, a robot must often be able to identify objects, recognize people, read signs, and even identify graphic symbols; these operations are performed by analyzing pictures acquired by the camera(s) installed on the robot. After identifying and locating an object, one can imagine the robot manipulating that object with its robot arm. Finally, another robotic capacity, as important as those listed above, is the ability to make decisions while conducting and coordinating complex missions. This capacity is crucial because, for many robotic tasks, there may be many ways to achieve them: the robot has to reason to select the best actions for carrying out its mission adequately. Although many other capacities of perception and action could be added to this list, it is important to bear in mind that mobile robotics is a highly multidisciplinary subject drawing on research from diverse disciplines. For this reason, in this article we use most of these robotic capabilities and assume we have access to them, since they are derived from works other than the one presented here.

2. STATE OF THE ART
2.1 Concepts of robotics
Before entering into our subject, it is important to have a general idea of how mobile robots work, in order to understand the interactions between the different modules we refer to.
2.1.1 Components of a mobile robot
Basically, a mobile robot consists of hardware and software components. Among the hardware components, there is a moving platform which holds all the other components, such as sensors, actuators, and an energy source (batteries).
a) Sensors

Sensors are used to acquire data from the environment. Among the sensors typically installed on a mobile robot (see Figure 1) are ultrasonic sonars, a laser proximity sensor, wheel encoders (odometry), one or two optical cameras, and microphones. The kind of information collected, as well as its accuracy, varies greatly from one sensor to another. For example, Figure 2 [1] shows that a laser proximity sensor (c) perceives the contours of the environment better than the sonars (a) and (b), because it offers better angular resolution and better accuracy on distance.


Figure 1 Mobile robot components

The robot has eight IR sensors, of which six are placed in the front and two in the back (Figure 1). These sensors can be used both to sense obstacles (called proxy sensors in this text) and to sense bright light in the vicinity (called light sensors in this text). They all return real values in the range [0..1], where the extreme value 1 indicates bright light or a very close obstacle, and 0 indicates darkness or no obstacle. Note that the proximity sensors are sensitive to the color and glossiness of the obstacle: darker objects can reduce the distance of the first sensor reading to as low as ~10 mm. Figure 3 shows the readings obtained from wooden blocks [2].
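The [0..1] reading convention above can be made concrete with a small sketch. The function name and the 0.8 threshold are our illustrative choices, not values from the paper:

```python
def classify_proxy_reading(value: float, threshold: float = 0.8) -> str:
    """Classify a normalized IR proximity reading in [0..1].
    1.0 means a very close obstacle (or bright light), 0.0 means nothing sensed.
    The 0.8 threshold is an assumption for illustration only."""
    if not 0.0 <= value <= 1.0:
        raise ValueError("IR readings are normalized to [0, 1]")
    if value >= threshold:
        return "obstacle"   # strong reflection: surface very near the sensor
    if value > 0.0:
        return "far"        # weak reflection: obstacle farther away
    return "clear"          # no reflection detected

print(classify_proxy_reading(0.95))  # obstacle
```

Because darker objects reflect less, a reading near the threshold may correspond to very different physical distances, which is exactly the sensitivity Figure 3 illustrates.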
b) Actuators

Figure 3 Sensitivity of the proximity sensors to the color and glossiness of the obstacle

To move within its environment and interact with it, a robot is equipped with actuators. For example, a robot is provided with one or more motors which rotate its wheels to perform movements. Generally, the wheels of the robot are controlled by two motor commands, a forward speed and a rate of rotation, usually expressed in meters per second (m/s) and degrees of rotation per second (deg/s) respectively.
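The (forward speed, rotation rate) command convention above can be illustrated with a minimal pose-integration sketch. This is a simple Euler step under the unicycle model; the function and its names are ours, not from the paper:

```python
import math

def update_pose(x, y, theta, v, omega_deg, dt):
    """Integrate a motor command (v in m/s, omega_deg in deg/s) over dt seconds.
    The pose is (x, y, theta) with theta in radians; a single Euler step
    using the heading at the start of the interval (an approximation)."""
    omega = math.radians(omega_deg)
    x_new = x + v * math.cos(theta) * dt
    y_new = y + v * math.sin(theta) * dt
    theta_new = theta + omega * dt
    return x_new, y_new, theta_new

# Drive straight along the x-axis at 0.5 m/s for 2 s:
print(update_pose(0.0, 0.0, 0.0, 0.5, 0.0, 2.0))  # (1.0, 0.0, 0.0)
```

For small time steps this approximation is adequate; for large steps one would integrate the heading change within the interval as well.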

2.1.2 Software modules
To operate a mobile robot, several software modules are involved. These modules can be used to interpret data collected by the sensors in order to extract information, or to process high-level commands and generate lower-level ones. Among the most frequently used are the localization, navigation, and vision modules.
(a) Localization
One of the most important functions for the robot is the ability to locate itself in its environment. Using data provided by the sensors, the localization module estimates the current position of the robot. Typically, this position is expressed as a tuple (x, y, θ) representing a position and an orientation on a two-dimensional plane [5]. Localization can be done using techniques based on the theory of Markov decision processes [6], Monte Carlo sampling techniques (particle filters) [7], or other methods.
(b) Vision
When we analyze the pictures captured by cameras, we can extract a wealth of information. For example, by using a segmentation algorithm, we can recognize the colors of objects in addition to estimating their relative position (angle) with respect to the camera view. Using three-dimensional vision techniques, it is also possible to estimate some distances in the environment. We can also recognize symbols and characters, and read messages such as posters in a corridor, direction signals, or conference badges.
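To give an idea of how Monte Carlo localization [7] works, here is a deliberately simplified one-dimensional sketch of one predict/weight/resample cycle. The toy corridor, the noise values, and all names are our assumptions for illustration; a real implementation works on (x, y, θ) poses with a proper motion and sensor model:

```python
import math
import random

def monte_carlo_step(particles, move, reading, expected, sensor_noise=0.5):
    """One cycle of a (much simplified, 1-D) particle filter."""
    # 1. Prediction: apply the motion command to every particle, with noise.
    moved = [p + move + random.gauss(0, 0.1) for p in particles]
    # 2. Weighting: particles whose predicted sensor reading matches the
    #    actual reading get a higher (Gaussian) likelihood.
    weights = [math.exp(-(expected(p) - reading) ** 2 / (2 * sensor_noise ** 2))
               for p in moved]
    # 3. Resampling: draw a new particle set proportional to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Toy corridor: a wall at x = 10; the sensor measures the distance to the wall.
random.seed(0)
WALL = 10.0
expected = lambda pos: WALL - pos

particles = [random.uniform(0.0, 10.0) for _ in range(500)]
true_pos = 2.0
for _ in range(5):                    # the robot advances 1 m per step
    true_pos += 1.0
    particles = monte_carlo_step(particles, 1.0, WALL - true_pos, expected)

estimate = sum(particles) / len(particles)  # converges toward true_pos
```

After a few cycles the particle cloud concentrates around the true position, which is the behavior the localization module relies on.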
(c) Navigation

Figure 2 Perception of some sensors

A navigation module is responsible for moving the robot from its current position to a desired destination safely and efficiently. In addition to relying on the perception of the environment and on localization, the navigation module has the responsibility of finding a path between the origin and the destination, consisting of a list of intermediate points to reach, and of guiding the

robot through this path. To find a path efficiently, the navigation module relies on a path planner.
2.2 Multi-agent approach
2.2.1 Notion of agent
An agent is a physical or virtual entity which has all or part of the following properties [3]:
- Situated in an environment: the agent can receive sensory input from its environment and can perform actions that are likely to change this environment.
- Autonomous: the agent is able to act without the direct intervention of a human (or another agent), and it has control over its actions and its internal state.
- Flexible: the agent is able to respond in time; it can perceive its environment and respond quickly to changes taking place in it.
- Proactive: it does not simply act in response to its environment; it is also able to behave opportunistically, driven by its aims or its utility function, and to take initiatives when appropriate.
- Social: it is capable of interacting with other agents, either to complete its own tasks or to help others complete theirs.
2.2.2 Multi-agent system
A multi-agent system (MAS) is a system composed of a set of agents located in some environment and interacting according to certain relations [10]. There are four types of agent architecture [4]:
- Reactive agent: responds to changes in the environment.
- Deliberative agent: deliberates to choose its actions based on its goals.
- Hybrid agent: includes a deliberative as well as a reactive component.
- Learner agent: uses its perceptions not only to choose its actions, but also to improve its ability to act in the future.
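The reactive/deliberative/hybrid distinction above can be sketched as a few small classes. This is our illustrative rendering of the definitions from [4], not code from the paper:

```python
class ReactiveAgent:
    """Maps percepts directly to actions (condition-action rules)."""
    def __init__(self, rules):
        self.rules = rules            # dict: percept -> action
    def act(self, percept):
        return self.rules.get(percept, "noop")

class DeliberativeAgent:
    """Chooses actions by deliberating over an explicit goal."""
    def __init__(self, goal, plan_fn):
        self.goal, self.plan_fn = goal, plan_fn
    def act(self, percept):
        return self.plan_fn(percept, self.goal)

class HybridAgent:
    """Uses the reactive component for urgent percepts, deliberation otherwise."""
    def __init__(self, reactive, deliberative, urgent):
        self.reactive = reactive
        self.deliberative = deliberative
        self.urgent = urgent          # set of percepts handled reactively
    def act(self, percept):
        if percept in self.urgent:
            return self.reactive.act(percept)
        return self.deliberative.act(percept)

reactive = ReactiveAgent({"obstacle": "avoid"})
deliberative = DeliberativeAgent("charge", lambda p, g: f"plan to {g}")
hybrid = HybridAgent(reactive, deliberative, urgent={"obstacle"})
```

A learner agent would add a feedback path updating `rules` or `plan_fn` from experience; we omit it to keep the sketch short.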

Figure 5 Perception-planning-execution modeling loop

3.1 Path planning
A path planner differs from the task scheduler: when the scheduler generates a displacement-type action, the path planner is called to find a way to move the robot from its current position to the destination, optimally and safely.
3.2 Map of the environment
To find a path, a path planner needs a map of the environment in which the robot operates. Usually, this map is represented by an occupancy grid obtained by discretizing the environment into cells. A cell is either free or occupied by one or more obstacles, as shown below (Figure 6). From a cell, a robot can reach a neighboring cell if that cell is free. There are two definitions of the neighborhood relation: 4 neighbors and 8 neighbors. In the first, the robot can move in the four cardinal directions: north, south, east, and west. In the second, the four oblique directions are also allowed. The occupancy grid model can be improved by adding attributes; one can, for example, assign costs to the cells, or even probabilities of presence.
The mission planner converts the objectives and constraints defined by the user into cost functions for the numerical path planner. In the simplest case, it is only necessary to define a destination point for the robot; other constraints can be added, such as staying near a road or being as inconspicuous as possible [13].
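The occupancy grid and the 4/8-neighborhood relations above can be sketched with a breadth-first search (one simple way to find a shortest path on an unweighted grid; the paper does not prescribe a specific search algorithm):

```python
from collections import deque

N4 = [(0, 1), (0, -1), (1, 0), (-1, 0)]         # the four cardinal directions
N8 = N4 + [(1, 1), (1, -1), (-1, 1), (-1, -1)]  # plus the four oblique ones

def find_path(grid, start, goal, neighbors=N4):
    """Breadth-first search on an occupancy grid (0 = free, 1 = occupied).
    Returns the list of cells from start to goal, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []                     # walk parents back to the start
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for dr, dc in neighbors:
            r, c = cell[0] + dr, cell[1] + dc
            if (0 <= r < rows and 0 <= c < cols
                    and grid[r][c] == 0 and (r, c) not in parent):
                parent[(r, c)] = cell
                queue.append((r, c))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path4 = find_path(grid, (0, 0), (2, 0), N4)  # must detour around the obstacles
path8 = find_path(grid, (0, 0), (2, 0), N8)  # diagonals give a shorter path
```

With per-cell costs or presence probabilities, one would switch from BFS to Dijkstra or A*, which is exactly what the added attributes mentioned above enable.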

Figure 4 Multi-agent system

3. APPROACH TO ROBOT CONTROL BASED ON PLANNING

A first approach to robot control favors a centralized representation of the environment. In general, a planner is responsible for developing the robot's future action plan for reaching a goal, in the ideal case from a geometric or topological model of the environment. This approach establishes a single loop (perception, decision, action) [8], though with many possible activities, whose decomposition is expressed in terms of perception-planning-execution modeling (Figure 5) [12].

Figure 6 Example of a path in an occupancy grid

4. PLANNING ALGORITHM
The knowledge base KB-Planning is equivalent to the plan library, and KB-Social represents the agent's beliefs about the other agents in the system, including their abilities to help it achieve its goals.

Figure 8 Layers of the planning architecture

For example, the reactive behavior layer includes three modules: a module for goal activation, a module for situation recognition, and a module for planning and execution. Perceptions of the environment are transmitted to the situation recognition module of the first layer (the reactive behavior layer), which activates the planning and execution module; when this layer cannot handle the situation, the flow of control passes to the upper layer (the local planning layer). The knowledge base KB-World represents the information that the agent has about the environment (Figure 9).

Figure 7 sketches the planning algorithm: from the initial state, control starts at layer n = 1 (the reactive behavior layer, with the KB-World knowledge base); if that layer cannot achieve the goal, control passes to layer n = 2 (the local planning layer, with the KB-Planning knowledge base), and finally to layer n = 3 (the collaborative planning layer, with the KB-Social knowledge base).

Figure 7 Planning algorithm

5. ARCHITECTURE
5.1 Planning architecture
Our architecture is composed of three layers of control and three associated knowledge bases that represent the agent and the environment at various levels of abstraction, as shown in Figure 8. Each layer has a specific set of associated operations, and an upper layer uses the simpler operations of the layer below it to run its own, more elaborate operations. The flow of control passes from bottom to top: a layer takes control when the previous layer can no longer contribute its operations to achieving the goals. Each layer has three modules: a module for the activation of goals, a module for the recognition of situations (RS), and a module for planning and execution (PE). Perceptions of the environment are transmitted to the RS module of the first layer and, from module to module, up the hierarchy. The flow of control then passes back down until it reaches the PE module of the final layer, and the associated actions are executed on the environment. The knowledge base KB-World represents the information that the agent has about the environment (beliefs about the environment), the knowledge base KB-Planning is equivalent to the plan library, and KB-Social represents the agent's beliefs about the other agents in the system.

Figure 9 Reactive behavior layers

5.2 Control architecture
Our architecture includes an explicit representation of the agent's process of cooperation with the other agents in the system.

Ex: Execution

Ac: Activation

Re: Recognition

Figure 10 Control architecture

The physical layer defines the physical agent, consisting of its sensors and effectors; it establishes direct contact with the environment. The reaction layer includes the collection agent, which processes perceptions to determine the location of the mobile unit and to create representations of the environment [11]. For the perception of the environment, several modules can each execute a different action; to choose the most appropriate one, the modules are organized in hierarchical layers, each layer having a different priority. The upper layers represent more abstract tasks, which are refined into more concrete and simple tasks; the upper layers have a lower priority than the lower layers, which correspond to simple tasks and therefore have a higher priority. The action agent defines the sequence of actions leading to the goal set by the remote user, the location of the mobile system, the current action, the representations of the environment, and their validity. The communication layer provides communication between the layers (physical, reaction, planning, and control). Finally, with the control layer, the mobile system can monitor and make decisions. There is a hierarchy connected with the knowledge bases and the planning layers, and each level of abstraction gives information to the next level. A module on a lower layer handles a simpler but more "emergent" task; for this purpose, the operation of a module located in an upper layer is subordinate to the lower module.
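The bottom-up flow of control between layers described above can be sketched as follows. The layer names follow the paper; the percepts, predicates, and actions are hypothetical placeholders:

```python
class Layer:
    """One control layer: situation recognition (RS) plus planning/execution (PE)."""
    def __init__(self, name, can_handle, handle):
        self.name = name
        self.can_handle = can_handle   # RS: can this layer deal with the percept?
        self.handle = handle           # PE: the action it would produce

def dispatch(layers, percept):
    """Pass control bottom-up until a layer can produce an action."""
    for layer in layers:               # ordered lowest (highest priority) first
        if layer.can_handle(percept):
            return layer.name, layer.handle(percept)
    return None, "no layer can handle this situation"

layers = [
    Layer("reactive",      lambda p: p == "obstacle",  lambda p: "avoid"),
    Layer("local",         lambda p: p == "goal-near", lambda p: "plan path"),
    Layer("collaborative", lambda p: True,             lambda p: "ask other agents"),
]
```

A percept the reactive layer recognizes never reaches the upper layers, which is how the lower layers keep their higher priority.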

6. TEST AND EXPERIMENTATION


The proposed software platform is used to control the robot in its environment, which may or may not be known in advance. Our application allows control of the robotic platform from a remote computer over the Internet: the remote computer runs a program that can send commands over the Internet to the local computer embarked on the robot (Figure 11). Our agent can move on a grid and must collect objects that lie in some grid squares while avoiding obstacles (Figure 12). The agent is able to perform the following actions: move up, down, left, or right; perceive the environment to see if there is an object in the current square; avoid obstacles; and collect objects. We define:
- M0: a module able to avoid obstacles;
- M1: a module responsible for movement in the environment while avoiding obstacles using M0;
- M2: a module with the higher-level skill of systematically covering the environment (the grid) by moving through the actions of module M1;
- M3: a module that collects objects.
A module on a lower layer has a higher priority than a module located on a higher layer, because it is responsible for a simpler but more "emergent" task.

Figure 11 Control software platform

A module on a lower layer can modify the input of a module above it through a suppression node, and invalidate the action of the upper module through an inhibition node. For example, if our robot wants to move eastward from some position and there is no obstacle in that direction, the action performed by the execution component is the one commanded by M1: move eastward. If there is an obstacle, the module M0 takes it into account through its perception of the environment and inhibits the eastward move; M1 will then try to move in another direction.
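The M0/M1 inhibition example above can be sketched in a few lines. The module names follow the text; the fixed fallback order of directions is our illustrative assumption:

```python
def m1_propose(goal_direction):
    """M1: propose a move toward the goal (here, simply the goal direction)."""
    return goal_direction

def m0_filter(proposed, obstacles):
    """M0: inhibit the proposed move if an obstacle blocks that direction,
    then fall back to the first free direction (illustrative order)."""
    if proposed not in obstacles:
        return proposed                       # no obstacle: let M1's action pass
    for alternative in ("north", "south", "east", "west"):
        if alternative not in obstacles:
            return alternative                # M0 inhibits M1 and redirects
    return "stop"                             # fully blocked

move = m0_filter(m1_propose("east"), obstacles={"east"})
```

With no obstacle to the east, M1's command passes through unchanged; with one, M0's inhibition forces a detour, matching the behavior described for Figure 12.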

Figure 12 Applied test for robot control

7. CONCLUSION
In this article, our goal was to create a behavior arbitration mechanism for an autonomous mobile agent; the mechanism must ensure the adaptation of the agent to unfamiliar environments. The proposed architecture is generic enough to be used in the same way on different mobile systems, to combine the capabilities of the mobile agent, and to allow the agent autonomous navigation without human intervention. The autonomy of mobile agents is an international concern; this autonomy is necessary for the application of mobile agents in dangerous activities such as the exploration of nuclear areas. While we have shown several examples of agent usage, there are many more.

References
[1] J.C. Gallagher, S. Perretta, WSU Khepera Robot Simulator User's Manual, Wright State University, Dayton, Ohio, 24 March 2005.
[2] S.C.F. Neuhauss, University of Zurich, Switzerland, A Robotics-Based Behavioral Paradigm to Measure Anxiety-Related Responses in Zebrafish, July 29, 2013.
[3] B. Chaib-Draa, I. Jarras, B. Moulin, Systèmes multi-agents : principes généraux et applications, Hermès, 2001.
[4] J. Ferber, Les systèmes multi-agents : vers une intelligence collective, InterEditions, 1995.
[5] IEEE Transactions on Aerospace and Electronic Systems, Volume 41, Issue 4, October 2005.
[6] Robust Markov Decision Processes, Mathematics of Operations Research, 38:153-183, 2013.
[7] Monte Carlo Localization: Efficient Position Estimation for Mobile Robots, 1999.
[8] Perception, Planning, and Execution for Mobile Manipulation in Unstructured Environments, 2012.
[9] Modeling Agent Interaction Protocols with AUML Diagrams and Petri Nets, 2003.
[10] R. Moussaoui, A. Sayouti, H. Medromi, "Conception d'une architecture machine-to-machine appliquée à la localisation des systèmes mobiles" [Design of a machine-to-machine architecture applied to the localization of mobile systems], 2èmes Journées Doctorales en Technologie de l'Information et de la Communication, Fès, Morocco, July 2010.
[11] R. Moussaoui, H. Medromi, H. Mansouri, "Intelligent Architecture based on SMA: track, locate and communicate with mobile systems", International Workshop on Information and Communication Technologies (WOTIC'11), Casablanca, Morocco, 2011.
[12] R. Moussaoui, H. Medromi, H. Mansouri, "Architecture de la localisation des systèmes mobiles en utilisant les SMA (systèmes multi-agents)" [Architecture for the localization of mobile systems using multi-agent systems], 3ème édition des Journées Doctorales en Technologies de l'Information et de la Communication (JDTIC 2011).
[13] A. Albore, E. Beaudry, P. Bertoli, F. Kabanza, "Using a contingency planner within a robot architecture", to appear in Proceedings of the Workshop on Planning, Learning and Monitoring with Uncertainty and Dynamic Worlds, in conjunction with the 17th European Conference on Artificial Intelligence, 2006.

AUTHORS
Rabia Moussaoui received her engineering degree in computer science in 2009 from the National School of Computer Science and Systems Analysis (ENSIAS), Rabat, Morocco. In 2010 she joined the system architecture team of the National Higher School of Electricity and Mechanics (ENSEM), Casablanca, Morocco. Her main research interests concern modeling and simulating complex systems based on multi-agent systems. Ms. Moussaoui is currently a software engineer in the Office of Vocational Training and Employment Promotion (OFPPT) of Casablanca.
Hicham Medroumi received his PhD in engineering science from the Sophia Antipolis University, Nice, France, in 1996. He is head of the system architecture team of ENSEM, Hassan II University, Casablanca, Morocco. His current main research interests concern control architectures of mobile systems based on multi-agent systems. Since 2003 he has been a full professor of automation, production engineering, and computer science at ENSEM, Hassan II University, Casablanca.

