Planning Architecture for Control of an Intelligent Agent
Web Site: www.ijettcs.org Email: editor@ijettcs.org, editorijettcs@gmail.com Volume 2, Issue 5, September October 2013 ISSN 2278-6856
Hassan II University Casablanca, National Higher School of Electricity and Mechanics, System Architecture Team LISER, El Jadida Road, BP 8118 Oasis, Morocco
Keywords: Multi-Agent Systems, Sensors, Actuators, Control Architecture, Internet, Agent UML.

1. INTRODUCTION

The most basic tasks in the daily life of a human can become extremely complex when analyzed more closely. For example, attending a seminar or a conference can be a real headache. Above all, the attendee must consult his calendar to ensure his availability. He must then travel to the event by combining various modes of transportation such as cars, buses and perhaps even a plane. While these tasks may seem simple to us humans, they are not nearly as obvious to a robot. To be autonomous, a mobile robot must have many skills. First, it must be able to perceive its environment and to locate itself in it. To do this, a robot has sensors, such as sonars and laser scanning devices, for measuring distances between itself and nearby obstacles. Once located in the environment, the robot must be able to move from one point to another by finding safe and efficient paths that avoid collisions with obstacles. In addition, a robot is often called upon to communicate with people or other agents nearby. This can be done in various ways, such as by voice or through a gateway. Besides perceiving its environment, a robot must often be able to identify objects, recognize people, read signs, and even identify graphic symbols. These operations are performed by analyzing pictures acquired by the camera(s) installed on the robot. After identifying and locating an object, we can imagine the robot then manipulating that object with its robot arm. Finally, another robotic capacity, as important as those listed above, is the ability to make decisions while conducting and coordinating complex missions. This capacity is very important because, for many robotic tasks, there may be many ways to achieve them; the robot has to select, by reasoning, the best actions to take to achieve its mission adequately. Although many other capacities of perception and action can be added to this list, it is important to bear in mind that mobile robotics is a highly multidisciplinary subject drawing on research in diverse disciplines. For this reason, in this article, we use most of these robotic capabilities and assume that we have access to them, since they are derived from other works apart from those presented here.
2. STATE OF THE ART

2.1 Concepts of robotics

Before entering into our subject, it is important to have a general idea of how mobile robots function, in order to understand the interactions between the different modules we refer to.

2.1.1 Components of a mobile robot

Basically, a mobile robot consists of hardware and software components. Among the hardware components, there is a moving platform which holds all the other components, such as sensors, actuators and an energy source (batteries).
a) Sensors
Sensors are used to acquire data from the environment. Among the sensors typically installed on a mobile robot (see Figure 1) are ultrasonic sonars, a laser proximity sensor, wheel encoders (odometry), one or two optical cameras, and microphones. The kind of information collected, as well as its accuracy, varies greatly from one sensor to another. For example, Figure 2 [1] shows that a laser proximity sensor (c) perceives the contours of the environment better than the sonars (a) and (b), because it offers better angular resolution and better accuracy on distance.
Figure 1 Mobile robot components

The robot has eight IR sensors, of which six are placed in the front and two in the back (Figure 1). These sensors can be used both to sense obstacles (called proxy sensors in this text) and to sense bright light in the vicinity (called light sensors in this text). They all return real values in the range [0, 1], where 1 indicates bright light or a very close obstacle and 0 indicates darkness or no obstacle. Note that the proximity sensors are sensitive to the different colors and glossiness of the obstacle: darker objects can reduce the distance of the first sensor reading to as low as ~10 mm. Figure 3 shows the readings obtained from wooden blocks [2].
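As an illustration (not part of the original paper), the thresholding of such [0, 1] proximity readings can be sketched in a few lines of Python; the function name and the threshold value are our own assumptions:

```python
def detect_obstacles(proxy_readings, threshold=0.5):
    """Return the indices of IR proximity sensors whose reading exceeds
    the threshold (1.0 = very close obstacle, 0.0 = no obstacle).
    The threshold of 0.5 is an illustrative choice, not from the paper."""
    return [i for i, r in enumerate(proxy_readings) if r > threshold]

# Eight readings: six front sensors (indices 0-5), two rear sensors (6-7).
readings = [0.1, 0.2, 0.9, 0.8, 0.1, 0.0, 0.05, 0.0]
print(detect_obstacles(readings))  # [2, 3]: two front sensors see a close obstacle
```

In practice the threshold would have to be calibrated per obstacle color and glossiness, since, as noted above, darker objects yield weaker readings at the same distance.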
b) Actuators
Figure 3 Sensitivity of the proximity sensors to the different colors and glossiness of the obstacle
To move inside its environment and interact with it, a robot is equipped with actuators. For example, a robot is provided with one or more motors which can rotate its wheels to perform movements. Generally, the wheels of the robot are controlled by two motor commands, a forward speed and a rate of rotation. Usually, these commands are expressed in meters per second (m/s) and degrees of rotation per second (deg/s).
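To make the two-command model concrete, here is a minimal Python sketch (ours, not from the paper) that integrates a forward speed in m/s and a rotation rate in deg/s over one time step, using the standard unicycle kinematics:

```python
import math

def update_pose(x, y, theta_deg, v, omega_deg, dt):
    """Integrate a differential-drive pose over one time step.
    v: forward speed (m/s); omega_deg: rate of rotation (deg/s)."""
    theta = math.radians(theta_deg)
    x += v * math.cos(theta) * dt      # advance along the heading
    y += v * math.sin(theta) * dt
    theta_deg = (theta_deg + omega_deg * dt) % 360.0
    return x, y, theta_deg

# Drive straight east at 0.5 m/s for 2 s, then turn in place at 90 deg/s for 1 s.
x, y, th = update_pose(0.0, 0.0, 0.0, 0.5, 0.0, 2.0)   # -> (1.0, 0.0, 0.0)
x, y, th = update_pose(x, y, th, 0.0, 90.0, 1.0)       # -> (1.0, 0.0, 90.0)
```

This first-order integration is only a sketch: a real controller would apply the commands at a fixed control frequency and read the wheel encoders back to correct the estimate.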
2.1.2 Software modules

To operate a mobile robot, several software modules are involved. These modules can be used to interpret the data collected by the sensors in order to extract information, or to process high-level commands and generate lower-level ones. Among the most frequently used modules are positioning, navigation and vision modules.

(a) Location

One of the most important functions for the robot is to be able to locate itself in its environment. Using data provided by the sensors, the location module estimates the current position of the robot. Typically, this position is expressed by a tuple (x, y, θ) representing a position and an orientation on a two-dimensional plane [5]. Localization can be done using techniques based on the theory of Markov decision processes [6], Monte Carlo sampling techniques (particle filters) [7], or other methods.

(b) Vision

When we analyze the pictures captured by the cameras, we can extract a wealth of information. For example, by using a segmentation algorithm, we can recognize object colors in addition to estimating their position (angle) relative to the camera view. Using three-dimensional vision techniques, it is also possible to estimate some distances in the environment. We can also recognize symbols and characters, and read messages such as posters in a corridor, direction signs or conference badges.
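The Monte Carlo (particle filter) localization cited above [7] can be illustrated by a deliberately simplified sketch: a robot on a one-dimensional corridor measuring its distance to the far wall. Everything here (function names, the 1-D world, the noise values) is our own simplification for illustration, not the method as implemented in the cited work:

```python
import math
import random

def monte_carlo_step(particles, control, measurement, world_size, sense_noise):
    """One predict/update/resample cycle of Monte Carlo localization
    on a 1-D corridor of length world_size.
    particles: candidate positions; control: commanded displacement;
    measurement: measured distance to the far wall."""
    # Predict: move every particle, with a little motion noise.
    moved = [(p + control + random.gauss(0.0, 0.05)) % world_size
             for p in particles]

    # Update: weight each particle by how well it explains the measurement.
    def likelihood(p):
        expected = world_size - p          # distance a robot at p would measure
        err = expected - measurement
        return math.exp(-err * err / (2 * sense_noise ** 2))

    weights = [likelihood(p) for p in moved]
    # Resample proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0, 10) for _ in range(500)]
for _ in range(10):
    particles = monte_carlo_step(particles, 0.0, 7.0, 10.0, 0.5)
estimate = sum(particles) / len(particles)  # the cloud collapses near x = 3
```

With a measured distance of 7 in a corridor of length 10, the particle cloud concentrates around position 3: the position estimate emerges from the surviving particles rather than from any closed-form computation, which is what makes the technique robust to non-Gaussian noise.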
c) Navigation
A navigation module is responsible for moving the robot from its current position to a desired destination safely and efficiently. In addition to relying on perception of the environment and on localization, the navigation module has the responsibility of finding a path between the position of origin and the destination, consisting of a list of intermediate points to reach, and of guiding the robot along that path.
Figure 5 The perception-planning-execution loop

3.1 Path planning

A path planner is a planner distinct from the task scheduler. When the scheduler generates a displacement action, the path planner is called to find a way to move the robot from its current position to the destination, optimally and safely.

3.2 Map of the environment

To find a path, a path planner needs a map of the environment in which the robot operates. Usually, this map is represented using an occupancy grid obtained by discretizing the environment into cells. A cell is either free or occupied by one or more obstacles, as shown below (Figure 6). From a cell, a robot can reach a neighboring cell if it is free. There are two definitions of the neighborhood relation: 4 neighbors and 8 neighbors. In the first, the robot can move in the four cardinal directions: north, south, east and west. In the second, the four oblique directions are also allowed. The occupancy grid model can be improved by adding some attributes; one can, for example, assign costs to the cells, or even probabilities of occupancy. A mission planner converts the objectives and constraints defined by the user into cost functions for the numerical path planner. In the simplest case, it is enough to define a destination point for the robot; other constraints can be added, such as staying near a road or being as inconspicuous as possible [13].
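A minimal path search over such an occupancy grid with the 4-neighbor relation can be sketched as follows; this breadth-first version (our illustration, with uniform cell costs rather than the weighted costs mentioned above) returns a shortest list of intermediate cells:

```python
from collections import deque

def grid_path(grid, start, goal):
    """Breadth-first path search on an occupancy grid (0 = free, 1 = occupied)
    using the 4-neighbor relation (north, south, east, west)."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                  # rebuild the path back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # no free path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(grid_path(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

Adding per-cell costs or occupancy probabilities, as suggested above, would turn this into a weighted search (Dijkstra or A*) with the same grid structure.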
A first approach to robot control favors a centralized representation of the environment. In general, a scheduler is responsible for developing an action plan allowing the robot to reach a goal, ideally on the basis of a geometric or topological model of the environment. This approach establishes a single loop (perception, decision, action) [8], with however many possible variants, including decompositions of the decision step.
Figure 8 Layers of the planning architecture

For example, the reactive behavior layer includes three modules: a goal activation module, a situation recognition module, and a planning and execution module. Perceptions of the environment are transmitted to the situation recognition module of the first layer (the reactive behavior layer), which activates the planning and execution module; the flow of control then passes to the upper layer (the local planning layer). The knowledge base KB-World represents the information that the agent has about the environment (Figure 9).
Layer structure (from the initial state upward):
n = 1: reactive behavior layer, with KB-World, the world knowledge base
n = 2: local planning layer, with KB-Planning, the planning knowledge base
n = 3: collaborative planning layer, with its social knowledge base
5. ARCHITECTURE
5.1 Planning architecture

Our architecture is composed of three layers of control and three associated knowledge bases that represent the agent and the environment at various levels of abstraction, as shown in Figure 8. Each layer has a specific set of associated operations, and an upper layer uses the simpler operations of the layer below to run its own, more elaborate operations. The flow of control passes from bottom to top, and a layer takes control when the previous layer can no longer contribute its operations to achieving the goals. Each layer has three modules: a module for the activation of goals, a module for the recognition of situations (RS), and a module for planning and execution (PE). Perceptions of the environment are transmitted to the RS module of the first layer and, from module to module, to the top of the hierarchy. The flow of control then passes back down to reach the PE module of the last layer, whose associated actions are executed on the environment. The knowledge base KB-World represents the information that the agent has about the environment (beliefs about the environment), and the knowledge base KB-Planning holds the agent's planning knowledge.

Figure 9 Reactive behavior layers

5.2 Control architecture

Our architecture includes an explicit representation of the agent's cooperation process with the other agents in the system.
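The bottom-to-top escalation of control between layers can be sketched as follows. This is our own minimal rendering of the principle, not code from the paper; the layer names match Figure 8, but the situations and actions are invented placeholders:

```python
class Layer:
    """One control layer: it either handles a situation or defers upward."""
    def __init__(self, name, can_handle, act):
        self.name, self.can_handle, self.act = name, can_handle, act

def run(layers, situation):
    """Pass control from bottom to top: the first layer able to contribute
    an operation handles the situation; otherwise control escalates."""
    for layer in layers:                      # ordered bottom to top
        if layer.can_handle(situation):
            return layer.name, layer.act(situation)
    return None, None                          # no layer can contribute

layers = [
    Layer("reactive",      lambda s: s == "obstacle",    lambda s: "avoid"),
    Layer("local",         lambda s: s == "goal",        lambda s: "plan path"),
    Layer("collaborative", lambda s: s == "shared goal", lambda s: "negotiate"),
]
print(run(layers, "obstacle"))     # handled at the bottom: ('reactive', 'avoid')
print(run(layers, "shared goal"))  # escalates to the top: ('collaborative', 'negotiate')
```

A fuller implementation would also give each layer its knowledge base (KB-World, KB-Planning, the social knowledge base) and route perceptions upward through the RS modules as described above.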
Ex: Execution
Ac: Activation
Re: Recognition
Figure 11 Control software platform

A module in a lower layer can modify the input of a module in a higher layer through a suppression node, and invalidate the action of the upper module through an inhibition node. For example, if our robot wants to move eastwards from some position and there is no obstacle in this direction, the action performed by the execution component is the one controlled by M1, which moves the robot eastward. If there is an obstacle, the module M0 takes this obstacle into account through its perception of the environment and inhibits the eastward motion. M1 will then try to move in another direction.
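The M0/M1 inhibition example can be sketched in a few lines of Python. This is our illustrative reading of the mechanism on a grid of (x, y) positions; the function names and the "move north instead" fallback are assumptions, not details from the paper:

```python
def m1_move_east(pose):
    """Upper module M1: default behavior, move one step eastwards."""
    x, y = pose
    return (x + 1, y)

def m0_avoid(pose, obstacle_east):
    """Lower module M0: lets M1's command through when the way is clear,
    and inhibits it when its perception reports an obstacle to the east."""
    if not obstacle_east:
        return m1_move_east(pose)   # no obstacle: M1's action is executed
    x, y = pose
    return (x, y + 1)               # M1 inhibited: try another direction (north)

print(m0_avoid((0, 0), obstacle_east=False))  # (1, 0): free path, M1 acts
print(m0_avoid((0, 0), obstacle_east=True))   # (0, 1): M0 inhibits M1
```

The key point the sketch illustrates is that M0 does not replan: it only vetoes the upper module's action locally, which is what keeps the lower layer reactive.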
AUTHORS
Rabia Moussaoui received her engineering degree in computer science in 2009 from the National School of Computer Science and Systems Analysis (ENSIAS), Rabat, Morocco. In 2010 she joined the system architecture team of the National Higher School of Electricity and Mechanics (ENSEM), Casablanca, Morocco. Her current main research interests concern the modeling and simulation of complex systems based on multi-agent systems. Ms. Moussaoui is currently a software engineer at the Office of Vocational Training and Employment Promotion (OFPPT) of Casablanca.

Hicham Medromi received his PhD in engineering science from the Sophia Antipolis University, Nice, France, in 1996. He is the head of the system architecture team of ENSEM, Hassan II University, Casablanca, Morocco. His current main research interests concern control architectures for mobile systems based on multi-agent systems. Since 2003 he has been a full professor of automatic control, industrial engineering and computer science at ENSEM, Hassan II University, Casablanca.
References
[1] J.C. Gallagher, S. Perretta, WSU Khepera Robot Simulator User's Manual, Wright State University, Dayton, Ohio, March 24, 2005.
[2] S.C.F. Neuhauss, University of Zürich, Switzerland, A Robotics-Based Behavioral Paradigm to Measure Anxiety-Related Responses in Zebrafish, July 29, 2013.
[3] B. Chaib-Draa, I. Jarras, B. Moulin, Systèmes multi-agents : principes généraux et applications, Hermès, 2001.
[4] J. Ferber, Les systèmes multi-agents, vers une intelligence collective, InterEditions, 1995.
[5] IEEE Transactions on Aerospace and Electronic Systems, Volume 41, Issue 4, October 2005.
[6] Robust Markov Decision Processes, Mathematics of Operations Research, 38:153-183, 2013.
[7] Monte Carlo Localization: Efficient Position Estimation for Mobile Robots, 1999.
[8] Perception, Planning, and Execution for Mobile Manipulation in Unstructured Environments, 2012.
[9] Modeling Agent Interaction Protocols with AUML Diagrams and Petri Nets, 2003.
[10] R. Moussaoui, A. Sayouti, H. Medromi, "Conception d'une architecture Machine to Machine appliquée à la localisation des systèmes mobiles", Les 2èmes Journées Doctorales en Technologie de l'Information et de la Communication, Fès, Maroc, July 2010.
[11] R. Moussaoui, H. Medromi, H. Mansouri, Intelligent Architecture Based on SMA to Track, Locate and Communicate with Mobile Systems, International Workshop on Information and Communication Technologies (WOTIC'11), Casablanca, Morocco.
[12] R. Moussaoui, H. Medromi, H. Mansouri, Architecture de la localisation des systèmes mobiles en utilisant les SMA (systèmes multi-agents), 3ème édition des Journées Doctorales en Technologies de l'Information et de la Communication (JDTIC 2011).