Cooperation by Observation

- The Framework and Basic Task Patterns -

Yasuo Kuniyoshi*, Sebastien Rougeaux†, Makoto Ishii‡,
Nobuyuki Kita*, Shigeyuki Sakane*, Masayoshi Kakikura‡

* Intelligent Systems Division, Electrotechnical Laboratory, 1-1-4 Umezono, Tsukuba City, Ibaraki 305, JAPAN
† Institut d'Informatique d'Entreprise, 18 Allee Jean Rostand, 91000 Evry, FRANCE
‡ Tokyo Denki University, 2-2 Kanda Nishiki-cho, Chiyoda-ku, Tokyo 101, JAPAN

Abstract

A novel framework for multiple robot cooperation called "Cooperation by Observation" is presented. It introduces many interesting issues such as viewpoint constraint and role interchange, as well as novel concepts like "attentional structure". The framework has the potential to realize a high level of task coordination by decentralized autonomous robots allowing minimum explicit communication. Its source of power lies in an advanced capability given to each robot for recognizing other agents' actions by (primarily visual) observation. This provides rich information about the current task situation around each robot which facilitates highly-structured task coordination. The basic visuo-motor routines are described. Concrete examples and experiments using real mobile robots are also presented.

1 Introduction

In order to execute complex tasks with multiple mobile robots and achieve a superlinear increase of task performance with the number of robots, we need a variety of flexible and highly-structured coordination among robot behaviors. As for the method of coordination, we have strong restrictions imposed by the distributed and asynchronous nature of the entire system, namely, minimum explicit communication, decentralized control, and real time response.

There have been many attempts to achieve an appropriate balance between the goal and the restrictions given above (e.g. [1, 2, 3, 4, 5, 6, 7, 8, 9]). But in these approaches, especially when we look at the implementations using real mobile robots [4, 7, 8, 9], there has been a consistent tendency that the higher the level of coordination becomes, the more the restrictions are violated.

In this paper, we present a novel framework for multiple robot cooperation called "Cooperation by Observation", which has the potential to realize a high level of task coordination by decentralized autonomous robots allowing minimum explicit communication. Its source of power lies in an advanced capability given to each robot for recognizing other agents' actions by (primarily visual) observation. This provides rich information about the current task situation around each robot which facilitates highly-structured task coordination. The extraction and interpretation of a variety of concrete information about the current task situation, and their effects on the task execution, are largely unexplored.

In section 2, we look at the related work and extend the above discussion. Section 3 presents our framework and discusses its implications. The basic visuo-motor routines which constitute the fundamental behaviors are briefly presented in section 4. Concrete examples and experiments using real mobile robots are presented in section 5. The final section summarizes our results and proposes future directions within our framework.

2 Related Work

In the previous literature, the level of task coordination roughly parallels the amount of explicit communication among the robots. The no-communication extremes (e.g. Steels [1], Brooks et al. [2], and Arkin [3]) rely on indirect interactions among the robot behaviors through physical states of the environment. But the variety of tasks realized by these approaches is limited, e.g. collecting scattered rock samples.

In contrast, the highest levels of cooperative tasks demonstrated using real mobile robots (e.g. Sakane et al. [8] and Noreils [7]) use distributed planning methods and rely on intensive inter-robot communication.

Recently, closer analyses have been made as to the balance between local vs. global control, and the effect of communication on the task. Mataric [4] analyzed and demonstrated the effect of local inter-agent sensing ability (implicit communication) on the performance of navigation and flocking behavior by a group of robots. Arkin [6] also analyzed the effect of broadcasting each robot's coordinates and goal-achieving states on the performance of cooperative navigation. Parker [5] explored a proper balance between local vs. global information for formation-keeping cooperative navigation by a team of locally controlled agents. Here the transferred/shared information included the global navigation goal, intermediate goals, and the complete global path. She also suggested the use of behavior observation and anticipation.

Our work is on the lines of Mataric, Arkin, and especially Parker, in that we assume implicit communication through each agent's sensing ability and some shared knowledge about the global goals. By introducing a capability of visually recognizing other agents' actions and anticipating forthcoming events, we attempt to achieve high levels of task coordination while using the least amount of explicit communication. More specifically, we demonstrate three types of cooperative behaviors, namely, posing, unblocking, and passing, with absolutely no explicit communication between the robots.

3 Cooperation by Observation

Many kinds of social animals, including humans, often show highly coordinated group behaviors without explicit communication. For example, some carnivores carry out group hunting successfully in silence [10]. In our everyday life, we routinely perform cooperative tasks without speaking, often without thinking. One typical example is that when someone carrying large baggage with both hands approaches a closed door, a person nearby would open the door.

3.1 Action Understanding

The ability for tacit cooperation is based on "action understanding". It is defined as the following process:

Observe another agent's action, and Choose appropriate actions regarding the observed action and the current task situation.

The observation portion of the process is further refined [11]:

Classify the qualitative movement of the agent's body, Look for the target of the action based on causality, Anticipate or Detect the effect of the movement on the target.

Machine understanding of a limited class of human actions has been developed [11, 12, 13]. Also, the possibility of coordinated manipulation based on force observation without explicit communication has been shown [14].

Details of the theory and the implementation of action understanding are beyond the scope of this paper (see [11]). Nevertheless, it is important to note that action understanding provides significantly richer information about the ongoing task situation than previous implicit communication methods such as exchanging each agent's position and goal achievement state. The acquired information includes movements, relative positions, and relative velocities of the agents regarding other objects, states of the objects, and their temporal evolution. The agents and the objects which define the relations are automatically selected using attention control so that they are meaningful (relative) to the current task of the observer. Also, based on attention switching and qualitative events, temporal segmentation of the observed task situation is automatically done, which is used for synchronizing/ordering the behaviors of different robots. Examples of these functions are given in sections 4 and 5.

3.2 Assumptions

The assumptions which characterize the framework of cooperation by observation are given below.

Local control: There is no central processor nor a central database. Each robot controls its own behavior.

Least communication: Explicit inter-robot communication is minimized.

Shared knowledge: Members of a robot team share a static portion of global knowledge, such as high level task goals and common strategies.

Complex and dynamic environment: Many objects and robots are scattered all over the task space. For the current task of a specific robot, some objects/robots are relevant, but others are independent.

Task complexity: In order to achieve the global goal, highly structured coordination of multiple robot actions is required. The structures are defined in terms of temporal ordering/synchronization and combination of actions such as relative positions/movements. One benchmark task will be for dozens of robots to carry all cargoes (of various sizes, shapes and weights) scattered in a warehouse outside through a narrow door as soon as possible.

Homogeneous robots: The robots have equivalent structures, physical ability and cognitive competence.

Observation ability: Each robot is equipped with appropriate sensors (e.g. stereo vision, proximity, force, touch, etc.) and recognition routines for observing other robots' actions.

3.3 Issues

Many interesting problems arise from the above assumptions.

Selective attention: Action recognition requires extracting a causally related pair of a movement and its effect. Since the observation must be made in a complex environment, it is mandatory to dynamically select and observe only relevant entities in the surroundings.

Viewpoint constraint: There is no central database and explicit communication is strictly limited. Therefore, a bird's-eye view (a global model) of a task situation cannot be obtained. All observations must be made from the viewpoint of each robot.

Interchange of roles: One merit of homogeneous robots is that the role of each robot can be dynamically changed according to the task situation. For example, one robot can be either an auxiliary helper in one situation or an independent task performer in another situation. This improves the overall performance of the robot group. Then it becomes an important issue how to dynamically select an appropriate role based on local observations.

Sensing: Each robot must have sensory functions which are appropriate for action recognition in a complex environment. We found that a combination of controlled vergence stereo vision and optical flow analysis is quite powerful. But there are many other candidate methods, including other sensor modalities like force/touch sensing, audio, etc.

Behaviors: For real time responses, we need a behavior-based architecture [15] with extensions to incorporate attention control and qualitative representations of cooperative task patterns. A strong candidate is the one proposed by Chapman [16]. But it must be modified to meet the viewpoint constraint and to deal with gaze shifts, i.e. maintaining a limited temporal persistence of spatial cognition.

Representation of situations: In order to select appropriate actions regarding the current role and task state, a robot must have internal qualitative representations of cooperative behavior patterns. Such a representation should only describe the partial aspects of the target situation which characterize the type of cooperation. There are infinite instances of actual positions/movements of the robots and the objects in one qualitative situation. Also, since the number of permutations of roles in one situation can increase exponentially with the number of robots contained, we need an efficient way to handle the role interchange.

Grounded communication: The communicated information, if any, must be grounded to the local situation of the robot which uses it. Basically, the information should be qualitative, i.e. relative to the receiver's current action/state. When and how to generate useful information for other robots, and how to use the received information, will be a challenging problem.

3.4 Attentional Structure

Chance and Jolly [17] classified the social structures of monkeys and apes into two categories, centripetal and acentric. A centripetal society is formed around one superior male. The other members of the society mainly pay attention to the behavior of this male and control their own behaviors. When attacked, the superior male and its company go in front of the defense or the retreat, followed by the others. An acentric society behaves in the opposite way, e.g. it scatters under attack.

The idea of attentional structure is quite important because it explains various social structures grounded on action observation. Conversely, attention control in the action observation process can be a candidate driving force in the emergence of various social structures. Some examples of attentional structures are illustrated in Fig. 1. All of these have ground instances in animal societies [10].

Fig. 1: Attentional structures. (○: agent, →: attention)

The original notion of attentional structure describes purely social interactions. When a group of robots engages in a cooperative task, we must also consider physical interactions with other objects. We adopt a generalized version of the notion of attentional structure:

An attentional structure is a set of attentional relations among all members of a cooperative group and related objects. An attentional relation describes who watches the actions of whom and/or who watches which object(s) in order to control self behavior.

3.5 Classification of Cooperative Behaviors

The framework of cooperation by observation introduces a wide variety of cooperative task patterns. As a preliminary study, we choose the example tasks shown in Fig. 2 and analyze their structures with regard to the following features: (1) the number of robots (N_R) and objects (N_O) involved, (2) the attentional structure (A), (3) the physical interactions between involved robots and objects, (4) the temporal relations among robot actions (R_T), e.g. ordered (ord) or simultaneous (sim), in the two-robot cases. Here we assume that one robot can manipulate at most one object, hence N_O ≤ N_R, and there is no direct physical contact between robots. Possible attentional structures are either uni-directional (e.g. R1→R2) or bi-directional (e.g. R1↔R2).

The first two patterns, (a) pushing and (b) tumbling, have {N_O = 1, R_T = sim, A = {R1 → O1, R2 → O1}}. Basically, the robots only need to observe (using vision and/or force sensing) the physical state of O1 for coordinating their behaviors [14]. Tumbling requires much tighter coordination than pushing.

The other patterns (c-f) involve no physical interactions between the objects or the robots during the task, so the use of visual observation is mandatory for coordination. The pattern (c), giving way [9], is the simplest case, where {N_O = 0, R_T = sim, A = {R1 → R2}}. Instead of just stopping and waiting, if R1 keeps a constant distance and orientation to R2, this can be used as a preparatory behavior for helping R2, which we call the "posing" behavior. Mutual avoidance and formation keeping also fall into this category but with bi-directional attention. The pattern (d), passing, requires sequencing the actions, e.g. {R_T = ord, N_O = 1}. The attentional structure can be uni-directional (A = {R2 → R1, R2 → O1}) in this case, but if the robots grasp the object and should pass it in free space, bi-directional attention is necessary.

The pattern (e), unblocking {N_O = 2, R_T = sim}, is a complex situation: the transporter robot R1 is carrying an object O1, whose path is blocked by an obstacle O2. Because of limited maneuvering ability, it costs much time for R1 to avoid O2. The helper R2 recognizes this situation and helps R1 by approaching O2 and pushing it away. (A similar but simpler version has been demonstrated by Noreils [7] using explicit communication.) This pattern can be achieved with uni-directional attention {A = {R2 → R1, R2 → O1, R2 → O2}}, but the flexibility increases if the bi-directional type is applied. For example, in case R2 fails to approach O2, R1 should wait for R2 to recover and accomplish the unblocking. The final pattern (f), cargo loading {N_O = 2, R_T = sim, A = {R1 → O1, R1 → O2, R2 → O2, R2 → O1}}, requires tight coordination through visual observation.
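The feature tuple (N_R, N_O, A, R_T) and the attentional relations map naturally onto a small data structure. The following Python sketch is ours, not from the paper, and all names are illustrative; it encodes pattern (e), unblocking, with the uni-directional structure given above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Attend:
    # One attentional relation: `watcher` observes `watched` (a robot
    # or an object) in order to control its own behavior.
    watcher: str
    watched: str

@dataclass
class TaskPattern:
    name: str
    n_robots: int        # N_R
    n_objects: int       # N_O
    temporal: str        # R_T: "sim" (simultaneous) or "ord" (ordered)
    attention: frozenset = frozenset()  # A: the attentional structure

    def is_bidirectional(self, a: str, b: str) -> bool:
        return Attend(a, b) in self.attention and Attend(b, a) in self.attention

unblocking = TaskPattern(
    name="unblocking", n_robots=2, n_objects=2, temporal="sim",
    attention=frozenset({Attend("R2", "R1"), Attend("R2", "O1"), Attend("R2", "O2")}),
)
assert not unblocking.is_bidirectional("R1", "R2")  # uni-directional variant
```

Role interchange then amounts to permuting the robot names inside the attention set, which is one way to make the exponential growth of role permutations explicit.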

Fig. 2: Cooperative task patterns: (a) Pushing. (b) Tumbling. (c) Giving way. (d) Passing. (e) Unblocking. (f) Cargo loading.

4 Basic Visuo-Motor Functions

We have implemented a set of visuo-motor functions which serves as a basis for cooperative behaviors. The algorithms are selected so as to meet the conditions given in section 3, especially, (1) the complex environment (many robots and many objects), and (2) the viewpoint constraint. Currently, we assume a mobile robot with vergence-controlled stereo cameras and no arm. Figure 3 contains the major functions required for the example tasks presented in the next section. Since the details of the vision algorithms are out of the scope of this paper, we will describe them only briefly.

(1) Find: This redirects an observer's attention to a new target, e.g. a moving robot and obstacles. First, an optical flow field is continuously computed by an incremental version of the Horn & Schunck algorithm [18]. Our current implementation executes 1 or 2 iterations per each frame acquisition within about 100 [msec]. Then, noise removal by local averaging is applied. Next, the flow field is compared with a theoretical background flow to cancel self motion and to detect interesting targets. In order to deal with the significant noise in the flow field, we currently assume pure translation or pure rotation as the self motion and do the comparison with coarsely quantized intensity and orientation of the flow vectors. The background flow for a rotation is trivial. For a translation, we compute a theoretical flow produced by a virtual vertical wall in front of the robot at a preset distance (which is actually a time to collision), using the self velocity. Using this virtual wall field as a threshold, we can ignore distant targets and extract only close obstacles and quickly approaching robots. The result field is then segmented by clustering the residual vectors. The above method can detect moving/static objects from a moving robot.
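For illustration, a minimal NumPy sketch of one such incremental Horn & Schunck update is given below. This is our reconstruction, not the paper's pipeline-processor implementation; the gradient kernels and the smoothness weight `alpha` are assumptions. Because the flow estimate is warm-started from the previous frame, one or two relaxation iterations per frame pair suffice:

```python
import numpy as np
from scipy.ndimage import convolve

# Neighborhood-averaging kernel from Horn & Schunck [18].
AVG = np.array([[1/12, 1/6, 1/12],
                [1/6,  0.0, 1/6 ],
                [1/12, 1/6, 1/12]])

def hs_update(u, v, prev, curr, alpha=10.0, iters=2):
    """Incremental Horn & Schunck: a few relaxation iterations per frame
    pair, warm-started from the previous flow estimate (u, v)."""
    Ix = convolve(curr, np.array([[-1.0, 1.0], [-1.0, 1.0]]) / 2.0)
    Iy = convolve(curr, np.array([[-1.0, -1.0], [1.0, 1.0]]) / 2.0)
    It = curr - prev
    for _ in range(iters):
        u_bar, v_bar = convolve(u, AVG), convolve(v, AVG)
        # Common factor of the coupled update equations.
        g = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u, v = u_bar - Ix * g, v_bar - Iy * g
    return u, v
```

The residual field left after subtracting the coarsely quantized theoretical background flow would then be clustered into candidate targets, as described above.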
(2) Track: Strong spatial selectivity is required for tracking in a complex environment. We adopt vergence-controlled stereo with zero disparity filtering (ZDF) [19], which has strong depth selectivity. The ZDF extracts zero-disparity image features by first extracting vertical edges from the left and right edge images and simply taking the "AND" of the results. The result contains only those features lying on a "horopter", a circle defined by the two camera centers and the current fixation point. In combination with the classical 2D region-of-interest method, the system can actively select a 3D region to extract visual information. The ZDF output edges are tracked by controlling the vergence angle of the stereo cameras. In order to follow the target motion along the depth axis, we extended the original ZDF by incorporating a limited search around near-zero disparities to find the best match, and feed the result back to vergence control [20].
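The core of the ZDF is easy to state in code. The sketch below is our simplification, not the hardware implementation: the edge threshold and the centroid-based gaze error are assumptions, and the real system operates on edge images produced by the pipeline processors:

```python
import numpy as np

def zdf(left: np.ndarray, right: np.ndarray, thresh: float = 32.0) -> np.ndarray:
    """Zero Disparity Filter: vertical edges that appear at the same image
    coordinates in both views lie on the horopter (zero disparity)."""
    def vertical_edges(img):
        g = np.abs(np.diff(img.astype(float), axis=1))  # horizontal gradient
        return np.pad(g, ((0, 0), (0, 1))) > thresh     # keep original width
    return vertical_edges(left) & vertical_edges(right)  # pixelwise "AND"

def gaze_error(mask: np.ndarray):
    """Horizontal offset of the ZDF features from the image center; the gaze
    controller can use this signal to keep the fixated target centered."""
    ys, xs = np.nonzero(mask)
    return None if xs.size == 0 else xs.mean() - mask.shape[1] / 2.0
```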
Fig. 3: Situation recognition of an unblocking task by R2 (left) and recognition of a release action (right).

(3) Anticipate: In the situation of Fig. 3 (left), the helper, R2, needs to anticipate a possible collision of O1 (R1) with O2. When the target motion has been tracked over a constant distance, a 3D line is fit to the recorded trajectory by the least mean square method. Then the system quickly scans the gaze point along this line by vergence control. During the scan, the ZDF output is continuously monitored to detect an object. An experimental result is shown in Fig. 4 (right). The near-zero disparity search is turned off in this phase, because this time we have a top-down specification of the target depth.
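A common way to realize this extrapolation is a total-least-squares line fit via SVD. The sketch below is ours: the paper specifies only "least mean square" fitting, and the scan range and step are assumptions. It fits the line and samples fixation points ahead of the target for the vergence scan:

```python
import numpy as np

def fit_line_3d(points):
    """Least-squares 3D line through the recorded target trajectory.
    Returns (centroid, unit direction): the direction is the first
    principal component of the centered points."""
    P = np.asarray(points, dtype=float)
    c = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - c)
    return c, vt[0]

def scan_points(c, d, ahead=2.0, step=0.1):
    """Fixation points sampled along the extrapolated path; at each one
    the gaze is verged to that depth and the ZDF output is checked for
    an obstacle."""
    return [c + t * d for t in np.arange(step, ahead + step, step)]
```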
briefly. (4)Event Detection: For a passing type of task, the re-
(1)Find: This redirects an observer’s attention to a new ceiving robot must synchronize its action by recog-
target, e.g. a moving robot and obstacles. First, nizing a release action of another robot. This is done
an optical flow field is continuously computed by by monitoring the temporal change in the segmented
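Under the stated simplifications (stationary observer, bounding-rectangle approximation), the event test reduces to comparing two consecutive measurements of the segmented flow region. A sketch of that test follows; it is ours, and the 50% area-drop ratio is an assumed threshold, not a figure from the paper:

```python
def release_detected(prev, curr, area_ratio=0.5):
    """Release-event test on two consecutive measurements of the segmented
    optical-flow region. Each measurement is (area, mean_velocity), with
    the region approximated by its bounding rectangle and the velocity
    taken along the pushing direction."""
    (area0, vel0), (area1, vel1) = prev, curr
    sudden_shrink = area1 < area_ratio * area0   # pusher's flow disappears
    velocity_flip = vel0 * vel1 < 0.0            # average velocity reverses
    return sudden_shrink and velocity_flip
```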

Fig. 4: Tracking a mobile robot (left). Trajectory estimation and obstacle detection (right). In both figures, raw stereo images are shown at the top, extracted edges in the middle, ZDF outputs at bottom-right, and results of computation at bottom-left. The whole processing is done at near frame rate (20-30 Hz).

(5) Navigation: This guides a robot to the target position, e.g. O2 in the unblocking situation (Fig. 3 (left)), and the released object in the passing situation (Fig. 3 (right)). When there are no other constraints and the robot can take a straight-line path to the visible target (as in the unblocking situation), the robot uses visual guidance. The steering angle is selected from {-5, 0, 5} [deg] according to the sign of the displacement angle of the current fixation point of the cameras from the current forward direction of the mobile base. Of course, this crude control does not converge well, but it is sufficient for preliminary experiments since the cycle time is short (about 100 [msec]) compared to the speed of the mobile base.

In some cases, a robot cannot directly approach the target but must follow a precomputed trajectory. For example, in the passing situation, the receiver robot must push the target object in the same direction as the first robot did. In such cases, the robot must compute an approach path in order to cope with the nonholonomic nature and/or preserve enough approach distance so that the visual guidance routine can correct the significant odometric error in self orientation imposed by turning itself to the desired pushing direction.
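This steering rule is a one-line bang-bang controller. A minimal sketch, ours rather than the paper's code; the deadband is an assumption, since the paper specifies only selection by the sign of the displacement angle:

```python
def steering_angle(gaze_offset_deg: float, deadband_deg: float = 1.0) -> float:
    """Select the steering command from {-5, 0, +5} [deg] by the sign of
    the displacement of the current fixation point from the mobile base's
    forward direction. Evaluated every control cycle (about 100 ms)."""
    if gaze_offset_deg > deadband_deg:
        return 5.0
    if gaze_offset_deg < -deadband_deg:
        return -5.0
    return 0.0
```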
5 Example Tasks

In order to verify the effectiveness of the visuo-motor routines and to present concrete examples of cooperation by observation, we have realized three task patterns: posing, passing and unblocking.

5.1 The Prototype System

Fig. 5: A mobile robot with a stereo gaze platform.

Fig. 5 shows one of our mobile robots equipped with a stereo gaze platform. The gaze platform has 2 DOF for vergence control. The robot has two onboard processors, one for controlling the mobile base and another for the gaze platform. Image processing is currently done on a remote host processor which consists of pipeline image processors (Datacube MaxVideo system) and a CPU board running a real-time operating system. The system configuration is given in Fig. 6.

Fig. 6: Overview of the prototype system. The mobile robot carries the gaze platform and mobile base actuators, each driven by a V25 (8086) 8 MHz controller; the remote host consists of a MaxVideo 200 pipeline image processor and an MVME167 board running LynxOS.
5.2 Posing

Fig. 7: Chasing the target robot among others (left) and the attentional structure (right).

The situation of the posing experiment is illustrated in Fig. 7. The mission of the chaser robot is to keep a fixed relative position (distance and orientation) with regard to the target robot. There is a distractor robot moving around that sometimes comes in between the chaser and the target. The chaser must lock onto the target. This is the simplest example of the attentional structure which emerges from local attention relations. It is also a basic behavior which supports other cooperative behaviors in complex, dynamic environments. In this case, the chaser and the target form an ordered pair, and the distractor is isolated, not spatially, but at the level of cooperative behaviors. The result is shown in Fig. 8. In this experiment, only the chaser was computer controlled, whereas the others were manually controlled. The purpose of the experiment was to test the stereo tracking and visual servo routines. They turned out to be quite robust.

Fig. 8: Posing behavior. The chaser (left) follows the target (moving downward). Even when the distractor (moving upward) comes in between the chaser and the target, the chaser is not disturbed and locks onto the target.

5.3 Passing

The realized "Pass" behavior pattern is shown in Figs. 9-10. Figure 9 (a)-(c) shows the result of motion segmentation and release detection. The released can is detected and the receiver robot starts to follow the computed approach trajectory. This is done in an open-loop manner using the temporary spatial memory. After making a steep turn, the can becomes visible again (d), then the robot approaches it by visual guidance (e)-(f). Figure 10 shows a bird's-eye view of the entire situation. Evidently, the passing behavior pattern was successfully demonstrated.

Fig. 9: Vision processing during a receiving action; a view from the receiver robot. (a) A segmented optical flow region including the robot and the can. (b) A release action is detected. (c) The released can is segmented. (d) Followed an approach path and recaptured the can. (e),(f) An approach behavior using visual servo based on optical flow. The white marker indicates the target position.

Fig. 10: Passing task. (a),(b) A robot is pushing a can from left to right. (c) The first robot releases the can and returns to fetch another can, and the helper robot recognizes the situation. (d) The helper robot determines the pushing direction and starts to follow a computed approach trajectory. (e),(f) The helper robot approaches the can using visual servo. (g) Pushing the can in the same direction as the first robot did.

5.4 Unblocking

We have made a simplified experiment of the unblocking task discussed before. In this experiment, the transporter robot (R1) does not carry an object (O1) (different from Fig. 3). This simplification is justified because the purpose of this experiment is to verify the behavior of the helper (R2).

Fig. 11 shows a result. The helper robot successfully found the anticipated obstacle and helped the transporter robot by pushing away the obstacle just in time for the transporter to pass.

Fig. 11: Unblocking behavior. The binocular robot (helper) on the left watched the monocular transporter robot moving from right to left, estimated the path and found the possible obstacle on the path (top figure), then quickly approached the obstacle using visual servo (middle figure), and successfully pushed it away to help the transporter (bottom figure).

6 Summary and Conclusions

A novel approach to achieving highly structured task coordination without resorting to explicit communication is presented. It is supported by mutual observation of actions, so the framework is called "cooperation by observation". The main contribution of the framework is to focus on the extraction and the interpretation of useful information for task coordination by cooperating autonomous agents. The framework also introduces many interesting issues such as viewpoint constraint and role interchange, as well as novel concepts like "attentional structure" which can be used for classifying cooperative task patterns.

Another major contribution of this paper is that it presents working examples of high levels of cooperative tasks based on implicit communication through visual action recognition. These are the most advanced examples in the implicit communication framework demonstrated using real mobile robots.

The basic visuo-motor functions are characterized by spatial selectivity, anticipatory functions, and their dynamic nature. Although these routines basically have general utility, we need more refinement and augmentation. Especially, the following functions need to be realized: (1) quick and robust methods for finding and identifying objects, (2) multiple attention under parallel processing.

In the presented experiments, we have one autonomous robot acting as a helper for other, fixed-program or manually controlled robots. Since each robot system is completely independent of the others, there will be no problem in duplicating our prototype system. But there is more to be done before we achieve a versatile implementation of cooperation by observation, including qualitative representation of cooperative task patterns, dynamic selection of an appropriate cooperative behavior from a large number of potential patterns, implementation of role interchange, and treatment of shared knowledge.

References

[1] L. Steels. Cooperation between distributed agents through self-organization. In Proc. IEEE Int. Conf. Intelligent Robots and Systems, 1990.
[2] R. A. Brooks et al. Lunar base construction robots. In Proc. IEEE Int. Workshop on Intelligent Robots and Systems, pages 389-392, 1990.
[3] R. C. Arkin. Cooperation without communication: Multiagent schema-based robot navigation. J. Robotic Systems, 9(3):351-364, 1992.
[4] M. J. Mataric. Minimizing complexity in controlling a mobile robot population. In Proc. IEEE Int. Conf. Robotics and Automation, pages 830-835, 1992.
[5] L. E. Parker. Designing control laws for cooperative agent teams. In Proc. IEEE Int. Conf. Robotics and Automation, pages 582-587, 1993.
[6] R. C. Arkin and E. Nitz. Communication of behavioral state in multi-agent retrieval tasks. In Proc. IEEE Int. Conf. Robotics and Automation, pages 588-594, 1993.
[7] F. R. Noreils. An architecture for cooperative and autonomous mobile robots. In Proc. IEEE Int. Conf. Robotics and Automation, pages 2703-2710, 1992.
[8] S. Sakane et al. A decentralized and cooperative sensing system for robot vision. In Proc. 23rd Int. Symp. Industrial Robots, pages 151-156, 1992.
[9] S. Premvuti and S. Yuta. Consideration on the cooperation of multiple autonomous mobile robots. In Proc. IEEE Int. Workshop on Intelligent Robots and Systems, pages 59-63, 1990.
[10] E. O. Wilson. Sociobiology: The New Synthesis. Harvard, 1975.
[11] Y. Kuniyoshi and H. Inoue. Qualitative recognition of ongoing human action sequences. In Proc. IJCAI 93, pages 1600-1609, 1993.
[12] S. Hirai and T. Sato. Motion understanding for world model management of telerobot. In Proc. ISRR, pages 124-131, 1989.
[13] K. Ikeuchi and T. Suehiro. Towards an assembly plan from observation. Technical Report CMU-CS-91-167, School of Computer Science, Carnegie Mellon Univ., Pittsburgh, PA 15213, USA, 1991.
[14] N. Sawasaki and H. Inoue. Cooperative manipulation by autonomous intelligent robots. In JSME Conf. on Robotics and Mechatronics, pages 561-566, 1992. (In Japanese.)
[15] J. Connell. A colony architecture for an artificial creature. Technical Report AI-TR 1151, MIT AI Lab., 1989.
[16] D. Chapman. Vision, Instruction and Action. MIT Press, 1991. ISBN 0-262-03181-7.
[17] M. R. A. Chance and C. J. Jolly. Social groups of monkeys, apes and men. E. P. Dutton, 1970.
[18] B. K. P. Horn and B. G. Schunck. Determining optical flow. Artificial Intelligence, 17:185-203, 1981.
[19] P. von Kaenel, C. M. Brown, and D. J. Coombs. Detecting regions of zero disparity in binocular images. Technical report, University of Rochester, 1991.
[20] S. Rougeaux, N. Kita, Y. Kuniyoshi, and S. Sakane. Tracking a moving object with a stereo camera head. In Proc. 11th Annual Conf. of Robotics Society of Japan, 1993. (To appear.)
[21] M. Tistarelli and G. Sandini. Dynamic aspects in active vision. CVGIP: Image Understanding, 56(1):108-129, 1992.
