Nilanjan Sarkar
Mechanical Engineering
Vanderbilt University
Nashville, USA.
Nilanjan.sarkar@vanderbilt.edu
Abstract
A truly collaborative human-robot interaction framework
should allow the participating agents to assume and
relinquish initiative depending upon their own capabilities
and their understanding of the environment. The goal of this
work is to define and develop a mixed-initiative human-robot
collaborative architecture in which affect-based
sensing plays a critical role in initiative switching. Affect-based
sensing implies that the robot detects the human's
emotional state in order to determine which actions to
pursue. We have conducted an extensive literature review of
Mixed-Initiative Interaction that has provided a basis for our
architectural development. In particular, we are applying
Riley's (Riley 1989) general model of mixed-initiative
interaction to our architecture development. We have
developed a preliminary architecture and are now collecting
affect-based participant data that will be used to test the
system. The purpose of this paper is to provide an overview
of Riley's model and its application in our development, to
present our preliminary architecture, and to discuss the
affect-based interaction constraints that shape the
architecture development.
Introduction
In the foreseeable future, humans will remain a critical
component in robotic systems. The robotics field has not
developed fully autonomous capabilities for real-world
situations. The recent DARPA Grand Challenge is an
excellent example. The Grand Challenge mapped a 149-mile
route between California and Nevada that robots were
to autonomously navigate with no human intervention.
Teams were provided with complete route information just
a few hours before the race. The result was that none of the
robots made it through the entire route. Carnegie Mellon
University's Red Team made the longest autonomous trek
(7.4 miles) before it ran off course, became stuck, and
caught on fire. The participating teams should be
applauded for their efforts and the advances that they
made, but this race illustrates the need for humans-in-the-loop. In the Red Team's situation, a human could have
intervened to inform the robot that it was potentially going
to run off the course. If the robot had run off course, then
the human could have intervened in an attempt to help the
robot understand what went wrong and how to remedy the
situation. Such relationships are not strict teleoperation;
rather, collaboration is needed between the human and the
robot. If the human is to be elevated to a true supervisory
position, an environment facilitating such collaboration must exist.
Horvitz (Horvitz 1999) defines MII in a human-computer interaction context. He uses the term to refer to:
"the methods that explicitly support an efficient, natural
interleaving of contribution by users and automated
services aimed at converging on solutions to problems."
The idea was to develop a shared understanding of the
goals and provide the capability for both entities to solve
the problem in the most appropriate manner.
Ramakrishnan et al. (Ramakrishnan et al. 2002) claim
that true MII cannot occur unless users are able to engage in
out-of-turn interactions. They have developed a speech-based
system that provides such interactions.
We surveyed over thirty articles. As can be observed
from the discussion thus far, there are many interpretations
of MII. In addition to the varied interpretations, this
concept has been applied to many domains including:
Planning (Burnstein et al. 2003, Hearst 1999,
Mitchell 1997),
Agents (D'Aloisi et al. 1997, Lester et al. 1999,
Rich & Sidner 1998, Skinner Unknown),
Human-machine interaction (Bell et al. 2000,
Fleming & Cohen 1999, Goldman et al. 1997,
Penner et al. 1997), and
Understanding dialogue and discourse (Chu-Carroll
& Brown 1997, Donaldson & Cohen 1997, Guinn
1998, Ishizaki et al. 1999, Lemon et al. 2001).
Kortenkamp et al. (Kortenkamp et al. 1997) introduce
MII for human-robot teams. The MII occurs at the
planning level of the 3T architecture. They list four
advantages to adding MII: Increased flexibility of overall
solutions; more robust system behavior; more tractable
planning; and improved user involvement in planning.
Sherwood et al. (Sherwood et al. 2001) also incorporate
MII at the mission planning stage for a Mars Rover.
Anzai (Anzai 1994) raises the issue of incorporating MII
into human-robot interaction. Anzai proposes to "call the
mixture of robot-to-human communication with the one
(communication) from humans to robots as mixed initiative
interaction." Anzai's position is that prior HRI work
has focused on the human specifically communicating to
the robot, but not on robots communicating to humans. MII
was intended to provide the capability for the robot to
request information from the human.
Horiguchi and Sawaragi (Horiguchi & Sawaragi 2001)
provide MII for a mobile robot via force-feedback through
the joystick. The joystick is the primary interaction
capability to control the robot. They provide a graphical
display of additional information, but this display does not
permit the human to command the robot. The strength of
the joystick inputs is employed to determine the human's
initiative level when commanding the robot. This group
(Zheng et al. 2004) recently started to extend their work to
the Urban Search and Rescue domain.
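The strength-based arbitration described above can be sketched in a few lines. The function names, smoothing window, and blending rule below are our own illustrative assumptions, not the actual mechanism of Horiguchi and Sawaragi's system:

```python
# Illustrative sketch of strength-based initiative arbitration.
# Names, window size, and the linear blend are hypothetical.

def initiative_level(joystick_history, window=10):
    """Estimate the human's initiative from recent joystick deflection.

    joystick_history: list of (x, y) deflections, each axis in [-1, 1].
    Returns a weight in [0, 1]; 1.0 means the human holds full initiative.
    """
    recent = joystick_history[-window:]
    if not recent:
        return 0.0  # no recent input: the robot keeps the initiative
    magnitudes = [min(1.0, (x * x + y * y) ** 0.5) for x, y in recent]
    return sum(magnitudes) / len(magnitudes)

def blend_commands(human_cmd, robot_cmd, weight):
    """Mix human and autonomous velocity commands by the initiative weight."""
    return tuple(weight * h + (1.0 - weight) * r
                 for h, r in zip(human_cmd, robot_cmd))

# A strong, sustained deflection hands most of the control to the human;
# a released joystick lets the autonomous command dominate.
w = initiative_level([(0.9, 0.0)] * 10)
cmd = blend_commands((1.0, 0.0), (0.0, 0.5), w)
```

A continuous weight like this avoids abrupt control handoffs: initiative shifts gradually as the operator's physical engagement with the joystick rises or falls.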
Bruemmer et al. (Bruemmer et al. 2002, 2003)
incorporate MII into a fully teleoperated system for
hazardous environments. In this case, MII permits the
robot to take the initiative when necessary to protect itself
from operator error.
Proposed Architecture
Based upon the results of completing the first
component of Riley's FAIT analysis, the resulting system
requires various levels of automation up to and including
full autonomy. As well, the system requires levels of
intelligence from the raw data level up to and including
operator predictive capabilities.
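The spectrum of automation the architecture calls for can be made concrete with a minimal sketch. The level names and the one-step adjustment policy below are our own assumptions for illustration; they are not Riley's taxonomy or the final architecture:

```python
# Minimal sketch of an autonomy-level spectrum and a simple
# adjustment rule (hypothetical levels and policy).
from enum import IntEnum

class AutonomyLevel(IntEnum):
    TELEOPERATION = 0   # human issues every command
    SAFEGUARDED = 1     # human commands; robot vetoes unsafe ones
    SHARED_CONTROL = 2  # commands blended between human and robot
    SUPERVISED = 3      # robot plans; human approves
    FULL_AUTONOMY = 4   # robot acts without human input

def step_autonomy(current, operator_overloaded):
    """Raise the autonomy level one step when the operator is overloaded,
    lower it one step otherwise (clamped to the defined range)."""
    delta = 1 if operator_overloaded else -1
    new = min(max(int(current) + delta, int(AutonomyLevel.TELEOPERATION)),
              int(AutonomyLevel.FULL_AUTONOMY))
    return AutonomyLevel(new)
```

Stepping one level at a time, rather than jumping directly to full autonomy, keeps initiative transitions predictable for the operator.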
Discussion
The robot's ability to interpret implicit human operator
states (emotional/affective state) is critical when
implementing a mixed-initiative interaction between
humans and robots. There are various emotional indicators
that can be exploited to identify affective states: facial
expressions, gestures, vocal intonation, physiology, etc. We
focus on using physiological responses, as these are
generally involuntary and less dependent on culture,
gender, and age than the other emotional indicators. These
responses offer an opportunity to recognize emotions that
the operator may not overtly express.
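The general idea of physiology-driven initiative switching can be sketched as follows. The feature names, baseline normalization, and threshold are hypothetical placeholders for illustration; they are not the detection method this work actually employs:

```python
# Illustrative sketch: flagging operator anxiety from physiological
# features to trigger an initiative switch. Features, weights, and
# threshold are hypothetical, not the system described in this paper.

def anxiety_index(features, baseline):
    """Score relative elevation of current physiological readings
    over a resting baseline.

    features / baseline: dicts, e.g. heart rate in bpm and skin
    conductance in microsiemens.
    """
    score = 0.0
    for name, value in features.items():
        rest = baseline[name]
        score += max(0.0, (value - rest) / rest)  # count elevation only
    return score / len(features)

def robot_should_take_initiative(features, baseline, threshold=0.25):
    """Hand initiative to the robot when the operator appears anxious."""
    return anxiety_index(features, baseline) > threshold

baseline = {"heart_rate": 70.0, "skin_conductance": 2.0}
stressed = {"heart_rate": 98.0, "skin_conductance": 3.0}
```

Normalizing against a per-operator baseline matters because resting physiology varies widely between individuals, even though the responses themselves are largely involuntary.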
Conclusions
References
Anzai, Y. 1994. Human-Robot-Computer Interaction: a New Paradigm of
Research in Robotics. Advanced Robotics, 8(4): 357-369.
Bell, B., Franke, J., & Mendenhall, H. 2000. Leveraging Task Models for
Team Intent Inference. Proc. 2000 International Conference on Artificial
Intelligence.
Bradley, M. M. 2000. Emotion and Motivation. In J. T. Cacioppo, L. G.
Tassinary, & G. G. Berntson (Eds.). Handbook of Psychophysiology, 2nd
Ed, pp. 602-642. New York: Cambridge University Press.
Bruemmer, D.J., Marble, J.L., Dudenhoeffer, D.D., Anderson, N.O. &
McKay, M.D. 2002. Mixed-Initiative Control for Remote
Characterization of Hazardous Environments. Proc. of the 36th Hawaii
International Conference on System Sciences.
Burnstein, M., Ferguson, G., & Allen, J. 2003. Integrating Agent-Based
Mixed-Initiative Control with an Existing Multi-Agent Planning System.
University of Rochester, Computer Science Department, Technical
Report 729.
Carbonell, J.R. 1971. Mixed-Initiative Man-Computer Instructional
Dialogues, BBN Report No. 1971, Job No. 11399, Bolt Beranek and
Newman, Inc.
Penner, R.R., Nelson, K.S., & Soken, N.H. 1997. Facilitating Human
Interactions in Mixed Initiative Systems Through Dynamic Interaction
Generation. Proc. of the IEEE International Conference on Systems, Man
and Cybernetics, pp. 714-719.
Ramakrishnan, N., Capra, R., & Perez-Quinones, M. 2002. Mixed-Initiative Interaction = Mixed Computation. 2002 ACM SIGPLAN
Workshop on Partial Evaluation and Semantics-Based Program
Manipulation, pp. 119-130.
Picard, R. 1997. Affective Computing. The MIT Press, Cambridge,
Massachusetts.
Rani, P., Sarkar, N., Smith, C. & Kirby, L. 2004. Anxiety Detecting
Robotic Systems Towards Implicit Human-Robot Collaboration.
Robotica, 22(1): 85-95.
Rich, C., & Sidner, C.L. 1998. COLLAGEN: A Collaboration Manager
for Software Interface Agents. User Modeling and User-Adapted
Interaction, 8: 315-350.
Riley, V. 1989. A General Model of Mixed-Initiative Human-Machine
Systems. Proc. of the Human Factors Society 33rd Annual Meeting, pp.
124-128.
Roy, N., Baltus, G., Fox, D., Gemperle, F., Goetz, J., Hirsch, T.,
Margaritis, D., Montemerlo, M., Pineau, J., Schulte, J., & Thrun, S. 2000.
Towards Personal Service Robots for the Elderly. Workshop on
Interactive Robots and Entertainment.
Ishizaki, M., Crocker, M., & Mellish, C. 1999. Exploring Mixed-Initiative Dialogue Using Computer Dialogue Simulation. User Modeling
and User-Adapted Interaction, 9: 79-91.
Kortenkamp, D., Bonasso, R.P., Ryan, D. & Schreckenghost, D. 1997.
Traded Control with Autonomous Robots as Mixed Initiative Interaction.
1997 AAAI Spring Symposium: Computational Models for Mixed
Initiative Interaction, Technical Report SS-97-04.
Lester, J.C., Stone, B.A., & Stelling, G.D. 1999. Lifelike Pedagogical
Agents for Mixed-Initiative Problem Solving in Constructivist Learning
Environments. User Modeling and User-Adapted Interaction, 9: 1-44.
Lemon, O., Bracy, A., Gruenstein, A., & Peters, S. 2001. The WITAS
Multi-Modal Dialogue System I. Proc. of the 7th European Conference
on Speech Communication and Technology, pp. 1559-1562.
Mitchell, S. 1997. An Architecture for Real-Time Mixed Initiative
Collaborative Planning. 1997 AAAI Spring Symposium: Computational
Models for Mixed Initiative Interaction, Tech Report SS-97-04, pp. 111-113.
Murphy, R., Casper, J.L., Micire, M.J., & Hyams, J. 2000. Mixed-Initiative Control of Multiple Heterogeneous Robots for Urban Search
and Rescue. University of South Florida, Center for Robot Assisted
Search and Rescue Technical Report, CRASAR-TR2000-11.
Öhman, A., Hamm, A., & Hugdahl, K. 2000. Cognition and the
autonomic nervous system: Orienting, anticipation, and conditioning. In
J. T. Cacioppo, L. G. Tassinary, & G. G. Berntson (Eds.). Handbook of
Psychophysiology, 2nd Ed. New York: Cambridge University Press.
Sherwood, R., Mishkin, A., Estlin, T., Chien, S., Backes, P., Norris, J.,
Cooper, B., Maxwell, S., & Rabideau, G. 2001. Autonomously
Generating Operations Sequences for a Mars Rover using AI-based
Planning. Proc. of the 2001 IEEE/RSJ International Conference on
Intelligent Robots and Systems, pp. 803-808.
Skinner, J.M. Unknown. Multi-Agent Systems and Mixed-Initiative
Intelligence. LEF Grant report.
www.csc.com/aboutus/lef/mds67_off/uploads/skinner_mixed_initiative_agents.pdf.
Smith, C. A., & Lazarus, R. S. 1990. Emotion and adaptation. Handbook
of personality: Theory and research, L.A. Pervin (Ed.), pp. 609-637,
New York, Guilford.
Smith, C. A. 1989. Dimensions of appraisal and physiological response in
emotion. Journal of Personality and Social Psychology, 56(3): 339-353.
Wortman, B.C., Loftus, E. F., & Weaver, C. 1998. Psychology, McGraw-Hill, 5th edition.
Zheng, X.-Z., Tsuchiya, K., Sawaragi, T., Osuka, K., Tsujita, K.,
Horiguchi, Y., & Aoi, S. 2004. Development of Human-Machine
Interface in Disaster-Purposed Search Robot Systems that Serve as
Surrogates for Human. Proc. of the IEEE International Conference on
Robotics and Automation, pp. 225-230.