

“Hey Google is it OK if I eat you?”:
Initial Explorations in Child-Agent Interaction
Stefania Druga*, MIT Media Lab, Cambridge, MA 02139 USA, sdruga@media.mit.edu
Cynthia Breazeal, MIT Media Lab, Cambridge, MA 02139 USA, cynthiab@media.mit.edu
Randi Williams*, MIT Media Lab, Cambridge, MA 02139 USA, randiw12@media.mit.edu
Mitchel Resnick, MIT Media Lab, Cambridge, MA 02139 USA, mitch@media.mit.edu

Abstract
Autonomous technology is becoming more prevalent in our daily lives. We investigated how children perceive this technology by studying how 26 participants (3-10 years old) interact with Amazon Alexa, Google Home, Cozmo, and Julie Chatbot. We refer to them as "agents" in the context of this paper. After playing with the agents, children answered questions about trust, intelligence, social entity, personality, and engagement. We identify four themes in child-agent interaction: perceived intelligence, identity attribution, playfulness, and understanding. Our findings show how different modalities of interaction may change the way children perceive their own intelligence in comparison to the agents'. We also propose a series of design considerations for future child-agent interaction around voice and prosody, interactive engagement, and facilitating understanding.

Author Keywords
Child-agent interaction; interaction modalities

ACM Classification Keywords
H.5.2 [Information interfaces and presentation (e.g., HCI)]: Interaction Styles

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.
IDC '17, June 27-30, 2017, Stanford, CA, USA
© 2017 Copyright is held by the owner/author(s).
ACM ISBN 978-1-4503-4921-5/17/06.
http://dx.doi.org/10.1145/3078072.3084330

Introduction
Prior research in child-computer interaction explores the role of computers in shaping the way children think. In her book "The Second Self", Sherry Turkle describes them as intersectional objects that allow children to investigate "matter, life, and mind" [7]. Similarly, emerging autonomous technologies invite children to think about what it means to be intelligent and conscious. These technologies are becoming increasingly prevalent in our daily lives, yet there is little research on how children perceive them. This paper presents the preliminary findings of a study on how children, ages 3 to 10, interact with modern autonomous agents. In our results, we analyze how and why children ascribe particular attributes to agents, and we address how different modalities affect children's perception of their own intelligence.

Related Work
In the fields of human-computer interaction (HCI), human-robot interaction (HRI), and applied developmental psychology, there is extensive research on how children perceive robotic and conversational agents. Tanaka found that children build relationships with these agents the same way that they build relationships with people [6]. Kahn found that children consider robots as ontologically different from other objects, including computers [3]. He also saw that age and prior experience with technology led to more thoughtful reasoning about robots' nature [3]. Turkle argues that the intelligence of computers encourages children to revise their ideas about animacy and thinking [7]. She observed that children attributed intent and emotion to objects that they could engage with socially and psychologically. Bernstein's study revealed that voice, movement, and physical appearance are other factors that children consider when deciding how to place agents [2].

To understand why children may attribute characteristics of living beings to inanimate objects, we must consider Theory of Mind development [3]. Those with a developed Theory of Mind can perceive the emotional and mental states of other beings. An analysis of 178 false-belief studies led to a model showing that, across cultures, Theory of Mind usually develops when a child is 3 to 5 years old [8]. This finding indicates that age is a significant factor in children's reasoning about technology. Through our work, we build on prior research by analyzing how children interact with and perceive modern agents. In our analysis, we consider developmental differences between children and functional differences between agents.

Method
Participants were randomly divided into four groups of roughly 4-5 children, and each group was assigned to a station. We had four stations, one for each agent, and each station had enough devices for participants to interact alone or in pairs. At the stations, researchers introduced the agent, then allowed participants to engage with it. After playing with the first agent for 15 minutes, participants rotated to the next station to interact with a second agent. Each session of structured play was followed by a questionnaire, in the form of a game, analyzing children's perception of the agent. We interviewed 5 children (3 boys and 2 girls) to further probe their reasoning, selecting children who played with different agents and displayed different interaction behavior. Between interviews, participants were allowed to free-play with the agents. This methodology is based on the prior research work of Turkle [7].

Participants
This study consisted of 26 participants, four of whom were not able to complete the entire study due to lack of focus. The children were grouped by age range to gather data about how reasoning changes at different developmental stages. There were 12 (46.15%) children aged 3 or 4 years old, classified as our younger children group. The older children group contained 14 (53.85%) children between the ages of 6 and 10. Of the 26 participants, 16 offered information about previous technology experience: 6 (37.5%) of these respondents had programmed before and 9 (56.25%) had previously interacted with similar agents (Google, Siri, Alexa, and Cortana).

Agents
The children interacted with four different agents: Amazon's Echo Dot "Alexa" (Figure 1), Google Home (Figure 2), Anki's Cozmo (Figure 3), and Julie Chatbot (Figure 4). These agents were chosen for their commercial prevalence and playful physical or conversational characteristics. Alexa is a customizable, voice-controlled digital assistant. Google Home is also a voice-controlled device, oriented toward home automation. Cozmo is an autonomous robot toy. Finally, Julie is a conversational chatbot.

Figure 1: Alexa. Internet search, games, jokes, music. Morphology: cylinder, LED ring, female voice. Input: voice. Outputs: voice and LEDs.

Figure 2: Google Home. Internet search, games, jokes, music. Morphology: cylinder, LED ring, female voice. Inputs: voice, touch. Outputs: voice, LEDs.

Figure 3: Cozmo. Games, autonomous exploration, object/face recognition. Morphology: robot (wheels, lift), garbled voice, block accessories. Inputs: visual information, distance, app. Outputs: sound, movement, LED eyes, block LEDs, app.

Figure 4: Julie. Conversations, jokes, and games. Morphology: Android app, tablet, TTS voice. Inputs: text, voice. Outputs: text, voice.

The Monster game: interactive questionnaire
After interacting with an agent, participants completed a ten-item questionnaire in the form of a monster game. The game was adapted from the work of Park et al. and was used to keep younger children engaged [4]. In the game, two monsters would share a belief about an agent, then children placed a sticker closer to the monster they most agreed with. We vetted the usability of this method and the clarity of the questions in a pilot test. The questions queried how children felt about the agent in terms of trust, intelligence, identity attribution, personality, and engagement, and were adapted from a larger questionnaire found in the work of Bartneck [1].

Findings
Overall, most participants agreed that the agents are friendly (Figures 5, 6) and trustworthy (Figures 7, 8). The younger children (3-4 years old) experienced difficulty interacting with the conversational and chat agents, but greatly enjoyed playing with Cozmo. The older children (6-10 years old) enjoyed interacting with all the agents, although they had their favorites based on the different modalities of interaction. Responses varied depending on the agent and the age of the participant. These are discussed below as: perceived intelligence, identity attribution, playfulness, and understanding.

Figure 5: How children perceive friendliness of agents by age.
Figure 6: How children perceive friendliness by agent.
Figure 7: How children perceive truthfulness of agents by age.
Figure 8: How children perceive the truthfulness of each agent.

Age      Response       Alexa   Google   Cozmo   Julie
Younger  Smarter         20%      0%      40%     60%
         Neutral         20%    100%       0%     40%
         Not as smart    60%      0%      60%      0%
Older    Smarter        100%     43%      20%     n/a
         Neutral          0%     57%      40%     n/a
         Not as smart     0%      0%      40%     n/a

Table 1: How children perceive agents' intelligence compared to their own. Note: only one session of interaction and questionnaire data with Julie was collected; the group was restless after the interaction.

"She probably already is as smart as me" - perceived intelligence
Younger participants had mixed responses, but many older participants said that the agents are smarter than them. The older participants often related an agent's intelligence to its access to information. For example, Violet¹ (7 years old) played with Alexa and Google Home during structured playtime, then Julie and Cozmo during free play. She systematically analyzed the intelligence of the agents by comparing their answers regarding a topic she knew a lot about: sloths. "Alexa she knows nothing about sloths, I asked her something about sloths but she didn't answer... but Google did answer so I think that's a little bit smarter because he knows a little more". Mia (9.5 years old) played with Julie and Google Home during the structured play, then tried all the other agents during free play. She used a similar strategy of referring to the things she knows when trying to probe the intelligence of the agent: "[Google Home] probably already is as smart as me because we had a few questions back and forth about things that I already know".

¹ Participants' names are changed.

"What are you?" - identity attribution and playfulness
We observed probing behavior, where the participants were trying to understand the agent. Younger children tried to understand the agents like a person: "[Alexa], what is your favorite color", "Hey Alexa, how old are you". Older children tested what the agent would do when asked to perform actions that humans do: "Can you open doors?", "[Cozmo] can you jump?" They also asked the agents how they worked, "Do you have a phone inside you?", and how they defined themselves, "What are you?" The children used gender interchangeably when talking about the agents. Gary and Larry referred to Cozmo as both "he" and "she". "I don't really know which one it is" - Gary. "It's a boy... maybe because of the name but then again you could use a boy name for a girl and a girl name for a boy" - Larry. Later, they concluded that Cozmo "is a bob-cat with eyes".

Multiple agents of the same type were provided during the playtest, which led the participants to believe that different Alexas could give different answers: "She doesn't know the answer, ask the other Alexa". They would also repeatedly ask the same question to a device to see if it would change its answer. As they became familiar with the agents, we observed children playfully testing their limits. A 6-year-old girl asked several agents "Is it OK if I eat you?". Other children offered the agents food and asked if they could see the objects in front of them: "Alexa what kind of nut am I holding?"

"Sorry I don't understand that question" - understanding
We observed that the majority of participants experienced various challenges getting the agents to understand their questions. Several children tried to raise their voices or put more pauses in their questions, things that may help people understand them; however, this did not always lead to better recognition by the agents. Often, when the agents would not understand them, children would try to reword their question or make it more specific. Throughout the playtest, we observed shifts in children's strategies, under the encouragement of the facilitators and parents or by observing strategies that worked for other children. Despite these challenges, participants became fluent in voice interaction quickly and even tried to talk with the agents that didn't have this ability (e.g. Cozmo).

Discussion
Building on prior work and based on the observations from this study, we propose the following considerations for child-agent interaction design: voice and prosody, interactive engagement, and facilitating understanding.

Voice and prosody
Voice and tone made a difference in how friendly participants thought the agent was (Figure 6). When asked about differences between the agents, Mia replied, "I liked Julie more because she was more like a normal person, she had more feelings. Google Home was like 'I know everything'... Felt like she [Julie] actually understood what I was saying to her". Her favorite interaction with the agents was the following exchange she had with Julie: "She said 'I'm working on it'... I said 'What do you mean you're working on it?' She said 'I don't speak Chinese' [laughs] I wrote back 'I'm not speaking Chinese' and she said 'It sounded like it'. That is an interaction I would have with a normal human". Mia's perception of this interaction could be a result of the "mirroring effect", where the agent's unexpected response is perceived as a sense of humor, a reflection of Mia's own style of communication. Through this interaction and previous experiments run by Disney Research, we see an opportunity for future agents to imitate the communication style of children and create a prosodic synchrony in their conversations in order to build rapport [5].

Interactive engagement
Gary and Larry said they liked interacting with Cozmo the most "because she could actually move and all the other ones that we did she couldn't move". Also because Cozmo had expressions: "he has feelings, he can do this with his little shaft and he can move his eyes like a person, confused eyes, angry eyes, happy eyes... Everybody else like they didn't have eyes, they didn't have arms, they didn't have a head, it was just like a flat cylinder". This testimony reveals how mobile and responsive agents appeal to children, and how the form factor plays a significant role in the interaction. Through its eyes and movements, Cozmo was able to effectively communicate emotion, and so the children believed that Cozmo had feelings and intelligence. Many participants who tried to engage in dialogue with the agents were limited by the fact that the agents weren't able to ask clarifying questions. While the children were attracted to the voice and expressions of the agents at first, they lost interest when the agent could not understand their questions. We recognize the potential for designing a voice interface that could engage in conversations with the children by referring to their previous questions, by asking more clarifying questions, and by expressing various reactions to children's inputs.

Facilitating understanding
During the play-test, the facilitators, parents, and peers helped the children rephrase or refine their questions. We wonder how some of this facilitation could be embedded in the design of the agent's mode of interaction. If the agent could let the children know why it cannot answer a question, differentiating between not understanding the question and not having access to specific information, this would help the users decide how to change their question, either by rephrasing it or by being more specific. Another issue we recognized was that sometimes the amount of information provided to the participants was overwhelming. The agent's answers could be scaffolded to provide information gradually. This would enable children to decide how much they want to know about a specific topic and become more engaged by having a conversation with the agent.

Conclusion
This research raises the following question: if these agents are becoming embedded in our lives, how could they influence children's perception of intelligence and the way they make sense of the world? During our study, the participants believed that they could teach the agents and that they could learn from them. This leads us to imagine novel ways in which the agents could become learning companions. In future work, we hope to design interactions where children are able to tinker with and program the agents, and thus expand their perception of their own intelligence and the different ways to develop it.
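As a concrete illustration of the "Facilitating understanding" consideration, the following minimal sketch shows a response policy that tells the child why the agent cannot answer (recognition failure vs. missing information) and scaffolds answers so information arrives gradually. All names and canned answers here are hypothetical and for illustration only; none come from the agents in the study.

```python
# Hypothetical sketch of the proposed facilitation behavior.
# KNOWN_ANSWERS, parse, and respond are invented for illustration.

KNOWN_ANSWERS = {
    "what do sloths eat": [
        "Sloths mostly eat leaves.",
        "They also eat twigs and fruit, and they digest food very slowly.",
    ],
}

def parse(utterance):
    """Stand-in for speech recognition: returns None for an empty or
    unrecognizable utterance (a recognition failure)."""
    text = utterance.strip().lower().rstrip("?")
    return text or None

def respond(utterance, depth=0):
    """Reply with either an explanation of *why* the agent cannot answer,
    or the next scaffolded chunk of a known answer."""
    query = parse(utterance)
    if query is None:
        # Recognition failed: invite a rephrase instead of a generic error.
        return "Sorry, I didn't understand. Could you say that again?"
    chunks = KNOWN_ANSWERS.get(query)
    if chunks is None:
        # Understood but no data: the child knows rephrasing won't help.
        return "I understood your question, but I don't have that information."
    if depth < len(chunks):
        reply = chunks[depth]
        if depth + 1 < len(chunks):
            reply += " Want to hear more?"  # let the child control the depth
        return reply
    return "That's everything I know about that!"
```

For instance, `respond("What do sloths eat?")` returns the first chunk plus an offer to continue, while an unrecognized utterance yields an explanation of the failure rather than a generic error, matching the distinction proposed above.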
Acknowledgements
We would like to thank the students at the Media Lab who assisted with the study, our friends and mentors who reviewed the various drafts, the parents and children who participated in the study, and our colleagues who offered advice on this work. This research was supported by the National Science Foundation (NSF) under Grant CCF-1138986. Any opinions, findings and conclusions, or recommendations expressed in this paper are those of the authors and do not represent the views of the NSF.

REFERENCES
1. Christoph Bartneck, Dana Kulić, Elizabeth Croft, and Susana Zoghbi. 2009. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics 1, 1 (2009), 71–81.
2. Debra Bernstein and Kevin Crowley. 2008. Searching for signs of intelligent life: An investigation of young children's beliefs about robot intelligence. The Journal of the Learning Sciences 17, 2 (2008), 225–247.
3. Peter H. Kahn Jr., Batya Friedman, Deanne R. Perez-Granados, and Nathan G. Freier. 2006. Robotic pets in the lives of preschool children. Interaction Studies 7, 3 (2006), 405–436.
4. Hae Won Park, Rinat Rosenberg-Kima, Maor Rosenberg, Goren Gordon, and Cynthia Breazeal. 2017. Growing growth mindset with a social robot peer. In Proceedings of the 12th ACM/IEEE International Conference on Human-Robot Interaction. ACM.
5. Najmeh Sadoughi, André Pereira, Rishub Jain, Iolanda Leite, and Jill Fain Lehman. 2017. Creating prosodic synchrony for a robot co-player in a speech-controlled game for children. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 91–99.
6. Fumihide Tanaka, Aaron Cicourel, and Javier R. Movellan. 2007. Socialization between toddlers and robots at an early childhood education center. Proceedings of the National Academy of Sciences 104, 46 (2007), 17954–17958.
7. Sherry Turkle. 2005. The Second Self: Computers and the Human Spirit. MIT Press.
8. Henry M. Wellman, David Cross, and Julanne Watson. 2001. Meta-analysis of theory-of-mind development: The truth about false belief. Child Development 72, 3 (2001), 655–684.
