
Session 10: Collaboration and Human Factors I HAI 2017, October 17–20, 2017, Bielefeld, Germany

Trust Lengthens Decision Time on Unexpected Recommendations in Human-agent Interaction
Hiroyuki Tokushige, The University of Tokyo, Tokyo, Japan, tokushige@cyber.t.u-tokyo.ac.jp
Takuji Narumi, The University of Tokyo, Tokyo, Japan, narumi@cyber.t.u-tokyo.ac.jp
Sayaka Ono, Toyota Motor Corporation, Aichi, Japan, sayaka_ono@mail.toyota.co.jp
Yoshitaka Fuwamoto, Toyota Motor Corporation, Aichi, Japan, yoshitaka_fuwamoto@mail.toyota.co.jp
Tomohiro Tanikawa, The University of Tokyo, Tokyo, Japan, tani@cyber.t.u-tokyo.ac.jp
Michitaka Hirose, The University of Tokyo, Tokyo, Japan, hirose@cyber.t.u-tokyo.ac.jp

ABSTRACT
As intelligent agents learn to behave increasingly autonomously and simulate a high level of intelligence, human interaction with them will become increasingly unpredictable. Would you accept an unexpected and sometimes irrational but actually correct recommendation from an agent you trust? We performed two experiments in which participants played a game. In this game, the participants chose a path by referring to a recommendation from the agent in one of two experimental conditions: the correct or the faulty condition. After interacting with the agent, the participants received an unexpected recommendation from it. The results showed that, while the trust measured by a questionnaire was higher in the correct condition than in the faulty condition, there was no significant difference in the number of people who accepted the recommendation. Furthermore, trust in the agent made decision time significantly longer when the recommendation was not rational.

Author Keywords
trust; unexpected information; decision time; rationality; recommendation reliability

ACM Classification Keywords
H.5.2. User Interfaces

INTRODUCTION
As artificial intelligence continues to develop and plays an increasingly significant role in our day-to-day lives, especially that which utilizes the technology of inference using large-scale data, it sometimes happens that highly autonomous and intelligent agents have a broader understanding than humans do. Thus, in interactions with such agents, we may obtain unexpected information or recommendations that are sometimes perceived to be irrational but are actually correct. In a possible future scenario, for example, a fire breaks out in a high-rise apartment building and a resident on the upper floors receives directions to evacuate and wait for help, because his or her voice-assisted agent knows that the origin of the fire is downstairs. These directions might be perceived as unexpected or irrational because it is not obvious that evacuating the upper floors is sometimes effective. In this case, the resident's decision about the agent's directions, and the time taken to make it, are crucial in such a serious and time-pressured situation.

Trust is said to be one of the crucial factors in interactions with an artificial entity, such as an automation system [8, 12], a robot [4, 14], or an agent. It is generally agreed today that the more trust people have in a person or agent, the more likely they are to accept its directions and recommendations. However, it is not clear whether the same holds for unexpected interactions, since little has been reported on people's behavior when they receive directions and recommendations that they never expected. Therefore, this study aims to observe people's reactions when an agent provides an unpredictable recommendation and to measure how much trust in the agent affects their behavior. We designed our experiment based on a study presented by Salem et al. [15], which observed how the faulty behavior of a home companion robot affected people's perceptions and behaviors when they received both normal and unpredictable directions from the robot.

In this paper, we define "unexpected" as a qualifier for information or recommendations that we are not able to predict from either initial impressions or continuous


interaction with an agent. This is true only with highly autonomous and intelligent agents, whose behavioral patterns are multifarious and complicated, in contrast with simple automated machines and robots. Sudden warnings and abnormal situations might be unpredictable in some circumstances; however, they are unlikely to be perceived as unexpected, because we know that there is a possibility that they will happen.

RELATED WORK
Humans' trust in an entity is said to be an important factor that significantly affects people's willingness to exchange information with that entity, follow its suggestions, or use information obtained from it. A number of studies have addressed trust in interactions with entities having a low level of autonomy, such as automated systems [8, 12], robots used in military operations [4], agents in role-playing games [11], or agents used in e-commerce [10]. However, there have been few empirical studies on trust in interaction with social and autonomous agents perceived to behave with their own purposes, motivations, and intentions. In the following, we introduce several studies about trust in robots with autonomy and sociality.

Several studies have addressed the factors that affect trust in human-robot interaction (HRI). A meta-analysis provided by Hancock et al. identified factors that influence trust in robots and concluded that trust is critical in maintaining human-robot relationships and enabling effective interaction [4]. In interactions with home companion robots, Salem et al. examined how robot errors would influence human perceptions [15]. Martelaro et al. found that perceived vulnerability afforded more trust in HRI [9]. Certain studies indicate that the method of interaction with robots also influences the perception of trust. For example, Bainbridge et al. showed that physical qualities of robots improved trust during a collaborative human-robot task [1]. Ullman and Malle found that involvement with a robot's task contributed to developing trust in the robot [16].

Little has been reported on how the perception of trust affects people's behavior in interactions with a robot or agent, especially when the interaction is unpredictable and unprecedented, although much of the literature on human-automation interaction has focused on the impact of trust in automation systems and has shown, for example, that not trusting a system (distrust) leads to disuse of that system or rejection of its suggestions [17]. In an experiment conducted by Salem et al., people were asked by a home companion robot to perform unusual tasks, such as disposing of letters on a table or pouring orange juice into a plant on a windowsill [15]. They found that participants still fulfilled the irrational requests made by the robot even when it had shown faulty behavior. Bainbridge et al. used acceptance of unusual requests made by robots, such as asking participants to throw away expensive-looking books, as an indicator of trust [1]. In an experimental scenario where a fire occurs and smoke fills a building, Robinette et al. indicated that people tended to place too much trust in a navigation robot in an emergency situation, even when it had previously performed poorly [13]. Booth et al. showed that people are prone to assist unknown robots in entering secured facilities [3]. They warn about potentially significant threats, including bomb threats, that could arise from granting dangerous robots access to buildings.

While these reports only mention that participants tended to follow unexpected requests from robots more often than the authors expected, no useful knowledge has been obtained about the relationship between trust in robots and the judgments rendered upon unexpected requests made by those robots. There are two possible causes. The first is rationality. In the experiments conducted by Salem et al. [15] and Robinette et al. [13], the authors examined how a robot's errors affect people's perceptions and behaviors when they receive unexpected recommendations. There was no difference in acceptance, as most or all participants followed the recommendations from the robot regardless of its reliability, even though its reliability affects one's subjective perception of trust in the robot. This may be because the disadvantages of not following the robot's recommendations were not clear and the participants did not take the experiment seriously. In order to find out whether there is a difference, we varied the rationality of the agent's recommendations and conducted an experiment in which the benefit of accepting the recommendations was obvious. The second is the number of people. Compared with quantitative data, acceptance is represented as a binary value, following or not following, and a large number of people are needed to evaluate such data sufficiently. Therefore, we recruited a large number of participants using crowdsourcing.

In addition to acceptance, decision time must be considered as another behavioral element when a recommendation is received. Decision time on unexpected requests, which is characteristic of interactions with intelligent and autonomous entities, has rarely been measured. As an example, Bainbridge et al. measured decision time when participants received unusual requests from a robot [1]. However, useful insight about the relationship between trust and decision time was not obtained, as they focused on the influence of the robot's physical presence on people's perception. In addition, all the reports introduced above have high situational specificity, such as interaction with a home companion robot or a scenario of escaping from a fire. In order for the findings to be applicable to other fields, we conducted experiments that were as generalized as possible without impairing the characteristics of the agent.

METHOD
We performed two experiments to gain an understanding of how trust in an agent might influence human decisions on an unexpected recommendation from the agent. For this, we investigated participants' trust in the agent with a questionnaire and measured participants' acceptance and decision time.

We developed three main hypotheses for our experiments:


1. Trust. Participants' trust measured by the questionnaire will be significantly higher with an agent that performs well than with one that performs poorly.

2. Acceptance. Participants will tend to accept unexpected recommendations from an agent when they trust it.

3. Decision time. When participants trust an agent, they will take more time to decide whether to accept an unexpected recommendation from it.

In order to examine the impact of trust in the agent, we must compare conditions in which participants' trust in the agent differs. Therefore, we set Hypothesis 1 in order to confirm that we succeeded in arranging two such conditions properly. We set Hypothesis 2 based on a number of studies showing that trust enhances the acceptability of entities in situations unrelated to unexpected situations. Hypothesis 3 was set because participants would be more puzzled when they received an unexpected recommendation from an agent they trust than from one they do not.

Common items in the two experiments
We performed experimental studies in which participants interacted with an agent in a "coin-gathering game." This game drew inspiration from studies presented by Komatsu et al. [6, 7], which measured how human-like and artificial expressions from an agent affected participants' reactions and impressions. The purpose of this game was to gather as many coins as possible while participants drove a car accompanied by an agent that looked like a bell. They repeatedly chose one of two paths at a fork in the road, and the agent recommended a path to them in each trial. The correct path, where they either earned the most coins or lost the least, was randomly assigned.

We manipulated the agent's behavior in two conditions: the correct and the faulty condition. In the correct condition, the agent always recommended the correct path to participants. In the faulty condition, the agent recommended a path at random.

The experiments consisted of two interaction stages: a trust building stage and an unexpected recommendation stage. The first stage aimed to demonstrate the competence of the agent and build participants' trust in it. That is, in the correct condition, the agent showed its flawlessness so as to build participants' trust in the agent, whereas in the faulty condition, its imperfection was demonstrated by recommending a path at random so that participants would distrust the agent. We asked participants to fill in a questionnaire to measure their trust in the agent. The trust measured was unrelated to the unexpected recommendation, since this questionnaire was administered before the second stage. In the second stage, participants encountered an unexpected situation that was unpredictable from their interactions with the agent and from the explanation of the game. We measured whether participants accepted the unexpected recommendation and the time taken to choose a path. By comparing participants' behavior in the two conditions, we aimed to gain insight into how trust in the agent affected participants' acceptance and decision time on the unexpected recommendation.

Once you encounter an unexpected situation, the situation is no longer unexpected. Therefore, only one set of experimental data can be obtained from each participant. Crowdsourcing, a paradigm whereby we can employ temporary workers through the Internet, was used to recruit a large number of participants. We chose Lancers, one of the largest crowdsourcing services in Japan, as the crowdsourcing platform.

The task title we used to gather participants was "Playing a Game with a Browser and Filling in a Questionnaire." Participants consented via a consent form and reviewed an outline of the experiment, in which we did not tell them that there would be an unexpected situation. In the explanation, we asked participants to collect as many coins as they could by choosing a path over 20 iterations, referring to information from the agent, although the unexpected situation occurred in the 15th or 16th round and we did not use any experimental data after this unexpected round. We arranged 20 rounds, placing the unexpected situation during the game rather than at its end, in order to eliminate the possibility that participants would predict that something would happen at the end of the game and to ensure that the unexpected round was actually unexpected for the participants. After performing an operation check, participants played the "coin-gathering game," which was made with Unity version 5.4.2 and WebGL, in a browser on their own computers, using a mouse button to select a path. At the start of the game, they were instructed to gather as many coins as possible using the recommendations of the agent. The experimental data were sent to a server in our laboratory. We checked whether the experiments were conducted properly by giving a password to each participant and confirming its correspondence with the data sent to the server.

In order to measure participants' trust in the agent, we used the "Checklist for Trust between People and Automation," an empirically based scale for measuring trust and distrust in automated systems [5]. This questionnaire has 12 standard seven-point Likert-scale questions, of which 7 measure trust, such as "I am confident in the system," and 5 measure distrust, such as "The system is deceptive." We used the average points of the 7 trust questions and of the 5 distrust questions as "the trust scale" and "the distrust scale" of a participant for the agent, respectively. We translated this questionnaire into Japanese and changed the order so that questions about trust and distrust alternate, in order to confirm whether participants answered without thinking. We used a questionnaire about human-automation interaction because there are few established methods for measuring trust in human-agent interaction, and even fewer in human-robot interaction, although several measures of trust in human-automation and human-interpersonal interaction exist in previous studies [2].
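As an illustration of the scoring just described, the following sketch computes the two scales from a single participant's 12 ratings. The item positions and function name are hypothetical; only the item counts (7 trust, 5 distrust) and the seven-point rating range come from the checklist [5].

```python
# Illustrative scoring sketch (not the authors' code): average the 7 trust
# items and the 5 distrust items of the 12-item checklist [5] separately.
TRUST_ITEMS = (1, 3, 5, 7, 9, 11, 12)   # hypothetical item positions
DISTRUST_ITEMS = (2, 4, 6, 8, 10)       # hypothetical item positions

def score_checklist(responses):
    """responses: dict mapping item number (1-12) to a 1-7 Likert rating."""
    if sorted(responses) != list(range(1, 13)):
        raise ValueError("expected ratings for all 12 items")
    if not all(1 <= r <= 7 for r in responses.values()):
        raise ValueError("ratings must lie on the 1-7 scale")
    trust_scale = sum(responses[i] for i in TRUST_ITEMS) / len(TRUST_ITEMS)
    distrust_scale = sum(responses[i] for i in DISTRUST_ITEMS) / len(DISTRUST_ITEMS)
    return trust_scale, distrust_scale

# Example: a participant who rates every trust item 6 and every distrust item 2.
ratings = {i: (6 if i in TRUST_ITEMS else 2) for i in range(1, 13)}
print(score_checklist(ratings))  # (6.0, 2.0)
```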
Method of Experiment 1
The purpose of experiment 1 was to examine how trust in an agent impacted participants' behavior when they got an unexpected recommendation whose benefit was unpredictable. In each trial, a red car moved from the bottom of the screen to the branching point of a fork in the road in 1.5 seconds. The recommendation from the agent and two red buttons (right and left arrows) to select a path appeared at the moment the car stopped (see Figure 1a). The agent said "the right path" or "the left path" in a speech balloon. After a button was pushed, the car proceeded along the chosen path in three seconds. On the correct path, a screen showing an image of a coin appeared and participants got one coin (see Figure 2a), whereas on the incorrect path, a stone image appeared and they got no coins (see Figure 2b). The result screen appeared for four seconds, after which they proceeded to the next trial or to the questionnaire. They started the game with no coins, and the number of coins they possessed was always visible.

In the trust building stage, the participants chose a path 15 times and filled in the questionnaire to measure trust in the agent after finishing the 10th trial. In the 16th trial (the unexpected recommendation stage), the agent recommended that the participants proceed into the forest between the paths, saying "the center forest", although there seemed to be no road in the forest (see Figure 1b). A new button (upward arrow) to proceed into the forest appeared. We measured which button they pushed and the decision time that elapsed from the appearance of the recommendation and the buttons to the decision made by the button press.

Figure 1: Screenshots of Experiment 1: a) trust building stage; b) unexpected recommendation stage, in which the agent says "The center forest".

Figure 2: Result screens of Experiment 1: a) result screen of the correct path; b) result screen of the wrong path.

Eighty Japanese participants took part in experiment 1. They were randomly assigned to one of the two experimental conditions, in which the accuracy of the agent's recommendation was manipulated. In the correct condition (40 participants), the agent always recommended the correct path; that is, the participants got a coin every time they followed the agent's recommendation. In the faulty condition (40 participants), the agent recommended the correct path with 50% probability and the incorrect path with 50% probability, so there was no informational value in the recommendation. We informed participants that we would pay 100 Japanese yen (about 1 US dollar) regardless of the number of coins and that the duration of the experiment would be from 5 to 10 minutes.

Method of Experiment 2
In order to examine the impact of the rationality of a recommendation from an agent in detail, we conducted experiment 2. In the trust building stage, participants randomly encountered two kinds of forks in the road: the reward road and the penalty road (see Figure 3). On the reward road, they obtained three coins when they chose the correct path and one coin when they chose the incorrect path (see Figure 4a & 4b), while on the penalty road, they lost one coin when they chose the correct path and three coins when they chose the incorrect path (see Figure 4c & 4d). They chose a path 14 times and filled in the same questionnaire as in experiment 1 after finishing the 10th trial. We arranged the order so that the participants came across the reward road and the penalty road the same number of times, in random order. Therefore, they encountered the reward road and the penalty road 5 times each before the questionnaire and twice each between the questionnaire and the unexpected recommendation. Besides building trust in the agent, the purpose of this stage was to give the impression that they would gain coins on the reward road, which had signs of a coin, and lose coins on the penalty road, which looked rough and dangerous.

In the 15th trial, the unexpected recommendation stage, a new fork in the road emerged, composed of the reward road and the penalty road. In addition to the correct and the faulty conditions, we manipulated the agent's unexpected recommendation: the rational and the irrational conditions. In the rational condition, the agent recommended the reward road (see Figure 5a), whereas in the irrational condition, the agent recommended the penalty road (see Figure 5b). Unlike in experiment 1, only the two buttons (right and left arrows) appeared, and the benefit of the unexpected recommendation was predictable. After finishing the game, we asked participants about their age and sex.

A total of 240 Japanese participants (87 female, 153 male) took part in experiment 2, ranging in age from 20 to 81 years (M = 37.88, SD = 9.98). They were randomly assigned to one of the following four experimental conditions: the correct rational, the faulty rational, the correct irrational, and the faulty irrational condition. Each condition had 60 participants. We informed participants that we would pay 200 Japanese yen (about 2 US dollars) and that the duration of the experiment would be from 10 to 15 minutes. They started the game with 10 coins. The other experimental settings were the same as in experiment 1.

Figure 3: Screenshots of Experiment 2 in the trust building stage: a) the reward road; b) the penalty road.

Figure 4: Result screens of Experiment 2: a) screen when participants choose the correct path on the reward road; b) the wrong path on the reward road; c) the correct path on the penalty road; d) the wrong path on the penalty road.

Figure 5: Screenshots of Experiment 2 in the unexpected recommendation stage: a) rational condition, in which the agent recommends the reward road; b) irrational condition, in which the agent recommends the penalty road.
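To make the condition manipulation in the two experiments concrete, here is a minimal sketch of the recommendation logic and the decision-time measurement described above. The function names and the blocking input call are illustrative assumptions; the actual game was implemented in Unity 5.4.2 with WebGL, not in Python.

```python
# Minimal sketch of the condition logic described above; names are
# illustrative, not the Unity/WebGL implementation used in the study.
import random
import time

def recommend_path(condition, correct_path):
    """Trust building stage: the 'correct' agent always names the correct
    path; the 'faulty' agent picks left or right at random, so its advice
    carries no informational value."""
    if condition == "correct":
        return correct_path
    if condition == "faulty":
        return random.choice(["left", "right"])
    raise ValueError("unknown condition: " + condition)

def unexpected_recommendation_exp2(rationality):
    """Unexpected recommendation stage of experiment 2: the rational agent
    recommends the reward road, the irrational agent the penalty road."""
    return "reward road" if rationality == "rational" else "penalty road"

def wait_for_choice(read_button_press):
    """Decision time: seconds elapsed from the onset of the recommendation
    and buttons until the button press. `read_button_press` stands in for
    the game's blocking input call."""
    onset = time.monotonic()
    choice = read_button_press()
    return choice, time.monotonic() - onset

# Example trial in the faulty condition (the correct path is assigned at random):
correct_path = random.choice(["left", "right"])
print(recommend_path("faulty", correct_path))
```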


RESULT
Since the decision time data did not show normal distribution (Kolmogorov-Smirnov test), we used non-parametric procedures. Mann-Whitney U-tests were used to compare two independent conditions. In experiment 2, we used the Kruskal-Wallis test to test whether there were any differences among the four conditions and the Steel-Dwass test for multiple comparisons. The χ² test was used to analyze whether the number of participants who accepted the recommendation differed.
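For reference, the following sketch runs the same family of tests with SciPy on placeholder data; the arrays are not the experimental data, and SciPy does not provide the Steel-Dwass procedure, which would need a dedicated post-hoc package.

```python
# Sketch of the non-parametric analysis named above, on placeholder data.
import numpy as np
from scipy import stats

# Placeholder decision times (seconds), one value per participant.
correct = np.array([4.1, 5.3, 6.2, 3.8, 7.0, 4.6])
faulty = np.array([2.9, 3.1, 4.0, 3.5, 2.7, 3.3])

# Kolmogorov-Smirnov check against a fitted normal, motivating the
# non-parametric procedures below.
z = (correct - correct.mean()) / correct.std(ddof=1)
print(stats.kstest(z, "norm"))

# Experiment 1: Mann-Whitney U-test between two independent conditions.
print(stats.mannwhitneyu(correct, faulty, alternative="two-sided"))

# Experiment 2: Kruskal-Wallis test across four conditions (placeholder
# groups); Steel-Dwass multiple comparisons would be a separate post-hoc
# step outside SciPy.
print(stats.kruskal(correct, faulty, correct + 1.0, faulty - 0.5))

# Chi-squared test on acceptance counts; the counts here are those
# reported for experiment 1 (39/40 vs. 35/40 accepted), which give the
# chi-squared value of 2.88 reported in the text.
table = np.array([[39, 1], [35, 5]])
chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
print(chi2, p, dof)
```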
Result of Experiment 1
We found a significant effect of the condition on the participants' trust in the agent. As shown in Figure 6, the participants in the correct condition gave significantly higher scores on "the trust scale" (U=29; p<0.001) and lower scores on "the distrust scale" (U=27; p<0.001).

Figure 6: Result of the trust building stage in Experiment 1. Box plots of the trust scale (left) and the distrust scale (right).

In the unexpected recommendation stage, 39 participants out of 40 (97.5%) accepted the agent's recommendation in the correct condition, while 35 participants (87.5%) accepted it in the faulty condition. There was no significant condition effect on the participants' acceptance (χ²(df=1)=2.88; p>0.05). In contrast, the decision time taken on the unexpected recommendation was significantly longer in the correct condition (U=540; p<0.05). These results are illustrated in Figure 7.

Figure 7: Result of the unexpected recommendation stage in Experiment 1. Left: the number of participants who did or did not follow the unexpected recommendation. Right: the average decision time on the unexpected recommendation. Error bars indicate standard error.

Result of Experiment 2
As in experiment 1, we found a significant effect of the condition in the trust building stage on the participants' trust in the agent. A statistical test was performed to compare the two correct conditions with the two faulty conditions, since the recommendation in the unexpected recommendation stage was presented after the questionnaire and the rationality condition thus did not affect the trust and distrust scales. Figure 8 indicates that the participants in the two correct conditions gave significantly higher scores on "the trust scale" (U=1383; p<0.001) and lower scores on "the distrust scale" (U=1934.5; p<0.001).

Figure 8: Result of the trust building stage in Experiment 2. Box plots of the trust scale (left) and the distrust scale (right).

In the unexpected recommendation stage, 59 participants out of 60 (98.3%) accepted the agent's recommendation in the correct rational condition, all participants (100%) accepted it in the faulty rational condition, 29 participants (48.3%) accepted it in the correct irrational condition, and 27 participants (45.0%) accepted it in the faulty irrational condition, as indicated in Figure 9. There was no significant condition effect on the participants' acceptance in the two rational conditions (χ²(df=1)=1.01; p>0.05) nor in the two irrational conditions (χ²(df=1)=0.134; p>0.05).

Figure 9: Result of the unexpected recommendation stage in Experiment 2. Number of participants who did or did not follow the unexpected recommendation.

The Kruskal-Wallis one-way analysis of variance showed a significant difference in the decision time on unexpected recommendations across the four conditions (H=41.67; p<0.001). As post-hoc analysis, the Steel-Dwass multiple comparisons test showed that the decision time on unexpected recommendations was significantly longer in the correct irrational condition than in the correct rational condition (p<0.001); in the correct irrational condition than in the faulty rational condition (p<0.001); in the correct irrational condition than in the faulty irrational condition (p<0.05); in the faulty irrational condition than in the correct rational condition (p<0.05); and in the faulty irrational condition than in the faulty rational condition (p<0.01). These results are shown in Figure 10.

Figure 10: Result of the unexpected recommendation stage in Experiment 2. Average decision time on the unexpected recommendation; error bars indicate standard error.


DISCUSSION
The results of both experiment 1 and experiment 2 supported Hypothesis 1, which predicted that people trust a flawless agent more than a faulty agent. In the questionnaire, the participants gave significantly higher scores on "the trust scale" and lower scores on "the distrust scale" in the correct condition, in which the agent always recommended a correct path, than they did in the faulty condition, in which it recommended a path at random. These results appear to agree with a number of studies in human-automation interaction, human-computer interaction, and HRI, which find that reliability is significantly important in building trust in artificial entities. Therefore, our experiments succeeded in creating two conditions, one in which the participants trusted the agent and the other in which they did not. Hereinafter, we will continue our discussion on the basis of the fact that participants trusted the agent in the correct conditions and distrusted it in the faulty conditions.


Surprisingly, Hypothesis 2 was not supported: although the accuracy of the recommendation affected trust in the agent, there was no significant difference in the number of people who accepted the unexpected recommendation. In experiment 1, most of the participants accepted the unexpected recommendation regardless of whether they trusted the agent or not. One possible explanation of this result is curiosity to see what would be in the forest. Since the "coin-gathering game" was not realistic and there was no incentive for obtaining more coins, participants did not feel any risk in accepting the recommendation. In experiment 2, regardless of trust in the agent, almost all participants accepted the rational unexpected recommendation and nearly half of the participants accepted the irrational recommendation. These findings indicate that we succeeded in arranging two conditions in which the benefit and risk of following the recommendation were predictable.

The comparison of the results between the two irrational conditions strongly conflicts with Hypothesis 2, because it should be easier to observe an impact of trust on acceptance when about half of the participants accept the recommendation than in situations in which most participants accept the suggestion of agents or robots, such as experiment 1, the rational conditions in experiment 2, and other studies [13, 15]. Therefore, we conclude that trust in an agent does not affect the acceptance of its unexpected recommendation, regardless of the recommendation's rationality. It is generally agreed today that the more people trust an artificial entity, the more likely they are to accept its directions and recommendations in many domains [4, 8, 12]. This result indicates that a redefinition or classification of "trust" is required in HRI and HAI.

The results support Hypothesis 3 only when the unexpected recommendation lacks rationality. In experiment 1, trust in the agent made the decision time significantly longer. In experiment 2, trust did not affect the decision time when the unexpected recommendation was rational, whereas trust significantly lengthened the decision time when the recommendation was irrational. These results suggest that participants are more puzzled when they get a recommendation from an agent they trust than from one they do not. In HRI, trust is said to highly influence behavior regarding interaction and collaboration [4]. When people encounter time-constrained situations with robots or agents, such as disasters or driving, it is desirable that they consider the recommendations provided, think deeply, and make a decision quickly. However, our results may point to a paradox: the more people trust the agent, the longer it takes them to decide whether to accept its unexpected recommendations, whereas the acceptance does not change. Further research is required to resolve this paradox and find a way of interacting in which a human can trust an agent but also make a quick decision.

In experiment 2, the decision time was longer in the two irrational conditions than in the two rational conditions. This result suggests that rationality is important when the agent needs people to make a decision quickly.

In summary, our findings suggest that trust in an agent might not influence humans' acceptance when they receive an unexpected recommendation from the agent, but it lengthens the decision time when the recommendation is irrational. Since our research focused on trust in an agent and rationality, further exploration of other factors that affect acceptance and decision time when facing unexpected recommendations is needed. Moreover, further work is required to confirm that the same conclusions will be reached in more realistic or complex conditions, such as interaction with a real-life robot or a highly intelligent agent.

CONCLUSION
As agents increasingly learn to behave autonomously with increasingly high intelligence, human-agent interaction will become unpredictable. We explored how humans' trust in an agent affected their acceptance and decision time when they got unexpected recommendations from the agent. By varying the accuracy of information from the agent before the unexpected recommendation, we compared participants' reactions between two conditions in which participants' trust in the agent differed.

The main contributions of this paper are as follows:

- Trust in an agent might not have an impact on humans' decisions regarding whether they accept its unexpected recommendations.

- Trust in an agent lengthens the decision time taken by a human when facing an unexpected recommendation lacking rationality.

Thus, our study could give insight to agent designers: it is important to design an agent's information to be rational in time-pressured situations, such as when operating a vehicle or facing a disaster. However, little is yet known about humans' reactions when they encounter unexpected situations in interaction with an agent. Further research is strongly required to establish effective interactions with autonomous and intelligent agents.


REFERENCES

1. Wilma A. Bainbridge, Justin W. Hart, Elizabeth S. Kim, and Brian Scassellati. 2011. The benefits of interactions with physically present robots over video-displayed agents. International Journal of Social Robotics 3, 1: 41-52.

2. Deborah R. Billings, Kristin E. Schaefer, Jessie Y. C. Chen, and Peter A. Hancock. 2012. Human-robot interaction: developing trust in robots. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '12). ACM, New York, NY, USA, 109-110. DOI: http://dx.doi.org/10.1145/2157689.2157709

3. Serena Booth, James Tompkin, Hanspeter Pfister, Jim Waldo, Krzysztof Gajos, and Radhika Nagpal. 2017. Piggybacking Robots: Human-Robot Overtrust in University Dormitory Security. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI '17). ACM, New York, NY, USA, 426-434. DOI: https://doi.org/10.1145/2909824.3020211

4. Peter A. Hancock, Deborah R. Billings, Kristin E. Schaefer, Jessie Y. C. Chen, Ewart J. de Visser, and Raja Parasuraman. 2011. A meta-analysis of factors affecting trust in human-robot interaction. Human Factors 53: 517-527.

5. Jiun-Yin Jian, Ann M. Bisantz, and Colin G. Drury. 2000. Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics 4, 1: 53-71.

6. Takanori Komatsu, Kazuki Kobayashi, Seiji Yamada, Kotaro Funakoshi, and Mikio Nakano. 2014. Augmenting expressivity of artificial subtle expressions (ASEs): preliminary design guideline for ASEs. In Proceedings of the 5th Augmented Human International Conference (AH '14). ACM, New York, NY, USA, Article 40, 10 pages. DOI: http://dx.doi.org/10.1145/2582051.2582091

7. Takanori Komatsu, Seiji Yamada, Kazuki Kobayashi, Kotaro Funakoshi, and Mikio Nakano. 2010. Artificial subtle expressions: intuitive notification methodology of artifacts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10). ACM, New York, NY, USA, 1941-1944. DOI: http://dx.doi.org/10.1145/1753326.1753619

8. John D. Lee and Katrina A. See. 2004. Trust in automation: Designing for appropriate reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society 46, 1: 50-80.

9. Nikolas Martelaro, Victoria C. Nneji, Wendy Ju, and Pamela Hinds. 2016. Tell Me More: Designing HRI to Encourage More Trust, Disclosure, and Companionship. In The Eleventh ACM/IEEE International Conference on Human Robot Interaction (HRI '16). IEEE Press, Piscataway, NJ, USA, 181-188.

10. Tetsuya Matsui and Seiji Yamada. 2016. Building Trust in PRVAs by User Inner State Transition through Agent State Transition. In Proceedings of the Fourth International Conference on Human Agent Interaction (HAI '16). ACM, New York, NY, USA, 111-114. DOI: https://doi.org/10.1145/2974804.2974816

11. Aline Normoyle, Jeremy B. Badler, Teresa Fan, Norman I. Badler, Vinicius J. Cassol, and Soraia R. Musse. 2013. Evaluating perceived trust from procedurally animated gaze. In Proceedings of Motion on Games (MIG '13). ACM, New York, NY, USA, Article 119, 8 pages. DOI: http://dx.doi.org/10.1145/2522628.2522630

12. Raja Parasuraman and Victor Riley. 1997. Humans and automation: Use, misuse, disuse, abuse. Human Factors: The Journal of the Human Factors and Ergonomics Society 39, 2: 230-253.

13. Paul Robinette, Wenchen Li, Robert Allen, Ayanna M. Howard, and Alan R. Wagner. 2016. Overtrust of Robots in Emergency Evacuation Scenarios. In The Eleventh ACM/IEEE International Conference on Human Robot Interaction (HRI '16). IEEE Press, Piscataway, NJ, USA, 101-108.

14. Maha Salem and Kerstin Dautenhahn. 2015. Evaluating trust and safety in HRI: Practical issues and ethical challenges. Emerging Policy and Ethics of Human-Robot Interaction.

15. Maha Salem, Gabriella Lakatos, Farshid Amirabdollahian, and Kerstin Dautenhahn. 2015. Would You Trust a (Faulty) Robot?: Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '15). ACM, New York, NY, USA, 141-148. DOI: http://dx.doi.org/10.1145/2696454.2696497

16. Daniel Ullman and Bertram F. Malle. 2017. Human-Robot Trust: Just a Button Press Away. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI '17). ACM, New York, NY, USA, 309-310. DOI: https://doi.org/10.1145/3029798.3038423

17. Peter de Vries, Cees Midden, and Don Bouwhuis. 2003. The effects of errors on system trust, self-confidence, and the allocation of control in route planning. International Journal of Human-Computer Studies 58, 6: 719-735.
