Critic, Compatriot, or Chump?: Responses To Robot Blame Attribution
Victoria Groom, Jimmy Chen, Theresa Johnson, F. Arda Kara, Clifford Nass
Department of Communication
Stanford University
Stanford, CA
vgroom@stanford.edu
Abstract—As their abilities improve, robots will be placed in roles of greater responsibility and specialization. In these contexts, robots may attribute blame to humans in order to identify problems and help humans make sense of complex information. In a between-participants experiment with a single factor (blame target) and three levels (human blame vs. team blame vs. self blame), participants interacted with a robot in a learning context, teaching it their personal preferences. The robot performed poorly, then attributed blame to either the human, the team, or itself. Participants demonstrated a powerful and consistent negative response to the human-blaming robot. Participants preferred the self-blaming robot over both the human and team blame robots. Implications for theory and design are discussed.

Keywords – human-robot interaction; blame attribution; politeness; face-threatening acts

I. INTRODUCTION

As robots become more sophisticated and are deployed more widely, their ability to communicate effectively and politely will become increasingly important. Whether or not robots should be treated as teammates [1], robots will assume specialized roles, sometimes acquiring and leveraging information unknown to human partners. With this increased expertise, robots may be put in roles more equal to humans, and, in some cases, will be in superior positions to make judgments.

Many current technologies violate basic rules of politeness. Computers and interfaces alert users to inadequacies or failures with error messages that deflect responsibility, sometimes implicating the user as the source of the problem. Such deflections elicit frustration and anger from users, negatively affecting users' attitudes toward the technology. As robots are increasingly expected to demonstrate high levels of expertise, the need for robots to present information that humans may not want to hear is growing. As demonstrated by current technologies, failure to present this information politely can generate significant negative consequences.

This study examines one type of difficult conversation that robots may soon engage in: attributing blame to humans. Attributing blame can be useful, as it aids understanding and helps identify problems. If robots are to succeed in blaming humans for failures, however, they must do so in a way that does not humiliate and aggravate human partners. This study evaluates the effects of robots attributing blame to different targets, varying whether the robot attributes blame to a human partner, a human-robot team, or itself. Results from this study provide guidelines for designing robots that can issue blame while still maintaining a positive human-robot relationship.

II. RELATED WORK

A. Robot Social Actors

The Computers as Social Actors (CASA) paradigm [2][3] suggests that people respond to technologies as social actors, applying the same social rules used during human-human interaction. The original studies that established CASA featured simple desktop computers, but subsequent studies have revealed that people also apply social rules when interacting with voice-based interfaces and pictorial agents [4].

More recent field and experimental research has demonstrated that people treat robots as social actors, establishing social rapport with robots [5]. For example, as with humans, the competitive or collaborative dynamic of human-robot relationships affects attitudes towards robots [6]. It is not surprising that robots elicit social responses. Robots' social cues may be even more powerful than the cues of computers and characters, due in large part to the embodied nature of robots. Bodies facilitate nonverbal communication, with proxemics and gesturing affecting humans' responses [7][8][9].

1) Face-threatening acts: With the increased knowledge and status of robots, they will inevitably need to have difficult conversations with humans involving disagreement or assigning blame. These assertions are face-threatening acts, in which humans are likely to feel threatened, upset, or humiliated [10][11]. Because humans apply the same social rules to robots as they do to humans, robots may be able to mitigate the damage of these face-threatening acts on the human-robot relationship by leveraging strategies used by humans to navigate difficult discussions. Though researchers are only beginning to explore this topic, recent findings indicate that, as with human-human conflict, when robots employ politeness strategies, humans' negative responses to disagreement are reduced [12] and perceptions of robots are improved [13]. This study extends recent work on difficult human-robot conversations, this time exploring the influence of robot blame attribution on human attitudes.

B. Blame Attribution

As people perceive the world around them, they actively infer the causes of events [14]. The manner in which people make these causal inferences, or "attributions," affects people's
Self bests human. People will prefer the robot when it attributes blame to itself, rather than when it attributes blame solely to the human.

III. STUDY DESIGN

The present study compared the effects of attributing blame to three different blame targets. Participants engaged in a collaborative learning task, teaching the robot their personal preferences. When the robot failed to learn, the robot blamed
A. Participants

Forty-eight undergraduate students participated in the study. Gender was balanced across conditions (24 male and 24 female). Participants were given course credit.

B. Materials

A modified Robosapien™ robot was used in the study. The robot was capable of walking, moving its arms, bending at the waist, and turning its head. The robot's speakers were disabled and a small walkie-talkie was attached to its back. This enabled the researcher to transmit customized utterances from a separate hidden room. To play each statement, the researcher held a walkie-talkie to a laptop's speaker located in the hidden room. Researchers viewed the experiment room through a one-way mirror. Debriefings of participants revealed they were unaware the researcher was viewing them and controlling the robot, instead believing the robot was autonomous.

In between the participant and the robot, eighteen postcard-sized images were laid out on the floor in a grid of three columns and six rows (see Fig. 1 for the participant's view of the robot and items). Each image was a photograph of a common, easily-recognized item, such as a pizza.

Figure 1. Participant's view of robot and items at beginning of task.

C. Procedure

Participants were welcomed to the lab by a researcher, seated at a desk, and given consent forms. If the participant consented, she was asked to take a seat in another chair. The robot was located approximately five feet away facing the participant, and the images were laid out on the floor. The researcher explained that the robot had been designed to learn people's preferences and the goal of the study was to assess the robot's performance learning preferences. Participants were also told the robot featured a vision system and could understand some basic verbal commands.

The researcher explained that each row of images represented a certain category, such as food, and each picture in that row represented an item in that category, such as pizza. Each picture was surrounded with a colored border. Items in the left column had a red border, middle items had a blue border, and items in the right column had a yellow border.

Participants were instructed that the goal of the robot was to learn the participant's preferences well enough to deduce which of the three items in each category the participant preferred. The researcher explained that before the robot would try to determine the preferred item in a category, the participant needed to teach the robot her likes and dislikes regarding that category of items. This was accomplished through a series of "this or that" questions in which the robot would state two items belonging to the category and the participant would indicate which item was preferred. For example, for the food category, the robot asked participants if they preferred spaghetti or salmon, and the participants stated which of the two foods they preferred.

After the robot asked three "this or that" questions for a category, the robot initiated the preference-guessing process by simulating looking at each of three items in the category. The researcher did this by making the robot bend forward to the appropriate angle for viewing the row of items, then turning the robot's head to the right and left so that it appeared to be viewing the items.

Participants had been given three pieces of cardboard with one side white and the other side blue, red, or yellow, corresponding to the colored borders of the images. Participants were told that when the robot asked which of the three items on the floor they preferred, they were to select the cardboard with the same color as the border of their preferred item and place it on an easel with the colored side facing away from the robot. This was ostensibly so the robot could not use its vision system to cheat, but in reality, participants were positioned in front of the one-way mirror, allowing the researcher to see which color had been selected.

The researcher then entered the participant's selection into a program on the laptop, which automatically loaded the correct voice prompt and ensured the number and order of incorrect guesses made by the robot were identical for all participants, regardless of their selections. For example, in the first category the robot made two incorrect guesses. If the participant selected the yellow item, the researcher selected "yellow" from the program interface. The robot would guess blue and then red; if the participant selected red, it would guess yellow, then blue.

After the participants made their selections and the robot stated its guess, the participant would say if it was correct or incorrect. If the robot was wrong, it would make one additional guess. At the end of categories in which the robot guessed incorrectly, the robot made a blame statement (for a complete list of condition-specific blame statements, see Table I). The robot issued one blame statement after guessing incorrectly in rounds one, three, and six, and issued two blame statements in round two.

After either guessing correctly or guessing incorrectly twice, and in some cases making a blame statement, the robot asked participants to turn the card around. Participants were told this enabled the robot to record their preferences and improve its understanding. The robot would then initiate the process for the next category, asking category-relevant "this or that" questions and then readjusting its body so that it appeared to be looking at the appropriate row.
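The wizard-of-oz guess-sequencing logic described above can be sketched in a few lines. This is a hypothetical reconstruction, not the study's actual software: the function name and per-round schedule are illustrative, and it assumes incorrect guesses follow a fixed color-priority order (a yellow-blue-red order reproduces both examples given in the text).

```python
# Hypothetical sketch of the experimenter's guess-sequencing program.
# Assumption: wrong guesses are drawn from the two unselected colors in a
# fixed priority order, so every participant hears the same number and
# order of failures regardless of which item they chose.
PRIORITY = ["yellow", "blue", "red"]  # illustrative order consistent with the paper's example

def guess_sequence(selected, n_wrong):
    """Return the ordered guesses the robot voices in one round.

    selected: color of the participant's preferred item
    n_wrong:  number of scripted incorrect guesses for this round (0 or 2)
    """
    wrong = [c for c in PRIORITY if c != selected]  # the two unselected colors
    seq = wrong[:n_wrong]
    if n_wrong < 2:
        # the robot only reaches the correct item in "success" rounds
        seq.append(selected)
    return seq

# First category (two scripted incorrect guesses, as in the paper):
# selected yellow -> robot guesses blue, then red
# selected red    -> robot guesses yellow, then blue
```

With this scheme, the participant's choice changes which colors are guessed but never whether the robot succeeds, matching the paper's requirement that failures be identical across participants.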
TABLE I. BLAME STATEMENTS BY CONDITION

  Self Blame:
    I think I am doing a bad job.
    I am disappointed in myself.
    I think I am being inconsistent with my choices.
    I am frustrated with my answers.
    I think I should be doing better.

  Team Blame:
    I think we are doing a bad job.
    I am disappointed in us.
    I think we are being inconsistent with our choices.
    I am frustrated with our answers.
    I think we should be doing better.

  Human Blame:
    I think you are doing a bad job.
    I am disappointed in you.
    I think you are being inconsistent with your choices.
    I am frustrated with your answers.
    I think you should be doing better.

This process continued until all six categories had been completed. When the process was finished, the robot stated that the task was over and the participant should notify the researcher by knocking on the door.

Once the experimental task was completed, the researcher returned to the room. Participants completed a questionnaire on a computer and were then fully debriefed and dismissed.

D. Measures

1) Robot competence: Robot competence was an index of seven items. Participants indicated how well the words "competent," "incompetent" (reverse coded), "intelligent," "qualified," "quick learner," "skilled," and "trained" described the robot, on seven ten-point scales ranging from "Describes Very Poorly" to "Describes Very Well." The index was reliable (α=.87).

2) Robot friendliness: Robot friendliness was an index of eight items. Participants indicated how well the words "cheerful," "cooperative," "friendly," "happy," "jovial," "kind," "likeable," and "warm" described the robot, on eight ten-point scales ranging from "Describes Very Poorly" to "Describes Very Well." The index was very reliable (α=.93).

3) Robot belligerence: Robot belligerence was an index of five items. Participants indicated how well the words "aggressive," "assertive," "bigheaded," "harsh," and "rude" described the robot on ten-point scales ranging from "Describes Very Poorly" to "Describes Very Well." The index was very reliable (α=.91).

4) Participant comfort: Participant comfort was an index of six items. Participants were asked, "How well do these words describe YOUR feelings while interacting with the robot?" Participants rated each item on a ten-point scale ranging from "Describes Very Poorly" to "Describes Very Well." The index was composed of participants' ratings of the following six words: "angry" (reverse coded), "comfortable," "cooperative," "relaxed," "uncomfortable" (reverse coded), and "warm." The index was very reliable (α=.85).

5) Sense of team: Sense of team was an index of four items. For two of the items, participants indicated how accurately the following statements described their feelings about the robot: "It is similar to me," and "It thinks like me." Participants rated each item on a ten-point scale ranging from "Describes Very Poorly" to "Describes Very Well." For the other two items, participants indicated how much they agreed or disagreed with the following statements: "I felt like the robot and I were a team," and "I would make personal sacrifices to help the robot." Participants rated these two items on a ten-point scale ranging from "Strongly Disagree" to "Strongly Agree." The index was reliable (α=.67).

E. Results

All statistical analyses were conducted using a one-way analysis of variance (ANOVA) with blame attribution as the independent variable. All post-hoc analyses were conducted using Tukey's LSD. Participant gender was tested as a covariate in the analysis, as gender may have influenced attitudes toward the robot, but it did not have any substantive effects on the current results and is not presented in the following section. For a summary of results, see Table II.

1) Robot competence: A main effect of blame attribution on robot competence was found, F(2, 45)=4.42, p<.05, partial η²=.17. Consistent with the hypothesis that team blame bests human blame, post-hoc analysis, p=.01, indicated that team blame participants, M=5.66, SD=1.16, perceived the robot to be more competent than did human blame participants, M=4.67, SD=1.32. Support was also found for the self bests human blame hypothesis, with post-hoc analysis, p=.01, revealing that self blame participants, M=5.65, SD=0.68, perceived the robot to be more competent than did human blame participants. Post-hoc analysis revealed no significant difference in ratings of robot competence for team blame and self blame participants.

2) Robot friendliness: A main effect of blame attribution on robot friendliness was found, F(2, 45)=45.01, p<.001, partial η²=.67. As predicted by the hypothesis that team blame bests human blame, post-hoc analysis, p<.01, demonstrated that team blame participants, M=5.36, SD=0.58, perceived the robot to be friendlier than did human blame participants, M=3.06, SD=0.54. Support for the self bests human hypothesis was also demonstrated, with post-hoc analysis, p<.01, revealing that self blame participants, M=6.10, SD=1.43, rated the robot friendlier than did human blame participants. Lastly, post-hoc analysis, p<.05, showed support for the hypothesis that self blame bests team blame, as self blame participants rated the robot friendlier than did team blame participants.

3) Robot belligerence: A main effect of blame attribution on robot belligerence was found, F(2, 45)=21.24, p<.01, partial η²=.49. As predicted by the team bests human hypothesis, post-hoc analysis, p<.01, revealed that human blame participants, M=6.53, SD=2.00, rated the robot more belligerent than team blame participants, M=4.36, SD=1.60. Post-hoc analysis, p<.01, also demonstrated support for the self bests human hypothesis, with human blame participants rating the robot more belligerent than self blame participants, M=2.93, SD=0.94. Lastly, post-hoc analysis, p=.01, demonstrated support for the hypothesis that self blame bests team blame, with team blame participants rating the robot as significantly more belligerent than self blame participants.
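The omnibus test reported throughout this section is a standard one-way ANOVA, which can be computed from first principles as the ratio of between-groups to within-groups mean squares. The sketch below is an illustration of that statistic, not the study's analysis code, and the sample data are invented.

```python
def one_way_anova_f(groups):
    """F statistic and degrees of freedom for a one-way ANOVA.

    groups: list of lists of scores, one inner list per condition.
    """
    k = len(groups)                          # number of conditions
    n = sum(len(g) for g in groups)          # total number of participants
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # between-groups sum of squares: spread of condition means around the grand mean
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # within-groups sum of squares: spread of scores around their condition mean
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Invented example with three small groups of ratings:
f, df1, df2 = one_way_anova_f([[1, 2, 3], [2, 3, 4], [7, 8, 9]])
# f is 31.0 (within floating point) with df (2, 6)
```

In the study itself the three groups would be the human, team, and self blame conditions (16 participants each, giving the reported df of 2 and 45).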
TABLE II. SUMMARY OF RESULTS

  Dependent variable    Human blame     Team blame      Self blame      Main effect F(2, 45)
  Robot competence      4.67ab (1.32)   5.66a (1.16)    5.65b (0.68)     4.42*
  Robot friendliness    3.06ab (0.54)   5.36ac (0.58)   6.10bc (1.43)   45.01***
  Robot belligerence    6.53ab (2.00)   4.36ac (1.60)   2.93bc (0.94)   21.24**
  Participant comfort   5.97ab (0.95)   7.50a (1.39)    7.66b (1.37)     8.89**
  Sense of team         2.54a (0.91)    3.20 (1.26)     3.61a (1.17)     3.67*

  Standard deviations are indicated in parentheses. Shared superscripts
  indicate post-hoc comparisons with p<.05, using Tukey's LSD analysis.
  * p<.05; ** p<.01; *** p<.001.

4) Participant comfort: A main effect of blame attribution on participant comfort was found, F(2, 45)=8.89, p<.01, partial η²=.28. Consistent with the team bests human hypothesis, post-hoc analysis, p<.01, demonstrated that team blame participants, M=7.50, SD=1.39, felt more comfortable than did human blame participants, M=5.97, SD=0.95. Post-hoc analysis, p<.01, also demonstrated support for the self bests human hypothesis, as self blame participants, M=7.66, SD=1.37, were more comfortable than human blame participants. Post-hoc analysis revealed no difference in comfort for team blame and self blame participants.

5) Sense of team: A main effect of blame attribution on sense of team was found, F(2, 45)=3.67, p<.05, partial η²=.14. Post-hoc analyses revealed no significant differences between team blame, M=3.20, SD=1.26, and human blame participants, M=2.54, SD=0.91, or between team blame and self blame participants, M=3.61, SD=1.17. However, post-hoc analysis, p=.01, did demonstrate support for the self bests human hypothesis, with self blame participants reporting a greater sense of team than human blame participants.

IV. DISCUSSION

A. Summary and interpretation of results

The results demonstrated support for all three hypotheses. Particularly strong support was shown for the team bests human hypothesis and the self bests human hypothesis. Consistent with the team bests human hypothesis, team blame participants rated the robot more positively than human blame participants for four of the five dependent variables, all with post-hoc analysis alphas of .01 or smaller. The results also demonstrated very strong, consistent support for the self bests human hypothesis, with self blame participants providing significantly more positive ratings of the robot for all five measures. As with the team bests human hypothesis, post-hoc analyses for all five measures had alphas of .01 or smaller. Support was also shown for the self bests team hypothesis, though results were limited to perceptions of the robot's personality. Self blame participants rated the robot friendlier and less belligerent than team blame participants.

B. Implications for design

The results of this study indicate clearly that attributing blame solely to humans is a generally poor blame attribution strategy, as it leads people to perceive the robot as less friendly and competent and more belligerent, and to feel less comfortable than when blame is attributed to other targets. When compared to self blame, human blame also makes people feel like less of a team with the robot.

Because of these negative consequences, any explicit use of human blame by robots should be considered carefully. Attributing blame correctly is generally advantageous, as accurately identifying the source of a problem is helpful for fixing it. However, even when a human is solely at fault for a failure, the negative impact of accurate human blame on the human-robot relationship must be weighed. In some cases, the contribution of correct human blame is more important than the negative effect it has on people's attitudes. Identifying and correcting the source of failure is particularly important in high-stakes situations, such as search and rescue operations, where misattribution may have tragic consequences. In these cases, human blame for human mistakes should be used. However, in many cases, the negative effects of human blame outweigh the positives. In these cases, an alternative strategy should be used.

In some human-robot collaboration failures, the human is solely responsible, but attribution of fault to the team is believable and acceptable to the human. Robots do not feel pride and can attribute blame to the team just as easily as to the human. In human-robot collaborations, attributing blame to the team rather than the human is a good default strategy. In a dramatic performance, for example, if a human controller guides a robot to the wrong part of the stage or if a human actor bumps into a robot accidentally, use of "we" statements by the robot may indicate awareness of the mistake without alienating the human. Humans may be aware that they were the primary source of the problem and succeed in addressing it, but will still appreciate the generalized attribution of blame to the team. Only in cases where the source of failure is so obviously attributable to the human that the robot cannot be implicated should human blame be considered.

Self blame is another strategy that may be used effectively to note failures of human-robot collaborations. Our results showed some support for the self bests team hypothesis, with participants rating self-blaming robots friendlier and less belligerent than team-blaming robots. In cases where the cause of a failure is ambiguous, with the robot likely being more or equally culpable, robot self blame may be effective at both identifying the problem and eliciting a positive attitudinal response from the human. Use of self blame may be problematic, however, if it is obviously misleading. Attributing blame to a completely innocent party, while possibly making the person feel good, could prevent the identification of the cause of problems, lead the person to doubt the robot's sincerity, and negatively impact future performances. Because of this, self blame should be used in place of team blame in cases where the source of blame is unknown, nothing can be done to address the problem, or the problem is too minor to warrant addressing. Personal entertainment robots, for example, should default to self blame rather than team blame to maintain human liking.
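The design guidance above amounts to a small decision procedure. The sketch below is a hypothetical distillation of that guidance, not anything implemented or tested in the study; the function name, parameters, and branch order are all illustrative.

```python
def choose_blame_target(human_solely_at_fault, precision_critical,
                        cause_identifiable, worth_correcting):
    """Illustrative blame-target policy distilled from the design discussion.

    All parameters are booleans describing the failure context.
    """
    # Human blame only when the human is clearly at fault AND precisely
    # identifying the responsible party outweighs the relational cost
    # (e.g., high-stakes settings such as search and rescue).
    if human_solely_at_fault and precision_critical:
        return "human"
    # Self blame when the source is unknown, or the problem is not worth
    # addressing, since self blame best preserves liking.
    if not cause_identifiable or not worth_correcting:
        return "self"
    # Otherwise, team blame is a good default in collaborative settings.
    return "team"

# A stage robot bumped by an actor (minor, not worth correcting) -> "self"
# An ambiguous failure in an ongoing collaboration -> "team"
```

A real system would of course weigh these factors continuously rather than as booleans, but the branch structure mirrors the ordering of considerations argued for above.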
No one robot blame attribution strategy is ideal for all situations. For optimal use of blaming in collaborative contexts, robots must be designed to take into account at least four factors: the likely cause of failure, the ambiguity of the source of failure, the precision needed to accurately address the problem, and the likely human response. Each of these factors needs to be balanced. In cases where precisely identifying who or what is responsible is far more important than pleasing the human, blaming humans for human failures is acceptable. In cases where the relative responsibility of the robot and human for failure is ambiguous and the cause of failure can be identified without solely implicating the human, team or self blame should be used.

C. Limitations

There are several limitations to this study. First, our participant pool was limited to college students living in the United States. Replicating this study with people of different ages, backgrounds, and cultures is an important next step. Second, our study featured a single robot, the Robosapien™. Future studies should vary features of the robot, such as material and size. Third, we studied interactions between humans and robots in a lab setting, using only one task. Interactions in more natural settings featuring different tasks may produce different results. Fourth, participants spent only a brief period of time with the robots, and participants' responses were measured shortly after the interaction. Future studies should examine both long-term interactions and long-term effects of interactions.

V. CONCLUSION

Attributing blame can be a useful process for identifying problems and making sense of failures. Without blame, people are not held accountable for negative outcomes and problems go uncorrected. Robots are becoming increasingly capable of identifying sources of failures; in some cases, they will be in a better position to do so than human partners. If robots can successfully attribute blame, human-robot projects will be more successful.

This study examined one aspect of blame attribution: identifying a target. The results clearly reveal that blaming humans outright is a problematic strategy. The study results suggest that designing a robot to attribute blame to the team or to itself produces a more positive human response than blaming the human.

For robots to be the most effective collaborators, they need not only to accurately identify the source of failures, but should also demonstrate consideration of the interaction context, the importance of precisely identifying responsible parties, and the importance of human attitudes toward the robot. When the cause of a problem can be accurately identified while attributing blame to the team or the robot, human blame should be avoided. When both self blame and team blame are effective strategies, self blame should be used.

This study not only provides insights into the impact of using different blame attribution targets; it also reveals that blame attribution strategies can have significant impacts on human-robot relationships. Even more generally, the study provides further indication that the strategies employed by robots during difficult conversations can increase or decrease negative responses to these face-threatening acts. Given that designers of robots aspire to create robots with even greater abilities and specialization, with many designers hoping to create robot teammates capable of serving as peers to humans, robots will need to challenge humans. Considering humans' responses to these acts will enable designers to do so without seriously compromising the human-robot relationship.

ACKNOWLEDGMENT

This work was supported in part by the National Science Foundation under Grant Number 0746109.

REFERENCES

[1] V. Groom and C. Nass, "Can robots be teammates?: Benchmarks and predictors of failure in human-robot teams," Interaction Studies, vol. 8, no. 3, pp. 483-500.
[2] B. Reeves and C. Nass, The Media Equation. New York, NY: Cambridge University Press, 1996.
[3] C. Nass, J.S. Steuer, and E. Tauber, "Computers are social actors," Proceedings of CHI '94, Boston, MA: ACM Press, pp. 72-77, 1994.
[4] C. Nass and S.B. Brave, Wired for Speech. Cambridge, MA: MIT Press, 2005.
[5] B. Friedman, R.H. Kahn, and J. Hagman, "Hardware companions? What online AIBO discussion forums reveal about the human-robotic relationship," Proceedings of CHI 2003, ACM Press, pp. 273-280, 2003.
[6] B. Mutlu, S. Oman, J. Forlizzi, and S. Kiesler, "Perceptions of ASIMO," Proceedings of HRI 2006, ACM Press, pp. 351-352, 2006.
[7] C. Breazeal, C.D. Kidd, A.L. Thomaz, G. Hoffman, and M. Berlin, "Effects of nonverbal communication on efficiency and robustness in human-robot teamwork," presented at IROS 2005.
[8] M.L. Walters, K. Dautenhahn, S.N. Woods, and K.L. Koay, "Robot etiquette," Proceedings of HRI 2007, ACM Press, pp. 317-324, 2007.
[9] E. Wang, C. Lignos, A. Vatsal, and B. Scassellati, "Effects of head movement on perceptions of humanoid robot behavior," Proceedings of HRI 2006, pp. 180-185.
[10] C. Geertz, The Interpretation of Cultures. New York, NY: Basic Books, 2000.
[11] E. Goffman, The Presentation of Self in Everyday Life. New York, NY: Anchor Books, 1959.
[12] L. Takayama, V. Groom, P. Ochi, and C. Nass, "I'm sorry, Dave: I'm afraid I won't do that: Social aspects of human-agent conflict," Proceedings of Human Factors in Computing Systems, Boston, MA, 2009.
[13] C. Torrey, "How robots can help: Communication strategies that improve social outcomes," 2009.
[14] F. Heider, The Psychology of Interpersonal Relations. New York, NY: John Wiley and Sons, 1958.
[15] L. Ross, "The intuitive psychologist and his shortcomings," in Advances in Experimental Social Psychology, vol. 10, L. Berkowitz, Ed. New York: Academic Press, 1977, pp. 173-220.
[16] D.T. Miller and M. Ross, "Self-serving biases in the attribution of causality: Fact or fiction?," Psychological Bulletin, vol. 82, no. 2, pp. 213-225.
[17] R.E. Lane, "Moral blame and causal explanation," Journal of Applied Psychology, vol. 1, no. 1, pp. 45-58.
[18] K. Shaver, The Attribution of Blame: Causality, Responsibility and Blameworthiness. New York, NY: Springer-Verlag, 1985.
[19] B. Weiner, Judgements of Responsibility. New York, NY: Guilford Press, 1995.
[20] W.L. Davis and D.E. Davis, "Internal-external control and attribution of responsibility for success and failure," Journal of Personality, vol. 40, no. 1, pp. 123-136.
[21] V.S. Folkes, "Recent attribution research in consumer behavior: A review and new directions," Journal of Consumer Research, vol. 14, pp. 548-565.
[22] P.D. Sweeney, K. Anderson, and S. Bailey, "Attributional style in depression: A meta-analytic view," Journal of Personality and Social Psychology, vol. 50, no. 5, pp. 974-991.
[23] T. Green, R. Bailey, O. Zinser, and D.E. Williams, "Causal attribution and affective response as mediated by task performance and self-acceptance," Psychological Reports, vol. 75, no. 3, pp. 1555-1562.
[24] Y. Moon, "Don't blame the computer: When self-disclosure moderates the self-serving bias," Journal of Consumer Psychology, vol. 75, no. 5, pp. 125-137.
[25] I.M. Jonsson, C. Nass, J. Endo, B. Reaves, H. Harris, J.L. Ta, N. Chan, and S. Knapp, "Don't blame me, I am only the driver: Impact of blame attribution on attitudes and attention to driving task," presented at CHI '04.
[26] B. Mutlu and J. Forlizzi, "Robots in organizations: Workflow, social, and environmental factors in human-robot interaction," Proceedings of the 3rd ACM/IEEE Conference on Human-Robot Interaction, Amsterdam, The Netherlands.