Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction 2020
DOI: 10.1145/3319502.3374793
Taxonomy of Trust-Relevant Failures and Mitigation Strategies

Abstract: We develop a taxonomy that categorizes HRI failure types and their impact on trust to structure the broad range of knowledge contributions. We further identify research gaps in order to support fellow researchers in the development of trustworthy robots. Studying trust repair in HRI has only recently been given more interest and we propose a taxonomy of potential trust violations and suitable repair strategies to support researchers during the development of interaction scenarios. The taxonomy distinguishes fo…

Cited by 57 publications (31 citation statements)
References 59 publications
“…Moreover, people are more likely to trust and rely on an automated decision-support system when given an explanation why the decision aid might err [6], or when they inferred such explanations after observing system behaviour themselves [60]. The effectiveness of a trust repair strategy seems to depend on situational factors such as timing [46], violation type [50,54] and agent type [19]. Research on the effect of timing suggests that apologies for a costly act were only effective when performed not immediately after the violation occurred, but rather when a new opportunity for deciding whether to trust the robot arose [46].…”
Section: Non-human Apology
confidence: 99%
“…Share failures and efforts to relieve them between stakeholders in the HRE in a way that optimizes their ability to understand the situation and contribute to solutions (e.g., by including all relevant information, by preventing negative emotional responses like panic or stress, etc.) Engelhardt et al (2017), Honig and Oron-Gilad (2018), Nayyar and Wagner (2018), Sebo et al (2019), Banerjee et al (2020), Cameron et al (2020), Choi et al (2020), Kontogiorgos et al (2020), Tolmeijer et al (2020), Washburn et al (2020) Purpose: to prevent decompensation, to prevent working at cross purposes, and to empower decentralized initiative.…”
Section: Preparing For Unexpected Robot Failures That Challenge the Ecosystem
confidence: 99%
“…It becomes a central issue for the Principles Layer to assess how much to build trust and how much to stay within legal/regulatory bounds. Doing what the owner wants the robot to do can build trust, whereas refusing to do something because of minor illegality erodes trust very quickly [237].…”
Section: Home Assistant Robot
confidence: 99%