2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
DOI: 10.1109/hri.2016.7451741
Trust calibration within a human-robot team: Comparing automatically generated explanations

Cited by 191 publications (165 citation statements)
References 33 publications
“…A majority of prior work has focussed on performance-related factors, particularly robot capabilities; for example, Soh et al [9] examined the dynamics and transfer of trust in robot capabilities across tasks, where transfer is the ability to employ knowledge acquired in one task to improve performance in another [24]. Recent work has explored the role of the robot's intention, e.g., its policy [14], [25] and decision-making process [26]. This work adds to this body of literature and considers both intent and capability across tasks.…”
Section: B. Trust in Automation and Robots
confidence: 99%
“…When humans have an accurate mental model of a robot, their subsequent interactions with this robot are safer and more seamless. This mental model may include the robot's intentions [1], [2], [3], its objectives [4], its capabilities [5], [6], or its decision-making process [7].…”
Section: Introduction
confidence: 99%
“…Unlike prevailing approaches (e.g., [17,37]), a key benefit of these trust models is that they are able to leverage intertask structure and can be applied in situations where the agent (our robot) performs actions across many different tasks. As predictive models, they can be operationalized in decisiontheoretic frameworks to calibrate trust during collaboration with human teammates [5,36,23,13]. Trust calibration is crucial for preventing over-trust that engenders unwarranted reliance in robots [28,30], and under-trust that can cause poor utilization [18].…”
Section: Introduction
confidence: 99%
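The last excerpt describes predictive trust models that are updated as a robot performs tasks and then used to calibrate human trust. As a minimal illustrative sketch (not the method of the cited paper), trust in a robot's capability can be modeled as a Beta distribution over its success probability and updated after each observed outcome; all names and the update rule here are assumptions for illustration.

```python
class BetaTrustModel:
    """Illustrative Beta-Bernoulli model of trust in a robot's capability.

    Trust is summarized as the posterior mean probability that the robot
    succeeds at its next task, given observed successes and failures.
    """

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha = alpha  # pseudo-count of observed successes (prior)
        self.beta = beta    # pseudo-count of observed failures (prior)

    def update(self, success: bool) -> None:
        # Conjugate update: each outcome increments the matching pseudo-count.
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def trust(self) -> float:
        # Posterior mean of the success probability.
        return self.alpha / (self.alpha + self.beta)


model = BetaTrustModel()
for outcome in [True, True, False, True]:  # 3 successes, 1 failure
    model.update(outcome)
print(round(model.trust(), 2))  # (1+3)/(2+4) ≈ 0.67
```

A calibrated teammate would rely on the robot roughly in proportion to this estimate, which is the over-/under-trust distinction the excerpt draws: reliance far above the estimate is over-trust, far below it is poor utilization.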