2015
DOI: 10.1007/978-3-319-24586-7_10

Improving Trust-Guided Behavior Adaptation Using Operator Feedback

Abstract: It is important for robots to be trusted by their human teammates so that they are used to their full potential. This paper focuses on robots that can estimate their own trustworthiness based on their performance and adapt their behavior to engender trust. Ideally, a robot can receive feedback about its performance from teammates. However, that feedback can be sporadic or non-existent (e.g., if teammates are busy with their own duties), or come in a variety of forms (e.g., different teammates using different v…

Cited by 8 publications (4 citation statements)
References 17 publications
“…This is the opposite of a number of other conceptions of HRI where the switch from autonomous to manual mode results from a loss of trust in the robot (e.g. the inverse trust metric in [22] that a robot uses to adapt behavior to increase the operator's trust). Also note that here we focus on the high-level switching strategies; the low-level continuous robot motion execution under either manual or autonomous motion planning is still automatic.…”
mentioning
confidence: 71%
“…However, it may be impossible to elicit a complete set of rules for trustworthy behaviors if the robot needs to handle changes in teammates, environments, or mission contexts. Therefore, this work developed a trust model and a case-based reasoning framework to enable a robot to determine when to adapt its behavior to be more trustworthy for the team [126]. Behaviors that the robot itself can directly control were defined as the modifiable component, e.g., speed, obstacle padding, scan time, scan distance, etc.…”
Section: A Performance-centric Algebraic Trust Models
mentioning
confidence: 99%
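The adaptation mechanism this statement describes — a behavior defined by directly controllable parameters (speed, obstacle padding, scan time, scan distance) plus case-based reasoning over past adaptations — can be sketched as follows. This is a minimal illustration only; the class names, the L1 similarity metric, and the trust threshold are assumptions for exposition, not the implementation from [126]:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Behavior:
    """The 'modifiable component': parameters the robot directly controls."""
    speed: float             # m/s
    obstacle_padding: float  # clearance kept around obstacles, in m
    scan_time: float         # seconds spent scanning at each waypoint
    scan_distance: float     # meters traveled between scans

@dataclass
class Case:
    """A stored episode: the behavior in use and the adaptation that
    later proved more trustworthy."""
    behavior: Behavior
    adaptation: Behavior

def distance(a: Behavior, b: Behavior) -> float:
    """L1 distance over the modifiable parameters (an assumed metric)."""
    return (abs(a.speed - b.speed)
            + abs(a.obstacle_padding - b.obstacle_padding)
            + abs(a.scan_time - b.scan_time)
            + abs(a.scan_distance - b.scan_distance))

def adapt(current: Behavior, trust: float, cases: list[Case],
          threshold: float = 0.5) -> Behavior:
    """When self-estimated trust falls below the threshold, reuse the
    adaptation from the most similar stored case; otherwise keep the
    current behavior."""
    if trust >= threshold or not cases:
        return current
    nearest = min(cases, key=lambda c: distance(current, c.behavior))
    return nearest.adaptation
```

A robot running this loop would keep its behavior while trust stays high and fall back on the nearest prior case only when its self-estimate drops, which matches the "determine when to adapt" framing of the citation.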
“…Overall, performance-centric algebraic trust models offer an efficient trust evaluation mechanism based on observational evidence. They are created based on selected human-robot trust-impacting factors, which include robot performance as the major component [45], [109], [126]. Due to this performance-centric perspective, these works usually mix trust with trustworthiness, which is inaccurate per Remark 1 in Section II.A.…”
Section: A Performance-centric Algebraic Trust Models
mentioning
confidence: 99%
“…In situations where the agent believes its behavior is untrustworthy, it can modify its behavior in an attempt to learn (and apply) a more trustworthy behavior, thus implementing a form of adaptive autonomy. Preliminary studies in limited simulations have shown that an agent using an inverse trust method can successfully adapt its behavior given implicit feedback [15], and can benefit further from explicit feedback [16] as well as the ability to generate explanations when it modifies its behaviors [14].…”
Section: Adaptive Autonomy and Inverse Trust
mentioning
confidence: 99%
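The inverse trust idea in this last statement — an agent maintaining a running estimate of how much its operator trusts it, updated from implicit feedback (task outcomes, operator interventions) and explicit feedback (ratings), and adapting when the estimate is low — can be sketched as below. The update weights, the blending rule for explicit feedback, and the adaptation threshold are illustrative assumptions, not the formulations from [14]–[16]:

```python
class InverseTrustEstimator:
    """Toy self-estimate of operator trust, maintained by the agent itself.
    All numeric weights are assumptions chosen for illustration."""

    def __init__(self, initial: float = 0.5):
        self.trust = initial

    def _clamp(self) -> None:
        self.trust = max(0.0, min(1.0, self.trust))

    def observe_task(self, succeeded: bool) -> None:
        """Implicit feedback: task outcomes nudge the estimate slightly,
        with failures penalized more than successes are rewarded."""
        self.trust += 0.05 if succeeded else -0.10
        self._clamp()

    def observe_intervention(self) -> None:
        """Implicit feedback: the operator seizing manual control is
        treated as strong evidence of lost trust."""
        self.trust -= 0.20
        self._clamp()

    def observe_explicit(self, rating: float) -> None:
        """Explicit feedback: blend in an operator rating in [0, 1],
        weighted more heavily than implicit signals."""
        self.trust = 0.5 * self.trust + 0.5 * rating
        self._clamp()

    def should_adapt(self, threshold: float = 0.4) -> bool:
        """The agent modifies its behavior when it believes its current
        behavior is no longer trusted."""
        return self.trust < threshold
```

In this sketch the agent can operate on implicit signals alone, consistent with the citation's point that explicit feedback is a benefit on top of, not a prerequisite for, inverse trust adaptation.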