2020
DOI: 10.1145/3359616
Trust-Aware Decision Making for Human-Robot Collaboration

Abstract: Trust in autonomy is essential for effective human-robot collaboration and user adoption of autonomous systems such as robot assistants. This paper introduces a computational model which integrates trust into robot decision-making. Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the eff…
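The core idea in the abstract — treating human trust as a latent state and inferring it from interaction — can be illustrated with a small Bayesian filtering sketch. This is a hypothetical toy model, not the paper's learned POMDP: the trust levels, transition probabilities, and intervention likelihoods below are all assumed for illustration.

```python
# Toy sketch (hypothetical, not the paper's learned model) of the trust-POMDP
# idea: human trust is a discrete latent state, and the robot maintains a
# belief over it via Bayesian filtering on observed human behavior.
import numpy as np

N_TRUST = 5  # assumed discretization: trust levels 0 (low) .. 4 (high)

def trust_transition(success: bool) -> np.ndarray:
    """Assumed transition model P(trust' | trust, task outcome):
    robot success nudges trust up, failure nudges it down."""
    T = np.zeros((N_TRUST, N_TRUST))
    for t in range(N_TRUST):
        t_next = min(t + 1, N_TRUST - 1) if success else max(t - 1, 0)
        T[t, t_next] += 0.7  # likely shift one level
        T[t, t] += 0.3       # or stay put
    return T

def p_intervene(trust_level: int) -> float:
    """Assumed observation model P(human intervenes | trust):
    lower trust makes intervention more likely."""
    return 0.8 - 0.15 * trust_level

def belief_update(belief: np.ndarray, success: bool, intervened: bool) -> np.ndarray:
    """One step of Bayesian filtering over the latent trust state."""
    predicted = belief @ trust_transition(success)  # predict step
    likelihood = np.array(
        [p_intervene(t) if intervened else 1.0 - p_intervene(t)
         for t in range(N_TRUST)]
    )
    posterior = predicted * likelihood              # correct step
    return posterior / posterior.sum()

belief = np.full(N_TRUST, 1.0 / N_TRUST)  # uniform prior over trust
belief = belief_update(belief, success=True, intervened=False)
belief = belief_update(belief, success=True, intervened=False)
print(belief)  # belief mass shifts toward the higher trust levels
```

A trust-aware planner would then choose actions against this belief, e.g. avoiding risky autonomous actions while the belief concentrates on low trust levels.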

Cited by 87 publications (64 citation statements)
References 39 publications
“…These robots will be situated and embedded in technologically enriched environments, in which information will be exchanged ‘naturally’ between humans and robots, resulting in hybrid worlds where humans coexist in the digital and the real world [1]. In order to become a natural part in the daily life of humans in domestic, vocational, as well as public contexts, interaction and coexistence with several kinds of robots has to be experienced positively by humans while also being suited to our purposes, with the interaction being smooth and satisfying [2,3,4,5]. This means that humans should experience that a robot delivers according to existing explicit and implicit goals, which means that it performs efficiently, and in a way that makes humans feel trust, safety, and convenience while being together [5].…”
Section: Introduction
confidence: 99%
“…To handle this ambiguity in definitions, we take a utilitarian approach to defining trust for HRI: we adopt a trust definition that gives us practical benefits in terms of developing appropriate robot behavior using planning and control [26,27]. As we will see over the course of this paper, this choice of definition allows us to embed the notion of trust into formal computational frameworks, specifically probabilistic graphical models [28], which in turn allows us to leverage powerful computational techniques for estimation, inference, planning, and coordination.…”
Section: A Question Of Trust
confidence: 99%
“…A more general approach is to directly model the human's dynamic trust in the robot. Work in this area has focused on two problems: (i) estimating trust based on observations of the human's behavior [18,27,57-64] and (ii) utilizing the estimate of trust to guide robot behavior [27,58,65-69].…”
Section: Computational Trust Models
confidence: 99%
“…Trust is fundamental to effective collaboration between humans and robotic systems [39]. Trust has been studied by the human-robot interaction (HRI) community, especially from researchers who are interested in robotic technologies acceptance and human-robot teams [8,20,39,41,51]. Researchers have been trying to understand the impacts of robots' behaviors on humans' trust evolution over time [42].…”
Section: Introduction
confidence: 99%
“…Researchers have been trying to understand the impacts of robots' behaviors on humans' trust evolution over time [42]. Moreover, they aim to use this understanding to design robots that are aware of humans' trust to operate in contexts involving collaboration with those humans [7,8]. Particularly for self-driving vehicles and automated driving systems (ADSs), trust has been used to explore consumer attitudes and enrich the discussion about safety perception [22].…”
Section: Introduction
confidence: 99%