2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
DOI: 10.1109/ro-man46459.2019.8956343
Your Robot is Watching: Using Surface Cues to Evaluate the Trustworthiness of Human Actions

Cited by 6 publications (3 citation statements)
References 31 publications
“…Although there is some work in this direction, see e.g. [2,27,29], none of these works has tried to deconstruct trustworthiness, but rather looked at it as a simple metric. Instead, we hypothesise that we should take several dimensions into account when determining trustworthiness.…”
Section: Conceptual Model (mentioning)
confidence: 99%
“…Similarly, in [21], the authors also investigate the effects of robot error on human trust, with a specific focus on time-critical situations. A few works studied trust from a different perspective, such as [24], in which the robot uses surface cues to estimate the trustworthiness of the human it interacts with. Other approaches develop methods which enable the agent to use its own confidence to decide when to ask a human to help it by providing teaching signals [7].…”
Section: Related Work (mentioning)
confidence: 99%
“…[26], [27]), and 4) team trust (still recent but growing in human-AI contexts), there is little research on how an artificial agent should trust its human teammates. However, there is some work in this direction, for instance on how an artificial agent can detect that a situation requires trust [28], [29], and also how an artificial agent can detect whether a human is being trustworthy, based on episodic memory [30] and social cues [31]. Also, Azevedo-Sa et al. [2] have recently proposed a model for trusting tasks in human-robot teams, making a clear distinction between natural trust (when the trustor is a human) and artificial trust (when the trustor is an artificial agent).…”
Section: Introduction (mentioning)
confidence: 99%