2019
DOI: 10.1007/s12369-019-00596-x

Towards a Theory of Longitudinal Trust Calibration in Human–Robot Teams

Cited by 219 publications (169 citation statements)
References 120 publications
“…For example, an in-home robot can be used to improve the coordination of patient communication with care providers and to assist the patient with medication management. In order for the human-robot team to interact effectively, the human should establish appropriate trust toward the robotic agents [4][5][6][7].…”
Section: Introduction (mentioning)
confidence: 99%
“…With few exceptions (e.g. [14][15][16][17][18][19][20][21]), we have little understanding of a human agent's trust formation and evolution process after repeated interactions with a robotic agent [7,20]. Second, trust in automation is usually measured by questionnaires administered to the human agents.…”
Section: Introduction (mentioning)
confidence: 99%
“…According to this model, trust can be divided into three components, namely dispositional, situational, and learned trust. Whereas dispositional and most situational factors can hardly be influenced, initial and dynamic learned trust result from an employee's mental model of the robot, that is, an internal representation of the cobot's characteristics from which actual expectations are derived and which is continuously modified by experiences [104,105].…”
Section: Trust in the Cobot (mentioning)
confidence: 99%
“…In the case of self-driving vehicles, the ability to indirectly measure trust would open several design possibilities, especially for adaptive ADSs capable of conforming to drivers' trust levels and modifying their own behaviors accordingly. Trust estimations could be used in solutions for issues related to trust miscalibration, i.e., when drivers' trust in the ADS is not aligned with the system's actual capabilities or reliability levels [11,24,31]. In a simplified approach, trust can be inferred with only the identification and processing of observable variables that may be measured and processed to indicate trust levels.…”
Section: Introduction (mentioning)
confidence: 99%
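As a rough illustration of the "simplified approach" mentioned in the excerpt above, the sketch below estimates a driver's trust in an ADS as a weighted combination of observable variables. All feature names, weights, and thresholds are hypothetical placeholders invented for illustration; they are not taken from the cited paper, which would fit such an estimator to empirical data.

# Hypothetical sketch: inferring a driver's trust in an automated driving
# system (ADS) from observable behavior. Features and weights are invented
# for illustration only.

from dataclasses import dataclass

@dataclass
class Observables:
    takeover_rate: float   # manual takeovers per hour (more -> less trust)
    gaze_off_road: float   # fraction of time gaze is off the road (0..1)
    hands_on_wheel: float  # fraction of time hands hover on the wheel (0..1)

def estimate_trust(obs: Observables) -> float:
    """Map observable variables to a trust score in [0, 1] (1 = full trust)."""
    score = 1.0
    score -= 0.10 * obs.takeover_rate   # frequent takeovers signal distrust
    score -= 0.50 * obs.hands_on_wheel  # hovering hands signal close monitoring
    score += 0.30 * obs.gaze_off_road   # looking away signals reliance
    return min(1.0, max(0.0, score))

def miscalibrated(trust: float, capability: float, tol: float = 0.2) -> bool:
    """Flag trust that diverges from the system's actual capability level."""
    return abs(trust - capability) > tol

if __name__ == "__main__":
    obs = Observables(takeover_rate=2.0, gaze_off_road=0.4, hands_on_wheel=0.7)
    t = estimate_trust(obs)
    print(f"estimated trust: {t:.2f}, miscalibrated: {miscalibrated(t, 0.8)}")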