2020
DOI: 10.1177/1071181320641091
Impact of Agents’ Errors on Performance, Reliance and Trust in Human-Agent Collaboration

Abstract: Trust in automation is often strongly tied to an agent’s performance. However, our understanding of imperfect agents’ behaviours and its impact on trust is limited. In this paper, we study the relationship between performance, reliance and trust in a set of human-agent collaborative tasks. Participants collaborated with different automated agents that performed similarly but made errors in different ways; namely mistakes (error of prioritization), lapses (error of omission) and slips (lowered accuracy). We con…


Cited by 3 publications (7 citation statements)
References 18 publications (23 reference statements)
“…However, limitations of self-reported measures include interruption of the task by the survey, the inability of surveys to capture the continuous evolution of trust because assessment occurs at only a few time points, as well as memory failures and subjective bias (Kohn et al., 2021). Therefore, behavioral measures are increasingly utilized in research on trust in HAT (Daronnat et al., 2020, 2021; Hafizoğlu & Sen, 2018a, 2018b; Kulms & Kopp, 2019). Trust in agents can impact behavioral processes or tendencies, including risk-associated decisions (e.g., delegation, reliance, cooperation, or intervention) as well as outcome-related measures (e.g., decision and response time, combined team performance), which are considered behavioral indicators of trust (Kohn et al., 2021).…”
Section: Measuring Trust
confidence: 99%
“…To measure trust, many studies employ game-based frameworks of HAT interaction (Correia et al., 2018; Daronnat et al., 2020, 2021; Kulms & Kopp, 2016). Examples of these are computer-simulated interaction scenarios with virtual agents, e.g., in military situations like missile shooting (Daronnat et al., 2021, 2022), dependence on a virtual robot in emergency evacuation situations (Robinette et al., 2017), or reliance on a pet-feeding robot during absence (Ullrich et al., 2021).…”
Section: Measuring Trust
confidence: 99%
“…Testing Types of Agents’ Errors. In our second study [6] (𝑛 = 24), we tested the impact of different agents performing at the same level of reliability but displaying different behaviours when making errors. Agents’ behaviours and their associated types of errors were created using Reason’s human-error taxonomy [22], which was later contextualised in human-agent collaborative settings by Baker et al. [1].…”
Section: Contribution
confidence: 99%
“…Our publications [6, 7] show a growing interest in the development and assessment of human-agent relationships, with a focus on how users trust decision-aid systems. Participation in the Doctoral Consortium will allow us to get crucial feedback from the game research community on how to design for keeping players engaged with virtual agents in an interactive real-time task.…”
Section: Dissertation Status
confidence: 99%