2021
DOI: 10.3389/frobt.2021.642201

Inferring Trust From Users’ Behaviours; Agents’ Predictability Positively Affects Trust, Task Performance and Cognitive Load in Human-Agent Real-Time Collaboration

Abstract: Collaborative virtual agents help human operators to perform tasks in real-time. For this collaboration to be effective, human operators must appropriately trust the agent(s) they are interacting with. Multiple factors influence trust, such as the context of interaction, prior experiences with automated systems and the quality of the help offered by agents in terms of its transparency and performance. Most of the literature on trust in automation identified the performance of the agent as a key factor influenc…

Cited by 20 publications (16 citation statements)
References 51 publications
“…Therefore, they desired to adjust the extent of AI-predicted updates based on their intention. Echoing literature on predictable AI systems (Daronnat et al., 2021), future interactive editing systems should consider user expectations and empower users to preview AI actions. Systems could also model human editing intention, perhaps via action history like number and location of edits, and adapt AI actions accordingly to better serve user goals.…”
Section: Discussion and Design Implications (mentioning, confidence: 99%)
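The design implication quoted above — inferring a user's editing intention from action history (number and location of edits) and adapting AI actions accordingly — could be sketched as a minimal heuristic. All names below are hypothetical illustrations, not code from the cited paper:

```python
from dataclasses import dataclass, field

@dataclass
class IntentionModel:
    """Toy model of a user's editing intention from action history.

    Tracks the positions of recent edits and infers whether the user is
    making focused (local) or broad (global) changes, so an assistive
    agent could scale back or expand its predicted updates.
    (Hypothetical sketch; not from Daronnat et al., 2021.)
    """
    edit_positions: list = field(default_factory=list)

    def record_edit(self, position: int) -> None:
        self.edit_positions.append(position)

    def inferred_scope(self, window: int = 10, threshold: int = 50) -> str:
        recent = self.edit_positions[-window:]
        if not recent:
            return "unknown"
        spread = max(recent) - min(recent)
        # Edits clustered in a small region suggest a local, focused intention.
        return "local" if spread < threshold else "global"

model = IntentionModel()
for pos in [100, 104, 110, 102]:  # four edits close together in the document
    model.record_edit(pos)
print(model.inferred_scope())  # clustered edits -> "local"
```

A real system would of course combine such signals with edit type, timing, and explicit user controls for previewing AI actions.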
“…Security and safety: screening of dangerous objects/subjects [4, 14, 15, 19, 36, 51, 54-56, 85, 94, 107, 117], detection of system malfunctions [89, 90], crime prevention [10], recidivism prediction [132], watching a video of a house search [66, 83]
Transportation: responding to take-over requests [2, 6, 7, 47, 61, 68-70, 77, 84, 98], collision avoidance [8], managing (air) traffic [31, 118], pedestrian interaction with AVs [48], observing AVs [64, 119], driving in a driving simulator [87, 96, 136]
Military: screening tasks [17, 33, 44, 63, 81, 82, 127, 130, 135, 139, 140], gathering of information [58], mission planning [92], human-AI collaboration for search-and-destroy missions [116]
Production: improving production [73, 133, 143], disassembly [5], moving objects [34, 35, 45], demand forecasting [39], harvesting [113], quality checks [142]
Gaming: trust game [3, 23], collaboration game [24], flanker task…”
Section: Security and Safety (mentioning, confidence: 99%)
“…Drawing on the definition of trust, collaborative tasks should require less trust because individual autonomy is greater and dependency on another entity is lower, reducing uncertainty and risk. However, the need for trust in collaborative tasks may increase in situations of human-AI teaming, where the human team member depends on the performance of the system, as reported in [18, 24, 116]. Such situations are characterized by increased dependency, possibly leading to an increased need for trust.…”
Section: Security and Safety (mentioning, confidence: 99%)
“…Daronnat et al. [18] addressed the link between a virtual agent's predictability and its effect on human trust. The task assigned in that study was a collaborative missile-command game played with five agents with different performance and predictability profiles.…”
Section: A. Factors of Human Trust in Artificial Systems (mentioning, confidence: 99%)