2020
DOI: 10.1080/00140139.2020.1764112
The reliability and transparency bases of trust in human-swarm interaction: principles and implications

Cited by 26 publications (26 citation statements) | References 52 publications
“…To summarize, our findings regarding accumulation rates represent a significant theoretical advance in explaining benchmark effects in the HAT literature regarding increased automation reliance with increased reliability (e.g., Hussein et al, 2020a; Wiegmann et al, 2001), and more specifically, increased automation reliance when decision aids are more reliable relative to human manual ability (e.g., Avril et al, 2021; Bailey & Scerbo, 2007; Hutchinson et al, 2022). Our computational model suggested that two processes drove the effects of automation reliability.…”
Section: Discussion (mentioning)
confidence: 63%
“…Automation reliability (accuracy) is a crucial factor in determining human reliance on automation and subsequent human–automation teaming 1 (HAT) performance outcomes, both when human operators passively supervise automation that is performing tasks for them (e.g., Ferraro et al, 2018), and when operators perform tasks themselves while provided with decision aids that advise actions (e.g., Hussein et al, 2020a; Rovira et al, 2007; Shah & Bliss, 2017; Wiegmann et al, 2001). On the one hand, low-reliability automation is often undesirable because it can cause human operators to drop below manual (no automation) levels of accuracy (Wickens & Dixon, 2007).…”
Section: An Evidence Accumulation Model of Automation Use (mentioning)
confidence: 99%
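The evidence accumulation account quoted above can be made concrete with a small simulation. The sketch below is not the cited authors' computational model; it is a generic sequential-sampling illustration in Python in which the drift toward a "rely on the automated advice" boundary grows with automation reliability, so more reliable aids are chosen more often. The function name, parameters, and values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch, NOT the cited authors' model: a generic sequential-sampling
# (evidence accumulation) simulation in which the drift toward the "rely on
# the automated advice" boundary scales with automation reliability.
# All parameter values are illustrative assumptions.

def simulate_reliance(reliability, n_trials=2000, threshold=1.0,
                      gain=0.1, noise_sd=0.1, max_steps=2000, rng=None):
    """Proportion of trials on which the 'rely' boundary is reached first.

    reliability -- proportion of correct automated advice in [0, 1];
                   values above 0.5 push the drift toward reliance.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    drift = gain * (reliability - 0.5)   # assumed linear reliability-to-drift mapping
    rely = 0
    for _ in range(n_trials):
        evidence = 0.0
        for _ in range(max_steps):
            evidence += drift + rng.normal(0.0, noise_sd)
            if evidence >= threshold:    # commit to the automated advice
                rely += 1
                break
            if evidence <= -threshold:   # fall back on manual performance
                break
    return rely / n_trials

if __name__ == "__main__":
    for r in (0.60, 0.80, 0.95):
        print(f"reliability = {r:.2f} -> p(rely) = {simulate_reliance(r):.3f}")
```

Under these assumed settings, the simulated reliance rate rises with reliability, mirroring the benchmark pattern of increased automation reliance at higher reliability described in the excerpts above.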
“…Riley et al (2010) argued that low-level information negatively impacts operators’ cognitive load, as they must process it to build higher levels of SA ( Riley and Strater, 2006 ). In contrast, swarm-level information enables the human to make sense of swarm behaviors leading to effective collaboration ( Hussein et al, 2020a ). Finally, communication issues impose limits on the amount and speed of information exchange between the human and the swarm.…”
Section: Mission Complexity (mentioning)
confidence: 99%
“…This element can have substantive impact on the user's trust; at the physical level, [47] found that a robot that could convey its incapabilities solely via informative arm movements was found to be more trustworthy by users. Similarly, robots that came across as transparent, either by providing explanations for its actions [48,49] or simply by providing more information about its actions [50,51], were judged as more trustworthy. At a more cognitive level, one study suggests that robots that took risky actions in an uncertain environment were viewed with distrust [52], although this depends on the individual user's risk appetite [53].…”
Section: Exploiting the Process (mentioning)
confidence: 99%