2021
DOI: 10.3389/fpsyg.2021.589585
Human–Autonomy Teaming: Definitions, Debates, and Directions

Abstract: Researchers are beginning to transition from studying human–automation interaction to human–autonomy teaming. This distinction has been highlighted in recent literature, and theoretical reasons why the psychological experience of humans interacting with autonomy may vary and affect subsequent collaboration outcomes are beginning to emerge (de Visser et al., 2018; Wynne and Lyons, 2018). In this review, we do a deep dive into human–autonomy teams (HATs) by explaining the differences between automation and autonomy…

Cited by 84 publications (50 citation statements). References: 87 publications.
“…Algorithmic transparency without sufficient openness about stakeholder decisions, interests, and overall context provides not much more than a peephole into a limited part of the whole socio-technical system. Contrary to popular belief, providing transparency, or even explanations, from the system does not mean that we can effectively contest the decisions (Aler Tubella et al., 2020; Lyons et al., 2021). Contestability, i.e., the ability to contest decisions, requires looking beyond why a decision was made.…”
Section: Transparency as Contestability (mentioning)
confidence: 95%
“…In Figure 4 the aggregated population trust optimum x from (8) is compared with the Chebyshev centers for the preference polytopes belonging to a subset of participants. For the same subset of participants, we also compare their distinctiveness z^k_1 with corresponding TPS-HRI trust scores in Figure 6. We observe that participants with distinctiveness z^k_1 ≤ 0.035 and a trust score in the range [42%, 56.5%] express preferences compatible with the population's preferences.…”
Section: B. Results (mentioning)
confidence: 99%
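The excerpt above compares a population-level trust optimum with the Chebyshev center of each participant's preference polytope. As a minimal sketch, assuming a preference polytope is given in half-space form {x : Ax ≤ b} (the representation is not specified in the excerpt), the Chebyshev center can be obtained from a small linear program; the function name chebyshev_center and the use of scipy here are illustrative assumptions, not the cited paper's implementation.

import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    # Chebyshev center of the polytope {x : A x <= b}: the center of the
    # largest Euclidean ball inscribed in it. Solve the LP
    #   maximize r  subject to  a_i . x + r * ||a_i||_2 <= b_i,  r >= 0.
    n = A.shape[1]
    norms = np.linalg.norm(A, axis=1, keepdims=True)   # ||a_i||_2 as a column
    c = np.zeros(n + 1)
    c[-1] = -1.0                                       # linprog minimizes, so minimize -r
    A_ub = np.hstack([A, norms])                       # decision variables are [x, r]
    res = linprog(c, A_ub=A_ub, b_ub=b,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[-1]                        # (center, inscribed radius)

# Toy example: a hypothetical "preference polytope" taken to be the unit square.
A = np.array([[ 1.0,  0.0],
              [-1.0,  0.0],
              [ 0.0,  1.0],
              [ 0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
center, radius = chebyshev_center(A, b)
print(center, radius)   # approximately [0.5, 0.5] and 0.5

A comparison like the one in the excerpt would then reduce to measuring the distance between each participant's center and the aggregated optimum; that aggregation step is specific to the cited paper and is not reproduced here.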
“…Much attention has focused on trust in human-automation interaction, initially in human interactions with industrial machines [1] but later expanding to other settings [2]. Trust in human-robot interaction is an active area of research [3], stemming from scenarios that require a delegation of autonomy (e.g.…”
Section: Introduction (mentioning)
confidence: 99%
“…The robot, seen as an equal peer complying with the team's rules, should also be capable of varying its level of autonomy and involvement depending on the environment and situation. Recently, research has been using the term Human–Autonomy Teaming (HAT) to describe humans and intelligent, autonomous agents working interdependently toward a common goal (O'Neill et al., 2020; Lyons et al., 2021). Human communication is naturally multimodal, with voice, facial expression, and gestures considered key channels supporting its realization.…”
Section: Face Recognition (mentioning)
confidence: 99%