2018
DOI: 10.1177/1541931218621034
Team Situation Awareness in Human-Autonomy Teaming: A Systems Level Approach

Abstract: Project overview. The current study focuses on analyzing team flexibility by measuring entropy (where higher values correspond to system reorganization and lower values correspond to more stable system organization) across all-human teams and Human-Autonomy Teams (HAT). We analyzed teams in the context of a fully fledged synthetic agent that acts as a pilot for a three-agent Remotely Piloted Aircraft System (RPAS) ground crew. The synthetic agent must be able to communicate and coordinate with human teammates …
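The entropy measure described in the abstract can be illustrated with a minimal sketch. The state coding below is hypothetical, not the paper's actual RPAS coding scheme: it simply treats a team's observed interaction states as a discrete sequence and computes Shannon entropy, so that a team cycling through many states scores higher (reorganization) than one dwelling in a few (stability).

```python
import math
from collections import Counter

def shannon_entropy(states):
    """Shannon entropy (in bits) of a sequence of discrete system states.

    Higher values suggest reorganization (the system visits many states
    with similar frequency); lower values suggest a more stable
    organization concentrated in a few states.
    """
    counts = Counter(states)
    total = len(states)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical interaction-state sequences (e.g. coded communication
# patterns sampled over time) -- illustrative only.
stable_team = ["A", "A", "A", "A", "B", "A", "A", "A"]
flexible_team = ["A", "B", "C", "D", "A", "C", "B", "D"]

print(shannon_entropy(stable_team))    # lower: stable organization
print(shannon_entropy(flexible_team))  # higher: system reorganization
```

A uniform sequence over k states yields the maximum log2(k) bits, while a sequence stuck in one state yields 0, matching the abstract's reading of high versus low entropy.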

Cited by 14 publications (17 citation statements) · References 1 publication
“…However, most recently, HAT has been frequently used in association with highly intelligent agents based on AI, machine learning, and cognitive modeling that are as competent as humans. Such work has directed the implementation of human-autonomy collaboration from concept to practice [25][26][27][28][29].…”
Section: Human Autonomy Teamwork (HAT)
confidence: 99%
“…HAT has been described as at least one human working cooperatively with at least one autonomous agent (McNeese et al, 2018), where an autonomous agent is a computer entity with a partial or high degree of self-governance with respect to decision-making, adaptation, and communication (Demir et al, 2016; Mercado et al, 2016; Myers et al, 2019). As noted by Larson and DeChurch (2020, p. 10), “we are quickly approaching a time when digital technologies are as agentic as are human counterparts.” With continuous advancements in artificial intelligence (AI), autonomous agents can perform a greater number of dynamical functions in both teamwork and taskwork than ever before (Seeber et al, 2020), and they are beginning to be viewed as teammates rather than tools (Grimm et al, 2018a; Lyons et al, 2018). For example, autonomous agents can increasingly participate in teamwork activity involving coordination, task reallocation, and continuous interaction with humans and other autonomous agents (Chen et al, 2016; Johnson et al, 2012; Shannon et al, 2017).…”
Section: Introduction
confidence: 99%
“…In the 1990s, as the above quotes illustrate, discussions emerged with respect to autonomous agents playing roles as genuine team players. Therefore, the concept of HAT has been considered by academics for three decades, but it was not until more recently that the term HAT has emerged and been used frequently (e.g., Demir et al, 2019; Demir, Likens, et al, 2018; Demir, McNeese, et al 2018; Dubey et al, 2020, Fiore & Wiltshire, 2016, Grimm et al, 2018a, 2018b, Grimm et al, 2018c; McNeese et al, 2018, 2019; Shannon et al, 2017; Wohleber et al, 2017). We believe the emerging use of the term HAT is due to significant advances in AI, machine learning, and cognitive modeling.…”
Section: Introduction
confidence: 99%
“…As the subjects are exposed to varied task assignments in exploration-based strategies, they are cognizant of which tasks they are most proficient in and thereby have greater situational awareness about how the robot is trying to optimize their schedule. Prior studies in automation have shown how loss of situational awareness can negatively impact trust in the system [28,31,37]. Hence, we hypothesize that a better understanding of the scheduling process by subjects in exploration-based strategies will lead to greater trust in the robot.…”
Section: Hypotheses
confidence: 92%