Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2021)
DOI: 10.1145/3468264.3468534

Hazard analysis for human-on-the-loop interactions in sUAS systems

Abstract: With the rise of new AI technologies, autonomous systems are moving towards a paradigm in which increasing levels of responsibility are shifted from the human to the system, creating a transition from human-in-the-loop systems to human-on-the-loop (HoTL) systems. This has a significant impact on the safety analysis of such systems, as new types of errors occurring at the boundaries of human-machine interactions need to be taken into consideration. Traditional safety analysis typically focuses on system-level h…

Cited by 16 publications (1 citation statement); references 60 publications.
“…• Safety Analysis: A previous study by Vierhauser et al. [63] reviewed over 100 publications to identify hazardous human interaction errors. They attributed the errors to three main categories: (1) human-initiated errors (e.g., an RPIC (Remote Pilot in Command) behaving in a reckless manner), (2) loss of situational awareness (e.g., the system not providing sufficient information through the user interface to rectify or prevent an accident), and (3) a lack of empowerment (e.g., interfaces limiting the actions of the human, preventing them from intervening when needed).…”
Section: Related Work
confidence: 99%
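The three-way categorization quoted above lends itself to a simple data model for tagging observed interaction errors during hazard analysis. The sketch below is illustrative only and is not taken from the cited paper; the names HumanInteractionErrorCategory and HazardReport are hypothetical.

```python
# Illustrative sketch (not from the cited paper): encoding the three hazard
# categories attributed to Vierhauser et al. [63] as a small data model.
from dataclasses import dataclass
from enum import Enum, auto


class HumanInteractionErrorCategory(Enum):
    """Three main categories of hazardous human interaction errors."""
    HUMAN_INITIATED = auto()                # e.g., an RPIC behaving recklessly
    LOSS_OF_SITUATIONAL_AWARENESS = auto()  # e.g., UI withholds information needed to prevent an accident
    LACK_OF_EMPOWERMENT = auto()            # e.g., interface blocks a needed human intervention


@dataclass
class HazardReport:
    """A single observed interaction error, tagged with its category."""
    description: str
    category: HumanInteractionErrorCategory


# Usage example: tagging an observed error for later safety analysis.
report = HazardReport(
    description="Operator could not override the autonomous landing sequence.",
    category=HumanInteractionErrorCategory.LACK_OF_EMPOWERMENT,
)
print(report.category.name)
```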