Abstract: Safety cases and, specifically, software safety cases have had virtually no presence in engineering practice in the US. Recent interest, in addition to an early attempt to introduce them into practice in the NASA Constellation Program, motivated us to develop a partial safety case for a safety-critical subsystem of the Ares I vehicle, namely the abort fault detection, notification and response (AFDNR) system. This paper relates our experience applying the safety case concept to AFDNR, particularly from the perspec…
“…[18,20,23,24] in this area addresses different aspects of sUAS and UAV safety, providing automation support and tools for creating and maintaining safety assurance cases. They emphasise reuse of safety assurance cases by proposing domain-independent and domain-specific patterns [21,28]. Other work has created reusable safety case patterns as building blocks for future product development [9,18,20,22,28,35]. In the area of safety case maintenance, Kelly and Weaver [39] presented a set of patterns and recommended the use of modularity to support safety case evolution.…”
Section: Related Work
Abstract: With the rise of new AI technologies, autonomous systems are moving towards a paradigm in which increasing levels of responsibility are shifted from the human to the system, creating a transition from human-in-the-loop systems to human-on-the-loop (HoTL) systems. This has a significant impact on the safety analysis of such systems, as new types of errors occurring at the boundaries of human-machine interactions need to be taken into consideration. Traditional safety analysis typically focuses on system-level hazards, with little attention to user-related or user-induced hazards that can cause critical system failures. To address this issue, we construct domain-level safety analysis assets for sUAS (small unmanned aerial systems) applications and describe the process we followed to explicitly and systematically identify Human Interaction Points (HiPs), Hazard Factors, and Mitigations from system hazards. We evaluate our approach first by investigating the extent to which recent sUAS incidents are covered by our hazard trees, and second by performing a study with six domain experts using our hazard trees to identify and document hazards for sUAS usage scenarios. Our study showed that our hazard trees provided effective coverage for a wide variety of sUAS application scenarios and were useful for stimulating safety thinking and helping users to identify and potentially mitigate human-interaction hazards.
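The hazard-tree assets described in the abstract decompose a system hazard into Human Interaction Points (HiPs), each with contributing Hazard Factors and candidate Mitigations. A minimal sketch of such a structure is shown below; the class names, fields, and the example hazard are illustrative assumptions, not the schema or data from the paper itself.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a hazard tree: a system hazard decomposed into
# Human Interaction Points (HiPs), each with Hazard Factors and Mitigations.

@dataclass
class Mitigation:
    description: str

@dataclass
class HazardFactor:
    description: str
    mitigations: List[Mitigation] = field(default_factory=list)

@dataclass
class HumanInteractionPoint:
    name: str
    factors: List[HazardFactor] = field(default_factory=list)

@dataclass
class SystemHazard:
    name: str
    hips: List[HumanInteractionPoint] = field(default_factory=list)

    def unmitigated_factors(self) -> List[HazardFactor]:
        # Factors with no documented mitigation flag gaps in the analysis.
        return [f for hip in self.hips for f in hip.factors if not f.mitigations]

# Made-up example: a flyaway hazard with one operator hand-off HiP.
hazard = SystemHazard(
    name="sUAS flyaway",
    hips=[HumanInteractionPoint(
        name="Operator hand-off to autonomous flight",
        factors=[
            HazardFactor("Operator unaware geofence is disabled",
                         mitigations=[Mitigation("Pre-flight geofence check in UI")]),
            HazardFactor("Ambiguous return-to-home altitude setting"),
        ],
    )],
)
print([f.description for f in hazard.unmitigated_factors()])
```

Traversing such a tree for factors without mitigations is one way the "stimulating safety-thinking" use case in the abstract could be supported by tooling.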