2023
DOI: 10.1007/s11219-022-09613-1

Ergo, SMIRK is safe: a safety case for a machine learning component in a pedestrian automatic emergency brake system

Abstract: Integration of machine learning (ML) components in critical applications introduces novel challenges for software certification and verification. New safety standards and technical guidelines are under development to support the safety of ML-based systems, e.g., ISO 21448 SOTIF for the automotive domain and the Assurance of Machine Learning for use in Autonomous Systems (AMLAS) framework. SOTIF and AMLAS provide high-level guidance but the details must be chiseled out for each specific case. We initiated a res…

Cited by 8 publications (4 citation statements)
References 63 publications

“…22 An example of the independent use of AMLAS (involving none of the AMLAS developers) was for the safety assurance of an emergency braking system for an AV, intended to protect pedestrians. 23 This can be seen as an example of assuring the SOTIF. These two examples are both embedded systems (the initial target of the framework), but AMLAS has also been successfully applied to decision support systems, for example, in healthcare.…”
Section: Discussion (mentioning, confidence: 99%)

“…25 While there have been some initial successes in applying the approach, there remain limitations and issues of maturity. As with the standards, our approach is not precise about the level of evidence needed for the safety case, although both the definitions of SACE 11 and AMLAS 12 and the examples of the use of AMLAS 22,23,24 illustrate the approach and thus assist in interpreting and applying it. But this is work in progress, and there is more to be done.…”
Section: Discussion (mentioning, confidence: 99%)

“…However, how it shall be implemented and argued for being adequate and sufficient is highly up to the specific stakeholder organization. 31 In this regard, Hawkins et al 8 introduced a methodology for the assurance of ML in autonomous systems, called AMLAS. It presents a systematic process for integrating safety assurance into the development of ML systems and introduces verification procedures at different stages, for example, learning model verification and system-level (integration) verification that happens after integrating the ML model into the system.…”
Section: Simulation-Based ADAS Testing (mentioning, confidence: 99%)
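
As a rough picture of the staged verification the quoted statement describes, here is a minimal Python sketch with two independent gates: one at model level and one at system level after integration. It is a sketch under invented assumptions only: the recall and distance thresholds, the class and function names, and the data values are all hypothetical, not drawn from AMLAS, SMIRK, or the cited papers.

```python
# Purely illustrative sketch of the model-level vs. system-level verification
# split described in the quoted statement. Every name, threshold, and data
# value below is a hypothetical assumption, not an artifact of AMLAS or SMIRK.

from dataclasses import dataclass


@dataclass
class ScenarioResult:
    """Outcome of one simulated braking scenario (hypothetical)."""
    pedestrian_detected: bool
    detection_distance_m: float  # distance to pedestrian at first detection


def model_level_verification(recall: float, required_recall: float = 0.93) -> bool:
    # Stage 1 (hypothetical): gate the trained detector on a model-level
    # safety requirement, e.g. minimum pedestrian recall on a held-out set.
    return recall >= required_recall


def system_level_verification(runs: list[ScenarioResult],
                              min_distance_m: float = 10.0) -> bool:
    # Stage 2 (hypothetical): after integrating the model into the braking
    # system, require that every simulated scenario detects the pedestrian
    # far enough away for the vehicle to brake in time.
    return all(r.pedestrian_detected and r.detection_distance_m >= min_distance_m
               for r in runs)


if __name__ == "__main__":
    measured_recall = 0.95  # e.g. reported by a test harness (made up here)
    runs = [ScenarioResult(True, 14.2), ScenarioResult(True, 11.7),
            ScenarioResult(True, 18.4)]

    assert model_level_verification(measured_recall), "model-level gate failed"
    assert system_level_verification(runs), "system-level gate failed"
    print("Both verification stages passed (illustrative only).")
```

Keeping the gates separate mirrors the quoted distinction: model-level evidence alone does not establish that the integrated system behaves safely, which is why the statement stresses verification that happens after the ML model is embedded in the system.
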
“…Standards such as ISO 26262 Functional Safety 11 and ISO 21448 Safety of the Intended Functionality 10 often provide a high-level view of the requirements that must be satisfied in a safety case for an ML-driven system. However, how it shall be implemented and argued for being adequate and sufficient is highly up to the specific stakeholder organization. 31 In this regard, Hawkins et al 8 introduced a methodology for the assurance of ML in autonomous systems, called AMLAS.…”
Section: Introduction (mentioning, confidence: 99%)