2022
DOI: 10.48550/arxiv.2204.07874
Preprint

Ergo, SMIRK is Safe: A Safety Case for a Machine Learning Component in a Pedestrian Automatic Emergency Brake System

Abstract: Integration of Machine Learning (ML) components in critical applications introduces novel challenges for software certification and verification. New safety standards and technical guidelines are under development to support the safety of ML-based systems, e.g., ISO 21448 SOTIF for the automotive domain and the Assurance of Machine Learning for use in Autonomous Systems (AMLAS) framework. SOTIF and AMLAS provide high-level guidance, but the details must be chiseled out for each specific case. We report results …

Cited by 1 publication (2 citation statements)
References 37 publications (52 reference statements)
“…The development of SMIRK followed the process defined in SOTIF [2], i.e., iterative development and safety engineering toward acceptable risks. For a description of the engineering process, we refer readers to the publication describing the safety case [3]. This section presents the logical and process views of the SMIRK architecture and an overview of the ML components.…”
Section: SMIRK Architecture (mentioning)
confidence: 99%
“…However, the step from demonstrating impressive results on computer vision benchmarks to deploying systems that rely on ML for safety-critical functionalities is substantial. An ML model can be considered an unreliable function. SMIRK is accompanied by a publicly available training set for the ML model and a complete safety case for its ML component [3]. We posit that SMIRK can be used for various types of research on trustworthy AI as defined by the European Commission, i.e., AI systems that are lawful, ethical, and robust.…”
Section: Introduction (mentioning)
confidence: 99%