2022
DOI: 10.1007/s10515-022-00337-x

How to certify machine learning based safety-critical systems? A systematic literature review



Cited by 27 publications (8 citation statements)
References: 164 publications
“…In fact, they studied bugs in ML compilers (such as TVM, Glow, and nGraph), not in ML-based systems. Tambon et al [97] also generated a list of bugs that may occur inside ML frameworks. Therefore, the understanding of bugs in ML-based systems is still evolving.…”
Section: Discussion
confidence: 99%
“…Some other studies [103, 97, 72] also investigated DL bugs, but within the DL frameworks themselves. For instance, Jia et al [72] studied the root causes and symptoms of bugs affecting the TensorFlow framework and provided a taxonomy.…”
Section: Related Work
confidence: 99%
“…For each phase, the authors present example methods from the literature. Tambon et al (2022) present a systematic literature review covering 217 primary studies. The authors investigate fundamental topics such as robustness, uncertainty, explainability, and verification, and call for deeper industry-academia collaboration.…”
Section: Related Work
confidence: 99%
“…Software systems developed for safety-critical applications must undergo assessments to demonstrate compliance with functional safety standards. However, as conventional safety standards are not fully applicable to ML-enabled systems (Salay et al, 2018; Tambon et al, 2022), several domain-specific initiatives aim to complement them, e.g., those organized by the EU Aviation Safety Agency, the ITU-WHO Focus Group on AI for Health, and the International Organization for Standardization.…”
Section: Introduction
confidence: 99%
“…However, DNNs may produce unexpected or incorrect results that could lead to significant negative consequences or losses. Therefore, effective testing of such models is crucial to ensure their reliability [4]. For testing and enhancing the performance of DNN-driven applications, a significant amount of labeled data is required.…”
Section: Introduction
confidence: 99%