Deep Neural Networks and Data for Automated Driving 2022
DOI: 10.1007/978-3-031-01233-4_1
Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety

Abstract: Deployment of modern data-driven machine learning methods, most often realized by deep neural networks (DNNs), in safety-critical applications such as health care, industrial plant control, or autonomous driving is highly challenging due to numerous model-inherent shortcomings. These shortcomings are diverse and range from a lack of generalization over insufficient interpretability and implausible predictions to directed attacks by means of malicious inputs. Cyber-physical systems employing DNNs are therefore …

Cited by 22 publications (12 citation statements)
References 308 publications (256 reference statements)
“…But according to the Commission's auxiliary report [23], the safety provisions for artificial intelligence applications, robotics, and Internet-of-Things devices are seen as attainable mainly through the transparency, accountability, and unbiasedness of algorithms, fallback mechanisms, and keeping a human in the loop. Although these are sensible goals as such, it remains disputed how they could be legislated with sufficient rigor, especially when considering that safety science is not quite there yet [33]. For these and other reasons, liability for new technologies has been actively debated recently.…”
Section: Discussion (mentioning)
Confidence: 99%
“…The importance of trustworthy AI methods is increasing, especially because decision-making takes data-based models more and more into account (Bellotti and Edwards, 2001; Floridi et al., 2018; Lepri et al., 2018; Houben et al., 2021). In her comprehensive book, Virginia Dignum addresses AI's ethical implications of interest to researchers, technologists, and policymakers (Dignum, 2019).…”
Section: Related Work (mentioning)
Confidence: 99%
“…The domain of autonomous driving involves object detection [17,18,19], soiling detection [20,21,22], semantic segmentation [23], weather classification [24,25], dynamic object detection [26], depth prediction [27,28,29,30,31], fusion [32], key-point detection and description [33], and multitask learning [34,35,36]. It also poses many challenges due to the highly dynamic and interactive nature of surrounding objects in automotive scenarios [37].…”
Section: Introduction (mentioning)
Confidence: 99%