2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE)
DOI: 10.1109/icse43902.2021.00044
Self-Checking Deep Neural Networks in Deployment

Abstract: The widespread adoption of Deep Neural Networks (DNNs) in important domains raises questions about the trustworthiness of DNN outputs. Even a highly accurate DNN will make mistakes some of the time, and in settings like self-driving vehicles these mistakes must be quickly detected and properly dealt with in deployment. Just as our community has developed effective techniques and mechanisms to monitor and check programmed components, we believe it is now necessary to do the same for DNNs. In this paper we presen…

Cited by 27 publications (36 citation statements). References: 30 publications.
“…However, their approach is only applicable to convex-margin-based classifiers and not to NNs. Xiao et al. (2021) suggest a self-checking mechanism for NNs, where the features of the internal layers are used to check the reliability of predictions. In contrast, our approach uses predictive uncertainties obtained via a softmax function, which is rather simple to implement.…”
Section: Related Work
confidence: 99%
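The excerpt above contrasts two ways of checking a DNN prediction at run time: Xiao et al.'s self-checking over internal-layer features, and the citing authors' simpler use of softmax predictive uncertainty. The sketch below illustrates only the latter, softmax-confidence idea; the check_prediction helper, the threshold value, and the example logits are hypothetical choices for illustration, not code from either paper.

import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def check_prediction(logits, threshold=0.9):
    # Flag a prediction as unreliable when its top softmax
    # probability falls below the confidence threshold.
    probs = softmax(logits)
    predicted = probs.argmax(axis=-1)
    confidence = probs.max(axis=-1)
    reliable = confidence >= threshold
    return predicted, confidence, reliable

# Hypothetical logits from a 3-class classifier on two inputs.
logits = np.array([[4.0, 0.5, 0.1],    # peaked  -> confident
                   [1.1, 1.0, 0.9]])   # flat    -> ambiguous
pred, conf, ok = check_prediction(logits)
print(pred, conf.round(2), ok)   # [0 0] [0.95 0.37] [ True False]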
“…Their applicability to ImpNet is discussed in Section 6.1. Most are summarized by Li et al. [44], and we also examine Xiao et al. [45]'s runtime self-checking and Xiao et al. [46]'s Metamorphic Testing.…”
Section: Defences Against ML Backdoors
confidence: 99%
“…However, they focus on adversarial attacks and incur relatively large computation costs. Beyond testing the trained model, some techniques (e.g., [50]) have been proposed to build a self-checking system that can monitor DNN output and trigger an alarm if the output is likely to be incorrect after the model is deployed. For more relevant discussions on the recent progress of machine learning testing, we refer interested readers to the recent comprehensive survey [54].…”
Section: Testing
confidence: 99%
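This excerpt describes the self-checking idea in deployment terms: monitor the DNN's output and trigger an alarm when it is likely to be wrong, with the earlier excerpt noting that Xiao et al.'s checker bases this judgment on internal-layer features. The sketch below is a simplified stand-in under that reading: hypothetical per-layer probes each predict a label from one layer's features, and disagreement between their majority vote and the model's own output raises the alarm. LayerConsistencyChecker, the probes, and the feature vectors are illustrative assumptions, not the paper's actual algorithm.

import numpy as np

class LayerConsistencyChecker:
    # Self-checking sketch: lightweight per-layer probes each predict a
    # label from one internal layer's features; an alarm is raised when
    # the probes' majority vote disagrees with the model's final output.
    # (Probes here are a simplified stand-in for the checker in [50].)

    def __init__(self, layer_probes):
        self.layer_probes = layer_probes   # list of callables: features -> label

    def check(self, layer_features, model_prediction):
        votes = [probe(f) for probe, f in zip(self.layer_probes, layer_features)]
        majority = max(set(votes), key=votes.count)
        alarm = majority != model_prediction
        return alarm, votes

# Hypothetical probes and per-layer features for a single input.
probes = [lambda f: int(f.argmax()) for _ in range(3)]
features = [np.array([0.1, 0.8, 0.1]),
            np.array([0.2, 0.7, 0.1]),
            np.array([0.3, 0.3, 0.4])]
checker = LayerConsistencyChecker(probes)
alarm, votes = checker.check(features, model_prediction=2)
print(alarm, votes)   # True [1, 1, 2] -> internal layers disagree with the output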