2024
DOI: 10.1109/tdsc.2022.3200421
Self-Checking Deep Neural Networks for Anomalies and Adversaries in Deployment

Abstract: Deep Neural Networks (DNNs) have been widely adopted, yet DNN models are surprisingly unreliable, which raises significant concerns about their use in critical domains. In this work, we propose that runtime DNN mistakes can be quickly detected and properly dealt with in deployment, especially in settings like self-driving vehicles. Just as the software engineering (SE) community has developed effective mechanisms and techniques to monitor and check programmed components, our previous work, SelfChecker, is designed…

Cited by 6 publications (2 citation statements)
References 39 publications (89 reference statements)
“…SelfChecker operates through a layer-based approach, which necessitates white-box access and may have limited capabilities in detecting issues in shallow DNNs with few layers. SelfChecker++ [71] has been designed to target both unintended abnormal test data and intended adversarial samples. InputReflector [73] introduced a runtime approach to identify and fix failure-inducing inputs in DL systems, inspired by traditional input-debugging techniques.…”
Section: Related Work
confidence: 99%
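To make the layer-based idea mentioned above concrete, here is a minimal sketch of layer-wise self-checking. It is my own simplification under assumed inputs, not the authors' SelfChecker code: the synthetic activations, the scikit-learn KernelDensity estimator, and the fixed bandwidth are all illustrative assumptions. Each layer "votes" for the class whose density best explains a new input's activations, and an alarm is raised when most layers disagree with the model's final prediction.

# Hypothetical sketch, not the SelfChecker implementation.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-layer activation vectors of a trained DNN.
n_train, n_layers, n_classes, dim = 200, 3, 2, 8
train_acts = rng.normal(size=(n_layers, n_train, dim))
train_labels = rng.integers(0, n_classes, size=n_train)

# One KDE per (layer, class) fitted on training-time activations.
kdes = [[KernelDensity(bandwidth=1.0).fit(train_acts[l][train_labels == c])
         for c in range(n_classes)]
        for l in range(n_layers)]

def layer_votes(acts):
    """Each layer votes for the class whose KDE gives the highest log-density."""
    votes = []
    for l in range(n_layers):
        scores = [kdes[l][c].score_samples(acts[l][None])[0] for c in range(n_classes)]
        votes.append(int(np.argmax(scores)))
    return votes

# At deployment: alarm when a majority of layers disagree with the final prediction.
test_acts = rng.normal(size=(n_layers, dim))
final_pred = 0  # placeholder for the DNN's own output on this input
votes = layer_votes(test_acts)
alarm = sum(v != final_pred for v in votes) > n_layers // 2
print(f"layer votes={votes}, final prediction={final_pred}, alarm={alarm}")

This also illustrates why the approach is white-box: it needs access to internal layer activations, which is exactly the limitation the citing authors point out.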
“…DNNs require that inputs in deployment come from the same distribution as the training dataset [62,63]. However, real-world inputs that are semantically similar to a human observer may look different to the model.…”
Section: Introduction
confidence: 99%
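The distribution-shift concern in that statement can be illustrated with a small runtime check. The sketch below is a generic, assumed approach (not taken from the cited papers): it scores a deployment input's penultimate-layer features against the training distribution with a Mahalanobis distance and flags inputs beyond a percentile threshold; the synthetic features and the 99th-percentile cutoff are illustrative assumptions.

# Hypothetical sketch of a runtime distribution-shift check.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for penultimate-layer features of the training set.
train_feats = rng.normal(size=(500, 16))
mean = train_feats.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train_feats, rowvar=False))

def mahalanobis(x):
    """Distance of a feature vector from the training feature distribution."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Threshold taken from the training data itself (e.g., the 99th percentile).
threshold = np.percentile([mahalanobis(f) for f in train_feats], 99)

in_dist = rng.normal(size=16)           # resembles the training distribution
shifted = rng.normal(loc=3.0, size=16)  # a distribution-shifted input
for name, x in [("in-distribution", in_dist), ("shifted", shifted)]:
    d = mahalanobis(x)
    print(f"{name}: distance={d:.2f}", "ALARM" if d > threshold else "ok")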