2020
DOI: 10.48550/arxiv.2003.06979
Preprint
Anomalous Example Detection in Deep Learning: A Survey

Abstract: Deep Learning (DL) is vulnerable to out-of-distribution and adversarial examples, resulting in incorrect outputs. To make DL more robust, several post-hoc anomaly detection techniques to detect (and discard) these anomalous samples have been proposed in the recent past. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection for DL based applications. We provide a taxonomy for existing techniques based on their underlying assumptions and adopted approaches. We di…

Cited by 11 publications (11 citation statements)
References 87 publications (91 reference statements)
“…In [28], the authors were the first to classify detection methods into: 1) auxiliary models, in which a subnetwork or a separate network acts as a classifier to predict adversarial inputs; 2) statistical models, in which statistical analyses are used to distinguish between normal and adversarial inputs; and 3) prediction-consistency-based models, which depend on how the model's prediction changes when the input or the model parameters are perturbed. The reviews of Bulusu et al. [58] and Miller et al. [59,60] classify detection methods according to whether AEs are present in the training process of the detector: 1) supervised detection, in which AEs are used to train the detector, and 2) unsupervised detection, in which the detector is trained only on normal data. Carlini et al. [25] conducted an experimental study of ten detectors, showing that all can be defeated by constructing new loss functions.…”
Section: Related Work
confidence: 99%
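The prediction-consistency idea described in the statement above can be sketched with a toy check: flag an input as anomalous when small perturbations of it frequently change the model's prediction. This is a minimal illustration, not any surveyed method's actual algorithm; the `predict` function is a hypothetical stand-in for a trained DNN, and the noise scale and threshold are illustrative assumptions.

```python
import numpy as np

def predict(x, W):
    # Hypothetical stand-in for a trained DNN: a toy linear classifier.
    return int(np.argmax(W @ x))

def consistency_score(x, W, n_trials=20, sigma=0.05, seed=0):
    # Fraction of small Gaussian perturbations of x that leave the
    # predicted label unchanged (1.0 = fully consistent prediction).
    rng = np.random.default_rng(seed)
    base = predict(x, W)
    same = sum(
        predict(x + sigma * rng.standard_normal(x.shape), W) == base
        for _ in range(n_trials)
    )
    return same / n_trials

def is_anomalous(x, W, threshold=0.8):
    # Low consistency under perturbation flags the input as suspicious.
    return consistency_score(x, W) < threshold
```

An input far from the decision boundary keeps its label under every perturbation, while a borderline (or adversarially shifted) input scores low and is flagged; the threshold would be calibrated on held-out normal data in practice.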
“…Specifically, [42], [43] and [41] are representative surveys of generalized anomaly detection techniques, but only the most recent of these, Thudumu et al. [41], covers the topic of graph anomaly detection.…”
Section: Existing Anomaly Detection Surveys
confidence: 99%
“…Moreover, this review explains the basic assumptions, advantages, computational cost, etc., for each of the techniques. Bulusu [11] provided a review of DL-based anomalous-instance detection methods, focusing on unintentional and intentional anomalies, specifically in the context of DNNs.…”
Section: Other Surveys
confidence: 99%