2020
DOI: 10.48550/arxiv.2010.02310
Preprint

Deep Anomaly Detection by Residual Adaptation

Abstract: Deep anomaly detection is a difficult task since, in high dimensions, it is hard to completely characterize a notion of "differentness" when given only examples of normality. In this paper we propose a novel approach to deep anomaly detection based on augmenting large pretrained networks with residual corrections that adapt them to the task of anomaly detection. Our method gives rise to a highly parameter-efficient learning mechanism, enhances disentanglement of representations in the pretrained model, and o…
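The core idea in the abstract — a large frozen pretrained network plus a small trained residual correction — can be illustrated with a minimal numpy sketch. This is not the paper's actual architecture; the feature map, shapes, and parameter names here are hypothetical stand-ins for the frozen backbone and its residual adaptation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen pretrained feature extractor (stands in for a large network).
W_frozen = rng.standard_normal((16, 8))

def features(x, A=None, b=None):
    """Frozen features, optionally adjusted by a lightweight residual correction.

    h = tanh(x W_frozen); with adaptation: h + h A + b, where only A and b
    are trained -- a tiny fraction of the backbone's parameter count.
    """
    h = np.tanh(x @ W_frozen)
    if A is not None:
        h = h + h @ A + b  # residual correction: small, task-specific adjustment
    return h

x = rng.standard_normal((4, 16))
plain = features(x)

# Zero-initialized residual parameters: the adapted model starts out
# identical to the frozen pretrained model, then drifts only as needed.
A = np.zeros((8, 8))
b = np.zeros(8)
adapted = features(x, A, b)
assert np.allclose(plain, adapted)  # zero residual leaves features unchanged
```

Initializing the residual branch at zero is what makes this parameter-efficient: training only adjusts the correction, never the backbone weights.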

Cited by 3 publications (3 citation statements)
References 34 publications
“…Although such negative samples may not coincide with ground-truth anomalies, such contrasting can be beneficial for learning characteristic representations of normal concepts. Moreover, the combination of Outlier Exposure and expressive representations with the hypersphere classifier has shown exceptional results for Deep AD on images (Ruff et al, 2020a; Deecke et al, 2020).…”
Section: −(1 − Y
confidence: 99%
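The section label's −(1 − y) fragment points at a hypersphere-classifier-style objective with Outlier Exposure. A hedged numpy sketch of such a loss follows; the exact formulation (pseudo-Huber distance, label convention y = 1 for auxiliary outliers) is one common variant, not necessarily the cited papers' precise objective.

```python
import numpy as np

def hsc_loss(z, y):
    """Hypersphere-classifier-style loss with Outlier Exposure (sketch).

    z: feature vectors f(x), shape (n, d); y: 1 for exposed auxiliary
    outliers, 0 for normal samples. Normal samples are pulled toward the
    origin; outliers are pushed away from it.
    Uses the pseudo-Huber distance h(z) = sqrt(||z||^2 + 1) - 1.
    """
    h = np.sqrt(np.sum(z ** 2, axis=1) + 1.0) - 1.0
    eps = 1e-12  # numerical guard for log(0)
    # (1 - y) * h shrinks normal features; -y * log(1 - exp(-h)) penalizes
    # outliers that sit close to the origin.
    return np.mean((1 - y) * h - y * np.log(1.0 - np.exp(-h) + eps))
```

A normal sample mapped to the origin incurs zero loss, while an exposed outlier near the origin incurs a large loss, which is the contrasting effect the quoted passage describes.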
“…They further show that empirical gains can only be guaranteed when the anomalies used for OE differ strongly from the normal class. Nevertheless, gains have been reported by Deecke et al [13], who make use of the HSC objective in conjunction with OE to learn a modulation of frozen features by means of newly introduced residual adaptation layers. Liznerski et al [35] propose to fine-tune intermediate features extracted from a VGG network using their Fully Convolutional Data Descriptor (FCDD) objective together with OE.…”
Section: Learning Anomaly Detection From Scratch
confidence: 99%
“…However, due to catastrophic forgetting, feature representations cannot be easily fine-tuned to the datasets at hand. Furthermore, proposed methods that tackle this problem [40,13,38,35,48] currently do not integrate the strong Gaussian prior that can be induced from ties between deep generative and discriminative modeling [32]. This prior has been confirmed in [42] to enhance AD results in the transfer learning setting in conjunction with fixed ImageNet representations.…”
Section: Introduction
confidence: 99%
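The "Gaussian prior on fixed representations" this passage refers to is typically realized as fitting a Gaussian to the features of normal data and scoring test points by Mahalanobis distance. A minimal numpy sketch of that recipe, with hypothetical helper names and a placeholder for the actual ImageNet features:

```python
import numpy as np

def fit_gaussian(feats):
    """Fit a Gaussian to normal-class features (the 'strong Gaussian prior').

    feats: (n, d) array of fixed pretrained features for normal samples.
    Returns the mean and the (regularized) inverse covariance.
    """
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(x, mu, cov_inv):
    """Mahalanobis distance to the fitted Gaussian: higher = more anomalous."""
    d = x - mu
    return np.sqrt(np.einsum('...i,ij,...j->...', d, cov_inv, d))
```

In the transfer-learning setting described above, `feats` would come from a frozen ImageNet backbone; no network weights are trained at all, which sidesteps catastrophic forgetting entirely.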