2021
DOI: 10.48550/arxiv.2108.04116
Preprint

Transfer Learning Gaussian Anomaly Detection by Fine-tuning Representations

Oliver Rippel,
Arnav Chavan,
Chucai Lei
et al.

Abstract: Current state-of-the-art Anomaly Detection (AD) methods exploit the powerful representations yielded by large-scale ImageNet training. However, catastrophic forgetting prevents the successful fine-tuning of pre-trained representations on new datasets in the semi/unsupervised setting, and representations are therefore commonly fixed. In our work, we propose a new method to fine-tune learned representations for AD in a transfer learning setting. Based on the linkage between generative and discriminative modeling,…
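The fixed-representation baseline that the abstract builds on can be sketched directly: fit a multivariate Gaussian to features from a frozen pre-trained backbone and score test samples by Mahalanobis distance. Below is a minimal sketch, assuming the (N, D) feature arrays have already been extracted from such a backbone; the random arrays and the shrinkage constant are illustrative stand-ins, not details from the paper.

```python
import numpy as np

def fit_gaussian(features_train):
    """Fit a multivariate Gaussian to normal-class feature vectors."""
    mean = features_train.mean(axis=0)
    cov = np.cov(features_train, rowvar=False)
    # Shrinkage keeps the covariance invertible when D approaches N.
    cov += 1e-3 * np.eye(cov.shape[0])
    return mean, np.linalg.inv(cov)

def mahalanobis_scores(features_test, mean, cov_inv):
    """Anomaly score = squared Mahalanobis distance to the normal Gaussian."""
    diff = features_test - mean
    return np.einsum("nd,de,ne->n", diff, cov_inv, diff)

rng = np.random.default_rng(0)
features_train = rng.normal(size=(500, 64))  # stand-in for backbone features
features_test = rng.normal(size=(10, 64))
mean, cov_inv = fit_gaussian(features_train)
print(mahalanobis_scores(features_test, mean, cov_inv))
```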


Cited by 3 publications (3 citation statements)
References 35 publications (80 reference statements)
“…The paper establishes a model of normality by fitting a multivariate Gaussian to feature representations of a pre-trained network. [39] (Mahalanobis distance scoring, EfficientNet backbone) generates a multivariate Gaussian distribution for the normal class and mitigates the catastrophic forgetting seen in past research. PEDENet [40] (log-likelihood, cross-entropy, and regularization losses) predicts the location of a patch and compares it with the actual location to judge abnormality.…”
Section: Distribution Map
confidence: 99%
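The "mitigates catastrophic forgetting" point in the statement above is often realized by anchoring fine-tuned weights to their pre-trained values. Here is a generic L2-SP-style sketch of that idea, not necessarily the paper's exact objective; `model`, `pretrained_state`, and `task_loss` are hypothetical names.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pre-trained feature extractor.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))

# Snapshot the pre-trained weights once, before fine-tuning begins.
pretrained_state = {n: p.detach().clone() for n, p in model.named_parameters()}

def l2_sp_penalty(model, anchor, strength=1e-3):
    """Sum of squared deviations from the anchored (pre-trained) weights."""
    return strength * sum(
        ((p - anchor[n]) ** 2).sum() for n, p in model.named_parameters()
    )

# During fine-tuning, add the penalty to the AD objective, e.g.:
# loss = task_loss + l2_sp_penalty(model, pretrained_state)
print(l2_sp_penalty(model, pretrained_state))  # zero before any update
```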
“…A different class of methods leverages descriptors from pretrained networks for anomaly detection (Bergmann et al., 2020; Cohen and Hoshen, 2020; Defard et al., 2021; Gudovskiy et al., 2022; Mishra et al., 2020; Reiss et al., 2021; Rippel et al., 2021). The key idea behind these approaches is that anomalous regions produce descriptors that differ from the ones without anomalies.…”
Section: Anomaly Detection in 2D
confidence: 99%
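The key idea in that statement, that anomalous regions yield unusual descriptors, can be illustrated by scoring each test patch descriptor against a bank of descriptors from anomaly-free images, in the spirit of the cited descriptor-based methods. This is a minimal sketch under assumed shapes; the random arrays are placeholders.

```python
import numpy as np

def patch_scores(test_desc, normal_bank):
    """test_desc: (M, D) descriptors from a test image; normal_bank: (N, D)
    descriptors collected from anomaly-free images. Score each test patch by
    its Euclidean distance to the closest normal descriptor."""
    d2 = (
        (test_desc ** 2).sum(1, keepdims=True)  # (M, 1)
        - 2.0 * test_desc @ normal_bank.T       # (M, N)
        + (normal_bank ** 2).sum(1)             # (N,)
    )
    return np.sqrt(np.maximum(d2, 0.0).min(axis=1))

rng = np.random.default_rng(0)
print(patch_scores(rng.normal(size=(4, 8)), rng.normal(size=(100, 8))))
```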
“…We show that k-means clustering is less successful than our method on this task. Similar techniques were adopted by [30] and improved by [31,34]. Adaptation of pre-trained representations using contrastive learning was recently suggested by [32].…”
Section: Introduction
confidence: 99%
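For reference, the k-means baseline mentioned in the statement above amounts to scoring a sample by its distance to the nearest cluster center fitted on normal features. The cited work's exact setup is not given here; the cluster count and seed below are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_scores(train_feats, test_feats, k=10):
    """Score = distance to the nearest of k centers fitted on normal data."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(train_feats)
    return km.transform(test_feats).min(axis=1)  # transform gives center distances

rng = np.random.default_rng(0)
print(kmeans_scores(rng.normal(size=(200, 16)), rng.normal(size=(5, 16))))
```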