2020
DOI: 10.48550/arxiv.2010.05903
Preprint

PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation

Abstract: Anomaly detection methods require high-quality features. One way of obtaining strong features is to adapt pre-trained features to anomaly detection on the target distribution. Unfortunately, simple adaptation methods often result in catastrophic collapse (feature deterioration) and reduce performance. DeepSVDD combats collapse by removing biases from architectures, but this limits the adaptation performance gain. In this work, we propose two methods for combating collapse: i) a variant of early stopping that d…

Cited by 6 publications (38 citation statements) | References 16 publications
“…Analogous pre-training for OCC has been proposed by [3], where they jointly train anomaly detection with the original task, which achieves only limited adaptation success. PANDA [1] proposed techniques based on early stopping and EWC [22], a continual learning method, to mitigate catastrophic collapse. Although PANDA achieved state-of-the-art performance on most datasets, it has yet to solve the problem of catastrophic collapse.…”
Section: Related Work
confidence: 99%
“…The feature extractor φ is initialized with some pre-trained feature extractor φ₀ (an ImageNet-pretrained ResNet was shown by [1] to be very effective). The center can be set to the mean of the pre-trained feature representations of the training set:…”
Section: Center Loss
confidence: 99%
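The center-loss setup described in the excerpt above can be sketched as follows. This is a minimal illustration only: the feature arrays stand in for the outputs of a pre-trained extractor φ₀ (the excerpt uses an ImageNet-pretrained ResNet), and all names are hypothetical.

```python
import numpy as np

def init_center(features: np.ndarray) -> np.ndarray:
    """Center c = mean of the pre-trained feature representations
    of the (normal-only) training set. Shape: (N, D) -> (D,)."""
    return features.mean(axis=0)

def center_loss(features: np.ndarray, c: np.ndarray) -> float:
    """DeepSVDD-style objective: mean squared L2 distance of each
    feature vector to the fixed center c."""
    return float(np.mean(np.sum((features - c) ** 2, axis=1)))

# Toy example: four "training" feature vectors in a 3-dim feature space.
feats = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 1.0, 1.0]])
c = init_center(feats)        # -> [0.5, 0.5, 0.5]
loss = center_loss(feats, c)  # -> 0.75
```

During adaptation the extractor is fine-tuned to pull normal features toward c; the collapse the excerpts discuss occurs when all features (anomalous ones included) converge to c, which is what early stopping and EWC are meant to prevent.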
“…Leveraging Pre-trained CNNs: The sample scale of anomaly detection datasets is usually much smaller than that of the ImageNet dataset [17], so it is inadequate to learn a good representation from scratch. Many methods have been proposed for anomaly detection [18,19,20,21] or segmentation [2,22,23,24,25] using deep representations pre-trained on ImageNet [17]. These methods compare patch features at the same position in the target and normal images to obtain a pixel-level anomaly map.…”
Section: Introduction
confidence: 99%
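The patch-comparison scheme described above can be sketched as follows. This is an illustrative assumption, not any cited method's implementation: the feature maps here are toy arrays standing in for pre-trained CNN features of the target and normal images.

```python
import numpy as np

def anomaly_map(target_feats: np.ndarray, normal_feats: np.ndarray) -> np.ndarray:
    """Pixel-level anomaly map: L2 distance between patch features at the
    same spatial position of the target and normal images.
    Both inputs have shape (H, W, C); the result has shape (H, W)."""
    return np.linalg.norm(target_feats - normal_feats, axis=-1)

# Toy 2x2 feature maps with 3 channels each.
normal = np.zeros((2, 2, 3))
target = np.zeros((2, 2, 3))
target[1, 1] = [3.0, 4.0, 0.0]   # one "anomalous" patch
amap = anomaly_map(target, normal)
# amap[1, 1] == 5.0; all other positions are 0.0
```

Large values in the map mark positions where the target's features deviate from the normal reference, which is then thresholded or upsampled to image resolution for segmentation.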