2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00283
PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation

Cited by 143 publications (89 citation statements). References 9 publications.
“…A different class of methods leverages descriptors from pretrained networks for anomaly detection (Bergmann et al., 2020; Cohen and Hoshen, 2020; Defard et al., 2021; Gudovskiy et al., 2022; Mishra et al., 2020; Reiss et al., 2021; Rippel et al., 2021). The key idea behind these approaches is that anomalous regions produce descriptors that differ from those without anomalies.…”
Section: Anomaly Detection in 2D
confidence: 99%
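The quoted idea, that descriptors from a pretrained network differ for anomalous regions, can be sketched as a nearest-neighbor score against a memorized bank of anomaly-free descriptors. This is a minimal sketch with synthetic features standing in for real network activations; none of the names below come from the cited works:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for pretrained-network descriptors: in practice these
# would come from an intermediate layer of, e.g., a ResNet. Normal descriptors
# cluster together; the anomalous one is shifted away from them.
normal_bank = rng.normal(0.0, 1.0, size=(500, 64))   # memorized anomaly-free features
test_normal = rng.normal(0.0, 1.0, size=(1, 64))
test_anomaly = rng.normal(4.0, 1.0, size=(1, 64))    # descriptors differ for anomalies

def knn_score(query, bank, k=5):
    """Anomaly score = mean distance to the k nearest memorized normal descriptors."""
    dists = np.linalg.norm(bank - query, axis=1)
    return np.sort(dists)[:k].mean()

print(knn_score(test_normal, normal_bank) < knn_score(test_anomaly, normal_bank))  # True
```

A higher score means the query descriptor is far from everything seen during (anomaly-free) training, which is exactly the signal these methods threshold.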
“…Recently, networks pre-trained on large datasets have proven capable of extracting discriminative features for anomaly detection [7, 8, 22, 24, 27, 28]. With a pre-trained model, memorizing its anomaly-free features helps identify anomalous samples [7, 27]. The studies in [8, 28] show that using the Mahalanobis distance to measure the similarity between anomalous and anomaly-free features leads to accurate anomaly detection.…”
Section: Related Work
confidence: 99%
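The Mahalanobis-distance scoring mentioned in the quote can be sketched by fitting a Gaussian to anomaly-free features and scoring test features by their distance to it. The features here are synthetic stand-ins, not outputs of an actual pretrained network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fit a single Gaussian to anomaly-free features (Mahalanobis-style methods
# often fit one per spatial location; one suffices to illustrate the scoring).
train_feats = rng.normal(0.0, 1.0, size=(1000, 32))
mu = train_feats.mean(axis=0)
cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(32)  # regularized
cov_inv = np.linalg.inv(cov)

def mahalanobis(x):
    """Distance of a feature vector from the anomaly-free distribution."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

normal_score = mahalanobis(rng.normal(0.0, 1.0, size=32))
anomaly_score = mahalanobis(rng.normal(3.0, 1.0, size=32))
print(normal_score < anomaly_score)  # True
```

Unlike plain Euclidean distance, the Mahalanobis distance accounts for correlations between feature dimensions via the inverse covariance.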
“…We use the official code released by the respective authors. NTL is built upon the final pooling layer of a pre-trained ResNet152 for CIFAR-10 and F-MNIST (as suggested in Defard et al. (2021)), and upon the third residual block of a pre-trained WideResNet50 for MVTEC (as suggested in Reiss et al. (2021)). Further implementation details of NTL are in Appendix C. We adopt the two proposed LOE methods (Section 3) and the two baseline methods "Blind" and "Refine" (Section 2) for both backbone models.…”
Section: Experiments on Image Data
confidence: 99%
“…NTL on image data: NTL is built upon the final pooling layer of a pre-trained ResNet152 on CIFAR-10 and F-MNIST (as suggested in Defard et al. (2021)), and upon the third residual block of a pre-trained WideResNet50 on MVTEC (as suggested in Reiss et al. (2021)). On all image datasets, the pre-trained feature extractors are frozen during training.…”
Section: Implementation Details
confidence: 99%
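A "frozen during training" setup like the one quoted can be sketched as follows: a fixed projection stands in for the pretrained extractor, and only a small head on top receives gradient updates. This is an illustrative toy, not the NTL implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed random projection standing in for frozen pretrained weights.
W_extractor = rng.normal(size=(16, 8))
W_before = W_extractor.copy()

W_head = np.zeros(8)              # the only trainable parameters
x = rng.normal(size=(64, 16))     # toy inputs
y = rng.normal(size=64)           # toy targets

feats = x @ W_extractor           # forward pass through the frozen extractor

# Gradient steps update the head only; the extractor receives no updates.
for _ in range(200):
    grad = feats.T @ (feats @ W_head - y) / len(y)
    W_head -= 0.01 * grad

assert np.array_equal(W_extractor, W_before)   # extractor stayed frozen
assert not np.allclose(W_head, 0.0)            # head was actually trained
```

In a deep-learning framework the same effect is typically achieved by disabling gradients on the extractor's parameters and passing only the head's parameters to the optimizer.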