2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01466

Multiresolution Knowledge Distillation for Anomaly Detection

Cited by 268 publications (190 citation statements)
References 21 publications
“…Given respective nominal representations and novel test representations, anomaly detection can then be a simple matter of reconstruction errors [44], distances to k nearest neighbours [18], or fine-tuning of a one-class classification model such as OC-SVMs [46] or SVDD [50,56] on top of these features. For the majority of these approaches, anomaly localization comes naturally from pixel-wise reconstruction errors; saliency-based approaches such as GradCAM [47] or XRAI [28] can be used for anomaly segmentation [52,42,45] as well.…”
Section: Related Work
confidence: 99%
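
The kNN-distance scoring this excerpt mentions can be illustrated with a minimal sketch; the random feature arrays and the `knn_anomaly_score` helper are illustrative assumptions standing in for features from a frozen pretrained CNN, not the cited papers' implementations.

```python
# Hypothetical sketch: score test samples by mean distance to their k nearest
# nominal (anomaly-free) training features; higher score = more anomalous.
import numpy as np

def knn_anomaly_score(train_feats: np.ndarray, test_feats: np.ndarray, k: int = 5) -> np.ndarray:
    # Pairwise Euclidean distances, shape (n_test, n_train)
    dists = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    # Average distance to the k closest nominal samples
    return np.sort(dists, axis=1)[:, :k].mean(axis=1)

rng = np.random.default_rng(0)
nominal = rng.normal(size=(200, 128))  # stand-in for pretrained CNN features
test = rng.normal(size=(10, 128))
print(knn_anomaly_score(nominal, test, k=5))
```

A threshold calibrated on held-out nominal scores then separates anomalous from normal samples.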
“…To better account for the distribution shift between natural pretraining data and industrial image data, subsequent adaptation can be done, e.g., via student-teacher knowledge distillation [24] as in [6,45], or via normalizing flows [17,30] trained on top of pretrained network features [42].…”
Section: Related Work
confidence: 99%
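
A rough sketch of the student-teacher distillation idea described here: the student is trained to match a frozen teacher on anomaly-free data only, so the teacher-student discrepancy grows on anomalies. The tiny networks, random "teacher", and training loop below are placeholder assumptions, not the setup of the cited works.

```python
# Hypothetical sketch of student-teacher feature distillation for anomaly detection.
import torch
import torch.nn as nn

# The teacher stands in for a pretrained, frozen feature extractor.
teacher = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 64, 3, padding=1))
student = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 64, 3, padding=1))
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
nominal_batch = torch.randn(8, 3, 64, 64)  # placeholder anomaly-free images

for _ in range(10):  # distill on nominal data only
    loss = (teacher(nominal_batch) - student(nominal_batch)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time, per-pixel discrepancy doubles as an anomaly localization map.
with torch.no_grad():
    x = torch.randn(1, 3, 64, 64)
    anomaly_map = (teacher(x) - student(x)).pow(2).mean(dim=1)  # (1, 64, 64)
```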
“…The uniformly generated semi-orthogonal matrix [8] avoids the singular case, retaining performance while cubically reducing the computational cost of the batch inverse. We achieve new state-of-the-art results on the benchmark datasets MVTec AD [9], KolektorSDD [10], KolektorSDD2 [11], and mSTC [12], outperforming competitive reconstruction error-based [1][2][3] and knowledge distillation-based [4,5] methods by substantial margins. Moreover, we show that our method, decoupled from the pre-trained CNNs, can exploit advances in discriminative models without a fine-tuning procedure.…”
Section: Introduction
confidence: 93%
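
How a random semi-orthogonal projection can cheapen a Mahalanobis-style score, as this excerpt suggests, can be sketched as follows; the dimensions, the QR-based construction, and the regularization constant are illustrative assumptions rather than the cited method's exact procedure.

```python
# Hypothetical sketch: project features through a random semi-orthogonal
# matrix (W.T @ W = I_d) before fitting a Gaussian and scoring by
# Mahalanobis distance; the reduced dimension d keeps the inverse cheap.
import numpy as np

rng = np.random.default_rng(0)
D, d = 256, 64                                   # full and reduced feature dims
W, _ = np.linalg.qr(rng.normal(size=(D, d)))     # semi-orthogonal by construction

feats = rng.normal(size=(500, D))                # stand-in for nominal CNN features
z = feats @ W                                    # low-rank embedding, (500, d)
mu = z.mean(axis=0)
cov = np.cov(z, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(d))  # small ridge keeps it non-singular

def mahalanobis_score(x: np.ndarray) -> float:
    v = x @ W - mu
    return float(v @ cov_inv @ v)                # higher = more anomalous

print(mahalanobis_score(rng.normal(size=D)))
```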
“…The common idea is to train generative networks that learn low-dimensional features by minimizing reconstruction error, and to expect higher errors for anomalies not seen in training than for anomaly-free data. However, networks with sufficient capacity can restore even anomalies, degrading performance, and the perceptual loss function for generative networks [1] or the knowledge distillation loss for teacher-student pairs of networks [4,5] achieves only limited success.…”
Section: Introduction
confidence: 99%
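
The reconstruction-error idea this excerpt summarizes can be sketched with a small convolutional autoencoder trained on nominal images only; the architecture, sizes, and training loop are illustrative assumptions.

```python
# Hypothetical sketch: an autoencoder trained on anomaly-free images; at test
# time the per-pixel reconstruction error serves as the anomaly map.
import torch
import torch.nn as nn

ae = nn.Sequential(
    nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),            # 64 -> 32
    nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),           # 32 -> 16
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),              # 32 -> 64
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
nominal = torch.rand(8, 3, 64, 64)  # placeholder anomaly-free batch

for _ in range(10):  # minimize reconstruction error on nominal data
    loss = (ae(nominal) - nominal).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    x = torch.rand(1, 3, 64, 64)
    error_map = (ae(x) - x).pow(2).mean(dim=1)  # (1, 64, 64); higher = more anomalous
```

As the excerpt notes, a high-capacity autoencoder may reconstruct anomalies as well, which is the failure mode the cited works try to mitigate.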