Multiresolution Knowledge Distillation for Anomaly Detection
Preprint, 2020
DOI: 10.48550/arxiv.2011.11108

Cited by 4 publications (3 citation statements); References: 0 publications
“…Note that the idea of feature reconstruction was first proposed without the need for a frozen pre-trained network, in the "Uninformed Students" paper, 41 where an ensemble of CNNs is trained to mimic a teacher network. The idea has since been extended by using several feature maps from a single network, 42,43 as well as by blurring the input. 44 Methods that build on frozen pre-trained networks often perform very well because they cannot forget the richness of the pre-trained feature representations, a loss that often occurs, through catastrophic forgetting, when such networks are fine-tuned on different data.…”
Section: Modelling Normality With Deep Pre-trained Features (mentioning)
confidence: 99%
“…It was introduced in the "Uninformed Students" paper, 30 where an ensemble of CNNs is trained to mimic a teacher network. The idea has since been extended by using several feature maps from a single network, 31,32 as well as by blurring the input. 33 Methods that build on frozen pre-trained networks often perform very well because they cannot forget the richness of the pre-trained feature representations, a loss that often occurs, through catastrophic forgetting, when such networks are fine-tuned on different data.…”
Section: Related Work (mentioning)
confidence: 99%
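To make the teacher-student idea in these statements concrete, the following is a minimal PyTorch sketch, not the exact method of the cited paper or of any of the works it references: a frozen ImageNet-pretrained ResNet-18 acts as teacher, a small student network is trained on anomaly-free images to regress several of the teacher's feature maps, and the per-position discrepancy between the two serves as the anomaly map at test time. The `Teacher` and `Student` architectures and the `normal_loader` name are illustrative assumptions, and the ResNet weights argument assumes torchvision >= 0.13.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class Teacher(nn.Module):
    """Frozen ImageNet-pretrained ResNet-18; exposes two intermediate feature maps."""
    def __init__(self):
        super().__init__()
        net = torchvision.models.resnet18(weights="IMAGENET1K_V1")
        self.stem = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
        self.layer1, self.layer2 = net.layer1, net.layer2
        for p in self.parameters():          # frozen: the teacher is never updated
            p.requires_grad_(False)

    def forward(self, x):
        f1 = self.layer1(self.stem(x))       # (B, 64, H/4, W/4)
        f2 = self.layer2(f1)                 # (B, 128, H/8, W/8)
        return [f1, f2]

class Student(nn.Module):
    """Small trainable CNN that regresses the teacher's feature maps."""
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1),
        )
        self.block2 = nn.Sequential(
            nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1),
        )

    def forward(self, x):
        g1 = self.block1(x)                  # matches teacher f1 in shape
        g2 = self.block2(g1)                 # matches teacher f2 in shape
        return [g1, g2]

def anomaly_map(teacher, student, x):
    """Per-pixel anomaly score: large wherever the student fails to mimic the teacher."""
    with torch.no_grad():
        maps = []
        for t, s in zip(teacher(x), student(x)):
            d = ((t - s) ** 2).mean(dim=1, keepdim=True)     # (B, 1, h, w)
            maps.append(F.interpolate(d, size=x.shape[-2:],
                                      mode="bilinear", align_corners=False))
    return sum(maps).squeeze(1)              # (B, H, W)

# Training on anomaly-free images only. `normal_loader` is a hypothetical
# DataLoader yielding (B, 3, H, W) tensors of normal images.
teacher, student = Teacher().eval(), Student()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for x in normal_loader:
    with torch.no_grad():
        targets = teacher(x)
    loss = sum(F.mse_loss(s, t) for s, t in zip(student(x), targets))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the student only ever sees normal data, it learns to reproduce the teacher's features on normal structures and fails on anomalous ones, which is why the feature discrepancy localizes defects; many published variants additionally normalize the feature maps before comparing them.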
“…With the development of deep learning technology, the powerful feature extraction capability of deep networks has greatly enriched the technical methods of unsupervised anomaly detection. Anomaly detection methods that train only on positive (normal) samples can be divided into two main categories [23]: image-reconstruction-based methods [24, 25, 26] and discriminative-embedding-based methods [27, 28]. Image-reconstruction-based methods learn the structural information of positive samples, reconstruct the input image, and detect anomalies by comparing the image under test with its anomaly-free reconstruction.…”
Section: Introduction (mentioning)
confidence: 99%
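As an illustration of the image-reconstruction-based category described in this statement, here is a minimal PyTorch sketch under the same assumptions as above: a convolutional autoencoder is fitted to normal images only, and at test time the per-pixel reconstruction error acts as the anomaly map. The layer sizes and the `ConvAE` name are illustrative, not taken from any cited method.

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    """Convolutional autoencoder trained to reconstruct anomaly-free images."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # H/2
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # H/4
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # H/8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_map(model, x):
    """Per-pixel squared reconstruction error: large on structures the model
    never saw during training, i.e. on anomalies."""
    with torch.no_grad():
        recon = model(x)
    return ((x - recon) ** 2).mean(dim=1)    # (B, H, W)

# Training minimizes plain MSE on normal images, e.g.:
#   loss = nn.functional.mse_loss(model(x), x)
# so anomalous regions, absent from training, reconstruct poorly at test time.
```

The design choice behind this category is exactly the one the quoted statement names: the network learns only normal structure, so the reconstruction difference concentrates on abnormal regions.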