2020
DOI: 10.1109/access.2020.3043473
Unsupervised Anomaly Detection Using Style Distillation

Cited by 20 publications (8 citation statements)
References 12 publications
“…(1) Autoencoder

Model       Loss Function    Pre-trained  Highlights
[68]        L2, SSIM         –            First takes SSIM as a loss to reconstruct images and detect anomalies.
[69]        L2, SSIM         –            Proposes two AEs and reduces style change during image reconstruction.
UTAD [70]   L1, Adversarial  VGG          Uses two-stage reconstruction to generate high-fidelity images and avoid reconstruction errors.…”
Section: Methods
confidence: 99%
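The SSIM reconstruction loss referenced in the quoted table compares local luminance, contrast, and structure rather than raw pixel differences. A minimal numpy-only sketch of a per-pixel SSIM anomaly map is given below; the function names, window size, and constants are illustrative assumptions, not the cited authors' implementation:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def ssim_map(x, y, C1=0.01**2, C2=0.03**2, win=7):
    """Per-pixel SSIM from local means, variances, and covariance
    computed over a sliding window (images assumed in [0, 1])."""
    xw = sliding_window_view(x, (win, win))
    yw = sliding_window_view(y, (win, win))
    mx = xw.mean(axis=(-2, -1)); my = yw.mean(axis=(-2, -1))
    vx = xw.var(axis=(-2, -1));  vy = yw.var(axis=(-2, -1))
    cov = (xw * yw).mean(axis=(-2, -1)) - mx * my
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))

def anomaly_score(original, reconstruction):
    """High where SSIM is low, i.e. where the AE failed to reconstruct."""
    return 1.0 - ssim_map(original, reconstruction)
```

A perfect reconstruction yields SSIM 1 everywhere, so the anomaly map is zero; regions the autoencoder cannot reproduce stand out as high scores.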
“…There are often style differences between the reconstructed image and the original image, resulting in over-detection. Chung et al. [69] present an Outlier-Exposed Style Distillation Network (OE-SDN) that preserves the style translation of the AE while suppressing its content translation, in order to avoid over-detection. For the anomaly prediction, Chung et al. replace the difference between the original image and the AE reconstruction with the difference between the OE-SDN reconstruction and the AE reconstruction.…”
Section: Autoencoder
confidence: 99%
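The scoring change described in the quote above can be sketched as follows: instead of comparing the input with the AE output (which penalizes harmless style shifts), the anomaly score compares two reconstructions whose shared style translation cancels out. This is a hedged, numpy-only sketch assuming precomputed reconstructions; the function names are hypothetical, not the paper's API:

```python
import numpy as np

def conventional_score(x, ae_recon):
    # Standard AE anomaly score: reconstruction error against the input.
    # Style differences between x and the reconstruction inflate this score.
    return np.abs(x - ae_recon).mean()

def oe_sdn_score(sdn_recon, ae_recon):
    # OE-SDN-style score: the style-preserving mimic network and the AE
    # agree on normal content, so their disagreement highlights anomalies
    # while the shared style translation cancels out.
    return np.abs(sdn_recon - ae_recon).mean()
```

On a normal image whose reconstruction differs from the input only in style, the second score stays near zero while the first remains large, which is the over-detection the quoted passage describes.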
“…To improve the quality of reconstructed images, SSIM-AE [7], [8] trains an autoencoder with the structural similarity loss, which compares luminance, contrast, and structural information between local image regions. MemAE [18] and OE-SDN [19] add a complementary mechanism, a memory module or knowledge distillation respectively, to refine the reconstructed images. These two methods mainly target classification-style anomaly detection benchmarks such as MNIST and CIFAR-10.…”
Section: Related Work, A. Reconstruction-Based Methods
confidence: 99%
“…To avoid this drawback, Gong et al. [18] used an AE with a memory module to correct the reconstruction error on abnormal samples. On the other hand, Chung [19] addressed the problem by restoring the image using knowledge distillation and outlier-exposure regularization. Recently, embedding similarity-based methods [13], [14], [15], [1] have shown good performance by using feature vectors of a pre-trained network to detect abnormality.…”
confidence: 99%
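The embedding similarity-based approach mentioned in the quote scores a test sample by its distance to features of normal training data in the embedding space of a frozen pre-trained network. A minimal nearest-neighbor sketch is shown below; the feature extractor itself is assumed to exist upstream, and the function names are illustrative, not from any cited method:

```python
import numpy as np

def fit_feature_bank(normal_features):
    # Store feature vectors extracted from normal training images
    # (e.g. by a frozen pre-trained CNN; the extractor is assumed here).
    return np.asarray(normal_features, dtype=float)

def embedding_score(bank, test_feature):
    # Anomaly score = Euclidean distance to the nearest normal feature.
    # Normal test samples land close to the bank; anomalies land far away.
    d = np.linalg.norm(bank - test_feature, axis=1)
    return d.min()
```

Methods in this family differ mainly in how the bank is built and searched (e.g. per-patch features or Mahalanobis distances), but the nearest-normal-feature scoring idea is the common core.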
“…Anomaly Detection. A wide range of literature on anomaly detection [18]-[26] has appeared in machine learning. In most anomaly detection tasks, it is assumed that only normal data are given during training, and the model predicts at test time whether a sample is normal.…”
Section: Related Work
confidence: 99%