2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00954
CutPaste: Self-Supervised Learning for Anomaly Detection and Localization

Cited by 501 publications (345 citation statements)
References 18 publications

“…Only the approach described in [14] analyzes the advantages of transferring features learned by image inpainting to other tasks, and its authors achieve results competitive with models that use supervised pretraining. The authors of [28] show that self-supervised learning can be used for anomaly detection by learning deep representations and then building a generative one-class classifier on those representations. Another approach, described in [29], proposes a deraining method based on an unsupervised generative adversarial network that resolves issues of previous approaches by introducing self-supervised constraints from the intrinsic statistics of unpaired rainy and clean images.…”
Section: Related Work
confidence: 99%
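The pipeline this excerpt attributes to [28] (learn representations with a self-supervised task, then fit a generative one-class classifier on them) can be sketched as follows. This is a minimal illustration, not the authors' code: the encoder is assumed to be given and frozen, and a single Gaussian scored by Mahalanobis distance stands in for the generative classifier.

import numpy as np

def fit_gaussian(features):
    # features: (N, D) representations of normal training images,
    # produced by a frozen self-supervised encoder (assumed available).
    mean = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mean, np.linalg.inv(cov)

def anomaly_score(feature, mean, cov_inv):
    # Mahalanobis distance of one test representation to the normal data;
    # larger values indicate a more anomalous sample.
    d = feature - mean
    return float(d @ cov_inv @ d)

# Hypothetical usage, assuming an `encode` function that maps an image to a D-dim vector:
# train_feats = np.stack([encode(img) for img in normal_images])
# mean, cov_inv = fit_gaussian(train_feats)
# score = anomaly_score(encode(test_image), mean, cov_inv)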
“…We experiment with three image datasets: CIFAR-10, Fashion-MNIST, and MVTec (Bergmann et al., 2019). These have been used in virtually all deep anomaly detection papers published at top-tier venues (Ruff et al., 2018; Golan & El-Yaniv, 2018; Hendrycks et al., 2019; Bergman & Hoshen, 2020; Li et al., 2021), and we adopt these papers' experimental protocol here, as detailed below.…”
Section: Experiments On Image Data
confidence: 99%
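The protocol itself is elided in this excerpt ("as detailed below"). For orientation only, here is a minimal sketch of the one-vs-rest split these papers commonly use on CIFAR-10: train on a single class treated as normal, test on all classes with binary anomaly labels. The loader usage and the default class index are illustrative assumptions, not code from any of the cited works.

import numpy as np
from torchvision.datasets import CIFAR10

def one_vs_rest_split(root, normal_class=0):
    # Keep only the chosen "normal" class for training; at test time every
    # image is kept and labelled 1 if it belongs to any other class (anomaly).
    train = CIFAR10(root, train=True, download=True)
    test = CIFAR10(root, train=False, download=True)
    train_idx = [i for i, y in enumerate(train.targets) if y == normal_class]
    test_labels = np.array([int(y != normal_class) for y in test.targets])
    return train_idx, test_labels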
“…[25] Crafting synthetic outliers can be considered self-supervised learning, since no manual labelling is required. In CutPaste [7], the authors propose to transform any normal training image into an anomalous one by locally incorporating a transformed patch coming from one of the other training images. This allows learning finer features, which are more specific to the task at hand than outliers coming from a different dataset.…”
Section: "Unsupervised" Learning For Anomaly Detection
confidence: 99%
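A minimal sketch of a CutPaste-style augmentation as described in this excerpt, not the authors' implementation: the patch-size range, the 180-degree rotation standing in for richer patch transforms, and the assumption that source and target images share the same size are arbitrary choices made here for illustration.

import random
from PIL import Image

def cutpaste(target: Image.Image, source: Image.Image) -> Image.Image:
    # Cut a rectangular patch from `source` (another training image, or the
    # image itself in some variants), lightly transform it, and paste it at a
    # random location of `target` to create a synthetic anomaly.
    w, h = target.size  # assumes source.size == target.size and reasonably large images
    pw = random.randint(w // 8, w // 4)
    ph = random.randint(h // 8, h // 4)
    sx, sy = random.randint(0, w - pw), random.randint(0, h - ph)
    patch = source.crop((sx, sy, sx + pw, sy + ph))
    patch = patch.rotate(180)  # simple stand-in for richer patch transforms
    out = target.copy()
    tx, ty = random.randint(0, w - pw), random.randint(0, h - ph)
    out.paste(patch, (tx, ty))
    return out  # anomalous version of `target`, usable as a synthetic positive

Training a classifier to distinguish original images from such pasted ones yields representations that are sensitive to small, local irregularities, which is the point the excerpt makes about "finer features".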
“…We first assess in section 4.3 the impact of defective training parts on the performance of these detectors, and show that the latter are already quite robust to pollution compared to other methods such as CutPaste [7]. 2. We then propose in section 4.4 a simple yet efficient refinement strategy to remove polluted samples from the training data.…”
Section: Introduction
confidence: 99%