2021
DOI: 10.48550/arxiv.2104.13897
Preprint

Inpainting Transformer for Anomaly Detection

Abstract: Anomaly detection in computer vision is the task of identifying images which deviate from a set of normal images. A common approach is to train deep convolutional autoencoders to inpaint covered parts of an image and compare the output with the original image. By training on anomaly-free samples only, the model is assumed not to be able to reconstruct anomalous regions properly. For anomaly detection by inpainting, we suggest it is beneficial to incorporate information from potentially distant regions. In…
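
As a rough illustration of the inpainting-and-compare approach described in the abstract, the sketch below covers one patch at a time, asks a trained inpainting model to fill it in, and accumulates the per-pixel reconstruction error into an anomaly map; an image-level score is then taken as the maximum of a locally averaged error map. This is a minimal sketch, not the paper's exact procedure: `inpaint_model`, the zero-masking, the patch size, and the smoothing window are assumptions, and a single image of shape (1, C, H, W) is assumed.

```python
import torch
import torch.nn.functional as F

def anomaly_map_by_inpainting(image, inpaint_model, patch=16):
    """Cover each patch in turn, let the model fill it in, and record the
    per-pixel reconstruction error inside the covered region.
    Assumes image has shape (1, C, H, W) with H and W divisible by `patch`."""
    _, _, h, w = image.shape
    error_map = torch.zeros(1, 1, h, w)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            masked = image.clone()
            masked[:, :, y:y + patch, x:x + patch] = 0.0       # cover the patch
            with torch.no_grad():
                recon = inpaint_model(masked)                  # model reconstructs the hole
            diff = (recon - image).abs().mean(dim=1, keepdim=True)
            error_map[:, :, y:y + patch, x:x + patch] = diff[:, :, y:y + patch, x:x + patch]
    return error_map                                           # high error suggests an anomaly

def anomaly_score(error_map, k=9):
    """Image-level score: maximum of a locally averaged error map."""
    smoothed = F.avg_pool2d(error_map, k, stride=1, padding=k // 2)
    return smoothed.max().item()
```

Since the model is trained on anomaly-free images only, it tends to inpaint normal texture even where the original image is defective, so the reconstruction error concentrates on anomalous regions.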

Cited by 10 publications (18 citation statements)
References 32 publications
“…RIAD [90] proposes to mitigate this effect by casting reconstruction as an image restoration problem, inputting only a partial image in conjunction with a newly proposed image similarity metric. ITAD [91] transforms the anomaly detection task into a patch sequence inpainting problem based on self-attention. Meanwhile, to compensate for the difficulty this type of method has in covering larger anomalous regions, a transformer network is proposed to reconstruct only the covered patches, and local and global embedding methods are designed for different cases.…”
Section: Reconstruction Based Methods
Mentioning confidence: 99%
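
The patch-sequence inpainting idea summarized in the statement above can be sketched as a small transformer that predicts a covered patch from its neighbours in a local window. This is only an illustrative sketch under assumptions, not the paper's exact architecture: the layer sizes, the learned mask token, and the flattened-patch input format are choices made here for the sake of a runnable example.

```python
import torch
import torch.nn as nn

class PatchInpaintingTransformer(nn.Module):
    def __init__(self, patch=16, channels=3, dim=256, depth=4, heads=8, window=7):
        super().__init__()
        patch_dim = channels * patch * patch
        self.embed = nn.Linear(patch_dim, dim)                 # flattened patch -> token
        self.pos = nn.Parameter(torch.zeros(window * window, dim))
        self.mask_token = nn.Parameter(torch.zeros(dim))       # stand-in for the covered patch
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, patch_dim)                  # token -> reconstructed patch

    def forward(self, patches, masked_idx):
        # patches: (B, window*window, channels*patch*patch), a flattened local window
        tokens = self.embed(patches) + self.pos                # add learned positions
        tokens[:, masked_idx] = self.mask_token                # hide the patch to be inpainted
        tokens = self.encoder(tokens)                          # self-attention over the window
        return self.head(tokens[:, masked_idx])                # predict only the covered patch

# usage: reconstruct the centre patch of a 7x7 window of 16x16 RGB patches
model = PatchInpaintingTransformer()
window_patches = torch.randn(2, 49, 3 * 16 * 16)
recon_centre = model(window_patches, masked_idx=24)            # shape (2, 768)
```

During training on anomaly-free data, the model would minimize the reconstruction error of the covered patch; at test time that error serves as the local anomaly score, and self-attention lets the prediction draw on the whole window rather than only adjacent pixels.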
“…Method and AUC: skipGAN [21], 55.1; Puzzle AE [61], 55.4; DifferNet [39], 84.9; InTra [62], 70.1; CutPaste [14], 60.2; Draem [19], 85.9; Ours, 91.4.…”
Section: Methods
Mentioning confidence: 99%
“…In [19,1], the gradient of a classifier is used to obtain anomaly maps. Recently, transformer networks [21] were also successfully applied to brain anomaly detection [20]. In [15], a new thresholding method is proposed for anomaly segmentation on the BRATS dataset.…”
Section: Related Work
Mentioning confidence: 99%
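
The gradient-of-a-classifier idea mentioned in the statement above can be sketched very simply: backpropagate a classifier output to the input and use the gradient magnitude as a per-pixel map. The details (which score is backpropagated, any smoothing or post-processing) differ between the cited works; the function below is only the basic idea, with the "normal"-class logit as an assumed choice of target.

```python
import torch

def gradient_anomaly_map(classifier, image, target_logit=0):
    """Backpropagate a classifier output to the input; the channel-averaged
    gradient magnitude serves as a rough per-pixel anomaly/saliency map."""
    image = image.clone().requires_grad_(True)
    score = classifier(image)[:, target_logit].sum()    # assumed: logit of the "normal" class
    score.backward()
    return image.grad.abs().mean(dim=1, keepdim=True)   # gradient magnitude per pixel
```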