2022
DOI: 10.1016/j.media.2022.102475
Unsupervised brain imaging 3D anomaly detection and segmentation with transformers

Cited by 74 publications (34 citation statements)
References 28 publications
“…However, the follow-up papers on these approaches showed that the performance translates to other medical datasets as well [40], [53], [54]. Here, in contrast to the two purely self-supervised proxy task methods, the two other top performing methods use Autoencoder-based methods, which are another main direction in anomaly detection [19], [21], [23], and follow-up and subsequent work has also extended the methods to other datasets with great success [46], [55].…”
Section: Discussion
confidence: 99%
“…Other approaches propose restoration methods (Chen et al, 2020), uncertainty estimation (Sato et al, 2019), adversarial autoencoders (Chen and Konukoglu, 2018) or the use of encoder activation maps (Silva-Rodríguez et al, 2022). Also, vector-quantized VAEs have been proposed (Pinaya et al, 2022b). As an alternative to AE-based architectures, generative adversarial networks (GANs) have been applied to the problem of UAD (Schlegl et al, 2019).…”
Section: Recent Work
confidence: 99%
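The autoencoder-based approaches quoted above all share one scoring principle: train a model to reconstruct only "normal" data, then flag inputs with high reconstruction error as anomalous. A minimal sketch of that principle, using a linear (PCA-based) stand-in for the learned encoder/decoder — the function names and the latent size `k` are illustrative, not the cited papers' architectures:

```python
import numpy as np

def fit_linear_ae(normal, k=2):
    # PCA as a linear-autoencoder stand-in: the encoder projects onto the
    # top-k principal directions of the normal training data, the decoder
    # is the transpose of the same weights.
    mu = normal.mean(axis=0)
    _, _, vt = np.linalg.svd(normal - mu, full_matrices=False)
    w = vt[:k].T                      # (d, k) shared encode/decode weights
    return mu, w

def anomaly_score(x, mu, w):
    # Score = squared reconstruction error; it is high for inputs that the
    # model of "normal" data cannot reproduce.
    z = (x - mu) @ w                  # encode to the latent space
    recon = z @ w.T + mu              # decode back to input space
    return ((x - recon) ** 2).sum(axis=-1)
```

In the deep-learning versions cited (VAEs, adversarial autoencoders, VQ-VAEs), the linear projection is replaced by a learned nonlinear network, but the anomaly score is still derived from how poorly the input is reconstructed (or restored).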
“…A Transformer is a DL model that adopts the mechanism of self-attention [28], differently weighting the significance of each part of the input data. Transformers are also used for AD in many different fields, such as aerial videos from UAVs [29], system logs [30] and brain-scan images [31]. The self-attention mechanism of Transformers is very useful for AD, making it easier for DL models to recognize irregular activities in the input.…”
Section: Introduction
confidence: 99%
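The self-attention weighting described in that statement — each position of the input attending to every other position with a learned importance — can be sketched as scaled dot-product attention. For brevity this sketch uses identity Q/K/V projections (Q = K = V = x); a real Transformer learns separate projection matrices W_q, W_k, W_v:

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax: shift by the row max before exponentiating.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (seq_len, d_model). Identity projections are a simplification;
    # learned W_q, W_k, W_v would be applied to x here.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)        # pairwise similarity, scaled by sqrt(d)
    weights = softmax(scores, axis=-1)   # each row is a distribution over positions
    return weights @ x, weights          # weighted mix of the input positions
```

The attention weights make explicit how much each part of the input contributes to each output position, which is what lets such models emphasize irregular regions of a scan or log sequence.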