2021
DOI: 10.48550/arxiv.2105.08059
Preprint

Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers

Abstract: Supervised deep learning has swiftly become a workhorse for accelerated MRI in recent years, offering state-of-the-art performance in image reconstruction from undersampled acquisitions. Training deep supervised models requires large datasets of undersampled and fully-sampled acquisitions, typically from a matching set of subjects. Given scarce access to large medical datasets, this limitation has sparked interest in unsupervised methods that reduce reliance on fully-sampled ground-truth data. A common framework…

Cited by 6 publications (17 citation statements)
References 54 publications
“…We want to highlight one particular work that uses the Transformer-layer architecture to regularize the challenging problem of MRI image reconstruction from under-sampled measurements [38]. This work is inspired by the strong prior induced by the structure of untrained neural networks [46], [47].…”
Section: Discussion
confidence: 99%
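The untrained-prior line of work cited above ([46], [47]) regularizes reconstructions that must remain consistent with the acquired k-space samples. As a minimal illustration of that forward model only (not the cited method), the NumPy sketch below simulates undersampled Fourier measurements of a toy phantom and forms the zero-filled reconstruction; the phantom, sampling rate, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a smooth 2-D phantom standing in for a fully-sampled MR slice.
n = 64
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
img = np.exp(-4 * (x**2 + y**2))

# Undersampled acquisition: keep a random subset of k-space samples,
# i.e. measurements y = M * F(x) with sampling mask M and 2-D FFT F.
kspace = np.fft.fft2(img)
mask = rng.random((n, n)) < 0.3   # ~30% of k-space retained (assumed rate)
y_meas = mask * kspace

# Zero-filled reconstruction: inverse FFT of the masked k-space.
recon_zf = np.fft.ifft2(y_meas)

# Data consistency: the reconstruction's k-space matches the measurements
# at every sampled location (err is at machine precision here).
k_recon = np.fft.fft2(recon_zf)
err = np.max(np.abs(mask * k_recon - y_meas))
```

Learned or untrained priors enter exactly where the zero-filled estimate falls short: they fill in the unsampled k-space locations while the sampled ones stay pinned to `y_meas`.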
“…Further, unlike deep learning-based approaches, they do not require large annotated medical imaging datasets for training. This reduced reliance on labeled datasets is crucial to the medical research community [34], [35], [36], [37], [38], [39].…”
Section: Hand-crafted Approaches
confidence: 99%
“…Moreover, using transformers has been shown to be more promising in computer vision (Dosovitskiy et al., 2020) for utilizing long-range dependencies than other, traditional CNN-based methods. In parallel, transformers with powerful global relation modeling abilities have become the standard starting point for training on a wide range of downstream medical imaging analysis tasks, such as image segmentation (Cao et al., 2021; Wang et al., 2021b; Valanarasu et al., 2021; Xie et al., 2021b), image synthesis (Kong et al., 2021; Ristea et al., 2021; Dalmaz et al., 2021), and image enhancement (Korkmaz et al., 2021; Luthra et al., 2021; Wang et al., 2021a).…”
Section: Introduction
confidence: 99%