2020
DOI: 10.1002/mp.14509
Spatial orthogonal attention generative adversarial network for MRI reconstruction

Abstract: Purpose: Recent studies have witnessed that self-attention modules can better solve vision understanding problems by capturing long-range dependencies. However, very few works design a lightweight self-attention module to improve the quality of MRI reconstruction. Furthermore, it can be observed that several important self-attention modules (e.g., the non-local block) cause high computational complexity and require a huge amount of GPU memory when the size of the input feature is large. The purpo…
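The quadratic cost the abstract refers to can be seen in a minimal self-attention sketch (NumPy, illustrative only — this is not the paper's module, which uses learned projections): for an H×W feature map, a non-local block materializes an (HW)×(HW) attention matrix, which dominates GPU memory at large resolutions.

```python
import numpy as np

def nonlocal_attention(x):
    """Minimal non-local (self-attention) block over a feature map.

    x: array of shape (C, H, W). Illustrative sketch only; the real
    non-local block uses learned 1x1-conv query/key/value projections.
    """
    C, H, W = x.shape
    n = H * W
    feats = x.reshape(C, n)                        # flatten spatial dims: (C, N)
    # The attention map is N x N -- memory grows quadratically with H*W.
    scores = feats.T @ feats                       # (N, N) pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)        # softmax over keys
    out = feats @ attn.T                           # aggregate values: (C, N)
    return out.reshape(C, H, W), attn.shape

# Even a 32x32 feature map needs a 1024x1024 attention matrix; at 256x256
# the matrix would have 65536^2 entries, far beyond typical GPU memory.
out, attn_shape = nonlocal_attention(np.random.rand(4, 32, 32))
```

This is exactly the bottleneck that motivates lightweight or memory-efficient attention designs discussed in the citing works below.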

Cited by 13 publications (11 citation statements)
References 34 publications
“…Compared with the same model without attention modules, this design achieves a higher reconstruction accuracy. However, a key limitation of attention modules is their high computational demand, which is addressed by the memory-efficient self-attention module proposed by Zhou et al. [82].…”
Section: E Attention
confidence: 99%
“…Tan et al. devised a CNN model with residual connections in which channel- and spatial-attention modules were engineered to reconstruct X-ray images of the lung from a large dataset (>55,000 images) [80]. Other studies focused on MRI reconstruction and demonstrated accurate and generalisable hybrid models by analysing large and diverse imaging data [15,114,129,139].…”
Section: Table III
confidence: 99%
“…for MRI reconstruction. Zhu et al. [43] showed that spatial attention can be applied to a GAN to reconstruct the desired MR images. However, the study did not use channel-wise attention.…”
Section: Model Was
confidence: 99%
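The last statement notes that the cited GAN used spatial but not channel-wise attention. For contrast, a channel-attention step in the squeeze-and-excitation style can be sketched as follows (a hedged NumPy illustration, not the cited model; the weights here are random where a real module would learn them):

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    x: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are the
    excitation weights (random here, learned in a real module).
    """
    squeeze = x.mean(axis=(1, 2))                  # global avg pool: (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)         # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gates in (0, 1)
    return x * scale[:, None, None]                # reweight each channel

rng = np.random.default_rng(0)
C, r = 8, 2
x = rng.standard_normal((C, 16, 16))
y = channel_attention(x, rng.standard_normal((C // r, C)),
                      rng.standard_normal((C, C // r)))
```

Unlike spatial attention, which reweights positions within each channel, this reweights whole channels by a global descriptor; the two are complementary, which is why their combination is a recurring theme in the citing works.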