2021
DOI: 10.48550/arxiv.2106.14248
Preprint
Multi-Modal Transformer for Accelerated MR Imaging

Abstract: Accelerated multi-modal magnetic resonance (MR) imaging is a new and effective solution for fast MR imaging, providing superior performance in restoring the target modality from its undersampled counterpart with guidance from an auxiliary modality. However, existing works simply combine the auxiliary modality as prior information, lacking in-depth investigation of the mechanisms for fusing different modalities. Further, they usually rely on convolutional neural networks (CNNs), which are limited …

Cited by 4 publications (4 citation statements) · References 39 publications
“…Approaches in this category assume the availability of large MRI training datasets to train the ViT model. Feng et al. [296] propose a Transformer-based architecture, MTrans, for accelerated multi-modal MR imaging. The main component of MTrans is the cross-attention module that extracts and fuses complementary features from the auxiliary modality into the target modality.…”
Section: Undersampled MRI Reconstruction
confidence: 99%
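The cross-attention fusion this statement describes can be sketched in a few lines. The following PyTorch-style block is a hypothetical illustration, not the MTrans implementation: the module name, token shapes, and the residual/normalization placement are all assumptions. The key idea is that queries come from the target modality while keys and values come from the auxiliary modality, so the target attends to, and absorbs, complementary auxiliary features.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Hypothetical cross-attention fusion block (illustrative only)."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target: torch.Tensor, auxiliary: torch.Tensor) -> torch.Tensor:
        # target, auxiliary: (batch, tokens, dim) patch embeddings
        # queries from the target; keys/values from the auxiliary modality
        fused, _ = self.attn(query=target, key=auxiliary, value=auxiliary)
        # residual connection preserves the target-modality content
        return self.norm(target + fused)

# Usage: fuse undersampled target-modality tokens with a fully
# sampled auxiliary modality (e.g., T2-weighted guided by T1-weighted)
t2 = torch.randn(2, 256, 64)  # target tokens (assumed shapes)
t1 = torch.randn(2, 256, 64)  # auxiliary tokens
out = CrossModalAttention(dim=64)(t2, t1)  # -> (2, 256, 64)
```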
“…That is because: (1) similar patches widely exist in MR images, while most methods ignore this property and adopt only CNNs to learn local information, which leads to failure in reconstructing high-quality, aliasing-free images. Recently, some methods [7, 40] have adopted self-attention and Transformers to learn global information, yet structural information is still missing from the recovered results. In addition, self-attention and Transformers are time- and memory-consuming.…”
Section: ZP UNet DuDoR
confidence: 99%
“…However, these methods separately processed the undersampled MRI in the spatial and frequency domains and cannot learn local and global information simultaneously. In contrast, to learn global information, Wu et al. [40] proposed a self-attention network for MR imaging and Feng et al. [7] designed a Transformer network for MR imaging. However, self-attention [3, 14, 36] and Transformers [21, 33] have high computational complexity and occupy a large amount of GPU memory.…”
Section: MRI Reconstruction
confidence: 99%
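The complexity concern raised in these statements is that full self-attention scales quadratically with the number of tokens. The short Python sketch below uses illustrative slice and patch sizes (not figures from the cited papers) to show how the memory for a single attention map grows as patches get finer.

```python
# Back-of-the-envelope memory for one full self-attention map over MR image
# patches; all sizes are illustrative assumptions, not measurements.

def attention_map_gib(height: int, width: int, patch: int) -> tuple[int, float]:
    """Return (token count n, size of one n x n fp32 attention map in GiB)."""
    n = (height // patch) * (width // patch)  # one token per image patch
    return n, n * n * 4 / 2**30               # 4 bytes per fp32 entry

# A 320x320 MR slice with coarse vs. fine patch sizes:
for p in (16, 4):
    n, gib = attention_map_gib(320, 320, p)
    print(f"patch {p:2d}: n={n:5d} tokens, attention map = {gib:.3f} GiB")
# patch 16: n=  400 tokens, attention map = 0.001 GiB
# patch  4: n= 6400 tokens, attention map = 0.153 GiB (256x: cost grows as O(n^2))
```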
“…To further improve imaging quality, Feng et al. [37] enabled end-to-end joint reconstruction and super-resolution. Feng et al. [38] further advanced the model for these dual tasks by incorporating task-specific cross-attention modules.…”
Section: Introduction
confidence: 99%