2022
DOI: 10.1007/978-3-031-16446-0_45
DuDoCAF: Dual-Domain Cross-Attention Fusion with Recurrent Transformer for Fast Multi-contrast MR Imaging

Cited by 19 publications (15 citation statements). References 18 publications.
“…Performance evaluation. We compare our methods with other baseline deep learning methods in three conventions of MRI reconstruction: image-domain [24,34,6,9,13], dual-domain [24,34,33] and reference-protocol-guided dual-domain reconstruction [29,6,34,33,19]. All reference-guided methods are self-implemented besides DuDoRNet and examined without considering multi-modal fusion modules for controlled backbone comparisons.…”
Section: Settings and Results (mentioning, confidence: 99%)
“…We further unite these methods considering their backbones. [6,29] adopt Dense-Unet; [33,19,13] share similar Swin-Transformer backbones derived from SwinIR [16]. [33] proposed k-space filling using the reference protocol for de-aliasing initially in self-supervised reconstruction, yet it does not improve the model's performance when fully-supervised.…”
Section: Settings and Results (mentioning, confidence: 99%)
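The reference-protocol k-space filling mentioned in the statement above lends itself to a short illustration. The sketch below is only a rough, assumed reading of the idea (unsampled k-space locations of the target contrast are filled with the corresponding samples of a fully sampled reference contrast before any learned de-aliasing); it is not the implementation from [33], and all function and variable names (fft2c, ifft2c, reference_filled_init, y_target, ref_img) are hypothetical.

```python
# Minimal sketch of reference-protocol k-space filling as a de-aliasing
# initialization. Illustrative assumption only, not the method of [33].
import numpy as np

def fft2c(x):
    """Centered 2D FFT (image -> k-space)."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x)))

def ifft2c(k):
    """Centered 2D inverse FFT (k-space -> image)."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k)))

def reference_filled_init(y_target, mask, ref_img):
    """Fill unsampled target k-space locations with the reference contrast's
    k-space, keep acquired target samples untouched, and return the image."""
    k_ref = fft2c(ref_img)
    k_init = np.where(mask, y_target, k_ref)  # acquired lines win; gaps come from the reference
    return ifft2c(k_init)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.standard_normal((64, 64))                 # stand-in for the target contrast
    ref = target + 0.1 * rng.standard_normal((64, 64))     # correlated reference contrast
    mask = rng.random((64, 64)) < 0.3                      # ~30% random undersampling
    y = fft2c(target) * mask                               # undersampled target k-space
    x0 = reference_filled_init(y, mask, ref)
    print(x0.shape, x0.dtype)
```

The only design choice in this toy version is that acquired target samples always take precedence over reference-derived values; as the quoted statement notes, such an initialization was reported not to help once the reconstruction is trained fully supervised.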