2023
DOI: 10.1002/mp.16338
A modality‐collaborative convolution and transformer hybrid network for unpaired multi‐modal medical image segmentation with limited annotations

Abstract: Background: Multi‐modal learning is widely adopted to learn the latent complementary information between different modalities in multi‐modal medical image segmentation tasks. Nevertheless, traditional multi‐modal learning methods require spatially well‐aligned and paired multi‐modal images for supervised training, and so cannot leverage unpaired multi‐modal images with spatial misalignment and modality discrepancy. For training accurate multi‐modal segmentation networks using easily accessible and low‐cost un…

Cited by 4 publications (1 citation statement). References: 54 publications.
“…Combining it with convolutional networks and skip connections enables the accurate segmentation of prostate partitions in MRI. Liu et al [40] designed the MCTHNet by integrating convolution and transformer structures for multi-modal medical image segmentation with limited annotation, and their approach achieved the best semi-supervised results on several multi-modal datasets. Furthermore, TransFuse [41], Medical Transformer [42], TransBTS [43], FCT [44], and HiFormer [45] combine self-attention with convolutional networks to achieve excellent results in specific medical image segmentation tasks.…”
Section: Methods Combined Convolution With Self-attention Mechanism
confidence: 99%
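The citation statement above groups MCTHNet with other convolution-plus-self-attention hybrids. As context for that family of designs, here is a minimal PyTorch sketch of a hybrid block that fuses a local convolutional branch with a global self-attention branch. This is an illustrative assumption only, not the MCTHNet architecture of Liu et al. [40] or any of the other cited models; the class name `ConvAttentionBlock` and the concatenation-based fusion are hypothetical choices.

```python
# Minimal sketch of a convolution + self-attention hybrid block (illustrative,
# not the MCTHNet design). Local context comes from a 3x3 convolution; global
# context comes from multi-head self-attention over spatial tokens.
import torch
import torch.nn as nn


class ConvAttentionBlock(nn.Module):
    """Hypothetical hybrid block: conv branch + self-attention branch, fused by a 1x1 conv."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.conv_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.conv_branch(x)                       # local features from convolution

        tokens = x.flatten(2).transpose(1, 2)             # (B, H*W, C): each pixel is a token
        tokens = self.norm(tokens)
        global_feat, _ = self.attn(tokens, tokens, tokens)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)

        # Concatenate the two branches and project back to the input width.
        return self.fuse(torch.cat([local, global_feat], dim=1))


if __name__ == "__main__":
    block = ConvAttentionBlock(channels=32)
    out = block(torch.randn(1, 32, 16, 16))
    print(out.shape)  # torch.Size([1, 32, 16, 16])
```

In practice, such blocks are stacked inside an encoder-decoder segmentation network, with the attention branch supplying long-range dependencies that plain convolutions capture poorly; the exact placement and fusion strategy differ across TransFuse, TransBTS, HiFormer, and MCTHNet.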