2022
DOI: 10.48550/arxiv.2204.02961
Preprint

SMU-Net: Style matching U-Net for brain tumor segmentation with missing modalities

Abstract: Gliomas are one of the most prevalent types of primary brain tumors, accounting for more than 30% of all cases, and they develop from glial stem or progenitor cells. In theory, the majority of brain tumors could well be identified exclusively by the use of Magnetic Resonance Imaging (MRI). Each MRI modality delivers distinct information on the soft tissue of the human brain, and integrating all of them would provide comprehensive data for the accurate segmentation of the glioma, which is crucial for the pati…

Cited by 5 publications (12 citation statements) · References: 21 publications
“…Knowledge Distillation: Knowledge distillation (KD; Hinton, Vinyals, and Dean 2015) was originally proposed to compress knowledge from one or more teacher networks (often large, complex models or model ensembles) into a student network (often a lightweight model). For multimodal segmentation with missing modalities, several works (Hu et al 2020; Wang et al 2021b; Chen et al 2021; Azad, Khosravi, and Merhof 2022) proposed to transfer the 'dark knowledge' of the full-modal network to missing-modal ones via co-training (Blum and Mitchell 1998). Although achieving decent performance, the co-training strategy incurred a non-negligible memory cost during training due to the dual-network architecture.…”
Section: Related Work
confidence: 99%
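As an aid to reading this statement, the following is a minimal sketch of the kind of teacher-student distillation objective it describes, where a full-modal "teacher" guides a missing-modal "student". The PyTorch setting, function name, temperature, and loss weighting are illustrative assumptions, not the cited papers' exact formulation.

```python
# Illustrative sketch of a co-training / knowledge-distillation objective:
# a full-modal teacher's soft predictions ('dark knowledge') supervise a
# missing-modal student alongside the ordinary hard-label segmentation loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target,
                      temperature=2.0, alpha=0.5):
    # Hard loss: cross-entropy against ground-truth segmentation labels.
    hard = F.cross_entropy(student_logits, target)
    # Soft loss: KL divergence between softened teacher and student class
    # distributions, scaled by T^2 as in Hinton, Vinyals, and Dean (2015).
    t = temperature
    soft = F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits / t, dim=1),
        reduction="batchmean",
    ) * (t * t)
    return alpha * hard + (1.0 - alpha) * soft
```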
“…A naive approach is to train a 'dedicated' model for each possible subset of modalities. For better performance, the co-training strategy (Blum and Mitchell 1998) was often incorporated to distill knowledge from full-modal to missing-modal networks (Azad, Khosravi, and Merhof 2022; Chen et al 2021; Hu et al 2020; Wang et al 2021b). Despite their decent performance, the dedicated models were time-costly to train and space-costly to deploy, as 2^N − 1 models were needed for N modalities.…”
Section: Introduction
confidence: 99%
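The 2^N − 1 count above is simply the number of non-empty modality subsets. A quick check in Python, assuming the four BraTS-style MRI modalities (the names are illustrative):

```python
# One dedicated model per non-empty subset of modalities: 2^N - 1 in total.
from itertools import combinations

modalities = ["T1", "T1ce", "T2", "FLAIR"]  # N = 4
subsets = [
    combo
    for r in range(1, len(modalities) + 1)
    for combo in combinations(modalities, r)
]
print(len(subsets))   # 15 == 2**4 - 1 dedicated models
print(subsets[:3])    # ('T1',), ('T1ce',), ('T2',)
```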
“…In [46], an over-complete network is augmented with U-Net, and in U-Net++ [57], the encoder-decoder architecture is re-designed by adding dense skip connections between the modules. This structure has been further improved and utilized in different medical domains [10,30,23,6].…”
Section: CNN-based Segmentation Network
confidence: 99%
“…MCA stands for multi-head cross-attention and LN for LayerNorm. In addition, the impact of the DLF module is examined in Table 6, which demonstrates the proposed module's effectiveness in learning multi-scale feature representations and its contribution to improved segmentation performance.…”
Section: Double-Level Fusion Module (DLF)
confidence: 99%
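For context on the MCA/LN shorthand in this statement, below is a generic sketch of a "LayerNorm then multi-head cross-attention" block written with standard PyTorch modules. The pre-norm, residual arrangement and the class name are assumptions for illustration, not the DLF module's exact wiring.

```python
# Generic cross-attention block: tokens from one feature scale (queries)
# attend over tokens from another scale (context), with LayerNorm applied
# before the attention and a residual connection afterwards.
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.mca = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, queries: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # queries: (B, Nq, dim); context: (B, Nk, dim)
        q = self.norm_q(queries)
        kv = self.norm_kv(context)
        attended, _ = self.mca(q, kv, kv)
        return queries + attended  # residual connection
```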
“…Automatic and accurate medical image segmentation, which consists of the automated delineation of anatomical structures and other regions of interest (ROIs), plays an integral role in computer-aided diagnosis (CAD) [9,23,19,17,3,7]. As a flagship of deep learning, convolutional neural networks (CNNs) have driven numerous contributions to medical image segmentation tasks for many years [31,28,5,4,6]. Among diverse CNN variants, the widely acknowledged symmetric encoder-decoder architecture known as U-Net [31] has demonstrated eminent segmentation potential.…”
Section: Introduction
confidence: 99%