2018
DOI: 10.1007/978-3-319-75541-0_20
Multi-label Whole Heart Segmentation Using CNNs and Anatomical Label Configurations

Cited by 124 publications
(130 citation statements)
References 10 publications
“…Both [9] and [19] utilize independent encoders/decoders for each modality, while we use only modality-specific BN layers, resulting in a more compact model. By further leveraging the KD-loss as a sort of cross-modality transductive bias, the segmentation performance is boosted to an overall Dice of 88.8% (specifically, 91.7% on CT and 86.0% on MRI), exceeding our own implementation of "Individual" training as well as the MICCAI-MMWHS challenge winner Payer et al. [38] (overall Dice of 85.5%), who used single-model learning.…”
Section: Segmentation Results and Comparison With State-of-the-arts
confidence: 97%
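The modality-specific BN idea quoted above can be sketched in a few lines: the convolutional weights are shared across modalities, and only the normalization statistics are kept separately per modality. The class and parameter names below are illustrative, not from the cited paper — a minimal numpy sketch, assuming standard batch-norm conventions (momentum 0.1 for running statistics).

```python
import numpy as np

class ModalityBN:
    """Toy modality-specific batch norm: the (shared) conv weights would
    sit around this layer; only the normalization statistics are kept
    per modality. Names here are illustrative, not from the paper."""

    def __init__(self, channels, modalities=("ct", "mri"), eps=1e-5):
        self.eps = eps
        self.stats = {m: {"mean": np.zeros(channels),
                          "var": np.ones(channels)} for m in modalities}

    def __call__(self, x, modality):
        # x: (batch, channels, H, W); normalize with the batch statistics
        # and update this modality's running statistics only.
        s = self.stats[modality]
        mean = x.mean(axis=(0, 2, 3))
        var = x.var(axis=(0, 2, 3))
        s["mean"] = 0.9 * s["mean"] + 0.1 * mean   # momentum 0.1
        s["var"] = 0.9 * s["var"] + 0.1 * var
        return (x - mean[None, :, None, None]) / \
            np.sqrt(var[None, :, None, None] + self.eps)
```

Compared with fully independent encoders/decoders, this keeps one set of learnable weights and duplicates only the per-modality statistics, which is what makes the model "more compact" in the quote.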
“…We compare the performance of different multi-modal learning methods, including two state-of-the-art approaches [9], [15]. We also refer to the available winning performance of the challenge [36], [38] to demonstrate the effectiveness of multi-modal learning.…”
Section: Segmentation Results and Comparison With State-of-the-arts
confidence: 99%
“…Our current network architectures follow the practice of CycleGAN [8] by using ResNet blocks for the generator and decoder, and follow the previous cross-modality adaptation work [7] for the configuration of the segmentation model. To validate the effectiveness of our segmentation backbone, we compare the supervised training performance of our segmentation model on the cardiac dataset with that of Payer et al. [54], which obtained the first ranking in the MMWHS Challenge 2017. Table III shows that our model achieves performance comparable to Payer et al. However, unlike [7], [44], which prefer network architectures without skip connections, we consider that other architectures could also be used in our framework, such as the U-Net model [1], which is the most common network for medical image segmentation.…”
Section: Discussion
confidence: 99%
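The skip connections the quote contrasts architectures over are the defining feature of U-Net: each decoder stage upsamples its input and concatenates the same-resolution encoder feature map along the channel axis. A minimal sketch with illustrative shapes (function names are hypothetical, not from any cited implementation):

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour upsampling: (C, H, W) -> (C, 2H, 2W).
    return x.repeat(2, axis=1).repeat(2, axis=2)

def decoder_step(dec_feat, enc_feat):
    """One U-Net-style decoder step: upsample the decoder feature map
    and concatenate the matching-resolution encoder feature map along
    channels (the skip connection). A convolution would normally follow."""
    up = upsample2x(dec_feat)
    assert up.shape[1:] == enc_feat.shape[1:], "spatial sizes must match"
    return np.concatenate([up, enc_feat], axis=0)
```

Architectures without skip connections (as preferred in [7], [44] above) drop the `enc_feat` concatenation, trading fine spatial detail for a simpler adaptation pathway.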
“…Payer et al. implemented a U-Net for substructure segmentation and obtained a DSC of 94% in the aorta as compared to ground truth [20]. Various DNNs have been applied to medical image segmentation [19], specifically for cardiac substructure segmentation. These include deep convolutional neural networks (CNNs) with adaptive fusion [21] or multi-stage [20] strategies, as well as generative adversarial networks (GANs) [22].…”
Section: Introduction
confidence: 99%
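The Dice similarity coefficient (DSC) reported throughout these citation statements is the standard overlap measure 2|A∩B| / (|A| + |B|) between a predicted and a ground-truth label mask, computed per anatomical label. A minimal sketch (function name is illustrative):

```python
import numpy as np

def dice_score(pred, target, label):
    """Dice similarity coefficient for one label:
    DSC = 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a = (pred == label)
    b = (target == label)
    denom = a.sum() + b.sum()
    # Convention: if the label is absent from both masks, score 1.0.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

For a multi-label whole-heart segmentation, the per-label scores are typically averaged over the cardiac substructures (and over CT/MRI subsets) to give the overall Dice figures such as the 88.8% and 85.5% quoted above.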