2019
DOI: 10.1109/access.2019.2929258

PnP-AdaNet: Plug-and-Play Adversarial Domain Adaptation Network at Unpaired Cross-Modality Cardiac Segmentation

Abstract: Deep convolutional networks have demonstrated state-of-the-art performance on various medical image computing tasks. Leveraging images from different modalities for the same analysis task holds clinical benefits. However, the generalization capability of deep models on test data with different distributions remains a major challenge. In this paper, we propose the PnP-AdaNet (plug-and-play adversarial domain adaptation network) for adapting segmentation networks between different modalities of medical ima…

Cited by 171 publications (116 citation statements)
References 46 publications
“…However, for the abdominal images, the adaptation performance in both directions is equally high and close to the supervised training upper bound. This indicates that the difficulty of domain adaptation across modalities might depend more on the task than on the adaptation direction, which adds new findings over the previous work [44]. Potential future studies on different segmentation tasks will help further analyze this issue…”
Section: Discussion
confidence: 65%
“…To validate the effectiveness of our segmentation backbone, we compare the supervised training performance of our segmentation model on the cardiac dataset with Payer et al. [54], which obtained the first ranking in the MMWHS Challenge 2017. Table III shows that our model can achieve comparable performance to Payer et al. But unlike [7], [44], which prefer network architectures without skip connections, we consider that other network architectures could also be used in our framework, such as the U-Net model [1], which is among the most common networks for medical image segmentation. The generator can be directly implemented as a U-Net model, which has been demonstrated to be effective and stable during training for unpaired image-to-image transformation…”
Section: Discussion
confidence: 94%
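
The excerpt above argues that a U-Net-style generator, i.e. an encoder-decoder with skip connections, is a reasonable segmentation backbone. As an illustration only (not the cited authors' implementation; the channel widths, depth, and five-class output are assumptions), a minimal PyTorch sketch of such a network could look like:

```python
# Minimal U-Net-style sketch (illustrative assumptions, not the paper's code).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with BatchNorm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=5):   # 5 classes is an assumption
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)            # 64 skip + 64 upsampled channels
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)             # 32 skip + 32 upsampled channels
        self.head = nn.Conv2d(32, num_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)                          # skip connection 1
        e2 = self.enc2(self.pool(e1))              # skip connection 2
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Usage: UNet()(torch.randn(2, 1, 128, 128)) -> logits of shape (2, 5, 128, 128)
```

The concatenation of encoder features with upsampled decoder features is what distinguishes this backbone from the skip-free architectures preferred in [7], [44].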
“…Several researchers have recently started to investigate the use of unsupervised domain adaptation techniques that aim at optimizing model performance on unseen datasets without additional labeling costs. Several works have successfully applied adversarial training to cross-modality segmentation tasks, adapting a cardiac segmentation model learned from MR images to CT images and vice versa (Dou et al., 2018, 2019; Ouyang et al., 2019; Chen et al., 2019c). These types of approaches can also be adopted for semi-supervised learning, where the target domain is a new set of unlabeled data of the same modality (Chen et al., 2019d).…”
Section: Model Generalization Across Various
confidence: 99%
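
The cited works use adversarial training so that a segmenter trained on labelled source-modality images (e.g. MR) also performs well on unlabelled target-modality images (e.g. CT). A minimal, hedged sketch of this general idea in PyTorch follows; the tiny stand-in segmenter, the output-space discriminator, and all hyper-parameters are illustrative assumptions rather than any paper's exact design:

```python
# Sketch of adversarial feature/output alignment for unsupervised domain adaptation
# (illustrative only). The segmenter is supervised on the source modality while a
# discriminator pushes target predictions to resemble source predictions.
import torch
import torch.nn as nn
import torch.nn.functional as F

segmenter = nn.Sequential(                      # stand-in segmentation net (assumption)
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 5, 1),                        # 5 illustrative classes
)
discriminator = nn.Sequential(                  # judges source vs target prediction maps
    nn.Conv2d(5, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)
opt_seg = torch.optim.Adam(segmenter.parameters(), lr=1e-4)
opt_dis = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x_src, y_src, x_tgt, adv_weight=0.01):
    # 1) supervised segmentation loss on the labelled source modality
    src_logits = segmenter(x_src)               # y_src: LongTensor of shape (N, H, W)
    seg_loss = F.cross_entropy(src_logits, y_src)

    # 2) adversarial loss: make target predictions indistinguishable from source ones
    tgt_logits = segmenter(x_tgt)
    d_tgt = discriminator(torch.softmax(tgt_logits, dim=1))
    adv_loss = bce(d_tgt, torch.ones_like(d_tgt))        # try to fool the discriminator

    opt_seg.zero_grad()
    (seg_loss + adv_weight * adv_loss).backward()
    opt_seg.step()

    # 3) discriminator update: separate source from target prediction maps
    d_src = discriminator(torch.softmax(src_logits.detach(), dim=1))
    d_tgt = discriminator(torch.softmax(tgt_logits.detach(), dim=1))
    dis_loss = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    opt_dis.zero_grad()
    dis_loss.backward()
    opt_dis.step()
    return seg_loss.item(), dis_loss.item()
```

In practice the adversarial weight is typically kept small so that the supervised source loss dominates early training and the alignment signal is introduced gradually.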