2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) 2019
DOI: 10.1109/iccvw.2019.00043
Domain-Agnostic Learning With Anatomy-Consistent Embedding for Cross-Modality Liver Segmentation

Abstract: Domain Adaptation (DA) has the potential to greatly help the generalization of deep learning models. However, the current literature usually assumes that knowledge is transferred from the source domain to a specific, known target domain. Domain Agnostic Learning (DAL) proposes a new task: transferring knowledge from the source domain to data from multiple heterogeneous target domains. In this work, we propose the Domain-Agnostic Learning framework with Anatomy-Consistent Embedding (DALACE) that works on both doma…

Cited by 17 publications (11 citation statements). References: 25 publications (63 reference statements).
“…few labeled instances in the target domain can be used for joint training with the source data [19]. The more ambitious UDA strategies [1,3,4,17,23,24] assume no labels are available for the target domain. The core idea of UDA is to go through an adaptation phase using a non-linear mapping to find a common domain-invariant representation, or a latent space Z.…”
Section: CT Prediction
Confidence: 99%
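The statement above describes mapping both domains through a shared non-linear encoder into a latent space Z and driving their representations together. A minimal NumPy sketch of that idea, measuring a simple moment-matching surrogate for domain discrepancy (the encoder weights `W` and the `encode`/`domain_discrepancy` names are hypothetical, not from the cited works):

```python
import numpy as np

def encode(x, W):
    # shared non-linear encoder f: X -> Z (hypothetical weights W)
    return np.tanh(x @ W)

def domain_discrepancy(z_src, z_tgt):
    # simple moment-matching surrogate: distance between mean embeddings;
    # real UDA methods minimize an adversarial or MMD-style version of this
    return np.linalg.norm(z_src.mean(axis=0) - z_tgt.mean(axis=0))

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))
x_src = rng.normal(size=(32, 16))        # labeled source samples (flattened)
x_tgt = rng.normal(size=(32, 16)) + 1.0  # unlabeled target samples, shifted statistics
gap = domain_discrepancy(encode(x_src, W), encode(x_tgt, W))
```

Minimizing `gap` with respect to `W` (e.g., adversarially) is what pushes the encoder toward a domain-invariant Z.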
“…Related Work. Recent works on UDA for medical image segmentation rely on Generative Adversarial Networks [3,4,7,17,23,24] to translate the appearance from one modality to the other using multiple discriminators and a pixel-wise cycle-consistency loss. Despite their success, they: i) suffer from instabilities [2], ii) rely on complex architectures with more than 95 million parameters, iii) are prone to model collapse [16], and iv) may generate images outside the actual target domain [1].…”
Section: CT Prediction
Confidence: 99%
“…labeled instances in the target domain can be used for joint training with the source data [19]. The more ambitious UDA strategies [1,3,4,17,23,24] assume no labels are available for the target domain. The core idea of UDA is to go through an adaptation phase using a non-linear mapping to find a common domain-invariant representation, or a latent space Z.…”
Section: Introduction
Confidence: 99%
“…Related Work. Recent works on UDA for medical image segmentation rely on Generative Adversarial Networks [3,4,7,17,23,24] to translate the appearance from one modality to the other using multiple discriminators and a pixel-wise cycle-consistency loss. Despite their success, they: i) suffer from instabilities [2], ii) rely on complex architectures with more than 95 million parameters, iii) are prone to model collapse [16], and iv) may generate images outside the actual target domain [1].…”
Section: Introduction
Confidence: 99%
“…In unsupervised domain adaptation, a model is pre-trained on similar tasks in some other domains with labeled data, and the pre-trained model is then fine-tuned with a limited set of labeled data in the target domain. Domain adaptation can also be performed to learn a generic representation where the model is fully supervised for source data and unsupervised for the target data (Yang et al, 2019; Zhuang et al, 2019). Self-supervised learning is closely related to transfer learning.…”
Section: Introduction
Confidence: 99%
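The pre-train/fine-tune recipe in the last statement can be sketched end to end: freeze a "pre-trained" feature extractor and fit only a small head on the few labeled target examples. Everything here is synthetic and hypothetical (random `W_pre` stands in for source pre-training; a logistic-regression head stands in for the fine-tuned layers):

```python
import numpy as np

rng = np.random.default_rng(1)
W_pre = rng.normal(size=(16, 8))   # stand-in for an encoder pre-trained on the source task

def features(x):
    # frozen pre-trained representation; only the head below is fine-tuned
    return np.tanh(x @ W_pre)

# limited labeled data in the target domain (synthetic)
x_tgt = rng.normal(size=(20, 16))
y_tgt = (x_tgt.sum(axis=1) > 0).astype(float)

# fine-tune a linear (logistic-regression) head on top of the frozen features
w, b = np.zeros(8), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(features(x_tgt) @ w + b)))
    grad_w = features(x_tgt).T @ (p - y_tgt) / len(y_tgt)
    grad_b = (p - y_tgt).mean()
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

p_final = 1.0 / (1.0 + np.exp(-(features(x_tgt) @ w + b)))
acc = ((p_final > 0.5) == y_tgt).mean()   # training accuracy on the target set
```

Freezing the extractor is the cheapest variant; the citing text's "fine-tuned" can also mean updating all weights with a small learning rate.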