2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)
DOI: 10.1109/isbi.2018.8363790

Adversarial synthesis learning enables segmentation without target modality ground truth

Abstract: A lack of generalizability is one key limitation of deep learning-based segmentation. Typically, one manually labels new training images when segmenting organs in different imaging modalities or segmenting abnormal organs from distinct disease cohorts. This manual effort can be alleviated if one is able to reuse manual labels from one modality (e.g., MRI) to train a segmentation network for a new modality (e.g., CT). Previously, two-stage methods have been proposed to use cycle generative adversarial networks …
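For orientation, the sketch below spells out the two-stage baseline the abstract refers to: a CycleGAN-style MRI-to-CT synthesis stage followed by a separate segmentation stage trained on the synthesized images and the reused MRI labels. It is a minimal illustration only; the tiny networks, loss weights, and tensor shapes are placeholders, not the paper's implementation.

```python
"""Hedged sketch of the two-stage baseline (not the paper's code):
stage 1 learns an unpaired MRI->CT synthesizer with adversarial + cycle losses,
stage 2 trains a CT segmenter on synthesized CT paired with reused MRI labels.
All networks and hyper-parameters are toy placeholders."""
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_cnn(cin, cout, final=None):
    layers = [nn.Conv2d(cin, 16, 3, padding=1), nn.ReLU(inplace=True),
              nn.Conv2d(16, cout, 3, padding=1)]
    if final is not None:
        layers.append(final)
    return nn.Sequential(*layers)

G_mri2ct, G_ct2mri = tiny_cnn(1, 1, nn.Tanh()), tiny_cnn(1, 1, nn.Tanh())
D_ct = tiny_cnn(1, 1)                      # PatchGAN-style stand-in
segmenter = tiny_cnn(1, 2)                 # stand-in for a U-Net segmenter

opt_g = torch.optim.Adam(list(G_mri2ct.parameters()) + list(G_ct2mri.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D_ct.parameters(), lr=2e-4)
opt_s = torch.optim.Adam(segmenter.parameters(), lr=1e-3)

mri = torch.randn(2, 1, 64, 64)            # labelled source-modality images
mri_label = torch.randint(0, 2, (2, 64, 64))
ct = torch.randn(2, 1, 64, 64)             # unlabelled target-modality images

# ---- stage 1: unpaired synthesis (one illustrative LSGAN + cycle step) ----
fake_ct = G_mri2ct(mri)
pred_fake = D_ct(fake_ct)
loss_g = F.mse_loss(pred_fake, torch.ones_like(pred_fake)) \
         + 10.0 * F.l1_loss(G_ct2mri(fake_ct), mri)          # cycle consistency
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

pred_real, pred_fake = D_ct(ct), D_ct(fake_ct.detach())
loss_d = F.mse_loss(pred_real, torch.ones_like(pred_real)) \
         + F.mse_loss(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# ---- stage 2: segmenter trained only on synthesized CT + the MRI labels ----
with torch.no_grad():
    synth_ct = G_mri2ct(mri)               # synthesizer frozen; labels carry over
loss_s = F.cross_entropy(segmenter(synth_ct), mri_label)
opt_s.zero_grad(); loss_s.backward(); opt_s.step()
```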

Cited by 119 publications (86 citation statements)
References 18 publications (39 reference statements)
“…Then, a segmentation network trained on the target-styled images and source masks can be used to make predictions on the target images. Huo et al (2018a) suggest a joint image synthesis and segmentation framework that enables image segmentation for the target domain using unlabeled target images and labeled images from a source domain. The intuition behind this joint optimization is that the training process can benefit from the complementary information between the synthesis and segmentation networks.…”
Section: Domain Adaptation Without Target Labels (mentioning, confidence: 99%)
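The quoted passage is the crux of the joint framework: the segmentation loss computed on synthesized target-style images is also backpropagated into the synthesizer, which is where the complementary information between the two networks enters. Below is a minimal, self-contained sketch of one such joint update, assuming placeholder networks and an LSGAN-style adversarial term; it is not the authors' architecture.

```python
"""Hedged sketch of a single joint synthesis + segmentation update
(placeholder networks; not Huo et al.'s implementation)."""
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_cnn(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, 16, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(16, cout, 3, padding=1))

G_src2tgt = tiny_cnn(1, 1)     # synthesizer: labelled source -> target style
D_tgt = tiny_cnn(1, 1)         # discriminator on target-style images
segmenter = tiny_cnn(1, 2)     # segmenter operating in the target modality

opt = torch.optim.Adam(list(G_src2tgt.parameters()) + list(segmenter.parameters()), lr=2e-4)

src = torch.randn(2, 1, 64, 64)                # labelled source images
src_label = torch.randint(0, 2, (2, 64, 64))   # reused source-domain masks
tgt = torch.randn(2, 1, 64, 64)                # unlabeled target images

fake_tgt = G_src2tgt(src)                      # target-styled version of the source image
pred_fake = D_tgt(fake_tgt)
loss_adv = F.mse_loss(pred_fake, torch.ones_like(pred_fake))   # keep the synthesis realistic
loss_seg = F.cross_entropy(segmenter(fake_tgt), src_label)     # segment it with source masks

opt.zero_grad()
(loss_adv + loss_seg).backward()   # segmentation gradients also shape the synthesizer:
opt.step()                         # the complementary signal between the two tasks

# The discriminator (and, in a cycle setup, the reverse generator) would be updated
# in separate steps; at test time only `segmenter` runs on the real target images.
with torch.no_grad():
    prediction = segmenter(tgt).argmax(1)
```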
“…[flattened table from the citing survey, pairing each publication with its unsupervised task: Bai et al (2017), embedding consistency; Zhang et al (2017b), image classification; Sedai et al (2017), image reconstruction; Baur et al (2017), manifold learning; Chartsias et al (2018), image reconstruction; Huo et al (2018a), image synthesis; Zhao et al (2019), image registration; Li et al (2019), transformation consistency] … the same-class pixels as close as possible while pushing apart the feature embedding of the pixels from different classes. To identify same-class pixels between labeled and unlabeled images, the authors assume the availability of a noisy label prior for unlabeled images.…”
Section: Publication (mentioning, confidence: 99%)
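The mechanism described at the end of the quote, pulling same-class pixel embeddings together and pushing different-class embeddings apart, with classes for unlabeled pixels taken from a noisy label prior, resembles a pairwise contrastive loss. The sketch below is a generic illustration under that assumption; the pair sampling, margin, and pseudo-label source are placeholders rather than the cited method.

```python
"""Hedged sketch (not the cited authors' code) of a pixel-embedding loss that
pulls same-class pixels together and pushes different-class pixels apart,
with classes for unlabeled pixels coming from a stand-in noisy label prior."""
import torch
import torch.nn.functional as F

def pairwise_embedding_loss(emb, labels, margin=1.0, n_pairs=1024):
    """emb: (N, D) pixel embeddings; labels: (N,) class ids (possibly noisy)."""
    n = emb.size(0)
    i = torch.randint(0, n, (n_pairs,))
    j = torch.randint(0, n, (n_pairs,))
    d = F.pairwise_distance(emb[i], emb[j])           # Euclidean distance per pair
    same = (labels[i] == labels[j]).float()
    # contrastive form: same-class pairs are pulled together,
    # different-class pairs are pushed beyond the margin
    loss = same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)
    return loss.mean()

# toy usage: a (B, D, H, W) embedding map from some backbone, flattened per pixel
B, D, H, W = 2, 8, 32, 32
emb_map = torch.randn(B, D, H, W, requires_grad=True)
noisy_prior = torch.randint(0, 3, (B, H, W))           # stand-in noisy label prior
emb = emb_map.permute(0, 2, 3, 1).reshape(-1, D)
loss = pairwise_embedding_loss(emb, noisy_prior.reshape(-1))
loss.backward()
```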
“…Consequently, learning the marginal distributions of the domains alone may not be sufficient. Cross-domain adaptation of highly different modalities has been applied in medical image analysis for image synthesis using paired images [6] and unpaired images [7], as well as for segmentation [8, 9]. However, all aforementioned approaches aim to only synthesize images that match the marginal but not the structure-specific conditional distribution such as tumors.…”
Section: Introduction (mentioning, confidence: 99%)
“…Convolutional neural networks for denoising are typically trained on pairs of noiseless and noisy representations of the signal. It is thus crucial to have access to accurate noiseless ground truth signal, which makes it challenging to apply these networks in areas where such ground truth is impossible or expensive to acquire, such as medical imaging [35]. To circumvent this problem, we used synthetic ground truth cryo-EM reconstructions and used a simplistic physical forward model to generate simulated projection images.…”
Section: Conclusion and Discussion (mentioning, confidence: 99%)
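The recipe described in this quote, synthetic noiseless ground truth pushed through a simplistic physical forward model to obtain training pairs, can be illustrated with a toy denoiser. The forward model below is a deliberately crude stand-in (blur plus additive Gaussian noise), not the cryo-EM projection model used by the cited work.

```python
"""Hedged sketch (not the cited authors' pipeline): generate (noisy, clean)
training pairs from synthetic noiseless images via a simplistic forward model,
then fit a small denoising CNN on those pairs."""
import torch
import torch.nn as nn
import torch.nn.functional as F

def forward_model(clean, sigma=0.5):
    """Simplistic stand-in forward model: light blurring plus Gaussian noise."""
    kernel = torch.full((1, 1, 3, 3), 1.0 / 9.0)
    blurred = F.conv2d(clean, kernel, padding=1)
    return blurred + sigma * torch.randn_like(blurred)

denoiser = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(16, 1, 3, padding=1))       # placeholder CNN
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for step in range(5):                                          # toy training loop
    clean = torch.rand(8, 1, 64, 64)        # synthetic noiseless "ground truth"
    noisy = forward_model(clean)            # simulated noisy observations
    loss = F.mse_loss(denoiser(noisy), clean)
    opt.zero_grad(); loss.backward(); opt.step()
```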