2018
DOI: 10.1007/978-3-030-00934-2_67
Task Driven Generative Modeling for Unsupervised Domain Adaptation: Application to X-ray Image Segmentation

Abstract: Automatic parsing of anatomical objects in X-ray images is critical to many clinical applications, in particular image-guided intervention and workflow automation. Existing deep network models require a large amount of labeled data. However, obtaining accurate pixel-wise labeling in X-ray images relies heavily on skilled clinicians, due to large anatomical overlaps and complex texture patterns. On the other hand, organs in 3D CT scans preserve clearer structures as well as sharper boundaries and thus…

Cited by 182 publications (143 citation statements); references 11 publications.
“…With the wide success of CycleGAN [8] in unpaired image-to-image transformation, many previous image alignment approaches are based on the CycleGAN framework with additional constraints to further regularize the image transformation process. For example, both [10] and [35] introduce semantic consistency into the CycleGAN to facilitate the transformation of target X-ray images towards the source images for testing with the pre-trained source models. For cross-modality adaptation, Jiang et al. [36] first transform CT images to resemble MRI appearance using a CycleGAN with a tumor-aware loss; the generated MRI images are then combined with a few real MRI data for semi-supervised tumor segmentation.…”
Section: Introduction
confidence: 99%
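The cycle-consistency constraint these approaches build on penalizes the reconstruction error after a round trip through both generators, L1 in the original formulation. A minimal numpy sketch, using toy invertible maps as hypothetical stand-ins for the two generator networks (in a real CycleGAN these are convolutional networks):

```python
import numpy as np

# Hypothetical toy "generators": stand-ins for the two learned mappings
# G: X -> Y and F: Y -> X between image domains (e.g. X-ray and CT-derived
# renderings). These simple affine maps are exact inverses of each other.
def g_x_to_y(x):
    return x * 0.5 + 0.1

def f_y_to_x(y):
    return (y - 0.1) * 2.0

def cycle_consistency_loss(x, y):
    """L1 cycle loss: mean |F(G(x)) - x| + mean |G(F(y)) - y|."""
    loss_x = np.mean(np.abs(f_y_to_x(g_x_to_y(x)) - x))
    loss_y = np.mean(np.abs(g_x_to_y(f_y_to_x(y)) - y))
    return loss_x + loss_y

x = np.random.rand(4, 64, 64)  # batch of source-domain images
y = np.random.rand(4, 64, 64)  # batch of target-domain images
loss = cycle_consistency_loss(x, y)  # near zero: toy maps invert exactly
```

With learned generators the loss is nonzero and is minimized jointly with the adversarial terms, which is what "further regularize the image transformation process" refers to.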
“…DSC (std): CycleGAN [26] 0.721 (0.049); TD-GAN [25] 0.793 (0.066); DADR [24] 0.806 (0.035); DALACE 0.847 (0.041) … and tested on pre-phase MR to serve as the lower bound for each task. Please see Table 2 for details.…”
Section: Methods
confidence: 99%
“…Mainstream DA methods for semantic segmentation in medical imaging, such as CycleGAN [26] and its variant TD-GAN [25], work at the pixel level. However, they assume a one-to-one mapping between source and target, and are thus unable to recover the complex cross-domain relations in the DAL task [1,9].…”
Section: Introduction
confidence: mid
“…The most common model combines generative adversarial networks (GANs) with a cycle-consistency constraint for image-to-image translation and two segmentation networks, one for each image domain, trained end-to-end in order to benefit from a combined loss function. This model has been applied for cross-modality segmentation improvement [7,8], domain adaptation across scanners [8] or across modalities [9], and segmentation of an unlabeled target modality using only the source ground truth [10,11]. Alternatively, a GAN can be trained to generate synthetic images from masks according to some conditional value, such as the dataset style, as in the case of retinal fundus images for vessel segmentation [12].…”
Section: Introduction
confidence: 99%
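The combined loss function mentioned above is typically a weighted sum of adversarial, cycle-consistency, and supervised segmentation terms. A minimal sketch, assuming a soft Dice segmentation term; the lambda weights are illustrative values, not taken from any cited paper:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss (1 - Dice coefficient), a common segmentation term."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def combined_loss(l_adv, l_cyc, l_seg, lam_cyc=10.0, lam_seg=1.0):
    """End-to-end objective: adversarial + weighted cycle + segmentation."""
    return l_adv + lam_cyc * l_cyc + lam_seg * l_seg

# Toy 2x2 soft prediction against a binary mask.
pred = np.array([[0.9, 0.1], [0.8, 0.2]])
mask = np.array([[1.0, 0.0], [1.0, 0.0]])
l_seg = dice_loss(pred, mask)          # approx 0.15
total = combined_loss(l_adv=0.7, l_cyc=0.05, l_seg=l_seg)
```

Because all terms are minimized jointly, gradients from the segmentation networks also shape the image translation, which is what "trained end-to-end in order to benefit from a combined loss function" describes.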