2019
DOI: 10.1109/tmi.2018.2876633

SynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth

Abstract: A key limitation of deep convolutional neural network (DCNN) based image segmentation methods is their lack of generalizability. Manually traced training images are typically required when segmenting organs in a new imaging modality or from a distinct disease cohort. The manual effort can be alleviated if manually traced images in one imaging modality (e.g., MRI) can be used to train a segmentation network for another imaging modality (e.g., CT). In this paper, we propose an end-to-end synthetic segmentation network…
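For intuition, the following is a minimal, hedged PyTorch sketch of the training signal the abstract describes: a generator G turns labeled source-modality (MRI) images into synthetic target-modality (CT) images, a discriminator D judges their realism against unlabeled real CT, and a segmenter S learns on the synthetic CT using the original MRI labels. The tiny networks, shapes, and loss weights are illustrative placeholders, and the published method additionally uses cycle consistency, which this sketch omits.

```python
import torch
import torch.nn as nn

def tiny_cnn(in_ch, out_ch):
    # Placeholder backbone; the paper uses far deeper generator/segmenter nets.
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, out_ch, 3, padding=1),
    )

G = tiny_cnn(1, 1)                    # MRI -> synthetic CT generator
S = tiny_cnn(1, 2)                    # segmenter (2-class logits) on CT appearance
D = nn.Sequential(                    # patch discriminator: real vs. synthetic CT
    nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 4, stride=2, padding=1),
)

opt_gs = torch.optim.Adam(list(G.parameters()) + list(S.parameters()), lr=2e-4)
opt_d  = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

# Dummy batch: labeled MRI slices and unlabeled real CT slices.
mri       = torch.randn(4, 1, 64, 64)
mri_label = torch.randint(0, 2, (4, 64, 64))
real_ct   = torch.randn(4, 1, 64, 64)

# Generator + segmenter step: make synthetic CT fool D while the
# MRI labels supervise segmentation of that same synthetic CT.
syn_ct = G(mri)
d_fake = D(syn_ct)
loss_gs = bce(d_fake, torch.ones_like(d_fake)) + ce(S(syn_ct), mri_label)
opt_gs.zero_grad(); loss_gs.backward(); opt_gs.step()

# Discriminator step: real CT -> 1, synthetic CT -> 0.
d_real, d_fake = D(real_ct), D(syn_ct.detach())
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()
```

At test time only S is needed: it segments real CT directly, having never seen a manually traced CT label.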

Cited by 220 publications (133 citation statements)
References 49 publications (85 reference statements)
“…To alleviate the burden of data annotation, some works focus on cross-modality image synthesis, so that segmentation of multiple modalities can be achieved with synthesized images and labels from a single modality [51]-[53]. Recently, some works have explored the feasibility of cross-modality unsupervised domain adaptation to adapt deep models from a label-rich source modality to an unlabeled target modality [7], [37], with good results reported. Our work proceeds along this promising direction by demonstrating that, without extra annotation effort, unsupervised domain adaptation can greatly reduce the performance degradation and, for some tasks, can even achieve segmentation performance very close to that of supervised training.…”
Section: Discussion
confidence: 99%
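As a rough illustration of the feature-level alternative mentioned in this statement, the hedged sketch below aligns source and target encoder features adversarially, in the spirit of DANN/ADDA-style adaptation; the cited works [7], [37] use more elaborate schemes, and all architectures, weights, and data here are placeholders.

```python
import torch
import torch.nn as nn

feat     = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())  # shared encoder
seg_head = nn.Conv2d(16, 2, 1)            # pixel-wise classifier (2 classes)
dom_disc = nn.Conv2d(16, 1, 1)            # per-location domain discriminator

opt_fs = torch.optim.Adam(list(feat.parameters()) + list(seg_head.parameters()), lr=1e-4)
opt_d  = torch.optim.Adam(dom_disc.parameters(), lr=1e-4)
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

src, src_lbl = torch.randn(4, 1, 32, 32), torch.randint(0, 2, (4, 32, 32))
tgt = torch.randn(4, 1, 32, 32)           # unlabeled target-modality batch

# Encoder/segmenter step: supervised loss on source, plus an adversarial
# term that pushes target features to look source-like to the discriminator.
d_tgt = dom_disc(feat(tgt))
loss = ce(seg_head(feat(src)), src_lbl) + 0.1 * bce(d_tgt, torch.ones_like(d_tgt))
opt_fs.zero_grad(); loss.backward(); opt_fs.step()

# Discriminator step: source features -> 1, target features -> 0.
d_src = dom_disc(feat(src).detach())
d_tgt = dom_disc(feat(tgt).detach())
loss_d = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()
```

No target labels enter either step; the only supervision is the source-modality annotation.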
“…A higher Dice value and a lower ASD value indicate better segmentation results. The evaluation is performed on subject-level segmentation volumes to be consistent with the MMWHS and CHAOS challenges as well as previous works [37], [44].…”
Section: B. Evaluation Metrics
confidence: 99%
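For concreteness, here is a small self-contained sketch of the two metrics as they are commonly computed on binary 3-D volumes: Dice = 2|P∩G|/(|P|+|G|), and ASD (average symmetric surface distance, in physical units when voxel spacing is supplied). Exact surface-extraction conventions vary between challenge toolkits, so treat this as illustrative rather than the challenges' reference implementation.

```python
import numpy as np
from scipy import ndimage

def dice(pred, gt):
    # Dice = 2|P ∩ G| / (|P| + |G|) on binary masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def surface(mask):
    # Boundary voxels: in the mask but not in its erosion.
    return mask & ~ndimage.binary_erosion(mask)

def asd(pred, gt, spacing=(1.0, 1.0, 1.0)):
    # Average symmetric surface distance between the two boundaries.
    ps, gs = surface(pred.astype(bool)), surface(gt.astype(bool))
    # Distance from every voxel to the nearest boundary voxel of the other mask.
    dt_g = ndimage.distance_transform_edt(~gs, sampling=spacing)
    dt_p = ndimage.distance_transform_edt(~ps, sampling=spacing)
    return np.concatenate([dt_g[ps], dt_p[gs]]).mean()

# Toy example: two offset cubes.
vol_pred = np.zeros((32, 32, 32), bool); vol_pred[8:20, 8:20, 8:20] = True
vol_gt   = np.zeros((32, 32, 32), bool); vol_gt[10:22, 10:22, 10:22] = True
print(dice(vol_pred, vol_gt), asd(vol_pred, vol_gt))
```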
“…Recent studies have proposed ROI-based approaches to better understand the microscopic substrate of pathologies such as amyotrophic lateral sclerosis [28], epilepsy [13], and Alzheimer's disease [29]. The main targets are mostly subcortical structures, including the thalamus [30,31], hippocampus [32,33], nucleus accumbens [34], and pedunculopontine nucleus [35], among others [10]. In addition to the practical advantage of dealing with well-defined structures, the focus on the subcortex is due mainly to its implications in neurological diseases and psychiatric disorders.…”
confidence: 99%
“…estimating MRI contrast from the histology (or vice versa) to reduce the registration to an easier intra-modality problem. Recent advances with architectures based on generative adversarial networks have shown great potential for this specific problem [80,81], even with specific applications to medical imaging [82]. In this paper we have presented, as a preliminary practical application of our pipeline, the possibility of exploring histological sections through the related MRI volume.…”
confidence: 99%
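To make the synthesize-then-register idea in this statement concrete: once histology has been translated into an MRI-like image, an ordinary mono-modal similarity metric suffices for alignment. Below is a hedged SimpleITK sketch; the file names are hypothetical and the optimizer settings are arbitrary defaults, not the cited pipeline's parameters.

```python
import SimpleITK as sitk

# Hypothetical inputs: the real MRI and a GAN-synthesized MRI-like
# rendering of the histology stack (file names are placeholders).
fixed  = sitk.ReadImage("mri.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("histology_as_mri.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
# Because both images now share one contrast, plain correlation works
# where a cross-modality metric (e.g., mutual information) was needed before.
reg.SetMetricAsCorrelation()
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY),
    inPlace=False)

transform = reg.Execute(fixed, moving)
# Resample the synthesized volume into the MRI grid with the found transform.
resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```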