2018
DOI: 10.1007/978-3-030-00934-2_86

Tumor-Aware, Adversarial Domain Adaptation from CT to MRI for Lung Cancer Segmentation

Abstract: We present an adversarial domain adaptation based deep learning approach for automatic tumor segmentation from T2-weighted MRI. Our approach is composed of two steps: (i) a tumor-aware unsupervised cross-domain adaptation (CT to MRI), followed by (ii) semi-supervised tumor segmentation using a U-Net trained with synthesized MRIs and a limited number of original MRIs. We introduce a novel target-specific loss, called the tumor-aware loss, for unsupervised cross-domain adaptation that helps to preserve tumors on synthesized …
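
As described in the abstract, the tumor-aware loss constrains the CT-to-MRI generator so that tumors are not erased during synthesis. The following is a minimal PyTorch sketch of how such a term could be attached to a CycleGAN-style generator objective; the names (G_ct2mri, tumor_net, lambda_tumor) and the exact form of the loss are illustrative assumptions, not the authors' implementation.

# Sketch (assumed names, not the authors' code) of a tumor-aware term
# added to the CT -> MRI generator objective.
import torch.nn.functional as F

def tumor_aware_loss(ct, tumor_mask, G_ct2mri, tumor_net):
    # ct:         CT slices, shape (N, 1, H, W)
    # tumor_mask: binary tumor labels drawn on the CT, same shape
    # G_ct2mri:   generator mapping CT to pseudo-MRI
    # tumor_net:  tumor-segmentation network applied to the pseudo-MRI
    fake_mri = G_ct2mri(ct)
    # The tumor predicted on the synthesized MRI must match the tumor
    # annotated on the source CT, so the generator cannot remove it.
    logits = tumor_net(fake_mri)
    return F.binary_cross_entropy_with_logits(logits, tumor_mask.float())

# Combined generator objective (adversarial + cycle + tumor-aware terms):
#   loss_G = loss_adv + lambda_cyc * loss_cycle
#            + lambda_tumor * tumor_aware_loss(ct, mask, G_ct2mri, tumor_net)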

Cited by 145 publications (115 citation statements: 1 supporting, 114 mentioning, 0 contrasting). References 15 publications.
“…This could aid in the evaluation of sCT images, since GANs create images based on optimized statistical properties of features for a set of images, and they may change critical features in a transformation in a way that does not appear incorrect without close inspection [37]. There are no other head-and-neck (HN) deep-learning studies with dosimetric results that we are aware of to compare against, but the MAEs reported in this study compare well to the most similar prior works (HN multi-atlas based and brain-cancer deep-learning based sCT generation) (Table IV). Although comparison to methods that used the brain as their primary site is not fair (we expect higher errors for HN due to a larger number of structures, transition regions, and the need for deformable registration), we wanted to assess how our method fared relative to other deep-learning studies for sCT generation.…”
Section: Discussion (supporting; confidence: 75%)
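
The MAE cited in this statement is typically computed as the mean absolute HU difference between the synthetic CT and the true CT, restricted to the patient body contour. A minimal numpy sketch under that assumption (function and argument names are illustrative):

# Sketch of the MAE commonly reported for synthetic-CT (sCT) evaluation:
# mean absolute HU difference inside the patient body mask.
import numpy as np

def sct_mae(sct_hu, ct_hu, body_mask):
    # sct_hu, ct_hu: HU volumes of identical shape (e.g., Z x H x W)
    # body_mask:     boolean mask selecting voxels inside the body contour
    return np.abs(sct_hu - ct_hu)[body_mask].mean()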
“…However, this modeling of overall statistical characteristics can also be a disadvantage, as these methods do not necessarily preserve local spatial characteristics in the image, especially if training sets differ from test sets and if sufficiently strong loss criteria (such as L1 losses) are not used in training, as shown in other works [38], including work by our group [37]. Also, features that occur infrequently may be ignored if additional weighting factors for them are not added. This defect/limitation was observable in an example independent testing case, case 4, with a very large tumor (Fig.…”
Section: Discussion (mentioning; confidence: 99%)
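
The "sufficiently strong loss criteria (such as L1 losses)" referred to above are, in CycleGAN-style translation, usually the cycle-consistency terms that penalize loss of local structure. A minimal PyTorch sketch of those terms for both translation directions (the generator names G_ct2mri and G_mri2ct are assumptions, not code from the cited works):

# Sketch of CycleGAN cycle-consistency L1 terms in both directions.
import torch

def cycle_consistency_loss(ct, mri, G_ct2mri, G_mri2ct):
    # Translate each image to the other modality and back; the L1
    # distance to the original penalizes loss of local spatial detail.
    rec_ct = G_mri2ct(G_ct2mri(ct))    # CT -> pseudo-MRI -> reconstructed CT
    rec_mri = G_ct2mri(G_mri2ct(mri))  # MRI -> pseudo-CT -> reconstructed MRI
    return torch.mean(torch.abs(rec_ct - ct)) + torch.mean(torch.abs(rec_mri - mri))

Because both directions appear symmetrically in this objective, the same pair of trained generators supports the bidirectional MRI-to-CT and CT-to-MRI adaptation discussed in the next statement.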
“…For cross-modality adaptation, an important question is whether the adaptation is symmetric with respect to modality, i.e., whether both the adaptation from MRI to CT and from CT to MRI are feasible, and whether the difficulty depends on the adaptation direction. To investigate this, we conduct bidirectional domain adaptation between MRI and CT images on both datasets, which has not been done consistently in current cross-modality works [7], [36], [37]. Our method greatly improves the segmentation performance for both adaptation directions, MRI to CT and CT to MRI, demonstrating that cross-modality adaptation can be achieved in both directions.…”
Section: Discussion (mentioning; confidence: 98%)
“…For example, both [10] and [35] introduce semantic consistency into the CycleGAN to facilitate the transformation of target X-ray images towards the source images, for testing with the pre-trained source models. For cross-modality adaptation, Jiang et al. [36] first transform CT images to resemble MRI appearance using a CycleGAN with a tumor-aware loss; the generated MRI images are then combined with a few real MRI scans for semi-supervised tumor segmentation. In [37] and [38], CycleGAN is combined with a segmentation network to compose an end-to-end framework.…”
Section: Introduction (mentioning; confidence: 99%)
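
The semi-supervised step attributed to Jiang et al. [36] trains the segmentation network on a mixture of synthesized and real MRIs. A minimal sketch of such mixed-batch training; the loader and model names (synth_loader, real_loader, unet) are assumptions for illustration:

# Sketch of semi-supervised segmentation training that mixes synthesized
# (CT-derived) MRIs with a small set of real labeled MRIs.
import itertools
import torch
import torch.nn.functional as F

def train_epoch(unet, synth_loader, real_loader, optimizer):
    # Pair every synthesized batch with a (recycled) real batch, so the
    # few real MRIs are revisited many times per epoch.
    for (x_s, y_s), (x_r, y_r) in zip(synth_loader,
                                      itertools.cycle(real_loader)):
        x = torch.cat([x_s, x_r])  # images: synthesized + real
        y = torch.cat([y_s, y_r])  # tumor masks carried over from CT labels
        loss = F.binary_cross_entropy_with_logits(unet(x), y.float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()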
“…This paper is a significant extension of our work in Ref. [10], which introduced the tumor-aware loss to use pseudo MRI synthesized from CT for MRI segmentation. Extensions include: (a) application of the proposed approach to two additional segmentation networks, the residual fully convolutional network (Residual-FCN) and the dense fully convolutional network (Dense-FCN); (b) comparison to a more recent image translation approach, called UNIT; (c) evaluation of longitudinal tumor response monitoring on a subset of patients who had serial weekly imaging during treatment; and (d) a benchmarking experiment against a shallow random forest classifier with fully connected random field based tumor segmentation on MRI, to assess performance when learning from a small dataset.…”
Section: Introduction (mentioning; confidence: 90%)